arm64: mm: Fix horrendous config typo

Message ID 1403682105-7919-1-git-send-email-steve.capper@linaro.org
State Accepted
Commit f3b766a26dd490026b9eb91a9136ade9f49fc674

Commit Message

Steve Capper June 25, 2014, 7:41 a.m. UTC
The define ARM64_64K_PAGES is tested for rather than
CONFIG_ARM64_64K_PAGES. Correct that typo here.

Signed-off-by: Steve Capper <steve.capper@linaro.org>
---
 arch/arm64/include/asm/pgtable.h | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

Comments

Will Deacon June 25, 2014, 9:12 a.m. UTC | #1
On Wed, Jun 25, 2014 at 08:41:45AM +0100, Steve Capper wrote:
> The define ARM64_64K_PAGES is tested for rather than
> CONFIG_ARM64_64K_PAGES. Correct that typo here.

Whilst I agree that this is a bad typo, the existing behaviour would
still return false all the time for 64K-page configs, right?

The bigger problem here is testing the hugepage support with all the
wonderful page sizes we've grown. Are there any targeted tests to know
that we're actually exercising 1GB mappings with 4k granules? Certainly,
our stress tests don't appear to be catching this at the moment.

Will
Steve Capper June 25, 2014, 10:34 a.m. UTC | #2
On Wed, Jun 25, 2014 at 10:12:26AM +0100, Will Deacon wrote:
> On Wed, Jun 25, 2014 at 08:41:45AM +0100, Steve Capper wrote:
> > The define ARM64_64K_PAGES is tested for rather than
> > CONFIG_ARM64_64K_PAGES. Correct that typo here.
> 
> Whilst I agree that this is a bad typo, the existing behaviour would
> still return false all the time for 64K-page configs, right?

Yes. The net effect of this typo is that pud_sect can spuriously return
true for the 64K granule, when it should always return zero instead.
pud_sect has one user: kern_addr_valid.

For a 64K granule we have the following arrangement:
	PMD folded into PUD folded into PGD.
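
The folding comes from the generic helpers; roughly, paraphrasing
include/asm-generic/pgtable-nopud.h and pgtable-nopmd.h:

	static inline pud_t *pud_offset(pgd_t *pgd, unsigned long address)
	{
		return (pud_t *)pgd;	/* PUD folded into PGD */
	}

	static inline pmd_t *pmd_offset(pud_t *pud, unsigned long address)
	{
		return (pmd_t *)pud;	/* PMD folded into PUD */
	}

so the entry the walker inspects through *pud is really the pgd entry
itself.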

Thus kern_addr_valid returns slightly earlier from the walker, but
still returns the correct result. The tests run against kern_addr_valid
gave the expected results, and the correct behaviour was observed.
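
To illustrate, here is a rough sketch of the walk (paraphrased, not the
exact code in arch/arm64/mm/mmu.c):

	int kern_addr_valid(unsigned long addr)
	{
		pgd_t *pgd = pgd_offset_k(addr);
		pud_t *pud;
		pmd_t *pmd;
		pte_t *pte;

		if (pgd_none(*pgd))
			return 0;

		pud = pud_offset(pgd, addr);	/* folded: pud is really pgd */
		if (pud_none(*pud))
			return 0;
		if (pud_sect(*pud))		/* spuriously true with the typo... */
			return pfn_valid(pud_pfn(*pud));

		pmd = pmd_offset(pud, addr);	/* folded: pmd is really pud */
		if (pmd_none(*pmd))
			return 0;
		if (pmd_sect(*pmd))		/* ...which this test would otherwise catch */
			return pfn_valid(pmd_pfn(*pmd));

		pte = pte_offset_kernel(pmd, addr);
		if (pte_none(*pte))
			return 0;

		return pfn_valid(pte_pfn(*pte));
	}

Both exits hand pfn_valid the same folded entry, hence the identical
result.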

> 
> The bigger problem here is testing the hugepage support with all the
> wonderful page sizes we've grown. Are there any targeted tests to know
> that we're actually exercising 1GB mappings with 4k granules? Certainly,
> our stress tests don't appear to be catching this at the moment.

Yes. libhugetlbfs is used to test 1GB, 512MB and 2MB mappings. LTP
exercises 512MB and 2MB THP mappings depending on the granule.
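
For a small targeted test, independent of libhugetlbfs, one can map a
single 1GB hugepage directly and touch it. A hypothetical sketch,
assuming a 4K granule kernel booted with hugepagesz=1G hugepages=1:

	#define _GNU_SOURCE
	#include <stdio.h>
	#include <string.h>
	#include <sys/mman.h>

	#ifndef MAP_HUGE_SHIFT
	#define MAP_HUGE_SHIFT	26
	#endif
	#ifndef MAP_HUGE_1GB
	#define MAP_HUGE_1GB	(30 << MAP_HUGE_SHIFT)
	#endif

	int main(void)
	{
		size_t len = 1UL << 30;		/* one 1GB hugepage */
		void *p = mmap(NULL, len, PROT_READ | PROT_WRITE,
			       MAP_PRIVATE | MAP_ANONYMOUS | MAP_HUGETLB |
			       MAP_HUGE_1GB, -1, 0);

		if (p == MAP_FAILED) {
			perror("mmap");
			return 1;
		}

		memset(p, 0x5a, len);		/* fault the mapping in */
		printf("1GB hugepage mapped at %p\n", p);
		munmap(p, len);
		return 0;
	}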

Testing in general can always be improved. I have contributed test code
to libhugetlbfs, and will contribute more tests as and when I think of
them.

There are benchmarks being run on workloads backed by 1GB mappings too.

As mentioned above, the only code path affected by this typo still
exhibited the correct behaviour, thanks to the folding of pgd, pud and
pmd. It was tested, and the tests passed with the expected results.

Cheers,

Patch

diff --git a/arch/arm64/include/asm/pgtable.h b/arch/arm64/include/asm/pgtable.h
index 5797020..e0ccceb 100644
--- a/arch/arm64/include/asm/pgtable.h
+++ b/arch/arm64/include/asm/pgtable.h
@@ -292,7 +292,7 @@  extern pgprot_t phys_mem_access_prot(struct file *file, unsigned long pfn,
 #define pmd_sect(pmd)		((pmd_val(pmd) & PMD_TYPE_MASK) == \
 				 PMD_TYPE_SECT)
 
-#ifdef ARM64_64K_PAGES
+#ifdef CONFIG_ARM64_64K_PAGES
 #define pud_sect(pud)		(0)
 #else
 #define pud_sect(pud)		((pud_val(pud) & PUD_TYPE_MASK) == \