arm64: account for sparsemem section alignment when choosing vmemmap offset

Message ID 1457446169-23099-1-git-send-email-ard.biesheuvel@linaro.org
State Accepted
Commit 36e5cd6b897e17d03008f81e075625d8e43e52d0

Commit Message

Ard Biesheuvel March 8, 2016, 2:09 p.m. UTC
Commit dfd55ad85e4a ("arm64: vmemmap: use virtual projection of linear
region") fixed an issue where the struct page array would overflow into the
adjacent virtual memory region if system RAM was placed so high up in
physical memory that its addresses were not representable in the build time
configured virtual address size.

However, the fix failed to take into account that the vmemmap region needs
to be relatively aligned with respect to the sparsemem section size, so that
a sequence of page structs corresponding with a sparsemem section in the
linear region appears naturally aligned in the vmemmap region.

So round up vmemmap to sparsemem section size. Since this essentially moves
the projection of the linear region up in memory, also revert the reduction
of the size of the vmemmap region.
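
To make the alignment requirement concrete, here is a minimal user-space sketch of the offset calculation. The constants and the example memstart_addr are assumed values for illustration (4K pages, 1 GiB sparsemem sections); this is not part of the patch:

/*
 * Sketch: why the vmemmap offset must be rounded down to a section
 * boundary. Example values only.
 */
#include <stdint.h>
#include <stdio.h>

#define PAGE_SHIFT		12
#define SECTION_SIZE_BITS	30	/* assumed arm64 default */
#define PAGES_PER_SECTION	(1UL << (SECTION_SIZE_BITS - PAGE_SHIFT))
#define SECTION_ALIGN_DOWN(pfn)	((pfn) & ~(PAGES_PER_SECTION - 1))

int main(void)
{
	/* hypothetical start of RAM: 128 GiB plus a small unaligned offset */
	uint64_t memstart_addr = (0x80ULL << 30) + (123ULL << PAGE_SHIFT);
	uint64_t base_pfn = memstart_addr >> PAGE_SHIFT;

	/* before the fix: vmemmap was offset by the raw base PFN */
	printf("raw offset (pages):     %#llx\n",
	       (unsigned long long)base_pfn);
	/* after the fix: the offset is rounded down to a section boundary,
	 * so the page structs for each section stay naturally aligned */
	printf("aligned offset (pages): %#llx\n",
	       (unsigned long long)SECTION_ALIGN_DOWN(base_pfn));
	return 0;
}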

Fixes: dfd55ad85e4a ("arm64: vmemmap: use virtual projection of linear region")
Tested-by: Mark Langsdorf <mlangsdo@redhat.com>

Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>

---
 arch/arm64/include/asm/pgtable.h | 5 +++--
 1 file changed, 3 insertions(+), 2 deletions(-)

-- 
1.9.1



Comments

Ard Biesheuvel March 8, 2016, 2:21 p.m. UTC | #1
On 8 March 2016 at 21:18, Greg KH <gregkh@linuxfoundation.org> wrote:
> On Tue, Mar 08, 2016 at 09:09:29PM +0700, Ard Biesheuvel wrote:
>> Commit dfd55ad85e4a ("arm64: vmemmap: use virtual projection of linear
>> region") fixed an issue where the struct page array would overflow into the
>> adjacent virtual memory region if system RAM was placed so high up in
>> physical memory that its addresses were not representable in the build time
>> configured virtual address size.
>>
>> However, the fix failed to take into account that the vmemmap region needs
>> to be relatively aligned with respect to the sparsemem section size, so that
>> a sequence of page structs corresponding with a sparsemem section in the
>> linear region appears naturally aligned in the vmemmap region.
>>
>> So round up vmemmap to sparsemem section size. Since this essentially moves
>> the projection of the linear region up in memory, also revert the reduction
>> of the size of the vmemmap region.
>>
>> Fixes: dfd55ad85e4a ("arm64: vmemmap: use virtual projection of linear region")
>> Tested-by: Mark Langsdorf <mlangsdo@redhat.com>
>> Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
>> ---
>>  arch/arm64/include/asm/pgtable.h | 5 +++--
>>  1 file changed, 3 insertions(+), 2 deletions(-)
>
> Why no "Cc: stable" in the signed-off-by area?

Because neither Will nor Catalin has responded yet to the proposed fix,
nor to the bug report itself. They may prefer another approach to fixing
the original problem rather than putting this on top, so I don't feel
it is up to me to add a cc stable at this time.

-- 
Ard.

Catalin Marinas March 8, 2016, 3:12 p.m. UTC | #2
On Tue, Mar 08, 2016 at 09:09:29PM +0700, Ard Biesheuvel wrote:
> Commit dfd55ad85e4a ("arm64: vmemmap: use virtual projection of linear
> region") fixed an issue where the struct page array would overflow into the
> adjacent virtual memory region if system RAM was placed so high up in
> physical memory that its addresses were not representable in the build time
> configured virtual address size.
> 
> However, the fix failed to take into account that the vmemmap region needs
> to be relatively aligned with respect to the sparsemem section size, so that
> a sequence of page structs corresponding with a sparsemem section in the
> linear region appears naturally aligned in the vmemmap region.
> 
> So round up vmemmap to sparsemem section size. Since this essentially moves
> the projection of the linear region up in memory, also revert the reduction
> of the size of the vmemmap region.
> 
> Fixes: dfd55ad85e4a ("arm64: vmemmap: use virtual projection of linear region")
> Tested-by: Mark Langsdorf <mlangsdo@redhat.com>
> Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
> ---
>  arch/arm64/include/asm/pgtable.h | 5 +++--
>  1 file changed, 3 insertions(+), 2 deletions(-)
> 
> diff --git a/arch/arm64/include/asm/pgtable.h b/arch/arm64/include/asm/pgtable.h
> index f50608674580..819aff5d593f 100644
> --- a/arch/arm64/include/asm/pgtable.h
> +++ b/arch/arm64/include/asm/pgtable.h
> @@ -40,7 +40,7 @@
>   * VMALLOC_END: extends to the available space below vmmemmap, PCI I/O space,
>   *	fixed mappings and modules
>   */
> -#define VMEMMAP_SIZE		ALIGN((1UL << (VA_BITS - PAGE_SHIFT - 1)) * sizeof(struct page), PUD_SIZE)
> +#define VMEMMAP_SIZE		ALIGN((1UL << (VA_BITS - PAGE_SHIFT)) * sizeof(struct page), PUD_SIZE)

I think we could have extended the existing halved VMEMMAP_SIZE by
PAGES_PER_SECTION * sizeof(struct page) to cope with the alignment, but I
don't think it's worth it.

>  
>  #ifndef CONFIG_KASAN
>  #define VMALLOC_START		(VA_START)
> @@ -52,7 +52,8 @@
>  #define VMALLOC_END		(PAGE_OFFSET - PUD_SIZE - VMEMMAP_SIZE - SZ_64K)
>  
>  #define VMEMMAP_START		(VMALLOC_END + SZ_64K)
> -#define vmemmap			((struct page *)VMEMMAP_START - (memstart_addr >> PAGE_SHIFT))
> +#define vmemmap			((struct page *)VMEMMAP_START - \
> +				 SECTION_ALIGN_DOWN(memstart_addr >> PAGE_SHIFT))

It looks fine to me:

Acked-by: Catalin Marinas <catalin.marinas@arm.com>

Will would probably pick it up tomorrow (and add a cc stable as well).

-- 
Catalin
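
For reference, a sketch of the alternative mentioned above: keep the halved array size but pad it by one section's worth of page structs to absorb the round-down of the base PFN. This is an assumed illustration of what such a definition could look like, not code that was posted or merged:

/* hypothetical alternative; reuses ALIGN, PAGES_PER_SECTION and PUD_SIZE
 * from the existing kernel headers */
#define VMEMMAP_SIZE	ALIGN((1UL << (VA_BITS - PAGE_SHIFT - 1)) * sizeof(struct page) + \
			      PAGES_PER_SECTION * sizeof(struct page), PUD_SIZE)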

Ard Biesheuvel March 9, 2016, 1:19 a.m. UTC | #3
On 8 March 2016 at 22:12, Catalin Marinas <catalin.marinas@arm.com> wrote:
> On Tue, Mar 08, 2016 at 09:09:29PM +0700, Ard Biesheuvel wrote:
>> Commit dfd55ad85e4a ("arm64: vmemmap: use virtual projection of linear
>> region") fixed an issue where the struct page array would overflow into the
>> adjacent virtual memory region if system RAM was placed so high up in
>> physical memory that its addresses were not representable in the build time
>> configured virtual address size.
>>
>> However, the fix failed to take into account that the vmemmap region needs
>> to be relatively aligned with respect to the sparsemem section size, so that
>> a sequence of page structs corresponding with a sparsemem section in the
>> linear region appears naturally aligned in the vmemmap region.
>>
>> So round up vmemmap to sparsemem section size. Since this essentially moves
>> the projection of the linear region up in memory, also revert the reduction
>> of the size of the vmemmap region.
>>
>> Fixes: dfd55ad85e4a ("arm64: vmemmap: use virtual projection of linear region")
>> Tested-by: Mark Langsdorf <mlangsdo@redhat.com>
>> Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
>> ---
>>  arch/arm64/include/asm/pgtable.h | 5 +++--
>>  1 file changed, 3 insertions(+), 2 deletions(-)
>>
>> diff --git a/arch/arm64/include/asm/pgtable.h b/arch/arm64/include/asm/pgtable.h
>> index f50608674580..819aff5d593f 100644
>> --- a/arch/arm64/include/asm/pgtable.h
>> +++ b/arch/arm64/include/asm/pgtable.h
>> @@ -40,7 +40,7 @@
>>   * VMALLOC_END: extends to the available space below vmmemmap, PCI I/O space,
>>   *   fixed mappings and modules
>>   */
>> -#define VMEMMAP_SIZE         ALIGN((1UL << (VA_BITS - PAGE_SHIFT - 1)) * sizeof(struct page), PUD_SIZE)
>> +#define VMEMMAP_SIZE         ALIGN((1UL << (VA_BITS - PAGE_SHIFT)) * sizeof(struct page), PUD_SIZE)
>
> I think we could have extended the existing halved VMEMMAP_SIZE by
> PAGES_PER_SECTION * sizeof(struct page) to cope with the alignment, but I
> don't think it's worth it.

Indeed. But it is only temporary anyway, since the problem this patch
solves does not exist anymore in for-next/core, considering that
memstart_addr itself should be sufficiently aligned by construction.
So I intend to propose a revert of this change, after -rc1 perhaps?

>>
>>  #ifndef CONFIG_KASAN
>>  #define VMALLOC_START                (VA_START)
>> @@ -52,7 +52,8 @@
>>  #define VMALLOC_END          (PAGE_OFFSET - PUD_SIZE - VMEMMAP_SIZE - SZ_64K)
>>
>>  #define VMEMMAP_START                (VMALLOC_END + SZ_64K)
>> -#define vmemmap                      ((struct page *)VMEMMAP_START - (memstart_addr >> PAGE_SHIFT))
>> +#define vmemmap                      ((struct page *)VMEMMAP_START - \
>> +                              SECTION_ALIGN_DOWN(memstart_addr >> PAGE_SHIFT))
>
> It looks fine to me:
>
> Acked-by: Catalin Marinas <catalin.marinas@arm.com>
>
> Will would probably pick it up tomorrow (and add a cc stable as well).

Thanks

Catalin Marinas March 9, 2016, 11:54 a.m. UTC | #4
On Wed, Mar 09, 2016 at 08:19:55AM +0700, Ard Biesheuvel wrote:
> On 8 March 2016 at 22:12, Catalin Marinas <catalin.marinas@arm.com> wrote:
> > On Tue, Mar 08, 2016 at 09:09:29PM +0700, Ard Biesheuvel wrote:
> >> Commit dfd55ad85e4a ("arm64: vmemmap: use virtual projection of linear
> >> region") fixed an issue where the struct page array would overflow into the
> >> adjacent virtual memory region if system RAM was placed so high up in
> >> physical memory that its addresses were not representable in the build time
> >> configured virtual address size.
> >>
> >> However, the fix failed to take into account that the vmemmap region needs
> >> to be relatively aligned with respect to the sparsemem section size, so that
> >> a sequence of page structs corresponding with a sparsemem section in the
> >> linear region appears naturally aligned in the vmemmap region.
> >>
> >> So round up vmemmap to sparsemem section size. Since this essentially moves
> >> the projection of the linear region up in memory, also revert the reduction
> >> of the size of the vmemmap region.
> >>
> >> Fixes: dfd55ad85e4a ("arm64: vmemmap: use virtual projection of linear region")
> >> Tested-by: Mark Langsdorf <mlangsdo@redhat.com>
> >> Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
> >> ---
> >>  arch/arm64/include/asm/pgtable.h | 5 +++--
> >>  1 file changed, 3 insertions(+), 2 deletions(-)
> >>
> >> diff --git a/arch/arm64/include/asm/pgtable.h b/arch/arm64/include/asm/pgtable.h
> >> index f50608674580..819aff5d593f 100644
> >> --- a/arch/arm64/include/asm/pgtable.h
> >> +++ b/arch/arm64/include/asm/pgtable.h
> >> @@ -40,7 +40,7 @@
> >>   * VMALLOC_END: extends to the available space below vmmemmap, PCI I/O space,
> >>   *   fixed mappings and modules
> >>   */
> >> -#define VMEMMAP_SIZE         ALIGN((1UL << (VA_BITS - PAGE_SHIFT - 1)) * sizeof(struct page), PUD_SIZE)
> >> +#define VMEMMAP_SIZE         ALIGN((1UL << (VA_BITS - PAGE_SHIFT)) * sizeof(struct page), PUD_SIZE)
> >
> > I think we could have extended the existing halved VMEMMAP_SIZE by
> > PAGES_PER_SECTION * sizeof(struct page) to cope with the alignment, but I
> > don't think it's worth it.
> 
> Indeed. But it is only temporary anyway, since the problem this patch
> solves does not exist anymore in for-next/core, considering that
> memstart_addr itself should be sufficiently aligned by construction.
> So I intend to propose a revert of this change, after -rc1 perhaps?

Sounds fine.

-- 
Catalin
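
As a rough sketch of the point about memstart_addr being aligned by construction: if the recorded base of RAM is rounded down to at least the sparsemem section granularity when it is first set, SECTION_ALIGN_DOWN() becomes a no-op. The constant and function name below are assumptions for illustration, not quotes from for-next/core:

/* illustrative only; the alignment constant and function name are assumed */
#define ASSUMED_MEMSTART_ALIGN	(1UL << SECTION_SIZE_BITS)

static void __init assumed_arm64_memblock_init(void)
{
	memstart_addr = round_down(memblock_start_of_DRAM(),
				   ASSUMED_MEMSTART_ALIGN);
}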

Will Deacon March 9, 2016, 3:07 p.m. UTC | #5
On Tue, Mar 08, 2016 at 09:09:29PM +0700, Ard Biesheuvel wrote:
> Commit dfd55ad85e4a ("arm64: vmemmap: use virtual projection of linear
> region") fixed an issue where the struct page array would overflow into the
> adjacent virtual memory region if system RAM was placed so high up in
> physical memory that its addresses were not representable in the build time
> configured virtual address size.
> 
> However, the fix failed to take into account that the vmemmap region needs
> to be relatively aligned with respect to the sparsemem section size, so that
> a sequence of page structs corresponding with a sparsemem section in the
> linear region appears naturally aligned in the vmemmap region.
> 
> So round up vmemmap to sparsemem section size. Since this essentially moves
> the projection of the linear region up in memory, also revert the reduction
> of the size of the vmemmap region.
> 
> Fixes: dfd55ad85e4a ("arm64: vmemmap: use virtual projection of linear region")
> Tested-by: Mark Langsdorf <mlangsdo@redhat.com>
> Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
> ---
>  arch/arm64/include/asm/pgtable.h | 5 +++--
>  1 file changed, 3 insertions(+), 2 deletions(-)

Sorry for the slight delay, I was flying back from the US and didn't
read mail for a couple of days. Anyway, I'll send this for 4.5 along
with a CC stable and the various tested-bys. Hopefully it won't be too
late for final.

Will


Patch

diff --git a/arch/arm64/include/asm/pgtable.h b/arch/arm64/include/asm/pgtable.h
index f50608674580..819aff5d593f 100644
--- a/arch/arm64/include/asm/pgtable.h
+++ b/arch/arm64/include/asm/pgtable.h
@@ -40,7 +40,7 @@ 
  * VMALLOC_END: extends to the available space below vmmemmap, PCI I/O space,
  *	fixed mappings and modules
  */
-#define VMEMMAP_SIZE		ALIGN((1UL << (VA_BITS - PAGE_SHIFT - 1)) * sizeof(struct page), PUD_SIZE)
+#define VMEMMAP_SIZE		ALIGN((1UL << (VA_BITS - PAGE_SHIFT)) * sizeof(struct page), PUD_SIZE)
 
 #ifndef CONFIG_KASAN
 #define VMALLOC_START		(VA_START)
@@ -52,7 +52,8 @@ 
 #define VMALLOC_END		(PAGE_OFFSET - PUD_SIZE - VMEMMAP_SIZE - SZ_64K)
 
 #define VMEMMAP_START		(VMALLOC_END + SZ_64K)
-#define vmemmap			((struct page *)VMEMMAP_START - (memstart_addr >> PAGE_SHIFT))
+#define vmemmap			((struct page *)VMEMMAP_START - \
+				 SECTION_ALIGN_DOWN(memstart_addr >> PAGE_SHIFT))
 
 #define FIRST_USER_ADDRESS	0UL
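
As a worked example of the updated VMEMMAP_SIZE, here is a small stand-alone sketch using assumed configuration values (48-bit VA, 4K pages, 64-byte struct page, 1 GiB PUDs); the numbers are illustrative, not taken from the thread:

#include <stdio.h>

/* assumed example configuration; ALIGN mirrors the kernel's rounding macro */
#define VA_BITS			48
#define PAGE_SHIFT		12
#define STRUCT_PAGE_SIZE	64UL
#define PUD_SIZE		(1UL << 30)
#define ALIGN(x, a)		(((x) + (a) - 1) & ~((a) - 1))

int main(void)
{
	unsigned long old_size = ALIGN((1UL << (VA_BITS - PAGE_SHIFT - 1)) *
				       STRUCT_PAGE_SIZE, PUD_SIZE);
	unsigned long new_size = ALIGN((1UL << (VA_BITS - PAGE_SHIFT)) *
				       STRUCT_PAGE_SIZE, PUD_SIZE);

	printf("old VMEMMAP_SIZE: %lu GiB\n", old_size >> 30); /* 2048 (2 TiB) */
	printf("new VMEMMAP_SIZE: %lu GiB\n", new_size >> 30); /* 4096 (4 TiB) */
	return 0;
}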