
arm64: kasan: Use actual memory node when populating the kernel image shadow

Message ID 1457636243-17382-1-git-send-email-catalin.marinas@arm.com
State Accepted
Commit 2f76969f2eef051bdd63d38b08d78e790440b0ad
Headers show

Commit Message

Catalin Marinas March 10, 2016, 6:57 p.m. UTC
With the 16KB or 64KB page configurations, the generic
vmemmap_populate() implementation warns on potential offnode
page_structs via vmemmap_verify() because the arm64 kasan_init() passes
NUMA_NO_NODE instead of the actual node for the kernel image memory.

Fixes: f9040773b7bb ("arm64: move kernel image to base of vmalloc area")
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
Reported-by: James Morse <james.morse@arm.com>
---
 arch/arm64/mm/kasan_init.c | 3 ++-
 1 file changed, 2 insertions(+), 1 deletion(-)


_______________________________________________
linux-arm-kernel mailing list
linux-arm-kernel@lists.infradead.org
http://lists.infradead.org/mailman/listinfo/linux-arm-kernel

Comments

Mark Rutland March 10, 2016, 7:50 a.m. UTC | #1
On Fri, Mar 11, 2016 at 09:31:02AM +0700, Ard Biesheuvel wrote:
> On 11 March 2016 at 01:57, Catalin Marinas <catalin.marinas@arm.com> wrote:
> > With the 16KB or 64KB page configurations, the generic
> > vmemmap_populate() implementation warns on potential offnode
> > page_structs via vmemmap_verify() because the arm64 kasan_init() passes
> > NUMA_NO_NODE instead of the actual node for the kernel image memory.
> >
> > Fixes: f9040773b7bb ("arm64: move kernel image to base of vmalloc area")
> > Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
> > Reported-by: James Morse <james.morse@arm.com>
> 
> I still think using vmemmap_populate() is somewhat of a hack here, and
> the fact that we have different versions for 4k pages and !4k pages,
> while perhaps justified for the actual real purpose of allocating
> struct page arrays, makes this code more fragile than it needs to be.

One of the things I had hoped to look into was having a common p?d block
mapping aware vmemmap_populate that we could use in all cases, so we could
minimise TLB pressure for vmemmap regardless of SWAPPER_USES_SECTION_MAPS (when
we can allocate sufficiently aligned physical memory).

[...]

> Regardless,
> 
> Acked-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>

Likewise:

Acked-by: Mark Rutland <mark.rutland@arm.com>

> > ---
> >  arch/arm64/mm/kasan_init.c | 3 ++-
> >  1 file changed, 2 insertions(+), 1 deletion(-)
> >
> > diff --git a/arch/arm64/mm/kasan_init.c b/arch/arm64/mm/kasan_init.c
> > index 56e19d150c21..a164183f3481 100644
> > --- a/arch/arm64/mm/kasan_init.c
> > +++ b/arch/arm64/mm/kasan_init.c
> > @@ -152,7 +152,8 @@ void __init kasan_init(void)
> >
> >         clear_pgds(KASAN_SHADOW_START, KASAN_SHADOW_END);
> >
> > -       vmemmap_populate(kimg_shadow_start, kimg_shadow_end, NUMA_NO_NODE);
> > +       vmemmap_populate(kimg_shadow_start, kimg_shadow_end,
> > +                        pfn_to_nid(virt_to_pfn(_text)));
> >
> >         /*
> >          * vmemmap_populate() has populated the shadow region that covers the


Ard Biesheuvel March 11, 2016, 2:31 a.m. UTC | #2
On 11 March 2016 at 01:57, Catalin Marinas <catalin.marinas@arm.com> wrote:
> With the 16KB or 64KB page configurations, the generic
> vmemmap_populate() implementation warns on potential offnode
> page_structs via vmemmap_verify() because the arm64 kasan_init() passes
> NUMA_NO_NODE instead of the actual node for the kernel image memory.
>
> Fixes: f9040773b7bb ("arm64: move kernel image to base of vmalloc area")
> Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
> Reported-by: James Morse <james.morse@arm.com>

I still think using vmemmap_populate() is somewhat of a hack here, and
the fact that we have different versions for 4k pages and !4k pages,
while perhaps justified for the actual real purpose of allocating
struct page arrays, makes this code more fragile than it needs to be.
How difficult would it be to simply have a kasan specific
vmalloc_shadow() function that performs a
memblock_alloc/create_mapping, and does the right thing wrt aligning
the edges, rather than putting knowledge about how vmemmap_populate
happens to align its allocations into the kasan code?

Regardless,

Acked-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>


> ---
>  arch/arm64/mm/kasan_init.c | 3 ++-
>  1 file changed, 2 insertions(+), 1 deletion(-)
>
> diff --git a/arch/arm64/mm/kasan_init.c b/arch/arm64/mm/kasan_init.c
> index 56e19d150c21..a164183f3481 100644
> --- a/arch/arm64/mm/kasan_init.c
> +++ b/arch/arm64/mm/kasan_init.c
> @@ -152,7 +152,8 @@ void __init kasan_init(void)
>
>         clear_pgds(KASAN_SHADOW_START, KASAN_SHADOW_END);
>
> -       vmemmap_populate(kimg_shadow_start, kimg_shadow_end, NUMA_NO_NODE);
> +       vmemmap_populate(kimg_shadow_start, kimg_shadow_end,
> +                        pfn_to_nid(virt_to_pfn(_text)));
>
>         /*
>          * vmemmap_populate() has populated the shadow region that covers the


Patch

diff --git a/arch/arm64/mm/kasan_init.c b/arch/arm64/mm/kasan_init.c
index 56e19d150c21..a164183f3481 100644
--- a/arch/arm64/mm/kasan_init.c
+++ b/arch/arm64/mm/kasan_init.c
@@ -152,7 +152,8 @@ void __init kasan_init(void)
 
 	clear_pgds(KASAN_SHADOW_START, KASAN_SHADOW_END);
 
-	vmemmap_populate(kimg_shadow_start, kimg_shadow_end, NUMA_NO_NODE);
+	vmemmap_populate(kimg_shadow_start, kimg_shadow_end,
+			 pfn_to_nid(virt_to_pfn(_text)));
 
 	/*
 	 * vmemmap_populate() has populated the shadow region that covers the