
arm64: kaslr: ensure randomized quantities are clean also when kaslr is off

Message ID 20190127082942.21998-1-ard.biesheuvel@linaro.org
State Accepted
Commit 8ea235932314311f15ea6cf65c1393ed7e31af70
Series arm64: kaslr: ensure randomized quantities are clean also when kaslr is off

Commit Message

Ard Biesheuvel Jan. 27, 2019, 8:29 a.m. UTC
Commit 1598ecda7b23 ("arm64: kaslr: ensure randomized quantities are
clean to the PoC") added cache maintenance to ensure that global
variables set by the kaslr init routine are not wiped clean due to
cache invalidation occurring during the second round of page table
creation.

However, if kaslr_early_init() exits early with no randomization
being applied (either due to the lack of a seed, or because the user
has disabled kaslr explicitly), no cache maintenance is performed,
leading to the same issue we attempted to fix earlier, as far as the
module_alloc_base variable is concerned.

Note that module_alloc_base cannot be initialized statically, because
that would cause it to be subject to a R_AARCH64_RELATIVE relocation,
causing it to be overwritten by the second round of KASLR relocation
processing.

Fixes: f80fb3a3d508 ("arm64: add support for kernel ASLR")
Cc: <stable@vger.kernel.org> # v4.6+
Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>

---
 arch/arm64/kernel/kaslr.c | 1 +
 1 file changed, 1 insertion(+)

-- 
2.20.1
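
To illustrate the relocation point made in the commit message: the following is a minimal standalone sketch, not kernel code -- the marker array, the 0x1000 offset and the file name are invented stand-ins for _etext and MODULES_VSIZE. Built as position-independent code, the statically initialized variable carries a relocation (which a final PIE link turns into R_AARCH64_RELATIVE), while the runtime-assigned one does not.

/* reloc_sketch.c -- editorial illustration only, not from the kernel tree.
 *
 * Build and inspect the relocations with something like:
 *   aarch64-linux-gnu-gcc -fPIE -O2 -c reloc_sketch.c
 *   aarch64-linux-gnu-readelf -r reloc_sketch.o
 */
#include <stdint.h>

char marker[16];	/* stands in for the _etext linker symbol */

/*
 * Static initialization: the initializer is an address, so the object
 * file carries a relocation for it. In a position-independent kernel
 * image this ends up as R_AARCH64_RELATIVE, and the second relocation
 * pass would rewrite the variable, discarding any value stored earlier.
 */
uintptr_t statically_set = (uintptr_t)marker - 0x1000;

/*
 * Zero-initialized: lives in .bss, needs no relocation, and only gets
 * a value by explicit assignment -- which is why module_alloc_base is
 * set at run time in kaslr_early_init() and then cleaned to the PoC.
 */
uintptr_t runtime_set;

void early_init(void)
{
	runtime_set = (uintptr_t)marker - 0x1000;
}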

Comments

Catalin Marinas Jan. 29, 2019, 6:18 p.m. UTC | #1
On Sun, Jan 27, 2019 at 09:29:42AM +0100, Ard Biesheuvel wrote:
> Commit 1598ecda7b23 ("arm64: kaslr: ensure randomized quantities are
> clean to the PoC") added cache maintenance to ensure that global
> variables set by the kaslr init routine are not wiped clean due to
> cache invalidation occurring during the second round of page table
> creation.
>
> However, if kaslr_early_init() exits early with no randomization
> being applied (either due to the lack of a seed, or because the user
> has disabled kaslr explicitly), no cache maintenance is performed,
> leading to the same issue we attempted to fix earlier, as far as the
> module_alloc_base variable is concerned.
>
> Note that module_alloc_base cannot be initialized statically, because
> that would cause it to be subject to a R_AARCH64_RELATIVE relocation,
> causing it to be overwritten by the second round of KASLR relocation
> processing.
>
> Fixes: f80fb3a3d508 ("arm64: add support for kernel ASLR")
> Cc: <stable@vger.kernel.org> # v4.6+
> Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
> ---
>  arch/arm64/kernel/kaslr.c | 1 +
>  1 file changed, 1 insertion(+)
>
> diff --git a/arch/arm64/kernel/kaslr.c b/arch/arm64/kernel/kaslr.c
> index ba6b41790fcd..b09b6f75f759 100644
> --- a/arch/arm64/kernel/kaslr.c
> +++ b/arch/arm64/kernel/kaslr.c
> @@ -88,6 +88,7 @@ u64 __init kaslr_early_init(u64 dt_phys)
>  	 * we end up running with module randomization disabled.
>  	 */
>  	module_alloc_base = (u64)_etext - MODULES_VSIZE;
> +	__flush_dcache_area(&module_alloc_base, sizeof(module_alloc_base));

Do we need something similar for memstart_offset_seed? If yes, you could
just as well change the returns to a goto out.

-- 
Catalin
Ard Biesheuvel Jan. 29, 2019, 9:55 p.m. UTC | #2
On Tue, 29 Jan 2019 at 19:18, Catalin Marinas <catalin.marinas@arm.com> wrote:
> On Sun, Jan 27, 2019 at 09:29:42AM +0100, Ard Biesheuvel wrote:
> [...]
> >  	module_alloc_base = (u64)_etext - MODULES_VSIZE;
> > +	__flush_dcache_area(&module_alloc_base, sizeof(module_alloc_base));
>
> Do we need something similar for memstart_offset_seed? If yes, you could
> just as well change the returns to a goto out.

No, that gets initialized to zero statically, so it isn't affected by this.
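
For readers following along, here is a condensed sketch of the control flow the fix is concerned with, paraphrased from arch/arm64/kernel/kaslr.c as of this patch; the helpers fdt_mapped(), kaslr_disabled_on_cmdline() and seed_available() are invented placeholders for the real checks and do not exist under those names in the kernel:

u64 __init kaslr_early_init(u64 dt_phys)
{
	/*
	 * Default module region, in case one of the early exits below
	 * leaves us running with module randomization disabled.
	 */
	module_alloc_base = (u64)_etext - MODULES_VSIZE;
	/*
	 * The one-line fix: clean the write to the PoC before any of
	 * the early returns, so a later cache invalidation cannot bring
	 * back the stale zero value from .bss.
	 */
	__flush_dcache_area(&module_alloc_base, sizeof(module_alloc_base));

	if (!fdt_mapped(dt_phys))		/* placeholder: FDT could not be mapped */
		return 0;
	if (kaslr_disabled_on_cmdline() || !seed_available())
		return 0;			/* placeholder: "nokaslr" or no seed */

	/*
	 * Only past this point are module_alloc_base and
	 * memstart_offset_seed given randomized values; those writes are
	 * already covered by the maintenance added in commit 1598ecda7b23.
	 * Since memstart_offset_seed starts out as zero in .bss, the early
	 * returns above leave nothing stale behind for it.
	 */
	...
}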

Patch

diff --git a/arch/arm64/kernel/kaslr.c b/arch/arm64/kernel/kaslr.c
index ba6b41790fcd..b09b6f75f759 100644
--- a/arch/arm64/kernel/kaslr.c
+++ b/arch/arm64/kernel/kaslr.c
@@ -88,6 +88,7 @@ u64 __init kaslr_early_init(u64 dt_phys)
 	 * we end up running with module randomization disabled.
 	 */
 	module_alloc_base = (u64)_etext - MODULES_VSIZE;
+	__flush_dcache_area(&module_alloc_base, sizeof(module_alloc_base));
 
 	/*
 	 * Try to map the FDT early. If this fails, we simply bail,