From patchwork Tue Oct 18 11:04:36 2022
X-Patchwork-Id: 616261
From: Ard Biesheuvel
To: linux-arm-kernel@lists.infradead.org
Cc: linux-efi@vger.kernel.org, keescook@chromium.org, Ard Biesheuvel,
    Will Deacon, Catalin Marinas, Marc Zyngier, Mark Rutland
Subject: [PATCH v4 1/6] arm64: lds: reduce effective minimum image alignment to 64k
Date: Tue, 18 Oct 2022 13:04:36 +0200
Message-Id: <20221018110441.3855148-2-ardb@kernel.org>
In-Reply-To: <20221018110441.3855148-1-ardb@kernel.org>
References: <20221018110441.3855148-1-ardb@kernel.org>

Our segment alignment is 64k for all configurations, and coincidentally,
this is the largest alignment supported by the PE/COFF executable format
used by EFI. This means that generally, there is no need to move the image
around in memory after it has been loaded by the firmware, which can be
advantageous as it also permits us to rely on the memory attributes set by
the firmware (R-X for [_text, __inittext_end] and RW- for
[__initdata_begin, _end]).
However, the minimum alignment of the image is actually 128k on 64k pages
configurations with CONFIG_VMAP_STACK=y, due to the existence of a single
128k aligned object in the image, which is the stack of the init task.
Let's work around this by adding some padding before the init stack
allocation, so we can round down the stack pointer to a suitably aligned
value if the image is not aligned to 128k in memory.

Note that this does not affect the boot protocol, which still requires 2 MiB
alignment for bare metal boot; the reduced alignment is only part of the
internal contract between the EFI stub and the kernel proper.

Signed-off-by: Ard Biesheuvel
---
 arch/arm64/include/asm/efi.h              |  7 -------
 arch/arm64/kernel/head.S                  |  3 +++
 arch/arm64/kernel/vmlinux.lds.S           | 11 ++++++++++-
 drivers/firmware/efi/libstub/arm64-stub.c |  2 +-
 include/linux/efi.h                       |  6 +-----
 5 files changed, 15 insertions(+), 14 deletions(-)

diff --git a/arch/arm64/include/asm/efi.h b/arch/arm64/include/asm/efi.h
index 439e2bc5d5d8..3177e76de708 100644
--- a/arch/arm64/include/asm/efi.h
+++ b/arch/arm64/include/asm/efi.h
@@ -54,13 +54,6 @@ efi_status_t __efi_rt_asm_wrapper(void *, const char *, ...);
 
 /* arch specific definitions used by the stub code */
 
-/*
- * In some configurations (e.g. VMAP_STACK && 64K pages), stacks built into the
- * kernel need greater alignment than we require the segments to be padded to.
- */
-#define EFI_KIMG_ALIGN	\
-	(SEGMENT_ALIGN > THREAD_ALIGN ? SEGMENT_ALIGN : THREAD_ALIGN)
-
 /*
  * On arm64, we have to ensure that the initrd ends up in the linear region,
  * which is a 1 GB aligned region of size '1UL << (VA_BITS_MIN - 1)' that is
diff --git a/arch/arm64/kernel/head.S b/arch/arm64/kernel/head.S
index 2196aad7b55b..f168e3309704 100644
--- a/arch/arm64/kernel/head.S
+++ b/arch/arm64/kernel/head.S
@@ -398,6 +398,9 @@ SYM_FUNC_END(create_kernel_mapping)
 	msr	sp_el0, \tsk
 
 	ldr	\tmp1, [\tsk, #TSK_STACK]
+#if THREAD_ALIGN > SEGMENT_ALIGN
+	bic	\tmp1, \tmp1, #THREAD_ALIGN - 1
+#endif
 	add	sp, \tmp1, #THREAD_SIZE
 	sub	sp, sp, #PT_REGS_SIZE
 
diff --git a/arch/arm64/kernel/vmlinux.lds.S b/arch/arm64/kernel/vmlinux.lds.S
index 45131e354e27..0efccdf52be2 100644
--- a/arch/arm64/kernel/vmlinux.lds.S
+++ b/arch/arm64/kernel/vmlinux.lds.S
@@ -274,7 +274,16 @@ SECTIONS
 	_data = .;
 	_sdata = .;
-	RW_DATA(L1_CACHE_BYTES, PAGE_SIZE, THREAD_ALIGN)
+#if THREAD_ALIGN > SEGMENT_ALIGN
+	/*
+	 * Add some padding for the init stack so we can fix up any potential
+	 * misalignment at runtime. In practice, this can only occur on 64k
+	 * pages configurations with CONFIG_VMAP_STACK=y.
+	 */
+	. += THREAD_ALIGN - SEGMENT_ALIGN;
+	ASSERT(. == init_stack, "init_stack not at start of RW_DATA as expected")
+#endif
+	RW_DATA(L1_CACHE_BYTES, PAGE_SIZE, SEGMENT_ALIGN)
 
 	/*
 	 * Data written with the MMU off but read with the MMU on requires
diff --git a/drivers/firmware/efi/libstub/arm64-stub.c b/drivers/firmware/efi/libstub/arm64-stub.c
index e767a5ac8c3d..6229f42c797f 100644
--- a/drivers/firmware/efi/libstub/arm64-stub.c
+++ b/drivers/firmware/efi/libstub/arm64-stub.c
@@ -97,7 +97,7 @@ efi_status_t handle_kernel_image(unsigned long *image_addr,
 	 * 2M alignment if KASLR was explicitly disabled, even if it was not
 	 * going to be activated to begin with.
 	 */
-	u64 min_kimg_align = efi_nokaslr ? MIN_KIMG_ALIGN : EFI_KIMG_ALIGN;
+	u64 min_kimg_align = efi_nokaslr ? MIN_KIMG_ALIGN : SEGMENT_ALIGN;
 
 	if (IS_ENABLED(CONFIG_RANDOMIZE_BASE)) {
 		efi_guid_t li_fixed_proto = LINUX_EFI_LOADED_IMAGE_FIXED_GUID;
diff --git a/include/linux/efi.h b/include/linux/efi.h
index 256e70e42114..1a395c24fdc0 100644
--- a/include/linux/efi.h
+++ b/include/linux/efi.h
@@ -422,11 +422,7 @@ void efi_native_runtime_setup(void);
 /*
  * This GUID may be installed onto the kernel image's handle as a NULL protocol
  * to signal to the stub that the placement of the image should be respected,
- * and moving the image in physical memory is undesirable. To ensure
- * compatibility with 64k pages kernels with virtually mapped stacks, and to
- * avoid defeating physical randomization, this protocol should only be
- * installed if the image was placed at a randomized 128k aligned address in
- * memory.
+ * and moving the image in physical memory is undesirable.
  */
 #define LINUX_EFI_LOADED_IMAGE_FIXED_GUID EFI_GUID(0xf5a37b6d, 0x3344, 0x42a5, 0xb6, 0xbb, 0x97, 0x86, 0x48, 0xc1, 0x89, 0x0a)

From patchwork Tue Oct 18 11:04:37 2022
X-Patchwork-Id: 616551
From: Ard Biesheuvel
Subject: [PATCH v4 2/6] arm64: kernel: move identity map out of .text mapping
Date: Tue, 18 Oct 2022 13:04:37 +0200
Message-Id: <20221018110441.3855148-3-ardb@kernel.org>
In-Reply-To: <20221018110441.3855148-1-ardb@kernel.org>

Reorganize the ID map slightly so that only code that is executed with the
MMU off or via the 1:1 mapping remains. This allows us to move the identity
map out of the .text segment, as it will no longer need executable
permissions via the kernel mapping.

Signed-off-by: Ard Biesheuvel
---
 arch/arm64/kernel/head.S        | 28 +++++++++++---------
 arch/arm64/kernel/vmlinux.lds.S |  2 +-
 arch/arm64/mm/proc.S            |  2 --
 3 files changed, 16 insertions(+), 16 deletions(-)

diff --git a/arch/arm64/kernel/head.S b/arch/arm64/kernel/head.S
index f168e3309704..25a84ce1700c 100644
--- a/arch/arm64/kernel/head.S
+++ b/arch/arm64/kernel/head.S
@@ -543,19 +543,6 @@ SYM_INNER_LABEL(init_el2, SYM_L_LOCAL)
 	eret
 SYM_FUNC_END(init_kernel_el)
 
-/*
- * Sets the __boot_cpu_mode flag depending on the CPU boot mode passed
- * in w0. See arch/arm64/include/asm/virt.h for more info.
- */
-SYM_FUNC_START_LOCAL(set_cpu_boot_mode_flag)
-	adr_l	x1, __boot_cpu_mode
-	cmp	w0, #BOOT_CPU_MODE_EL2
-	b.ne	1f
-	add	x1, x1, #4
-1:	str	w0, [x1]			// Save CPU boot mode
-	ret
-SYM_FUNC_END(set_cpu_boot_mode_flag)
-
 /*
  * This provides a "holding pen" for platforms to hold all secondary
  * cores are held until we're ready for them to initialise.
@@ -600,6 +587,7 @@ SYM_FUNC_START_LOCAL(secondary_startup)
 	br	x8
 SYM_FUNC_END(secondary_startup)
 
+	.text
 SYM_FUNC_START_LOCAL(__secondary_switched)
 	mov	x0, x20
 	bl	set_cpu_boot_mode_flag
@@ -628,6 +616,19 @@ SYM_FUNC_START_LOCAL(__secondary_too_slow)
 	b	__secondary_too_slow
 SYM_FUNC_END(__secondary_too_slow)
 
+/*
+ * Sets the __boot_cpu_mode flag depending on the CPU boot mode passed
+ * in w0. See arch/arm64/include/asm/virt.h for more info.
+ */
+SYM_FUNC_START_LOCAL(set_cpu_boot_mode_flag)
+	adr_l	x1, __boot_cpu_mode
+	cmp	w0, #BOOT_CPU_MODE_EL2
+	b.ne	1f
+	add	x1, x1, #4
+1:	str	w0, [x1]			// Save CPU boot mode
+	ret
+SYM_FUNC_END(set_cpu_boot_mode_flag)
+
 /*
  * The booting CPU updates the failed status @__early_cpu_boot_status,
  * with MMU turned off.
@@ -659,6 +660,7 @@ SYM_FUNC_END(__secondary_too_slow)
  * Checks if the selected granule size is supported by the CPU.
  * If it isn't, park the CPU
  */
+	.section ".idmap.text","awx"
 SYM_FUNC_START(__enable_mmu)
 	mrs	x3, ID_AA64MMFR0_EL1
 	ubfx	x3, x3, #ID_AA64MMFR0_EL1_TGRAN_SHIFT, 4
diff --git a/arch/arm64/kernel/vmlinux.lds.S b/arch/arm64/kernel/vmlinux.lds.S
index 0efccdf52be2..5002d869fa7f 100644
--- a/arch/arm64/kernel/vmlinux.lds.S
+++ b/arch/arm64/kernel/vmlinux.lds.S
@@ -168,7 +168,6 @@ SECTIONS
 			LOCK_TEXT
 			KPROBES_TEXT
 			HYPERVISOR_TEXT
-			IDMAP_TEXT
 			*(.gnu.warning)
 		. = ALIGN(16);
 		*(.got)			/* Global offset table		*/
@@ -195,6 +194,7 @@ SECTIONS
 		TRAMP_TEXT
 		HIBERNATE_TEXT
 		KEXEC_TEXT
+		IDMAP_TEXT
 		. = ALIGN(PAGE_SIZE);
 	}
diff --git a/arch/arm64/mm/proc.S b/arch/arm64/mm/proc.S
index b9ecbbae1e1a..d7ca6f23fb0d 100644
--- a/arch/arm64/mm/proc.S
+++ b/arch/arm64/mm/proc.S
@@ -110,7 +110,6 @@ SYM_FUNC_END(cpu_do_suspend)
  *
  * x0: Address of context pointer
  */
-	.pushsection ".idmap.text", "awx"
 SYM_FUNC_START(cpu_do_resume)
 	ldp	x2, x3, [x0]
 	ldp	x4, x5, [x0, #16]
@@ -166,7 +165,6 @@ alternative_else_nop_endif
 	isb
 	ret
 SYM_FUNC_END(cpu_do_resume)
-	.popsection
 #endif
 
 	.pushsection ".idmap.text", "awx"

From patchwork Tue Oct 18 11:04:38 2022
X-Patchwork-Id: 616260
From: Ard Biesheuvel
Subject: [PATCH v4 3/6] arm64: head: record the MMU state at primary entry
Date: Tue, 18 Oct 2022 13:04:38 +0200
Message-Id: <20221018110441.3855148-4-ardb@kernel.org>
In-Reply-To: <20221018110441.3855148-1-ardb@kernel.org>

Prepare for being able to deal with primary entry with the MMU and caches
enabled, by recording whether or not we entered with the MMU on in
register x19.

While at it, add pre_disable_mmu_workaround macro invocations to
init_kernel_el, as its manipulation of SCTLR_ELx may amount to disabling
of the MMU after subsequent patches.

Signed-off-by: Ard Biesheuvel
---
 arch/arm64/kernel/head.S | 18 ++++++++++++++++++
 1 file changed, 18 insertions(+)

diff --git a/arch/arm64/kernel/head.S b/arch/arm64/kernel/head.S
index 25a84ce1700c..643797b21a1c 100644
--- a/arch/arm64/kernel/head.S
+++ b/arch/arm64/kernel/head.S
@@ -77,6 +77,7 @@
  * primary lowlevel boot path:
  *
  *  Register   Scope                      Purpose
+ *  x19        primary_entry() .. start_kernel()        whether we entered with the MMU on
  *  x20        primary_entry() .. __primary_switch()    CPU boot mode
  *  x21        primary_entry() .. start_kernel()        FDT pointer passed at boot in x0
  *  x22        create_idmap() .. start_kernel()         ID map VA of the DT blob
@@ -86,6 +87,7 @@
  *  x28        create_idmap()                           callee preserved temp register
  */
 SYM_CODE_START(primary_entry)
+	bl	record_mmu_state
 	bl	preserve_boot_args
 	bl	init_kernel_el			// w0=cpu_boot_mode
 	mov	x20, x0
@@ -109,6 +111,19 @@ SYM_CODE_START(primary_entry)
 	b	__primary_switch
 SYM_CODE_END(primary_entry)
 
+SYM_CODE_START_LOCAL(record_mmu_state)
+	mrs	x19, CurrentEL
+	cmp	x19, #CurrentEL_EL2
+	mrs	x19, sctlr_el1
+	b.ne	0f
+	mrs	x19, sctlr_el2
+0:	tst	x19, #SCTLR_ELx_C		// Z := (C == 0)
+	and	x19, x19, #SCTLR_ELx_M		// isolate M bit
+	ccmp	x19, xzr, #4, ne		// Z |= (M == 0)
+	cset	x19, ne				// set x19 if !Z
+	ret
+SYM_CODE_END(record_mmu_state)
+
 /*
  * Preserve the arguments passed by the bootloader in x0 .. x3
  */
@@ -497,6 +512,7 @@ SYM_FUNC_START(init_kernel_el)
 
 SYM_INNER_LABEL(init_el1, SYM_L_LOCAL)
 	mov_q	x0, INIT_SCTLR_EL1_MMU_OFF
+	pre_disable_mmu_workaround
 	msr	sctlr_el1, x0
 	isb
 	mov_q	x0, INIT_PSTATE_EL1
@@ -529,11 +545,13 @@ SYM_INNER_LABEL(init_el2, SYM_L_LOCAL)
 	cbz	x0, 1f
 
 	/* Set a sane SCTLR_EL1, the VHE way */
+	pre_disable_mmu_workaround
 	msr_s	SYS_SCTLR_EL12, x1
 	mov	x2, #BOOT_CPU_FLAG_E2H
 	b	2f
 
 1:
+	pre_disable_mmu_workaround
 	msr	sctlr_el1, x1
 	mov	x2, xzr
 2:

From patchwork Tue Oct 18 11:04:39 2022
X-Patchwork-Id: 616550
From: Ard Biesheuvel
Subject: [PATCH v4 4/6] arm64: head: avoid cache invalidation when entering with the MMU on
Date: Tue, 18 Oct 2022 13:04:39 +0200
Message-Id: <20221018110441.3855148-5-ardb@kernel.org>
In-Reply-To: <20221018110441.3855148-1-ardb@kernel.org>

If we enter with the MMU on, there is no need for explicit cache
invalidation for stores to memory, as they will be coherent with the
caches. Let's take advantage of this, and create the ID map with the MMU
still enabled if that is how we entered, and avoid any cache invalidation
calls in that case.

Signed-off-by: Ard Biesheuvel
---
 arch/arm64/kernel/head.S | 7 +++++--
 1 file changed, 5 insertions(+), 2 deletions(-)

diff --git a/arch/arm64/kernel/head.S b/arch/arm64/kernel/head.S
index 643797b21a1c..5de2ba3539a8 100644
--- a/arch/arm64/kernel/head.S
+++ b/arch/arm64/kernel/head.S
@@ -89,9 +89,9 @@ SYM_CODE_START(primary_entry)
 	bl	record_mmu_state
 	bl	preserve_boot_args
+	bl	create_idmap
 	bl	init_kernel_el			// w0=cpu_boot_mode
 	mov	x20, x0
-	bl	create_idmap
 
 	/*
 	 * The following calls CPU setup code, see arch/arm64/mm/proc.S for
@@ -134,11 +134,13 @@ SYM_CODE_START_LOCAL(preserve_boot_args)
 	stp	x21, x1, [x0]			// x0 .. x3 at kernel entry
 	stp	x2, x3, [x0, #16]
 
+	cbnz	x19, 0f				// skip cache invalidation if MMU is on
 	dmb	sy				// needed before dc ivac with
 						// MMU off
 
 	add	x1, x0, #0x20			// 4 x 8 bytes
 	b	dcache_inval_poc		// tail call
+0:	ret
 SYM_CODE_END(preserve_boot_args)
 
 SYM_FUNC_START_LOCAL(clear_page_tables)
@@ -375,12 +377,13 @@ SYM_FUNC_START_LOCAL(create_idmap)
 	 * accesses (MMU disabled), invalidate those tables again to
 	 * remove any speculatively loaded cache lines.
 	 */
+	cbnz	x19, 0f				// skip cache invalidation if MMU is on
 	dmb	sy
 
 	adrp	x0, init_idmap_pg_dir
 	adrp	x1, init_idmap_pg_end
 	bl	dcache_inval_poc
-	ret	x28
+0:	ret	x28
 SYM_FUNC_END(create_idmap)
 
 SYM_FUNC_START_LOCAL(create_kernel_mapping)

From patchwork Tue Oct 18 11:04:40 2022
X-Patchwork-Id: 616259
From: Ard Biesheuvel
Subject: [PATCH v4 5/6] arm64: head: clean the ID map page to the PoC
Date: Tue, 18 Oct 2022 13:04:40 +0200
Message-Id: <20221018110441.3855148-6-ardb@kernel.org>
In-Reply-To: <20221018110441.3855148-1-ardb@kernel.org>

If we enter with the MMU and caches enabled, the caller may not have
performed any cache maintenance. So clean the ID mapped page to the PoC,
to ensure that instruction and data accesses with the MMU off see the
correct data.
Note that this means primary_entry() itself needs to be moved into the ID
map as well, as we will return from init_kernel_el() with the MMU and
caches off.

Signed-off-by: Ard Biesheuvel
---
 arch/arm64/kernel/head.S | 14 +++++++++++++-
 1 file changed, 13 insertions(+), 1 deletion(-)

diff --git a/arch/arm64/kernel/head.S b/arch/arm64/kernel/head.S
index 5de2ba3539a8..c8b8ed8477c1 100644
--- a/arch/arm64/kernel/head.S
+++ b/arch/arm64/kernel/head.S
@@ -70,7 +70,7 @@
 
 	__EFI_PE_HEADER
 
-	__INIT
+	.section ".idmap.text","awx"
 
 /*
  * The following callee saved general purpose registers are used on the
@@ -90,6 +90,17 @@ SYM_CODE_START(primary_entry)
 	bl	record_mmu_state
 	bl	preserve_boot_args
 	bl	create_idmap
+
+	/*
+	 * If we entered with the MMU and caches on, clean the ID mapped part
+	 * of the primary boot code to the PoC so we can safely execute it with
+	 * the MMU off.
+	 */
+	cbz	x19, 0f
+	adrp	x0, __idmap_text_start
+	adr_l	x1, __idmap_text_end
+	bl	dcache_clean_poc
+0:
 	bl	init_kernel_el			// w0=cpu_boot_mode
 	mov	x20, x0
 
@@ -111,6 +122,7 @@ SYM_CODE_START(primary_entry)
 	b	__primary_switch
 SYM_CODE_END(primary_entry)
 
+	__INIT
 SYM_CODE_START_LOCAL(record_mmu_state)
 	mrs	x19, CurrentEL
 	cmp	x19, #CurrentEL_EL2

From patchwork Tue Oct 18 11:04:41 2022
X-Patchwork-Id: 616549
From: Ard Biesheuvel
Subject: [PATCH v4 6/6] arm64: efi/libstub: enter with the MMU on
Date: Tue, 18 Oct 2022 13:04:41 +0200
Message-Id: <20221018110441.3855148-7-ardb@kernel.org>
In-Reply-To: <20221018110441.3855148-1-ardb@kernel.org>

Instead of disabling the MMU and caches before jumping to the kernel's
entry point, just call it directly, and keep the MMU and caches enabled.
This removes the need for any unconditional cache invalidation to the PoC
in the entry path (although cache maintenance of the code portion of the
image is still necessary for I/D coherency if the image was moved around
in memory). It also allows us to get rid of the asm entry routine, as
doing the jump is easily done from C code.
Signed-off-by: Ard Biesheuvel
---
 arch/arm64/kernel/image-vars.h             |  5 +-
 arch/arm64/mm/cache.S                      |  5 +-
 drivers/firmware/efi/libstub/arm64-entry.S | 57 --------------------
 drivers/firmware/efi/libstub/arm64-stub.c  | 17 +++++-
 4 files changed, 22 insertions(+), 62 deletions(-)

diff --git a/arch/arm64/kernel/image-vars.h b/arch/arm64/kernel/image-vars.h
index 74d20835cf91..13e082e946c5 100644
--- a/arch/arm64/kernel/image-vars.h
+++ b/arch/arm64/kernel/image-vars.h
@@ -10,7 +10,7 @@
 #error This file should only be included in vmlinux.lds.S
 #endif

-PROVIDE(__efistub_primary_entry_offset	= primary_entry - _text);
+PROVIDE(__efistub_primary_entry		= primary_entry);

 /*
  * The EFI stub has its own symbol namespace prefixed by __efistub_, to
@@ -28,10 +28,11 @@ PROVIDE(__efistub_strnlen		= __pi_strnlen);
 PROVIDE(__efistub_strcmp		= __pi_strcmp);
 PROVIDE(__efistub_strncmp		= __pi_strncmp);
 PROVIDE(__efistub_strrchr		= __pi_strrchr);
-PROVIDE(__efistub_dcache_clean_poc	= __pi_dcache_clean_poc);
+PROVIDE(__efistub_caches_clean_inval_pou = __pi_caches_clean_inval_pou);

 PROVIDE(__efistub__text			= _text);
 PROVIDE(__efistub__end			= _end);
+PROVIDE(__efistub___inittext_end	= __inittext_end);
 PROVIDE(__efistub__edata		= _edata);
 PROVIDE(__efistub_screen_info		= screen_info);
 PROVIDE(__efistub__ctype		= _ctype);

diff --git a/arch/arm64/mm/cache.S b/arch/arm64/mm/cache.S
index 081058d4e436..8c3b3ee9b1d7 100644
--- a/arch/arm64/mm/cache.S
+++ b/arch/arm64/mm/cache.S
@@ -52,10 +52,11 @@ alternative_else_nop_endif
 *	- start   - virtual start address of region
 *	- end     - virtual end address of region
 */
-SYM_FUNC_START(caches_clean_inval_pou)
+SYM_FUNC_START(__pi_caches_clean_inval_pou)
 	caches_clean_inval_pou_macro
 	ret
-SYM_FUNC_END(caches_clean_inval_pou)
+SYM_FUNC_END(__pi_caches_clean_inval_pou)
+SYM_FUNC_ALIAS(caches_clean_inval_pou, __pi_caches_clean_inval_pou)

 /*
 *	caches_clean_inval_user_pou(start,end)

diff --git a/drivers/firmware/efi/libstub/arm64-entry.S b/drivers/firmware/efi/libstub/arm64-entry.S
deleted file mode 100644
index 4524525ab314..000000000000
--- a/drivers/firmware/efi/libstub/arm64-entry.S
+++ /dev/null
@@ -1,57 +0,0 @@
-/* SPDX-License-Identifier: GPL-2.0-only */
-/*
- * EFI entry point.
- *
- * Copyright (C) 2013, 2014 Red Hat, Inc.
- * Author: Mark Salter
- */
-#include <linux/linkage.h>
-#include <asm/assembler.h>
-
-SYM_CODE_START(efi_enter_kernel)
-	/*
-	 * efi_pe_entry() will have copied the kernel image if necessary and we
-	 * end up here with device tree address in x1 and the kernel entry
-	 * point stored in x0. Save those values in registers which are
-	 * callee preserved.
-	 */
-	ldr	w2, =primary_entry_offset
-	add	x19, x0, x2		// relocated Image entrypoint
-
-	mov	x0, x1			// DTB address
-	mov	x1, xzr
-	mov	x2, xzr
-	mov	x3, xzr
-
-	/*
-	 * Clean the remainder of this routine to the PoC
-	 * so that we can safely disable the MMU and caches.
-	 */
-	adr	x4, 1f
-	dc	cvac, x4
-	dsb	sy
-
-	/* Turn off Dcache and MMU */
-	mrs	x4, CurrentEL
-	cmp	x4, #CurrentEL_EL2
-	mrs	x4, sctlr_el1
-	b.ne	0f
-	mrs	x4, sctlr_el2
-0:	bic	x4, x4, #SCTLR_ELx_M
-	bic	x4, x4, #SCTLR_ELx_C
-	b.eq	1f
-	b	2f
-
-	.balign	32
-1:	pre_disable_mmu_workaround
-	msr	sctlr_el2, x4
-	isb
-	br	x19			// jump to kernel entrypoint
-
-2:	pre_disable_mmu_workaround
-	msr	sctlr_el1, x4
-	isb
-	br	x19			// jump to kernel entrypoint
-
-	.org	1b + 32
-SYM_CODE_END(efi_enter_kernel)

diff --git a/drivers/firmware/efi/libstub/arm64-stub.c b/drivers/firmware/efi/libstub/arm64-stub.c
index 6229f42c797f..9c7e2c1aace2 100644
--- a/drivers/firmware/efi/libstub/arm64-stub.c
+++ b/drivers/firmware/efi/libstub/arm64-stub.c
@@ -86,7 +86,7 @@ efi_status_t handle_kernel_image(unsigned long *image_addr,
 			     efi_handle_t image_handle)
 {
 	efi_status_t status;
-	unsigned long kernel_size, kernel_memsize = 0;
+	unsigned long kernel_size, kernel_codesize, kernel_memsize;
 	u32 phys_seed = 0;

 	/*
@@ -130,6 +130,7 @@ efi_status_t handle_kernel_image(unsigned long *image_addr,
 			 SEGMENT_ALIGN >> 10);

 	kernel_size = _edata - _text;
+	kernel_codesize = __inittext_end - _text;
 	kernel_memsize = kernel_size + (_end - _edata);
 	*reserve_size = kernel_memsize;

@@ -173,6 +174,7 @@ efi_status_t handle_kernel_image(unsigned long *image_addr,

 	*image_addr = *reserve_addr;
 	memcpy((void *)*image_addr, _text, kernel_size);
+	caches_clean_inval_pou(*image_addr, *image_addr + kernel_codesize);

 clean_image_to_poc:
 	/*
@@ -184,3 +186,16 @@ efi_status_t handle_kernel_image(unsigned long *image_addr,
 	 */

 	return EFI_SUCCESS;
 }
+
+asmlinkage void primary_entry(void);
+
+void __noreturn efi_enter_kernel(unsigned long entrypoint,
+				 unsigned long fdt_addr,
+				 unsigned long fdt_size)
+{
+	void (* __noreturn enter_kernel)(u64, u64, u64, u64);
+	u64 offset = (char *)primary_entry - _text;
+
+	enter_kernel = (void *)entrypoint + offset;
+	enter_kernel(fdt_addr, 0, 0, 0);
+}