From patchwork Sat Aug 27 15:58:46 2022
From: Ard Biesheuvel
To: linux-arm-kernel@lists.infradead.org
Cc: will@kernel.org, catalin.marinas@arm.com, maz@kernel.org, mark.rutland@arm.com, linux-efi@vger.kernel.org, keescook@chromium.org, Ard Biesheuvel
Subject: [PATCH v3 1/7] arm64: lds: reduce effective minimum image alignment to 64k
Date: Sat, 27 Aug 2022 17:58:46 +0200
Message-Id: <20220827155852.3338551-2-ardb@kernel.org>
In-Reply-To: <20220827155852.3338551-1-ardb@kernel.org>
References: <20220827155852.3338551-1-ardb@kernel.org>

Our segment alignment is 64k for all configurations, and coincidentally, this is the
largest alignment supported by the PE/COFF executable format used by
EFI. This means that generally, there is no need to move the image
around in memory after it has been loaded by the firmware, which can be
advantageous as it also permits us to rely on the memory attributes set
by the firmware (R-X for [_text, __inittext_end] and RW- for
[__initdata_begin, _end]).

However, the minimum alignment of the image is actually 128k on 64k
pages configurations with CONFIG_VMAP_STACK=y, due to the existence of
a single 128k aligned object in the image, which is the stack of the
init task.

Let's work around this by adding some padding before the init stack
allocation, so we can round down the stack pointer to a suitably
aligned value if the image is not aligned to 128k in memory.

Note that this does not affect the boot protocol, which still requires
2 MiB alignment for bare metal boot, but is only part of the internal
contract between the EFI stub and the kernel proper.

Signed-off-by: Ard Biesheuvel
---
 arch/arm64/include/asm/efi.h              |  7 -------
 arch/arm64/kernel/head.S                  |  3 +++
 arch/arm64/kernel/vmlinux.lds.S           | 11 ++++++++++-
 drivers/firmware/efi/libstub/arm64-stub.c |  2 +-
 include/linux/efi.h                       |  6 +-----
 5 files changed, 15 insertions(+), 14 deletions(-)

diff --git a/arch/arm64/include/asm/efi.h b/arch/arm64/include/asm/efi.h
index 439e2bc5d5d8..3177e76de708 100644
--- a/arch/arm64/include/asm/efi.h
+++ b/arch/arm64/include/asm/efi.h
@@ -54,13 +54,6 @@ efi_status_t __efi_rt_asm_wrapper(void *, const char *, ...);
 /* arch specific definitions used by the stub code */
-/*
- * In some configurations (e.g. VMAP_STACK && 64K pages), stacks built into the
- * kernel need greater alignment than we require the segments to be padded to.
- */
-#define EFI_KIMG_ALIGN \
-	(SEGMENT_ALIGN > THREAD_ALIGN ? SEGMENT_ALIGN : THREAD_ALIGN)
-
 /*
  * On arm64, we have to ensure that the initrd ends up in the linear region,
  * which is a 1 GB aligned region of size '1UL << (VA_BITS_MIN - 1)' that is
diff --git a/arch/arm64/kernel/head.S b/arch/arm64/kernel/head.S
index cefe6a73ee54..bd7c04f1f993 100644
--- a/arch/arm64/kernel/head.S
+++ b/arch/arm64/kernel/head.S
@@ -396,6 +396,9 @@ SYM_FUNC_END(create_kernel_mapping)
 	msr	sp_el0, \tsk
 	ldr	\tmp1, [\tsk, #TSK_STACK]
+#if THREAD_ALIGN > SEGMENT_ALIGN
+	bic	\tmp1, \tmp1, #THREAD_ALIGN - 1
+#endif
 	add	sp, \tmp1, #THREAD_SIZE
 	sub	sp, sp, #PT_REGS_SIZE
diff --git a/arch/arm64/kernel/vmlinux.lds.S b/arch/arm64/kernel/vmlinux.lds.S
index 45131e354e27..0efccdf52be2 100644
--- a/arch/arm64/kernel/vmlinux.lds.S
+++ b/arch/arm64/kernel/vmlinux.lds.S
@@ -274,7 +274,16 @@ SECTIONS
 	_data = .;
 	_sdata = .;
-	RW_DATA(L1_CACHE_BYTES, PAGE_SIZE, THREAD_ALIGN)
+#if THREAD_ALIGN > SEGMENT_ALIGN
+	/*
+	 * Add some padding for the init stack so we can fix up any potential
+	 * misalignment at runtime. In practice, this can only occur on 64k
+	 * pages configurations with CONFIG_VMAP_STACK=y.
+	 */
+	. += THREAD_ALIGN - SEGMENT_ALIGN;
+	ASSERT(. == init_stack, "init_stack not at start of RW_DATA as expected")
+#endif
+	RW_DATA(L1_CACHE_BYTES, PAGE_SIZE, SEGMENT_ALIGN)
 	/*
 	 * Data written with the MMU off but read with the MMU on requires
diff --git a/drivers/firmware/efi/libstub/arm64-stub.c b/drivers/firmware/efi/libstub/arm64-stub.c
index 577173ee1f83..ad7392e6c200 100644
--- a/drivers/firmware/efi/libstub/arm64-stub.c
+++ b/drivers/firmware/efi/libstub/arm64-stub.c
@@ -98,7 +98,7 @@ efi_status_t handle_kernel_image(unsigned long *image_addr,
 	 * 2M alignment if KASLR was explicitly disabled, even if it was not
 	 * going to be activated to begin with.
 	 */
-	u64 min_kimg_align = efi_nokaslr ? MIN_KIMG_ALIGN : EFI_KIMG_ALIGN;
+	u64 min_kimg_align = efi_nokaslr ? MIN_KIMG_ALIGN : SEGMENT_ALIGN;
 	if (IS_ENABLED(CONFIG_RANDOMIZE_BASE)) {
 		efi_guid_t li_fixed_proto = LINUX_EFI_LOADED_IMAGE_FIXED_GUID;
diff --git a/include/linux/efi.h b/include/linux/efi.h
index d2b84c2fec39..d7c87666baf9 100644
--- a/include/linux/efi.h
+++ b/include/linux/efi.h
@@ -416,11 +416,7 @@ void efi_native_runtime_setup(void);
 /*
  * This GUID may be installed onto the kernel image's handle as a NULL protocol
  * to signal to the stub that the placement of the image should be respected,
- * and moving the image in physical memory is undesirable. To ensure
- * compatibility with 64k pages kernels with virtually mapped stacks, and to
- * avoid defeating physical randomization, this protocol should only be
- * installed if the image was placed at a randomized 128k aligned address in
- * memory.
+ * and moving the image in physical memory is undesirable.
  */
 #define LINUX_EFI_LOADED_IMAGE_FIXED_GUID EFI_GUID(0xf5a37b6d, 0x3344, 0x42a5, 0xb6, 0xbb, 0x97, 0x86, 0x48, 0xc1, 0x89, 0x0a)

From patchwork Sat Aug 27 15:58:47 2022
From: Ard Biesheuvel
To: linux-arm-kernel@lists.infradead.org
Cc: will@kernel.org, catalin.marinas@arm.com, maz@kernel.org, mark.rutland@arm.com, linux-efi@vger.kernel.org, keescook@chromium.org, Ard Biesheuvel
Subject: [PATCH v3 2/7] arm64: kernel: move ID map out of .text mapping
Date: Sat, 27 Aug 2022 17:58:47 +0200
Message-Id: <20220827155852.3338551-3-ardb@kernel.org>
In-Reply-To: <20220827155852.3338551-1-ardb@kernel.org>
References: <20220827155852.3338551-1-ardb@kernel.org>

Reorganize the ID map slightly so that only code that is executed via
the 1:1 mapping remains. This allows us to move the ID map out of the
.text segment, as it will no longer need executable permissions via the
kernel mapping.

Signed-off-by: Ard Biesheuvel
---
 arch/arm64/kernel/head.S        | 5 ++++-
 arch/arm64/kernel/vmlinux.lds.S | 2 +-
 arch/arm64/mm/proc.S            | 2 --
 3 files changed, 5 insertions(+), 4 deletions(-)

diff --git a/arch/arm64/kernel/head.S b/arch/arm64/kernel/head.S
index bd7c04f1f993..cfc7ba25bf87 100644
--- a/arch/arm64/kernel/head.S
+++ b/arch/arm64/kernel/head.S
@@ -474,7 +474,7 @@ SYM_FUNC_END(__primary_switched)
  * end early head section, begin head code that is also used for
  * hotplug and needs to have the same protections as the text region
  */
-	.section ".idmap.text","awx"
+	.text
 /*
  * Starting from EL2 or EL1, configure the CPU to execute at the highest
@@ -554,6 +554,7 @@ SYM_FUNC_START_LOCAL(set_cpu_boot_mode_flag)
 	ret
 SYM_FUNC_END(set_cpu_boot_mode_flag)
+	.section ".idmap.text","awx"
 /*
  * This provides a "holding pen" for platforms to hold all secondary
  * cores are held until we're ready for them to initialise.
@@ -598,6 +599,7 @@ SYM_FUNC_START_LOCAL(secondary_startup)
 	br	x8
 SYM_FUNC_END(secondary_startup)
+	.text
 SYM_FUNC_START_LOCAL(__secondary_switched)
 	mov	x0, x20
 	bl	set_cpu_boot_mode_flag
@@ -657,6 +659,7 @@ SYM_FUNC_END(__secondary_too_slow)
  * Checks if the selected granule size is supported by the CPU.
 * If it isn't, park the CPU
 */
+	.section ".idmap.text","awx"
 SYM_FUNC_START(__enable_mmu)
 	mrs	x3, ID_AA64MMFR0_EL1
 	ubfx	x3, x3, #ID_AA64MMFR0_TGRAN_SHIFT, 4
diff --git a/arch/arm64/kernel/vmlinux.lds.S b/arch/arm64/kernel/vmlinux.lds.S
index 0efccdf52be2..5002d869fa7f 100644
--- a/arch/arm64/kernel/vmlinux.lds.S
+++ b/arch/arm64/kernel/vmlinux.lds.S
@@ -168,7 +168,6 @@ SECTIONS
 			LOCK_TEXT
 			KPROBES_TEXT
 			HYPERVISOR_TEXT
-			IDMAP_TEXT
 			*(.gnu.warning)
 		. = ALIGN(16);
 		*(.got)			/* Global offset table */
@@ -195,6 +194,7 @@ SECTIONS
 		TRAMP_TEXT
 		HIBERNATE_TEXT
 		KEXEC_TEXT
+		IDMAP_TEXT
 		. = ALIGN(PAGE_SIZE);
 	}
diff --git a/arch/arm64/mm/proc.S b/arch/arm64/mm/proc.S
index 7837a69524c5..113a4fedf5b8 100644
--- a/arch/arm64/mm/proc.S
+++ b/arch/arm64/mm/proc.S
@@ -107,7 +107,6 @@ SYM_FUNC_END(cpu_do_suspend)
 *
 * x0: Address of context pointer
 */
-	.pushsection ".idmap.text", "awx"
 SYM_FUNC_START(cpu_do_resume)
 	ldp	x2, x3, [x0]
 	ldp	x4, x5, [x0, #16]
@@ -163,7 +162,6 @@ alternative_else_nop_endif
 	isb
 	ret
 SYM_FUNC_END(cpu_do_resume)
-	.popsection
 #endif
 	.pushsection ".idmap.text", "awx"

From patchwork Sat Aug 27 15:58:48 2022
From: Ard Biesheuvel
To: linux-arm-kernel@lists.infradead.org
Cc: will@kernel.org, catalin.marinas@arm.com, maz@kernel.org, mark.rutland@arm.com, linux-efi@vger.kernel.org, keescook@chromium.org, Ard Biesheuvel
Subject: [PATCH v3 3/7] arm64: head: record the MMU state at primary entry
Date: Sat, 27 Aug 2022 17:58:48 +0200
Message-Id: <20220827155852.3338551-4-ardb@kernel.org>
In-Reply-To: <20220827155852.3338551-1-ardb@kernel.org>
References: <20220827155852.3338551-1-ardb@kernel.org>

Prepare for being able to deal with primary entry with the MMU and
caches enabled, by recording whether or not we entered with the MMU on
in register x19.

While at it, add pre_disable_mmu_workaround macro invocations to
init_kernel_el, as its manipulation of SCTLR_ELx may amount to
disabling of the MMU after subsequent patches.

Signed-off-by: Ard Biesheuvel
---
 arch/arm64/kernel/head.S | 17 +++++++++++++++++
 1 file changed, 17 insertions(+)

diff --git a/arch/arm64/kernel/head.S b/arch/arm64/kernel/head.S
index cfc7ba25bf87..8e26f2deb78b 100644
--- a/arch/arm64/kernel/head.S
+++ b/arch/arm64/kernel/head.S
@@ -77,6 +77,7 @@
  * primary lowlevel boot path:
  *
  *  Register   Scope                      Purpose
+ *  x19        primary_entry() .. start_kernel()        whether we entered with the MMU on
 *  x20        primary_entry() .. __primary_switch()    CPU boot mode
 *  x21        primary_entry() .. start_kernel()        FDT pointer passed at boot in x0
 *  x22        create_idmap() .. start_kernel()         ID map VA of the DT blob
@@ -86,6 +87,7 @@
 *  x28        create_idmap()                           callee preserved temp register
 */
 SYM_CODE_START(primary_entry)
+	bl	record_mmu_state
 	bl	preserve_boot_args
 	bl	init_kernel_el			// w0=cpu_boot_mode
 	mov	x20, x0
@@ -109,6 +111,18 @@ SYM_CODE_START(primary_entry)
 	b	__primary_switch
 SYM_CODE_END(primary_entry)
+SYM_CODE_START_LOCAL(record_mmu_state)
+	mrs	x19, CurrentEL
+	cmp	x19, #CurrentEL_EL2
+	mrs	x19, sctlr_el1
+	b.ne	0f
+	mrs	x19, sctlr_el2
+0:	and	x19, x19, x19, lsr #2	// BIT(n) &= BIT(n + 2)
+	tst	x19, #SCTLR_ELx_M	// M(0) and C(2) both set?
+	cset	w19, ne
+	ret
+SYM_CODE_END(record_mmu_state)
+
 /*
  * Preserve the arguments passed by the bootloader in x0 .. x3
 */
@@ -495,6 +509,7 @@ SYM_FUNC_START(init_kernel_el)
 SYM_INNER_LABEL(init_el1, SYM_L_LOCAL)
 	mov_q	x0, INIT_SCTLR_EL1_MMU_OFF
+	pre_disable_mmu_workaround
 	msr	sctlr_el1, x0
 	isb
 	mov_q	x0, INIT_PSTATE_EL1
@@ -527,11 +542,13 @@ SYM_INNER_LABEL(init_el2, SYM_L_LOCAL)
 	cbz	x0, 1f
 	/* Set a sane SCTLR_EL1, the VHE way */
+	pre_disable_mmu_workaround
 	msr_s	SYS_SCTLR_EL12, x1
 	mov	x2, #BOOT_CPU_FLAG_E2H
 	b	2f
 1:
+	pre_disable_mmu_workaround
 	msr	sctlr_el1, x1
 	mov	x2, xzr
 2:

From patchwork Sat Aug 27 15:58:49 2022
From: Ard Biesheuvel
To: linux-arm-kernel@lists.infradead.org
Cc: will@kernel.org, catalin.marinas@arm.com, maz@kernel.org, mark.rutland@arm.com, linux-efi@vger.kernel.org, keescook@chromium.org, Ard Biesheuvel
Subject: [PATCH v3 4/7] arm64: head: avoid cache invalidation when entering with the MMU on
Date: Sat, 27 Aug 2022 17:58:49 +0200
Message-Id: <20220827155852.3338551-5-ardb@kernel.org>
In-Reply-To: <20220827155852.3338551-1-ardb@kernel.org>
References: <20220827155852.3338551-1-ardb@kernel.org>

If we enter with the MMU on, there is no need for explicit cache
invalidation for stores to memory, as they will be coherent with the
caches.

Let's take advantage of this, and create the ID map with the MMU still
enabled if that is how we entered, and avoid any cache invalidation
calls in that case.

Signed-off-by: Ard Biesheuvel
---
 arch/arm64/kernel/head.S | 7 +++++--
 1 file changed, 5 insertions(+), 2 deletions(-)

diff --git a/arch/arm64/kernel/head.S b/arch/arm64/kernel/head.S
index 8e26f2deb78b..4c5a5692c1e4 100644
--- a/arch/arm64/kernel/head.S
+++ b/arch/arm64/kernel/head.S
@@ -89,9 +89,9 @@ SYM_CODE_START(primary_entry)
 	bl	record_mmu_state
 	bl	preserve_boot_args
+	bl	create_idmap
 	bl	init_kernel_el			// w0=cpu_boot_mode
 	mov	x20, x0
-	bl	create_idmap
 	/*
 	 * The following calls CPU setup code, see arch/arm64/mm/proc.S for
@@ -133,11 +133,13 @@ SYM_CODE_START_LOCAL(preserve_boot_args)
 	stp	x21, x1, [x0]			// x0 .. x3 at kernel entry
 	stp	x2, x3, [x0, #16]
+	cbnz	x19, 0f				// skip cache invalidation if MMU is on
 	dmb	sy				// needed before dc ivac with
 						// MMU off
 	add	x1, x0, #0x20			// 4 x 8 bytes
 	b	dcache_inval_poc		// tail call
+0:	ret
 SYM_CODE_END(preserve_boot_args)
 SYM_FUNC_START_LOCAL(clear_page_tables)
@@ -374,12 +376,13 @@ SYM_FUNC_START_LOCAL(create_idmap)
 	 * accesses (MMU disabled), invalidate those tables again to
 	 * remove any speculatively loaded cache lines.
 	 */
+	cbnz	x19, 0f				// skip cache invalidation if MMU is on
 	dmb	sy
 	adrp	x0, init_idmap_pg_dir
 	adrp	x1, init_idmap_pg_end
 	bl	dcache_inval_poc
-	ret	x28
+0:	ret	x28
 SYM_FUNC_END(create_idmap)
 SYM_FUNC_START_LOCAL(create_kernel_mapping)

From patchwork Sat Aug 27 15:58:50 2022
From: Ard Biesheuvel
To: linux-arm-kernel@lists.infradead.org
Cc: will@kernel.org, catalin.marinas@arm.com, maz@kernel.org, mark.rutland@arm.com, linux-efi@vger.kernel.org, keescook@chromium.org, Ard Biesheuvel
Subject: [PATCH v3 5/7] arm64: head: clean the ID map page to the PoC
Date: Sat, 27 Aug 2022 17:58:50 +0200
Message-Id: <20220827155852.3338551-6-ardb@kernel.org>
In-Reply-To: <20220827155852.3338551-1-ardb@kernel.org>
References: <20220827155852.3338551-1-ardb@kernel.org>

If we enter with the MMU and caches enabled, the caller may not have
performed any cache maintenance. So clean the ID mapped page to the
PoC, to ensure that instruction and data accesses with the MMU off see
the correct data.

Note that this means primary_entry() itself needs to be moved into the
ID map as well, as we will return from init_kernel_el() with the MMU
and caches off.

Signed-off-by: Ard Biesheuvel
---
 arch/arm64/kernel/head.S | 14 +++++++++++++-
 1 file changed, 13 insertions(+), 1 deletion(-)

diff --git a/arch/arm64/kernel/head.S b/arch/arm64/kernel/head.S
index 4c5a5692c1e4..c8862e4bc45e 100644
--- a/arch/arm64/kernel/head.S
+++ b/arch/arm64/kernel/head.S
@@ -70,7 +70,7 @@
 	__EFI_PE_HEADER
-	__INIT
+	.section ".idmap.text","awx"
 /*
  * The following callee saved general purpose registers are used on the
@@ -90,6 +90,17 @@ SYM_CODE_START(primary_entry)
 	bl	record_mmu_state
 	bl	preserve_boot_args
 	bl	create_idmap
+
+	/*
+	 * If we entered with the MMU and caches on, clean the ID mapped part
+	 * of the primary boot code to the PoC so we can safely execute it with
+	 * the MMU off.
+	 */
+	cbz	x19, 0f
+	adrp	x0, __idmap_text_start
+	adr_l	x1, __idmap_text_end
+	bl	dcache_clean_poc
+0:
 	bl	init_kernel_el			// w0=cpu_boot_mode
 	mov	x20, x0
@@ -111,6 +122,7 @@ SYM_CODE_START(primary_entry)
 	b	__primary_switch
 SYM_CODE_END(primary_entry)
+	__INIT
 SYM_CODE_START_LOCAL(record_mmu_state)
 	mrs	x19, CurrentEL
 	cmp	x19, #CurrentEL_EL2

From patchwork Sat Aug 27 15:58:51 2022
From: Ard Biesheuvel
To: linux-arm-kernel@lists.infradead.org
Cc: will@kernel.org, catalin.marinas@arm.com, maz@kernel.org, mark.rutland@arm.com, linux-efi@vger.kernel.org, keescook@chromium.org, Ard Biesheuvel
Subject: [PATCH v3 6/7] arm64: efi/libstub: use EFI_LOADER_CODE region when moving the kernel in memory
Date: Sat, 27 Aug 2022 17:58:51 +0200
Message-Id: <20220827155852.3338551-7-ardb@kernel.org>
In-Reply-To: <20220827155852.3338551-1-ardb@kernel.org>
References: <20220827155852.3338551-1-ardb@kernel.org>

The EFI spec is not very clear about which permissions are being given
when allocating pages of a certain type. However, it is quite obvious
that EFI_LOADER_CODE is more likely to permit execution than
EFI_LOADER_DATA, which becomes relevant once we permit booting the
kernel proper with the firmware's 1:1 mapping still active.

(Note that GRUB for arm64 uses EFI_LOADER_CODE allocations for its
executable modules, and therefore relies on these regions having the
right permissions as well.)

Signed-off-by: Ard Biesheuvel
---
 drivers/firmware/efi/libstub/alignedmem.c  | 5 +++--
 drivers/firmware/efi/libstub/arm64-stub.c  | 6 ++++--
 drivers/firmware/efi/libstub/efistub.h     | 6 ++++--
 drivers/firmware/efi/libstub/mem.c         | 3 ++-
 drivers/firmware/efi/libstub/randomalloc.c | 5 +++--
 5 files changed, 16 insertions(+), 9 deletions(-)

diff --git a/drivers/firmware/efi/libstub/alignedmem.c b/drivers/firmware/efi/libstub/alignedmem.c
index 1de9878ddd3a..174832661251 100644
--- a/drivers/firmware/efi/libstub/alignedmem.c
+++ b/drivers/firmware/efi/libstub/alignedmem.c
@@ -22,7 +22,8 @@
  * Return: status code
  */
 efi_status_t efi_allocate_pages_aligned(unsigned long size, unsigned long *addr,
-					unsigned long max, unsigned long align)
+					unsigned long max, unsigned long align,
+					int memory_type)
 {
 	efi_physical_addr_t alloc_addr;
 	efi_status_t status;
@@ -36,7 +37,7 @@ efi_status_t efi_allocate_pages_aligned(unsigned long size, unsigned long *addr,
 	slack = align / EFI_PAGE_SIZE - 1;
 	status = efi_bs_call(allocate_pages, EFI_ALLOCATE_MAX_ADDRESS,
-			     EFI_LOADER_DATA, size / EFI_PAGE_SIZE + slack,
+			     memory_type, size / EFI_PAGE_SIZE + slack,
 			     &alloc_addr);
 	if (status != EFI_SUCCESS)
 		return status;
diff --git a/drivers/firmware/efi/libstub/arm64-stub.c b/drivers/firmware/efi/libstub/arm64-stub.c
index ad7392e6c200..f32e89b4049f 100644
--- a/drivers/firmware/efi/libstub/arm64-stub.c
+++ b/drivers/firmware/efi/libstub/arm64-stub.c
@@ -140,7 +140,8 @@ efi_status_t handle_kernel_image(unsigned long *image_addr,
 		 * locate the kernel at a randomized offset in physical memory.
 		 */
 		status = efi_random_alloc(*reserve_size, min_kimg_align,
-					  reserve_addr, phys_seed);
+					  reserve_addr, phys_seed,
+					  EFI_LOADER_CODE);
 		if (status != EFI_SUCCESS)
 			efi_warn("efi_random_alloc() failed: 0x%lx\n", status);
 	} else {
@@ -161,7 +162,8 @@ efi_status_t handle_kernel_image(unsigned long *image_addr,
 	}
 	status = efi_allocate_pages_aligned(*reserve_size, reserve_addr,
-					    ULONG_MAX, min_kimg_align);
+					    ULONG_MAX, min_kimg_align,
+					    EFI_LOADER_CODE);
 	if (status != EFI_SUCCESS) {
 		efi_err("Failed to relocate kernel\n");
diff --git a/drivers/firmware/efi/libstub/efistub.h b/drivers/firmware/efi/libstub/efistub.h
index b0ae0a454404..ab9e990447d3 100644
--- a/drivers/firmware/efi/libstub/efistub.h
+++ b/drivers/firmware/efi/libstub/efistub.h
@@ -871,7 +871,8 @@ void efi_get_virtmap(efi_memory_desc_t *memory_map, unsigned long map_size,
 efi_status_t efi_get_random_bytes(unsigned long size, u8 *out);
 efi_status_t efi_random_alloc(unsigned long size, unsigned long align,
-			      unsigned long *addr, unsigned long random_seed);
+			      unsigned long *addr, unsigned long random_seed,
+			      int memory_type);
 efi_status_t check_platform_features(void);
@@ -895,7 +896,8 @@ efi_status_t efi_allocate_pages(unsigned long size, unsigned long *addr,
 				unsigned long max);
 efi_status_t efi_allocate_pages_aligned(unsigned long size, unsigned long *addr,
-					unsigned long max, unsigned long align);
+					unsigned long max, unsigned long align,
+					int memory_type);
 efi_status_t efi_low_alloc_above(unsigned long size, unsigned long align,
 				 unsigned long *addr, unsigned long min);
diff --git a/drivers/firmware/efi/libstub/mem.c b/drivers/firmware/efi/libstub/mem.c
index feef8d4be113..1e543c90c0ea 100644
--- a/drivers/firmware/efi/libstub/mem.c
+++ b/drivers/firmware/efi/libstub/mem.c
@@ -96,7 +96,8 @@ efi_status_t efi_allocate_pages(unsigned long size, unsigned long *addr,
 	if (EFI_ALLOC_ALIGN > EFI_PAGE_SIZE)
 		return efi_allocate_pages_aligned(size, addr, max,
-						  EFI_ALLOC_ALIGN);
+						  EFI_ALLOC_ALIGN,
+						  EFI_LOADER_DATA);
 	alloc_addr = ALIGN_DOWN(max + 1, EFI_ALLOC_ALIGN) - 1;
 	status = efi_bs_call(allocate_pages, EFI_ALLOCATE_MAX_ADDRESS,
diff --git a/drivers/firmware/efi/libstub/randomalloc.c b/drivers/firmware/efi/libstub/randomalloc.c
index 715f37479154..ca859e63bac2 100644
--- a/drivers/firmware/efi/libstub/randomalloc.c
+++ b/drivers/firmware/efi/libstub/randomalloc.c
@@ -53,7 +53,8 @@ static unsigned long get_entry_num_slots(efi_memory_desc_t *md,
 efi_status_t efi_random_alloc(unsigned long size,
 			      unsigned long align,
 			      unsigned long *addr,
-			      unsigned long random_seed)
+			      unsigned long random_seed,
+			      int memory_type)
 {
 	unsigned long map_size, desc_size, total_slots = 0, target_slot;
 	unsigned long total_mirrored_slots = 0;
@@ -127,7 +128,7 @@ efi_status_t efi_random_alloc(unsigned long size,
 		pages = size / EFI_PAGE_SIZE;
 		status = efi_bs_call(allocate_pages, EFI_ALLOCATE_ADDRESS,
-				     EFI_LOADER_DATA, pages, &target);
+				     memory_type, pages, &target);
 		if (status == EFI_SUCCESS)
 			*addr = target;
 		break;

From patchwork Sat Aug 27 15:58:52 2022
From: Ard Biesheuvel
To: linux-arm-kernel@lists.infradead.org
Cc: will@kernel.org, catalin.marinas@arm.com, maz@kernel.org, mark.rutland@arm.com, linux-efi@vger.kernel.org, keescook@chromium.org, Ard Biesheuvel
Subject: [PATCH v3 7/7] arm64: efi/libstub: enter with the MMU on
Date: Sat, 27 Aug 2022 17:58:52 +0200
Message-Id: <20220827155852.3338551-8-ardb@kernel.org>
In-Reply-To: <20220827155852.3338551-1-ardb@kernel.org>
References: <20220827155852.3338551-1-ardb@kernel.org>

Instead of disabling the MMU and caches before jumping to the kernel's
entry point, just call it directly, and keep the MMU and caches
enabled. This removes the need for any cache invalidation in the entry
path.

It also allows us to get rid of the asm routine, as doing the jump is
easily done from C code.

Signed-off-by: Ard Biesheuvel
---
 arch/arm64/kernel/Makefile                |  9 +--
 arch/arm64/kernel/efi-entry.S             | 69 --------------------
 arch/arm64/kernel/image-vars.h            |  6 +-
 arch/arm64/mm/cache.S                     |  5 +-
 drivers/firmware/efi/libstub/arm64-stub.c | 18 ++++-
 5 files changed, 24 insertions(+), 83 deletions(-)

diff --git a/arch/arm64/kernel/Makefile b/arch/arm64/kernel/Makefile
index 1add7b01efa7..3c502facb7e1 100644
--- a/arch/arm64/kernel/Makefile
+++ b/arch/arm64/kernel/Makefile
@@ -36,12 +36,6 @@ obj-y := debug-monitors.o entry.o irq.o fpsimd.o \
 			   syscall.o proton-pack.o idreg-override.o idle.o \
 			   patching.o
-targets += efi-entry.o
-
-OBJCOPYFLAGS := --prefix-symbols=__efistub_
-$(obj)/%.stub.o: $(obj)/%.o FORCE
-	$(call if_changed,objcopy)
-
 obj-$(CONFIG_COMPAT)			+= sys32.o signal32.o \
 					   sys_compat.o
 obj-$(CONFIG_COMPAT)			+= sigreturn32.o
@@ -56,8 +50,7 @@ obj-$(CONFIG_CPU_PM)			+= sleep.o suspend.o
 obj-$(CONFIG_CPU_IDLE)			+= cpuidle.o
 obj-$(CONFIG_JUMP_LABEL)		+= jump_label.o
 obj-$(CONFIG_KGDB)			+= kgdb.o
-obj-$(CONFIG_EFI)			+= efi.o efi-entry.stub.o \
-					   efi-rt-wrapper.o
+obj-$(CONFIG_EFI)			+= efi.o efi-rt-wrapper.o
 obj-$(CONFIG_PCI)			+= pci.o
 obj-$(CONFIG_ARMV8_DEPRECATED)		+= armv8_deprecated.o
 obj-$(CONFIG_ACPI)			+= acpi.o
diff --git a/arch/arm64/kernel/efi-entry.S b/arch/arm64/kernel/efi-entry.S
deleted file mode 100644
index 61a87fa1c305..000000000000
--- a/arch/arm64/kernel/efi-entry.S
+++ /dev/null
@@ -1,69 +0,0 @@
-/* SPDX-License-Identifier: GPL-2.0-only */
-/*
- * EFI entry point.
- *
- * Copyright (C) 2013, 2014 Red Hat, Inc.
- * Author: Mark Salter
- */
-#include
-#include
-
-#include
-
-	__INIT
-
-SYM_CODE_START(efi_enter_kernel)
-	/*
-	 * efi_pe_entry() will have copied the kernel image if necessary and we
-	 * end up here with device tree address in x1 and the kernel entry
-	 * point stored in x0. Save those values in registers which are
-	 * callee preserved.
-	 */
-	ldr	w2, =primary_entry_offset
-	add	x19, x0, x2		// relocated Image entrypoint
-	mov	x20, x1			// DTB address
-
-	/*
-	 * Clean the copied Image to the PoC, and ensure it is not shadowed by
-	 * stale icache entries from before relocation.
-	 */
-	ldr	w1, =kernel_size
-	add	x1, x0, x1
-	bl	dcache_clean_poc
-	ic	ialluis
-
-	/*
-	 * Clean the remainder of this routine to the PoC
-	 * so that we can safely disable the MMU and caches.
-	 */
-	adr	x0, 0f
-	adr	x1, 3f
-	bl	dcache_clean_poc
-0:
-	/* Turn off Dcache and MMU */
-	mrs	x0, CurrentEL
-	cmp	x0, #CurrentEL_EL2
-	b.ne	1f
-	mrs	x0, sctlr_el2
-	bic	x0, x0, #1 << 0	// clear SCTLR.M
-	bic	x0, x0, #1 << 2	// clear SCTLR.C
-	pre_disable_mmu_workaround
-	msr	sctlr_el2, x0
-	isb
-	b	2f
-1:
-	mrs	x0, sctlr_el1
-	bic	x0, x0, #1 << 0	// clear SCTLR.M
-	bic	x0, x0, #1 << 2	// clear SCTLR.C
-	pre_disable_mmu_workaround
-	msr	sctlr_el1, x0
-	isb
-2:
-	/* Jump to kernel entry point */
-	mov	x0, x20
-	mov	x1, xzr
-	mov	x2, xzr
-	mov	x3, xzr
-	br	x19
-3:
-SYM_CODE_END(efi_enter_kernel)
diff --git a/arch/arm64/kernel/image-vars.h b/arch/arm64/kernel/image-vars.h
index afa69e04e75e..cb97a9941425 100644
--- a/arch/arm64/kernel/image-vars.h
+++ b/arch/arm64/kernel/image-vars.h
@@ -10,8 +10,7 @@
 #error This file should only be included in vmlinux.lds.S
 #endif
-PROVIDE(__efistub_kernel_size = _edata - _text);
-PROVIDE(__efistub_primary_entry_offset = primary_entry - _text);
+PROVIDE(__efistub_primary_entry = primary_entry);
 /*
  * The EFI stub has its own symbol namespace prefixed by __efistub_, to
@@ -32,10 +31,11 @@ PROVIDE(__efistub_strnlen = __pi_strnlen);
 PROVIDE(__efistub_strcmp = __pi_strcmp);
 PROVIDE(__efistub_strncmp = __pi_strncmp);
 PROVIDE(__efistub_strrchr = __pi_strrchr);
-PROVIDE(__efistub_dcache_clean_poc = __pi_dcache_clean_poc);
+PROVIDE(__efistub_caches_clean_inval_pou = __pi_caches_clean_inval_pou);
 PROVIDE(__efistub__text = _text);
 PROVIDE(__efistub__end = _end);
+PROVIDE(__efistub___inittext_end = __inittext_end);
 PROVIDE(__efistub__edata = _edata);
 PROVIDE(__efistub_screen_info = screen_info);
 PROVIDE(__efistub__ctype = _ctype);
diff --git a/arch/arm64/mm/cache.S b/arch/arm64/mm/cache.S
index 081058d4e436..8c3b3ee9b1d7 100644
--- a/arch/arm64/mm/cache.S
+++ b/arch/arm64/mm/cache.S
@@ -52,10 +52,11 @@ alternative_else_nop_endif
 * - start - virtual start address of region
 * - end - virtual end address of region
 */
-SYM_FUNC_START(caches_clean_inval_pou)
+SYM_FUNC_START(__pi_caches_clean_inval_pou)
 	caches_clean_inval_pou_macro
 	ret
-SYM_FUNC_END(caches_clean_inval_pou)
+SYM_FUNC_END(__pi_caches_clean_inval_pou)
+SYM_FUNC_ALIAS(caches_clean_inval_pou, __pi_caches_clean_inval_pou)
 /*
 * caches_clean_inval_user_pou(start,end)
diff --git a/drivers/firmware/efi/libstub/arm64-stub.c b/drivers/firmware/efi/libstub/arm64-stub.c
index f32e89b4049f..eb568ea3120b 100644
--- a/drivers/firmware/efi/libstub/arm64-stub.c
+++ b/drivers/firmware/efi/libstub/arm64-stub.c
@@ -87,7 +87,7 @@ efi_status_t handle_kernel_image(unsigned long *image_addr,
 				 efi_handle_t image_handle)
 {
 	efi_status_t status;
-	unsigned long kernel_size, kernel_memsize = 0;
+	unsigned long kernel_size, kernel_codesize, kernel_memsize = 0;
 	u32 phys_seed = 0;
 	/*
@@ -131,6 +131,7 @@ efi_status_t handle_kernel_image(unsigned long *image_addr,
 			SEGMENT_ALIGN >> 10);
 	kernel_size = _edata - _text;
+	kernel_codesize = __inittext_end - _text;
 	kernel_memsize = kernel_size + (_end - _edata);
 	*reserve_size = kernel_memsize;
@@ -174,6 +175,21 @@ efi_status_t handle_kernel_image(unsigned long *image_addr,
 	*image_addr = *reserve_addr;
 	memcpy((void *)*image_addr, _text, kernel_size);
+	caches_clean_inval_pou((void *)*image_addr,
+			       (void *)*image_addr + kernel_codesize);
 	return EFI_SUCCESS;
 }
+
+asmlinkage void primary_entry(void);
+
+void __noreturn efi_enter_kernel(unsigned long entrypoint,
+				 unsigned long fdt_addr,
+				 unsigned long fdt_size)
+{
+	void (* __noreturn enter_kernel)(u64, u64, u64, u64);
+	u64 offset = (char *)primary_entry - _text;
+
+	enter_kernel = (void *)entrypoint + offset;
+	enter_kernel(fdt_addr, 0, 0, 0);
+}
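
A note for readers on the record_mmu_state check added in patch 3/7: in
SCTLR_ELx, M (MMU enable) is bit 0 and C (data cache enable) is bit 2, so
"and x19, x19, x19, lsr #2" folds the C bit onto the M bit, and the
following "tst x19, #SCTLR_ELx_M" is true only if both were set. The same
logic in stand-alone C, as an illustrative sketch only (the helper name
and the local macro definitions below are not part of the series):

  #include <stdbool.h>
  #include <stdint.h>

  /* Local stand-ins for the kernel's SCTLR_ELx bit definitions. */
  #define SCTLR_ELx_M	(UINT64_C(1) << 0)	/* MMU enable */
  #define SCTLR_ELx_C	(UINT64_C(1) << 2)	/* data cache enable */

  /* Report whether the MMU and the D-cache are both enabled in SCTLR_ELx. */
  bool mmu_and_caches_on(uint64_t sctlr)
  {
  	sctlr &= sctlr >> 2;		/* BIT(n) &= BIT(n + 2) */
  	return sctlr & SCTLR_ELx_M;	/* M(0) and C(2) both set? */
  }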