From patchwork Wed Jan 11 10:22:31 2023
X-Patchwork-Submitter: Ard Biesheuvel <ardb@kernel.org>
X-Patchwork-Id: 641480
From: Ard Biesheuvel <ardb@kernel.org>
To: linux-efi@vger.kernel.org
Cc: linux-arm-kernel@lists.infradead.org, Ard Biesheuvel, Will Deacon,
 Catalin Marinas, Marc Zyngier, Mark Rutland
Subject: [PATCH v7 1/6] arm64: head: Move all finalise_el2 calls to after
 __enable_mmu
Date: Wed, 11 Jan 2023 11:22:31 +0100
Message-Id: <20230111102236.1430401-2-ardb@kernel.org>
In-Reply-To: <20230111102236.1430401-1-ardb@kernel.org>
References: <20230111102236.1430401-1-ardb@kernel.org>

In the primary boot path, finalise_el2() is called much later than on
the secondary boot or resume-from-suspend paths, and this does not
appear to be intentional.

Since we aim to do as little as possible before enabling the MMU and
caches, align secondary and resume with primary boot, and defer the
call to after the MMU is turned on. This also removes the need to
clean finalise_el2() to the PoC once we enable support for booting
with the MMU on.

Signed-off-by: Ard Biesheuvel <ardb@kernel.org>
---
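[ Editorial illustration, not part of the patch: a freestanding C model
  of the reordering described above. The function names mirror the
  kernel symbols but the bodies are stand-in stubs; the point is only
  that finalise_el2() moves from before __enable_mmu to after it on the
  secondary boot and resume paths, matching primary boot. ]

#include <stdio.h>

static void init_kernel_el(void) { puts("  init_kernel_el"); }
static void enable_mmu(void)     { puts("  __enable_mmu"); }
static void finalise_el2(void)   { puts("  finalise_el2"); }

static void secondary_boot_before(void)
{
    init_kernel_el();
    finalise_el2();   /* runs with MMU off: code must be clean to PoC */
    enable_mmu();
}

static void secondary_boot_after(void)
{
    init_kernel_el();
    enable_mmu();
    finalise_el2();   /* runs with MMU on: no PoC maintenance needed */
}

int main(void)
{
    puts("before:"); secondary_boot_before();
    puts("after:");  secondary_boot_after();
    return 0;
}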
 arch/arm64/kernel/head.S  | 5 ++++-
 arch/arm64/kernel/sleep.S | 5 ++++-
 2 files changed, 8 insertions(+), 2 deletions(-)

diff --git a/arch/arm64/kernel/head.S b/arch/arm64/kernel/head.S
index 952e17bd1c0b4f91..c4e12d466a5f35f0 100644
--- a/arch/arm64/kernel/head.S
+++ b/arch/arm64/kernel/head.S
@@ -587,7 +587,6 @@ SYM_FUNC_START_LOCAL(secondary_startup)
 	 * Common entry point for secondary CPUs.
 	 */
 	mov	x20, x0				// preserve boot mode
-	bl	finalise_el2
 	bl	__cpu_secondary_check52bitva
 #if VA_BITS > 48
 	ldr_l	x0, vabits_actual
@@ -603,6 +602,10 @@ SYM_FUNC_END(secondary_startup)
 SYM_FUNC_START_LOCAL(__secondary_switched)
 	mov	x0, x20
 	bl	set_cpu_boot_mode_flag
+
+	mov	x0, x20
+	bl	finalise_el2
+
 	str_l	xzr, __early_cpu_boot_status, x3
 	adr_l	x5, vectors
 	msr	vbar_el1, x5
diff --git a/arch/arm64/kernel/sleep.S b/arch/arm64/kernel/sleep.S
index 97c9de57725dfddb..7b7c56e048346e97 100644
--- a/arch/arm64/kernel/sleep.S
+++ b/arch/arm64/kernel/sleep.S
@@ -100,7 +100,7 @@ SYM_FUNC_END(__cpu_suspend_enter)
 	.pushsection ".idmap.text", "awx"
 SYM_CODE_START(cpu_resume)
 	bl	init_kernel_el
-	bl	finalise_el2
+	mov	x19, x0			// preserve boot mode
 #if VA_BITS > 48
 	ldr_l	x0, vabits_actual
 #endif
@@ -116,6 +116,9 @@ SYM_CODE_END(cpu_resume)
 	.popsection
 
 SYM_FUNC_START(_cpu_resume)
+	mov	x0, x19
+	bl	finalise_el2
+
 	mrs	x1, mpidr_el1
 	adr_l	x8, mpidr_hash		// x8 = struct mpidr_hash virt address

From patchwork Wed Jan 11 10:22:32 2023
X-Patchwork-Submitter: Ard Biesheuvel <ardb@kernel.org>
X-Patchwork-Id: 642770
From: Ard Biesheuvel <ardb@kernel.org>
To: linux-efi@vger.kernel.org
Cc: linux-arm-kernel@lists.infradead.org, Ard Biesheuvel, Will Deacon,
 Catalin Marinas, Marc Zyngier, Mark Rutland
Subject: [PATCH v7 2/6] arm64: kernel: move identity map out of .text mapping
Date: Wed, 11 Jan 2023 11:22:32 +0100
Message-Id: <20230111102236.1430401-3-ardb@kernel.org>
In-Reply-To: <20230111102236.1430401-1-ardb@kernel.org>
References: <20230111102236.1430401-1-ardb@kernel.org>

Reorganize the ID map slightly so that only code that is executed with
the MMU off or via the 1:1 mapping remains. This allows us to move the
identity map out of the .text segment, as it will no longer need
executable permissions via the kernel mapping.

Signed-off-by: Ard Biesheuvel <ardb@kernel.org>
---
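[ Editorial illustration, not part of the patch: how placing functions
  in a named section lets the linker script map them outside the
  regular .text mapping, which is what this patch does for the ID map.
  The section name matches the patch; the function names are made up
  for the example. ]

#include <stdio.h>

/* code that must be reachable via the 1:1 (identity) mapping */
__attribute__((section(".idmap.text")))
void idmap_helper(void) { }

/* ordinary kernel text */
void text_helper(void) { }

int main(void)
{
    /* the linker may place the two sections arbitrarily far apart */
    printf("idmap_helper at %p\n", (void *)idmap_helper);
    printf("text_helper  at %p\n", (void *)text_helper);
    return 0;
}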
 arch/arm64/kernel/head.S        | 28 +++++++++++---------
 arch/arm64/kernel/vmlinux.lds.S |  2 +-
 arch/arm64/mm/proc.S            |  2 --
 3 files changed, 16 insertions(+), 16 deletions(-)

diff --git a/arch/arm64/kernel/head.S b/arch/arm64/kernel/head.S
index c4e12d466a5f35f0..bec97aad092c2b43 100644
--- a/arch/arm64/kernel/head.S
+++ b/arch/arm64/kernel/head.S
@@ -543,19 +543,6 @@ SYM_INNER_LABEL(init_el2, SYM_L_LOCAL)
 	eret
 SYM_FUNC_END(init_kernel_el)
 
-/*
- * Sets the __boot_cpu_mode flag depending on the CPU boot mode passed
- * in w0. See arch/arm64/include/asm/virt.h for more info.
- */
-SYM_FUNC_START_LOCAL(set_cpu_boot_mode_flag)
-	adr_l	x1, __boot_cpu_mode
-	cmp	w0, #BOOT_CPU_MODE_EL2
-	b.ne	1f
-	add	x1, x1, #4
-1:	str	w0, [x1]			// Save CPU boot mode
-	ret
-SYM_FUNC_END(set_cpu_boot_mode_flag)
-
 /*
  * This provides a "holding pen" for platforms to hold all secondary
  * cores are held until we're ready for them to initialise.
@@ -599,6 +586,7 @@ SYM_FUNC_START_LOCAL(secondary_startup)
 	br	x8
 SYM_FUNC_END(secondary_startup)
 
+	.text
 SYM_FUNC_START_LOCAL(__secondary_switched)
 	mov	x0, x20
 	bl	set_cpu_boot_mode_flag
@@ -631,6 +619,19 @@ SYM_FUNC_START_LOCAL(__secondary_too_slow)
 	b	__secondary_too_slow
 SYM_FUNC_END(__secondary_too_slow)
 
+/*
+ * Sets the __boot_cpu_mode flag depending on the CPU boot mode passed
+ * in w0. See arch/arm64/include/asm/virt.h for more info.
+ */
+SYM_FUNC_START_LOCAL(set_cpu_boot_mode_flag)
+	adr_l	x1, __boot_cpu_mode
+	cmp	w0, #BOOT_CPU_MODE_EL2
+	b.ne	1f
+	add	x1, x1, #4
+1:	str	w0, [x1]			// Save CPU boot mode
+	ret
+SYM_FUNC_END(set_cpu_boot_mode_flag)
+
 /*
  * The booting CPU updates the failed status @__early_cpu_boot_status,
  * with MMU turned off.
@@ -662,6 +663,7 @@ SYM_FUNC_END(__secondary_too_slow)
  * Checks if the selected granule size is supported by the CPU.
  * If it isn't, park the CPU
  */
+	.section ".idmap.text","awx"
 SYM_FUNC_START(__enable_mmu)
 	mrs	x3, ID_AA64MMFR0_EL1
 	ubfx	x3, x3, #ID_AA64MMFR0_EL1_TGRAN_SHIFT, 4
diff --git a/arch/arm64/kernel/vmlinux.lds.S b/arch/arm64/kernel/vmlinux.lds.S
index 4c13dafc98b8400f..407415a5163ab62f 100644
--- a/arch/arm64/kernel/vmlinux.lds.S
+++ b/arch/arm64/kernel/vmlinux.lds.S
@@ -179,7 +179,6 @@ SECTIONS
 			LOCK_TEXT
 			KPROBES_TEXT
 			HYPERVISOR_TEXT
-			IDMAP_TEXT
 			*(.gnu.warning)
 		. = ALIGN(16);
 		*(.got)			/* Global offset table		*/
@@ -206,6 +205,7 @@ SECTIONS
 		TRAMP_TEXT
 		HIBERNATE_TEXT
 		KEXEC_TEXT
+		IDMAP_TEXT
 		. = ALIGN(PAGE_SIZE);
 	}
diff --git a/arch/arm64/mm/proc.S b/arch/arm64/mm/proc.S
index 066fa60b93d24827..91410f48809000a0 100644
--- a/arch/arm64/mm/proc.S
+++ b/arch/arm64/mm/proc.S
@@ -110,7 +110,6 @@ SYM_FUNC_END(cpu_do_suspend)
  *
  * x0: Address of context pointer
  */
-	.pushsection ".idmap.text", "awx"
 SYM_FUNC_START(cpu_do_resume)
 	ldp	x2, x3, [x0]
 	ldp	x4, x5, [x0, #16]
@@ -166,7 +165,6 @@ alternative_else_nop_endif
 	isb
 	ret
 SYM_FUNC_END(cpu_do_resume)
-	.popsection
 #endif
 
 	.pushsection ".idmap.text", "awx"

From patchwork Wed Jan 11 10:22:33 2023
X-Patchwork-Submitter: Ard Biesheuvel <ardb@kernel.org>
X-Patchwork-Id: 641479
From: Ard Biesheuvel <ardb@kernel.org>
To: linux-efi@vger.kernel.org
Cc: linux-arm-kernel@lists.infradead.org, Ard Biesheuvel, Will Deacon,
 Catalin Marinas, Marc Zyngier, Mark Rutland
Subject: [PATCH v7 3/6] arm64: head: record the MMU state at primary entry
Date: Wed, 11 Jan 2023 11:22:33 +0100
Message-Id: <20230111102236.1430401-4-ardb@kernel.org>
In-Reply-To: <20230111102236.1430401-1-ardb@kernel.org>
References: <20230111102236.1430401-1-ardb@kernel.org>

Prepare for being able to deal with primary entry with the MMU and
caches enabled, by recording whether or not we entered with the MMU on
in register x19 and in a global variable. (Note that setting this
variable to '1' does not require cache invalidation, nor is it required
for storing the bootargs in that case, so omit the cache maintenance).

Since boot with the MMU and caches enabled is not permitted by the bare
metal boot protocol, ensure that a diagnostic is emitted and a taint
bit set if the MMU was found to be enabled on a non-EFI boot, and
panic() once the console is likely to be up. We will make an exception
for EFI boot later, which has strict requirements for the mapping of
system memory, permitting us to relax the boot protocol and hand over
from the EFI stub to the core kernel with MMU and caches left enabled.

While at it, add 'pre_disable_mmu_workaround' macro invocations to
init_kernel_el, as its manipulation of SCTLR_ELx may amount to
disabling of the MMU after subsequent patches.

Signed-off-by: Ard Biesheuvel <ardb@kernel.org>
---
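[ Editorial illustration, not part of the patch: a C model of the
  record_mmu_state logic added below. The asm reads SCTLR_EL2 or
  SCTLR_EL1 depending on the current exception level and reports the
  MMU as on only if both the C (D-cache enable) and M (MMU enable)
  bits are set; the bit positions follow the architecture. ]

#include <stdint.h>
#include <stdio.h>

#define SCTLR_ELx_M  (UINT64_C(1) << 0)   /* MMU enable */
#define SCTLR_ELx_C  (UINT64_C(1) << 2)   /* D-cache enable */

static uint64_t record_mmu_state(uint64_t sctlr)
{
    if (!(sctlr & SCTLR_ELx_C))     /* tst + csel: clear result if C == 0 */
        return 0;
    return sctlr & SCTLR_ELx_M;     /* and: isolate the M bit */
}

int main(void)
{
    /* MMU on but caches off still counts as "MMU off" for our purposes */
    printf("%d\n", record_mmu_state(SCTLR_ELx_M) != 0);                /* 0 */
    printf("%d\n", record_mmu_state(SCTLR_ELx_M | SCTLR_ELx_C) != 0);  /* 1 */
    return 0;
}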
 arch/arm64/kernel/head.S  | 20 ++++++++++++++++++++
 arch/arm64/kernel/setup.c | 17 +++++++++++++++--
 2 files changed, 35 insertions(+), 2 deletions(-)

diff --git a/arch/arm64/kernel/head.S b/arch/arm64/kernel/head.S
index bec97aad092c2b43..c3b898efd3b5288d 100644
--- a/arch/arm64/kernel/head.S
+++ b/arch/arm64/kernel/head.S
@@ -77,6 +77,7 @@
  * primary lowlevel boot path:
  *
  *  Register   Scope                     Purpose
+ *  x19        primary_entry() .. start_kernel()       whether we entered with the MMU on
  *  x20        primary_entry() .. __primary_switch()   CPU boot mode
  *  x21        primary_entry() .. start_kernel()       FDT pointer passed at boot in x0
  *  x22        create_idmap() .. start_kernel()        ID map VA of the DT blob
@@ -86,6 +87,7 @@
  *  x28        create_idmap()                          callee preserved temp register
  */
 SYM_CODE_START(primary_entry)
+	bl	record_mmu_state
 	bl	preserve_boot_args
 	bl	init_kernel_el			// w0=cpu_boot_mode
 	mov	x20, x0
@@ -109,6 +111,18 @@ SYM_CODE_START(primary_entry)
 	b	__primary_switch
 SYM_CODE_END(primary_entry)
 
+SYM_CODE_START_LOCAL(record_mmu_state)
+	mrs	x19, CurrentEL
+	cmp	x19, #CurrentEL_EL2
+	mrs	x19, sctlr_el1
+	b.ne	0f
+	mrs	x19, sctlr_el2
+0:	tst	x19, #SCTLR_ELx_C		// Z := (C == 0)
+	and	x19, x19, #SCTLR_ELx_M		// isolate M bit
+	csel	x19, xzr, x19, eq		// clear x19 if Z
+	ret
+SYM_CODE_END(record_mmu_state)
+
 /*
  * Preserve the arguments passed by the bootloader in x0 .. x3
  */
@@ -119,11 +133,14 @@ SYM_CODE_START_LOCAL(preserve_boot_args)
 	stp	x21, x1, [x0]			// x0 .. x3 at kernel entry
 	stp	x2, x3, [x0, #16]
 
+	cbnz	x19, 0f				// skip cache invalidation if MMU is on
 	dmb	sy				// needed before dc ivac with
 						// MMU off
 
 	add	x1, x0, #0x20			// 4 x 8 bytes
 	b	dcache_inval_poc		// tail call
+0:	str_l	x19, mmu_enabled_at_boot, x0
+	ret
 SYM_CODE_END(preserve_boot_args)
 
 SYM_FUNC_START_LOCAL(clear_page_tables)
@@ -497,6 +514,7 @@ SYM_FUNC_START(init_kernel_el)
 
 SYM_INNER_LABEL(init_el1, SYM_L_LOCAL)
 	mov_q	x0, INIT_SCTLR_EL1_MMU_OFF
+	pre_disable_mmu_workaround
 	msr	sctlr_el1, x0
 	isb
 	mov_q	x0, INIT_PSTATE_EL1
@@ -529,11 +547,13 @@ SYM_INNER_LABEL(init_el2, SYM_L_LOCAL)
 	cbz	x0, 1f
 
 	/* Set a sane SCTLR_EL1, the VHE way */
+	pre_disable_mmu_workaround
 	msr_s	SYS_SCTLR_EL12, x1
 	mov	x2, #BOOT_CPU_FLAG_E2H
 	b	2f
 
 1:
+	pre_disable_mmu_workaround
 	msr	sctlr_el1, x1
 	mov	x2, xzr
 2:
diff --git a/arch/arm64/kernel/setup.c b/arch/arm64/kernel/setup.c
index 12cfe9d0d3fac10d..b8ec7b3ac9cbe8a8 100644
--- a/arch/arm64/kernel/setup.c
+++ b/arch/arm64/kernel/setup.c
@@ -58,6 +58,7 @@ static int num_standard_resources;
 static struct resource *standard_resources;
 
 phys_addr_t __fdt_pointer __initdata;
+u64 mmu_enabled_at_boot __initdata;
 
 /*
  * Standard memory resources
@@ -332,8 +333,12 @@ void __init __no_sanitize_address setup_arch(char **cmdline_p)
 	xen_early_init();
 	efi_init();
 
-	if (!efi_enabled(EFI_BOOT) && ((u64)_text % MIN_KIMG_ALIGN) != 0)
-	     pr_warn(FW_BUG "Kernel image misaligned at boot, please fix your bootloader!");
+	if (!efi_enabled(EFI_BOOT)) {
+		if ((u64)_text % MIN_KIMG_ALIGN)
+			pr_warn(FW_BUG "Kernel image misaligned at boot, please fix your bootloader!");
+		WARN_TAINT(mmu_enabled_at_boot, TAINT_FIRMWARE_WORKAROUND,
+			   FW_BUG "Booted with MMU enabled!");
+	}
 
 	arm64_memblock_init();
 
@@ -442,3 +447,11 @@ static int __init register_arm64_panic_block(void)
 	return 0;
 }
 device_initcall(register_arm64_panic_block);
+
+static int __init check_mmu_enabled_at_boot(void)
+{
+	if (!efi_enabled(EFI_BOOT) && mmu_enabled_at_boot)
+		panic("Non-EFI boot detected with MMU and caches enabled");
+	return 0;
+}
+device_initcall_sync(check_mmu_enabled_at_boot);

From patchwork Wed Jan 11 10:22:34 2023
X-Patchwork-Submitter: Ard Biesheuvel <ardb@kernel.org>
X-Patchwork-Id: 642772
"EHLO lindbergh.monkeyblade.net" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S238786AbjAKKXU (ORCPT ); Wed, 11 Jan 2023 05:23:20 -0500 Received: from sin.source.kernel.org (sin.source.kernel.org [IPv6:2604:1380:40e1:4800::1]) by lindbergh.monkeyblade.net (Postfix) with ESMTPS id 564331706E for ; Wed, 11 Jan 2023 02:23:12 -0800 (PST) Received: from smtp.kernel.org (relay.kernel.org [52.25.139.140]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by sin.source.kernel.org (Postfix) with ESMTPS id A4129CE1B27 for ; Wed, 11 Jan 2023 10:23:10 +0000 (UTC) Received: by smtp.kernel.org (Postfix) with ESMTPSA id 69DAFC433F0; Wed, 11 Jan 2023 10:23:07 +0000 (UTC) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=kernel.org; s=k20201202; t=1673432588; bh=gHObniuOxNHTAroo4Q9eW2BQ31mqU1jn6IHWAhEPJO4=; h=From:To:Cc:Subject:Date:In-Reply-To:References:From; b=X2T3GP0kdrUOejzZ6Rw9dKSmjs6oCA4K31twbR5eHEU6fTcIbYY1e73wnUs47IfH/ fOTAfVLhjDlRV3XO/3Nx6ffUCgrMO+kiByfTk0aYqh3Ndo20bnZfn08jBstrmLIKQj KFNQ19k1eUfjVMWJ+dQe1qI24fmxNVmFhmWZdnWn/VVatzneH1JlHbsN06iIPnrRWK LSYC8BceKkWxLj0paWpBTomNArJ64K1AWoVFVtKpGpxsAo55FwK7d0zPNy3Pt+VU1s 6YX1d5r8eAB8aei6DwrH/pxJcOSifsTt2hlfVTgl1WlXYdDjSrET3XO5ZvIXfCw+Wa kdIAmLzgkGOjg== From: Ard Biesheuvel To: linux-efi@vger.kernel.org Cc: linux-arm-kernel@lists.infradead.org, Ard Biesheuvel , Will Deacon , Catalin Marinas , Marc Zyngier , Mark Rutland Subject: [PATCH v7 4/6] arm64: head: avoid cache invalidation when entering with the MMU on Date: Wed, 11 Jan 2023 11:22:34 +0100 Message-Id: <20230111102236.1430401-5-ardb@kernel.org> X-Mailer: git-send-email 2.39.0 In-Reply-To: <20230111102236.1430401-1-ardb@kernel.org> References: <20230111102236.1430401-1-ardb@kernel.org> MIME-Version: 1.0 X-Developer-Signature: v=1; a=openpgp-sha256; l=1358; i=ardb@kernel.org; h=from:subject; bh=gHObniuOxNHTAroo4Q9eW2BQ31mqU1jn6IHWAhEPJO4=; b=owEB7QES/pANAwAKAcNPIjmS2Y8kAcsmYgBjvo3oq/GwOjCdOMkP3gzv7fJ4nXBKTAGCeax1eg/T OafiT/OJAbMEAAEKAB0WIQT72WJ8QGnJQhU3VynDTyI5ktmPJAUCY76N6AAKCRDDTyI5ktmPJJT4DA DIhUCCSPUhAfdUmZLkKmUK4uxiTEC7gAAz+p49jCcl2rNkS6tBQLbwPOCAyt1oCPW+ccXyZZ/IrlWA DsjndoORpkrpPn7+16ZOU034eBEocejFANuzCuPiIZ3j7ZYiW9KrjylFaHXGYq+W+veYvLD0EaYeK2 5WOaZ18gqr5tq1Qd8sE0YXZiHuKhzXFM/QeWhPp8hyAj0MYwfYMy5pBQb5xJEgXsCG1mtqUYE40iVJ 1gKTPxen16zyidI/YEkrekj+hmgllEIHSzDDnM6laIliJaKrWqtDx9AXtxvtOf1Hg5QmQKu0GCdF/c qes/wjZggseoIu11f7PRucv04md0tiuAJirNsTsAgqYs+QlndFsW8rr9eYZ542mOF1TcT3V9n3Bqpn a4XRJembfXsB1xR83cS+qSte0LpovrO+XA3jM0jKpNlOAxuBnJfdgcLCJJVXpczOAc2GoP063gx5p0 dp+M4uLMzmDlmB0hoExAltsL+7mu6PN97kwJlfEZdMQmo= X-Developer-Key: i=ardb@kernel.org; a=openpgp; fpr=F43D03328115A198C90016883D200E9CA6329909 Precedence: bulk List-ID: X-Mailing-List: linux-efi@vger.kernel.org If we enter with the MMU on, there is no need for explicit cache invalidation for stores to memory, as they will be coherent with the caches. Let's take advantage of this, and create the ID map with the MMU still enabled if that is how we entered, and avoid any cache invalidation calls in that case. 
 arch/arm64/kernel/head.S | 5 +++--
 1 file changed, 3 insertions(+), 2 deletions(-)

diff --git a/arch/arm64/kernel/head.S b/arch/arm64/kernel/head.S
index c3b898efd3b5288d..d75f419206451d07 100644
--- a/arch/arm64/kernel/head.S
+++ b/arch/arm64/kernel/head.S
@@ -89,9 +89,9 @@ SYM_CODE_START(primary_entry)
 	bl	record_mmu_state
 	bl	preserve_boot_args
+	bl	create_idmap
 	bl	init_kernel_el			// w0=cpu_boot_mode
 	mov	x20, x0
-	bl	create_idmap
 
 	/*
 	 * The following calls CPU setup code, see arch/arm64/mm/proc.S for
@@ -377,12 +377,13 @@ SYM_FUNC_START_LOCAL(create_idmap)
 	 * accesses (MMU disabled), invalidate those tables again to
 	 * remove any speculatively loaded cache lines.
 	 */
+	cbnz	x19, 0f				// skip cache invalidation if MMU is on
 	dmb	sy
 
 	adrp	x0, init_idmap_pg_dir
 	adrp	x1, init_idmap_pg_end
 	bl	dcache_inval_poc
-	ret	x28
+0:	ret	x28
 SYM_FUNC_END(create_idmap)
 
 SYM_FUNC_START_LOCAL(create_kernel_mapping)

From patchwork Wed Jan 11 10:22:35 2023
X-Patchwork-Submitter: Ard Biesheuvel <ardb@kernel.org>
X-Patchwork-Id: 642771
From: Ard Biesheuvel <ardb@kernel.org>
To: linux-efi@vger.kernel.org
Cc: linux-arm-kernel@lists.infradead.org, Ard Biesheuvel, Will Deacon,
 Catalin Marinas, Marc Zyngier, Mark Rutland
Subject: [PATCH v7 5/6] arm64: head: Clean the ID map and the HYP text to the
 PoC if needed
Date: Wed, 11 Jan 2023 11:22:35 +0100
Message-Id: <20230111102236.1430401-6-ardb@kernel.org>
In-Reply-To: <20230111102236.1430401-1-ardb@kernel.org>
References: <20230111102236.1430401-1-ardb@kernel.org>

If we enter with the MMU and caches enabled, the bootloader may not
have performed any cache maintenance to the PoC. So clean the ID mapped
page to the PoC, to ensure that instruction and data accesses with the
MMU off see the correct data. For similar reasons, clean all the HYP
text to the PoC as well when entering at EL2 with the MMU and caches
enabled.

Note that this means primary_entry() itself needs to be moved into the
ID map as well, as we will return from init_kernel_el() with the MMU
and caches off.

Signed-off-by: Ard Biesheuvel <ardb@kernel.org>
---
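[ Editorial illustration, not part of the patch: a C model of which
  ranges get cleaned to the PoC before execution continues with the
  MMU off. The range names echo the linker symbols used below; the
  helper is a stub. ]

#include <stdbool.h>
#include <stdio.h>

/* stub standing in for the kernel's dcache_clean_poc() */
static void dcache_clean_poc(const char *range)
{
    printf("clean %s to PoC\n", range);
}

static void prepare_for_mmu_off(bool entered_with_mmu_on, bool booted_at_el2)
{
    if (!entered_with_mmu_on)
        return;   /* loader already handed over with caches off */

    dcache_clean_poc("__idmap_text_start..__idmap_text_end");
    if (booted_at_el2)
        dcache_clean_poc("__hyp_idmap_text_start..__hyp_text_end");
}

int main(void)
{
    prepare_for_mmu_off(true, true);
    return 0;
}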
 arch/arm64/kernel/head.S  | 31 +++++++++++++++++---
 arch/arm64/kernel/sleep.S |  1 +
 2 files changed, 28 insertions(+), 4 deletions(-)

diff --git a/arch/arm64/kernel/head.S b/arch/arm64/kernel/head.S
index d75f419206451d07..dc56e1d8f36eb387 100644
--- a/arch/arm64/kernel/head.S
+++ b/arch/arm64/kernel/head.S
@@ -70,7 +70,7 @@
 
 	__EFI_PE_HEADER
 
-	__INIT
+	.section ".idmap.text","awx"
 
 /*
  * The following callee saved general purpose registers are used on the
@@ -90,6 +90,17 @@ SYM_CODE_START(primary_entry)
 	bl	record_mmu_state
 	bl	preserve_boot_args
 	bl	create_idmap
+
+	/*
+	 * If we entered with the MMU and caches on, clean the ID mapped part
+	 * of the primary boot code to the PoC so we can safely execute it with
+	 * the MMU off.
+	 */
+	cbz	x19, 0f
+	adrp	x0, __idmap_text_start
+	adr_l	x1, __idmap_text_end
+	bl	dcache_clean_poc
+0:	mov	x0, x19
 	bl	init_kernel_el			// w0=cpu_boot_mode
 	mov	x20, x0
@@ -111,6 +122,7 @@ SYM_CODE_START(primary_entry)
 	b	__primary_switch
 SYM_CODE_END(primary_entry)
 
+	__INIT
 SYM_CODE_START_LOCAL(record_mmu_state)
 	mrs	x19, CurrentEL
 	cmp	x19, #CurrentEL_EL2
@@ -507,10 +519,12 @@ SYM_FUNC_END(__primary_switched)
  * Returns either BOOT_CPU_MODE_EL1 or BOOT_CPU_MODE_EL2 in x0 if
  * booted in EL1 or EL2 respectively, with the top 32 bits containing
  * potential context flags. These flags are *not* stored in __boot_cpu_mode.
+ *
+ * x0: whether we are being called from the primary boot path with the MMU on
  */
 SYM_FUNC_START(init_kernel_el)
-	mrs	x0, CurrentEL
-	cmp	x0, #CurrentEL_EL2
+	mrs	x1, CurrentEL
+	cmp	x1, #CurrentEL_EL2
 	b.eq	init_el2
 
@@ -525,6 +539,14 @@ SYM_INNER_LABEL(init_el1, SYM_L_LOCAL)
 	eret
 
 SYM_INNER_LABEL(init_el2, SYM_L_LOCAL)
+	msr	elr_el2, lr
+
+	// clean all HYP code to the PoC if we booted at EL2 with the MMU on
+	cbz	x0, 0f
+	adrp	x0, __hyp_idmap_text_start
+	adr_l	x1, __hyp_text_end
+	bl	dcache_clean_poc
+0:
 	mov_q	x0, HCR_HOST_NVHE_FLAGS
 	msr	hcr_el2, x0
 	isb
@@ -558,7 +580,6 @@ SYM_INNER_LABEL(init_el2, SYM_L_LOCAL)
 	msr	sctlr_el1, x1
 	mov	x2, xzr
 2:
-	msr	elr_el2, lr
 	mov	w0, #BOOT_CPU_MODE_EL2
 	orr	x0, x0, x2
 	eret
@@ -569,6 +590,7 @@ SYM_FUNC_END(init_kernel_el)
 * cores are held until we're ready for them to initialise.
 */
 SYM_FUNC_START(secondary_holding_pen)
+	mov	x0, xzr
 	bl	init_kernel_el			// w0=cpu_boot_mode
 	mrs	x2, mpidr_el1
 	mov_q	x1, MPIDR_HWID_BITMASK
@@ -586,6 +608,7 @@ SYM_FUNC_END(secondary_holding_pen)
 * be used where CPUs are brought online dynamically by the kernel.
 */
 SYM_FUNC_START(secondary_entry)
+	mov	x0, xzr
 	bl	init_kernel_el			// w0=cpu_boot_mode
 	b	secondary_startup
 SYM_FUNC_END(secondary_entry)
diff --git a/arch/arm64/kernel/sleep.S b/arch/arm64/kernel/sleep.S
index 7b7c56e048346e97..2ae7cff1953aaf87 100644
--- a/arch/arm64/kernel/sleep.S
+++ b/arch/arm64/kernel/sleep.S
@@ -99,6 +99,7 @@ SYM_FUNC_END(__cpu_suspend_enter)
 
 	.pushsection ".idmap.text", "awx"
 SYM_CODE_START(cpu_resume)
+	mov	x0, xzr
 	bl	init_kernel_el
 	mov	x19, x0			// preserve boot mode
 #if VA_BITS > 48

From patchwork Wed Jan 11 10:22:36 2023
X-Patchwork-Submitter: Ard Biesheuvel <ardb@kernel.org>
X-Patchwork-Id: 641478
From: Ard Biesheuvel <ardb@kernel.org>
To: linux-efi@vger.kernel.org
Cc: linux-arm-kernel@lists.infradead.org, Ard Biesheuvel, Will Deacon,
 Catalin Marinas, Marc Zyngier, Mark Rutland
Subject: [PATCH v7 6/6] efi: arm64: enter with MMU and caches enabled
Date: Wed, 11 Jan 2023 11:22:36 +0100
Message-Id: <20230111102236.1430401-7-ardb@kernel.org>
In-Reply-To: <20230111102236.1430401-1-ardb@kernel.org>
References: <20230111102236.1430401-1-ardb@kernel.org>

Instead of cleaning the entire loaded kernel image to the PoC and
disabling the MMU and caches before branching to the kernel's bare
metal entry point, we can leave the MMU and caches enabled, and rely
on EFI's cacheable 1:1 mapping of all of system RAM (which is mandated
by the spec) to populate the initial page tables.

This removes the need for managing coherency in software, which is
tedious and error prone.

Note that we still need to clean the executable region of the image to
the PoU if this is required for I/D coherency, but only if we actually
decided to move the image in memory, as otherwise, this will have been
taken care of by the loader.

This change affects both the builtin EFI stub as well as the zboot
decompressor, which now carries the entire EFI stub along with the
decompression code and the compressed image.

Signed-off-by: Ard Biesheuvel <ardb@kernel.org>
---
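[ Editorial illustration, not part of the patch: a model of the stub's
  new cache policy. Maintenance is restricted to the executable part
  of the image and skipped entirely when CTR_EL0.IDC indicates that no
  D-cache cleaning to the PoU is needed for I/D coherency. The bit
  position follows the architecture; the helper is a stand-in. ]

#include <stdint.h>
#include <stdio.h>

#define CTR_EL0_IDC_SHIFT  28  /* 1: no D-cache clean needed for I/D coherency */

static void sync_image(uint64_t ctr, unsigned long code_size)
{
    if (!(ctr & (UINT64_C(1) << CTR_EL0_IDC_SHIFT)))
        printf("dc cvau over %#lx bytes of code\n", code_size);
    puts("ic ialluis; dsb ish; isb");
}

int main(void)
{
    sync_image(0, 0x100000);                                /* clean to PoU */
    sync_image(UINT64_C(1) << CTR_EL0_IDC_SHIFT, 0x100000); /* clean skipped */
    return 0;
}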
 arch/arm64/include/asm/efi.h               |  2 +
 arch/arm64/kernel/image-vars.h             |  5 +-
 arch/arm64/mm/cache.S                      |  1 +
 drivers/firmware/efi/libstub/Makefile      |  4 +-
 drivers/firmware/efi/libstub/arm64-entry.S | 67 --------------------
 drivers/firmware/efi/libstub/arm64-stub.c  | 26 +++++---
 drivers/firmware/efi/libstub/arm64.c       | 41 ++++++++++--
 7 files changed, 61 insertions(+), 85 deletions(-)

diff --git a/arch/arm64/include/asm/efi.h b/arch/arm64/include/asm/efi.h
index 31d13a6001df49c4..0f0e729b40efc9ab 100644
--- a/arch/arm64/include/asm/efi.h
+++ b/arch/arm64/include/asm/efi.h
@@ -105,6 +105,8 @@ static inline unsigned long efi_get_kimg_min_align(void)
 #define EFI_ALLOC_ALIGN		SZ_64K
 #define EFI_ALLOC_LIMIT		((1UL << 48) - 1)
 
+extern unsigned long primary_entry_offset(void);
+
 /*
  * On ARM systems, virtually remapped UEFI runtime services are set up in two
  * distinct stages:
diff --git a/arch/arm64/kernel/image-vars.h b/arch/arm64/kernel/image-vars.h
index d0e9bb5c91fccad6..73388b21d07d5524 100644
--- a/arch/arm64/kernel/image-vars.h
+++ b/arch/arm64/kernel/image-vars.h
@@ -10,7 +10,7 @@
 #error This file should only be included in vmlinux.lds.S
 #endif
 
-PROVIDE(__efistub_primary_entry_offset	= primary_entry - _text);
+PROVIDE(__efistub_primary_entry		= primary_entry);
 
 /*
  * The EFI stub has its own symbol namespace prefixed by __efistub_, to
@@ -21,10 +21,11 @@ PROVIDE(__efistub_primary_entry_offset	= primary_entry - _text);
 * linked at. The routines below are all implemented in assembler in a
 * position independent manner
 */
-PROVIDE(__efistub_dcache_clean_poc	= __pi_dcache_clean_poc);
+PROVIDE(__efistub_caches_clean_inval_pou = __pi_caches_clean_inval_pou);
 
 PROVIDE(__efistub__text			= _text);
 PROVIDE(__efistub__end			= _end);
+PROVIDE(__efistub___inittext_end	= __inittext_end);
 PROVIDE(__efistub__edata		= _edata);
 PROVIDE(__efistub_screen_info		= screen_info);
 PROVIDE(__efistub__ctype		= _ctype);
diff --git a/arch/arm64/mm/cache.S b/arch/arm64/mm/cache.S
index 081058d4e4366edb..503567c864fde05d 100644
--- a/arch/arm64/mm/cache.S
+++ b/arch/arm64/mm/cache.S
@@ -56,6 +56,7 @@ SYM_FUNC_START(caches_clean_inval_pou)
 	caches_clean_inval_pou_macro
 	ret
 SYM_FUNC_END(caches_clean_inval_pou)
+SYM_FUNC_ALIAS(__pi_caches_clean_inval_pou, caches_clean_inval_pou)
 
 /*
  * caches_clean_inval_user_pou(start,end)
diff --git a/drivers/firmware/efi/libstub/Makefile b/drivers/firmware/efi/libstub/Makefile
index be8b8c6e8b40a17d..80d85a5169fb2c72 100644
--- a/drivers/firmware/efi/libstub/Makefile
+++ b/drivers/firmware/efi/libstub/Makefile
@@ -87,7 +87,7 @@ lib-$(CONFIG_EFI_GENERIC_STUB)	+= efi-stub.o string.o intrinsics.o systable.o \
 				   screen_info.o efi-stub-entry.o
 
 lib-$(CONFIG_ARM)		+= arm32-stub.o
-lib-$(CONFIG_ARM64)		+= arm64.o arm64-stub.o arm64-entry.o smbios.o
+lib-$(CONFIG_ARM64)		+= arm64.o arm64-stub.o smbios.o
 lib-$(CONFIG_X86)		+= x86-stub.o
 lib-$(CONFIG_RISCV)		+= riscv.o riscv-stub.o
 lib-$(CONFIG_LOONGARCH)		+= loongarch.o loongarch-stub.o
@@ -141,7 +141,7 @@ STUBCOPY_RELOC-$(CONFIG_ARM)	:= R_ARM_ABS
 #
 STUBCOPY_FLAGS-$(CONFIG_ARM64)	+= --prefix-alloc-sections=.init \
 				   --prefix-symbols=__efistub_
-STUBCOPY_RELOC-$(CONFIG_ARM64)	:= R_AARCH64_ABS64
+STUBCOPY_RELOC-$(CONFIG_ARM64)	:= R_AARCH64_ABS
 
 # For RISC-V, we don't need anything special other than arm64. Keep all the
 # symbols in .init section and make sure that no absolute symbols references
diff --git a/drivers/firmware/efi/libstub/arm64-entry.S b/drivers/firmware/efi/libstub/arm64-entry.S
deleted file mode 100644
index b5c17e89a4fc0c21..0000000000000000
--- a/drivers/firmware/efi/libstub/arm64-entry.S
+++ /dev/null
@@ -1,67 +0,0 @@
-/* SPDX-License-Identifier: GPL-2.0-only */
-/*
- * EFI entry point.
- *
- * Copyright (C) 2013, 2014 Red Hat, Inc.
- * Author: Mark Salter
- */
-#include <linux/linkage.h>
-#include <asm/assembler.h>
-
-	/*
-	 * The entrypoint of a arm64 bare metal image is at offset #0 of the
-	 * image, so this is a reasonable default for primary_entry_offset.
-	 * Only when the EFI stub is integrated into the core kernel, it is not
-	 * guaranteed that the PE/COFF header has been copied to memory too, so
-	 * in this case, primary_entry_offset should be overridden by the
-	 * linker and point to primary_entry() directly.
-	 */
-	.weak	primary_entry_offset
-
-SYM_CODE_START(efi_enter_kernel)
-	/*
-	 * efi_pe_entry() will have copied the kernel image if necessary and we
-	 * end up here with device tree address in x1 and the kernel entry
-	 * point stored in x0. Save those values in registers which are
-	 * callee preserved.
-	 */
-	ldr	w2, =primary_entry_offset
-	add	x19, x0, x2		// relocated Image entrypoint
-
-	mov	x0, x1			// DTB address
-	mov	x1, xzr
-	mov	x2, xzr
-	mov	x3, xzr
-
-	/*
-	 * Clean the remainder of this routine to the PoC
-	 * so that we can safely disable the MMU and caches.
-	 */
-	adr	x4, 1f
-	dc	civac, x4
-	dsb	sy
-
-	/* Turn off Dcache and MMU */
-	mrs	x4, CurrentEL
-	cmp	x4, #CurrentEL_EL2
-	mrs	x4, sctlr_el1
-	b.ne	0f
-	mrs	x4, sctlr_el2
-0:	bic	x4, x4, #SCTLR_ELx_M
-	bic	x4, x4, #SCTLR_ELx_C
-	b.eq	1f
-	b	2f
-
-	.balign	32
-1:	pre_disable_mmu_workaround
-	msr	sctlr_el2, x4
-	isb
-	br	x19			// jump to kernel entrypoint
-
-2:	pre_disable_mmu_workaround
-	msr	sctlr_el1, x4
-	isb
-	br	x19			// jump to kernel entrypoint
-
-	.org	1b + 32
-SYM_CODE_END(efi_enter_kernel)
diff --git a/drivers/firmware/efi/libstub/arm64-stub.c b/drivers/firmware/efi/libstub/arm64-stub.c
index 7327b98d8e3fe961..d4a6b12a87413024 100644
--- a/drivers/firmware/efi/libstub/arm64-stub.c
+++ b/drivers/firmware/efi/libstub/arm64-stub.c
@@ -58,7 +58,7 @@ efi_status_t handle_kernel_image(unsigned long *image_addr,
 			     efi_handle_t image_handle)
 {
 	efi_status_t status;
-	unsigned long kernel_size, kernel_memsize = 0;
+	unsigned long kernel_size, kernel_codesize, kernel_memsize;
 	u32 phys_seed = 0;
 	u64 min_kimg_align = efi_get_kimg_min_align();
 
@@ -93,6 +93,7 @@ efi_status_t handle_kernel_image(unsigned long *image_addr,
 			SEGMENT_ALIGN >> 10);
 
 	kernel_size = _edata - _text;
+	kernel_codesize = __inittext_end - _text;
 	kernel_memsize = kernel_size + (_end - _edata);
 	*reserve_size = kernel_memsize;
 
@@ -121,7 +122,7 @@ efi_status_t handle_kernel_image(unsigned long *image_addr,
 		 */
 		*image_addr = (u64)_text;
 		*reserve_size = 0;
-		goto clean_image_to_poc;
+		return EFI_SUCCESS;
 	}
 
 	status = efi_allocate_pages_aligned(*reserve_size, reserve_addr,
@@ -137,14 +138,21 @@ efi_status_t handle_kernel_image(unsigned long *image_addr,
 
 	*image_addr = *reserve_addr;
 	memcpy((void *)*image_addr, _text, kernel_size);
+	caches_clean_inval_pou(*image_addr, *image_addr + kernel_codesize);
 
-clean_image_to_poc:
+	return EFI_SUCCESS;
+}
+
+asmlinkage void primary_entry(void);
+
+unsigned long primary_entry_offset(void)
+{
 	/*
-	 * Clean the copied Image to the PoC, and ensure it is not shadowed by
-	 * stale icache entries from before relocation.
+	 * When built as part of the kernel, the EFI stub cannot branch to the
+	 * kernel proper via the image header, as the PE/COFF header is
+	 * strictly not part of the in-memory presentation of the image, only
+	 * of the file representation. So instead, we need to jump to the
+	 * actual entrypoint in the .text region of the image.
 	 */
-	dcache_clean_poc(*image_addr, *image_addr + kernel_size);
-	asm("ic ialluis");
-
-	return EFI_SUCCESS;
+	return (char *)primary_entry - _text;
 }
diff --git a/drivers/firmware/efi/libstub/arm64.c b/drivers/firmware/efi/libstub/arm64.c
index ff2d18c42ee74979..f5da4fbccd860ab1 100644
--- a/drivers/firmware/efi/libstub/arm64.c
+++ b/drivers/firmware/efi/libstub/arm64.c
@@ -56,6 +56,12 @@ efi_status_t check_platform_features(void)
 	return EFI_SUCCESS;
 }
 
+#ifdef CONFIG_ARM64_WORKAROUND_CLEAN_CACHE
+#define DCTYPE	"civac"
+#else
+#define DCTYPE	"cvau"
+#endif
+
 void efi_cache_sync_image(unsigned long image_base,
 			  unsigned long alloc_size,
 			  unsigned long code_size)
@@ -64,13 +70,38 @@ void efi_cache_sync_image(unsigned long image_base,
 	u64 lsize = 4 << cpuid_feature_extract_unsigned_field(ctr,
 						CTR_EL0_DminLine_SHIFT);
 
-	do {
-		asm("dc civac, %0" :: "r"(image_base));
-		image_base += lsize;
-		alloc_size -= lsize;
-	} while (alloc_size >= lsize);
+	/* only perform the cache maintenance if needed for I/D coherency */
+	if (!(ctr & BIT(CTR_EL0_IDC_SHIFT))) {
+		do {
+			asm("dc " DCTYPE ", %0" :: "r"(image_base));
+			image_base += lsize;
+			code_size -= lsize;
+		} while (code_size >= lsize);
+	}
 
 	asm("ic ialluis");
 	dsb(ish);
 	isb();
 }
+
+unsigned long __weak primary_entry_offset(void)
+{
+	/*
+	 * By default, we can invoke the kernel via the branch instruction in
+	 * the image header, so offset #0. This will be overridden by the EFI
+	 * stub build that is linked into the core kernel, as in that case, the
+	 * image header may not have been loaded into memory, or may be mapped
+	 * with non-executable permissions.
+	 */
+	return 0;
+}
+
+void __noreturn efi_enter_kernel(unsigned long entrypoint,
+				 unsigned long fdt_addr,
+				 unsigned long fdt_size)
+{
+	void (* __noreturn enter_kernel)(u64, u64, u64, u64);
+
+	enter_kernel = (void *)entrypoint + primary_entry_offset();
+	enter_kernel(fdt_addr, 0, 0, 0);
+}
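[ Editorial illustration, not part of the series: the __weak override
  pattern used for primary_entry_offset() above, reduced to a
  compilable example. The libstub default returns 0 (enter via the
  branch instruction in the image header); when the stub is linked
  into the core kernel, a strong definition in another object
  supersedes it at link time. ]

#include <stdio.h>

/* weak default, as in libstub/arm64.c */
unsigned long __attribute__((weak)) primary_entry_offset(void)
{
    return 0;
}

int main(void)
{
    /*
     * If another object in the link provides a strong
     * primary_entry_offset(), the linker resolves calls to that
     * definition instead of this weak default.
     */
    printf("entry offset: %#lx\n", primary_entry_offset());
    return 0;
}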