From patchwork Fri Jan 25 18:07:00 2019
X-Patchwork-Submitter: Jeremy Linton
X-Patchwork-Id: 156631
From: Jeremy Linton
To: linux-arm-kernel@lists.infradead.org
Cc: catalin.marinas@arm.com, will.deacon@arm.com, marc.zyngier@arm.com,
    suzuki.poulose@arm.com, dave.martin@arm.com, shankerd@codeaurora.org,
    linux-kernel@vger.kernel.org, ykaukab@suse.de, julien.thierry@arm.com,
    mlangsdo@redhat.com, steven.price@arm.com, stefan.wahren@i2se.com,
    Jeremy Linton, Jonathan Corbet, linux-doc@vger.kernel.org
Subject: [PATCH v4 01/12] Documentation: Document arm64 kpti control
Date: Fri, 25 Jan 2019 12:07:00 -0600
Message-Id: <20190125180711.1970973-2-jeremy.linton@arm.com>
In-Reply-To: <20190125180711.1970973-1-jeremy.linton@arm.com>
References: <20190125180711.1970973-1-jeremy.linton@arm.com>

For a while arm64 has been capable of force-enabling or disabling the
kpti mitigation. Let's make sure the documentation reflects that.

Signed-off-by: Jeremy Linton
Cc: Jonathan Corbet
Cc: linux-doc@vger.kernel.org
---
 Documentation/admin-guide/kernel-parameters.txt | 6 ++++++
 1 file changed, 6 insertions(+)
-- 
2.17.2

Reviewed-by: Andre Przywara

diff --git a/Documentation/admin-guide/kernel-parameters.txt b/Documentation/admin-guide/kernel-parameters.txt
index b799bcf67d7b..9475f02c79da 100644
--- a/Documentation/admin-guide/kernel-parameters.txt
+++ b/Documentation/admin-guide/kernel-parameters.txt
@@ -1982,6 +1982,12 @@
 			Built with CONFIG_DEBUG_KMEMLEAK_DEFAULT_OFF=y,
 			the default is off.
 
+	kpti=		[ARM64] Control page table isolation of user
+			and kernel address spaces.
+			Default: enabled on cores which need mitigation.
+			0: force disabled
+			1: force enabled
+
 	kvm.ignore_msrs=[KVM] Ignore guest accesses to unhandled MSRs.
 			Default is 0 (don't ignore, but inject #GP)

From patchwork Fri Jan 25 18:07:01 2019
X-Patchwork-Submitter: Jeremy Linton
X-Patchwork-Id: 156621
From: Jeremy Linton
To: linux-arm-kernel@lists.infradead.org
Cc: catalin.marinas@arm.com, will.deacon@arm.com, marc.zyngier@arm.com,
    suzuki.poulose@arm.com, dave.martin@arm.com, shankerd@codeaurora.org,
    linux-kernel@vger.kernel.org, ykaukab@suse.de, julien.thierry@arm.com,
    mlangsdo@redhat.com, steven.price@arm.com, stefan.wahren@i2se.com,
    Jeremy Linton, Jonathan Corbet, linux-doc@vger.kernel.org
Subject: [PATCH v4 02/12] arm64: Provide a command line to disable spectre_v2 mitigation
Date: Fri, 25 Jan 2019 12:07:01 -0600
Message-Id: <20190125180711.1970973-3-jeremy.linton@arm.com>
In-Reply-To: <20190125180711.1970973-1-jeremy.linton@arm.com>
References: <20190125180711.1970973-1-jeremy.linton@arm.com>

There are various reasons, including benchmarking, to disable the
Spectre v2 mitigation on a machine. Provide a command-line option to
do so.

Signed-off-by: Jeremy Linton
Cc: Jonathan Corbet
Cc: linux-doc@vger.kernel.org
---
 Documentation/admin-guide/kernel-parameters.txt |  8 ++++----
 arch/arm64/kernel/cpu_errata.c                  | 11 +++++++++++
 2 files changed, 15 insertions(+), 4 deletions(-)
-- 
2.17.2

Reviewed-by: Andre Przywara

diff --git a/Documentation/admin-guide/kernel-parameters.txt b/Documentation/admin-guide/kernel-parameters.txt
index 9475f02c79da..2ae77979488e 100644
--- a/Documentation/admin-guide/kernel-parameters.txt
+++ b/Documentation/admin-guide/kernel-parameters.txt
@@ -2849,10 +2849,10 @@
 			check bypass). With this option data leaks are
 			possible in the system.
 
-	nospectre_v2	[X86,PPC_FSL_BOOK3E] Disable all mitigations for the Spectre variant 2
-			(indirect branch prediction) vulnerability. System may
-			allow data leaks with this option, which is equivalent
-			to spectre_v2=off.
+	nospectre_v2	[X86,PPC_FSL_BOOK3E,ARM64] Disable all mitigations for
+			the Spectre variant 2 (indirect branch prediction)
+			vulnerability. System may allow data leaks with this
+			option.
 	nospec_store_bypass_disable
 			[HW] Disable all mitigations for the Speculative Store
 			Bypass vulnerability

diff --git a/arch/arm64/kernel/cpu_errata.c b/arch/arm64/kernel/cpu_errata.c
index 9950bb0cbd52..9a7b5fca51a0 100644
--- a/arch/arm64/kernel/cpu_errata.c
+++ b/arch/arm64/kernel/cpu_errata.c
@@ -220,6 +220,14 @@ static void qcom_link_stack_sanitization(void)
 		     : "=&r" (tmp));
 }
 
+static bool __nospectre_v2;
+static int __init parse_nospectre_v2(char *str)
+{
+	__nospectre_v2 = true;
+	return 0;
+}
+early_param("nospectre_v2", parse_nospectre_v2);
+
 static void
 enable_smccc_arch_workaround_1(const struct arm64_cpu_capabilities *entry)
 {
@@ -231,6 +239,9 @@ enable_smccc_arch_workaround_1(const struct arm64_cpu_capabilities *entry)
 	if (!entry->matches(entry, SCOPE_LOCAL_CPU))
 		return;
 
+	if (__nospectre_v2)
+		return;
+
 	if (psci_ops.smccc_version == SMCCC_VERSION_1_0)
 		return;
 

From patchwork Fri Jan 25 18:07:02 2019
X-Patchwork-Submitter: Jeremy Linton
X-Patchwork-Id: 156632
From: Jeremy Linton
To: linux-arm-kernel@lists.infradead.org
Cc: catalin.marinas@arm.com, will.deacon@arm.com, marc.zyngier@arm.com,
    suzuki.poulose@arm.com, dave.martin@arm.com, shankerd@codeaurora.org,
    linux-kernel@vger.kernel.org, ykaukab@suse.de, julien.thierry@arm.com,
    mlangsdo@redhat.com, steven.price@arm.com, stefan.wahren@i2se.com,
    Jeremy Linton, Christoffer Dall, kvmarm@lists.cs.columbia.edu
Subject: [PATCH v4 03/12] arm64: Remove the ability to build a kernel without ssbd
Date: Fri, 25 Jan 2019 12:07:02 -0600
Message-Id: <20190125180711.1970973-4-jeremy.linton@arm.com>
In-Reply-To: <20190125180711.1970973-1-jeremy.linton@arm.com>
References: <20190125180711.1970973-1-jeremy.linton@arm.com>

Buried behind EXPERT is the ability to build a kernel without SSBD
support; this needlessly clutters up the code and creates the
opportunity for bugs. It also removes the kernel's ability to determine
whether the machine it's running on is vulnerable. Since it's also
possible to disable the mitigation at boot time, let's remove the
config option.

Signed-off-by: Jeremy Linton
Cc: Christoffer Dall
Cc: kvmarm@lists.cs.columbia.edu
---
 arch/arm64/Kconfig                  | 9 ---------
 arch/arm64/include/asm/cpufeature.h | 8 --------
 arch/arm64/include/asm/kvm_mmu.h    | 7 -------
 arch/arm64/kernel/Makefile          | 3 +--
 arch/arm64/kernel/cpu_errata.c      | 4 ----
 arch/arm64/kernel/cpufeature.c      | 4 ----
 arch/arm64/kernel/entry.S           | 2 --
 arch/arm64/kvm/hyp/hyp-entry.S      | 2 --
 arch/arm64/kvm/hyp/switch.c         | 4 ----
 9 files changed, 1 insertion(+), 42 deletions(-)
-- 
2.17.2

diff --git a/arch/arm64/Kconfig b/arch/arm64/Kconfig
index a4168d366127..0baa632bf0a8 100644
--- a/arch/arm64/Kconfig
+++ b/arch/arm64/Kconfig
@@ -1038,15 +1038,6 @@ config HARDEN_EL2_VECTORS
 
 	  If unsure, say Y.
-config ARM64_SSBD
-	bool "Speculative Store Bypass Disable" if EXPERT
-	default y
-	help
-	  This enables mitigation of the bypassing of previous stores
-	  by speculative loads.
-
-	  If unsure, say Y.
-
 config RODATA_FULL_DEFAULT_ENABLED
 	bool "Apply r/o permissions of VM areas also to their linear aliases"
 	default y

diff --git a/arch/arm64/include/asm/cpufeature.h b/arch/arm64/include/asm/cpufeature.h
index dfcfba725d72..bbed2067a1a4 100644
--- a/arch/arm64/include/asm/cpufeature.h
+++ b/arch/arm64/include/asm/cpufeature.h
@@ -620,19 +620,11 @@ static inline bool system_supports_generic_auth(void)
 
 static inline int arm64_get_ssbd_state(void)
 {
-#ifdef CONFIG_ARM64_SSBD
 	extern int ssbd_state;
 	return ssbd_state;
-#else
-	return ARM64_SSBD_UNKNOWN;
-#endif
 }
 
-#ifdef CONFIG_ARM64_SSBD
 void arm64_set_ssbd_mitigation(bool state);
-#else
-static inline void arm64_set_ssbd_mitigation(bool state) {}
-#endif
 
 extern int do_emulate_mrs(struct pt_regs *regs, u32 sys_reg, u32 rt);

diff --git a/arch/arm64/include/asm/kvm_mmu.h b/arch/arm64/include/asm/kvm_mmu.h
index 8af4b1befa42..a5c152d79820 100644
--- a/arch/arm64/include/asm/kvm_mmu.h
+++ b/arch/arm64/include/asm/kvm_mmu.h
@@ -541,7 +541,6 @@ static inline int kvm_map_vectors(void)
 }
 #endif
 
-#ifdef CONFIG_ARM64_SSBD
 DECLARE_PER_CPU_READ_MOSTLY(u64, arm64_ssbd_callback_required);
 
 static inline int hyp_map_aux_data(void)
@@ -558,12 +557,6 @@ static inline int hyp_map_aux_data(void)
 	}
 	return 0;
 }
-#else
-static inline int hyp_map_aux_data(void)
-{
-	return 0;
-}
-#endif
 
 #define kvm_phys_to_vttbr(addr)		phys_to_ttbr(addr)

diff --git a/arch/arm64/kernel/Makefile b/arch/arm64/kernel/Makefile
index cd434d0719c1..306336a2fa34 100644
--- a/arch/arm64/kernel/Makefile
+++ b/arch/arm64/kernel/Makefile
@@ -19,7 +19,7 @@ obj-y := debug-monitors.o entry.o irq.o fpsimd.o	\
 			   return_address.o cpuinfo.o cpu_errata.o	\
 			   cpufeature.o alternative.o cacheinfo.o	\
 			   smp.o smp_spin_table.o topology.o smccc-call.o \
-			   syscall.o
+			   syscall.o ssbd.o
 		arm64_set_ssbd_mitigation(true);
 	}
 }
-#endif	/* CONFIG_ARM64_SSBD */
 
 #ifdef CONFIG_ARM64_PAN
 static void cpu_enable_pan(const struct arm64_cpu_capabilities *__unused)
@@ -1400,7 +1398,6 @@ static const struct arm64_cpu_capabilities arm64_features[] = {
 		.field_pos = ID_AA64ISAR0_CRC32_SHIFT,
 		.min_field_value = 1,
 	},
-#ifdef CONFIG_ARM64_SSBD
 	{
 		.desc = "Speculative Store Bypassing Safe (SSBS)",
 		.capability = ARM64_SSBS,
@@ -1412,7 +1409,6 @@ static const struct arm64_cpu_capabilities arm64_features[] = {
 		.min_field_value = ID_AA64PFR1_SSBS_PSTATE_ONLY,
 		.cpu_enable = cpu_enable_ssbs,
 	},
-#endif
 #ifdef CONFIG_ARM64_CNP
 	{
 		.desc = "Common not Private translations",

diff --git a/arch/arm64/kernel/entry.S b/arch/arm64/kernel/entry.S
index 0ec0c46b2c0c..bee54b7d17b9 100644
--- a/arch/arm64/kernel/entry.S
+++ b/arch/arm64/kernel/entry.S
@@ -137,7 +137,6 @@ alternative_else_nop_endif
 	// This macro corrupts x0-x3. It is the caller's duty
 	// to save/restore them if required.
 	.macro	apply_ssbd, state, tmp1, tmp2
-#ifdef CONFIG_ARM64_SSBD
 alternative_cb	arm64_enable_wa2_handling
 	b	.L__asm_ssbd_skip\@
 alternative_cb_end
@@ -151,7 +150,6 @@ alternative_cb	arm64_update_smccc_conduit
 	nop					// Patched to SMC/HVC #0
 alternative_cb_end
 .L__asm_ssbd_skip\@:
-#endif
 	.endm
 
 	.macro	kernel_entry, el, regsize = 64

diff --git a/arch/arm64/kvm/hyp/hyp-entry.S b/arch/arm64/kvm/hyp/hyp-entry.S
index 73c1b483ec39..53c9344968d4 100644
--- a/arch/arm64/kvm/hyp/hyp-entry.S
+++ b/arch/arm64/kvm/hyp/hyp-entry.S
@@ -114,7 +114,6 @@ el1_hvc_guest:
 			  ARM_SMCCC_ARCH_WORKAROUND_2)
 	cbnz	w1, el1_trap
 
-#ifdef CONFIG_ARM64_SSBD
 alternative_cb	arm64_enable_wa2_handling
 	b	wa2_end
 alternative_cb_end
@@ -141,7 +140,6 @@ alternative_cb_end
 wa2_end:
 	mov	x2, xzr
 	mov	x1, xzr
-#endif
 
 wa_epilogue:
 	mov	x0, xzr

diff --git a/arch/arm64/kvm/hyp/switch.c b/arch/arm64/kvm/hyp/switch.c
index b0b1478094b4..9ce43ae6fc13 100644
--- a/arch/arm64/kvm/hyp/switch.c
+++ b/arch/arm64/kvm/hyp/switch.c
@@ -436,7 +436,6 @@ static inline bool __hyp_text __needs_ssbd_off(struct kvm_vcpu *vcpu)
 
 static void __hyp_text __set_guest_arch_workaround_state(struct kvm_vcpu *vcpu)
 {
-#ifdef CONFIG_ARM64_SSBD
 	/*
 	 * The host runs with the workaround always present. If the
 	 * guest wants it disabled, so be it...
@@ -444,19 +443,16 @@ static void __hyp_text __set_guest_arch_workaround_state(struct kvm_vcpu *vcpu)
 	if (__needs_ssbd_off(vcpu) &&
 	    __hyp_this_cpu_read(arm64_ssbd_callback_required))
 		arm_smccc_1_1_smc(ARM_SMCCC_ARCH_WORKAROUND_2, 0, NULL);
-#endif
 }
 
 static void __hyp_text __set_host_arch_workaround_state(struct kvm_vcpu *vcpu)
 {
-#ifdef CONFIG_ARM64_SSBD
 	/*
 	 * If the guest has disabled the workaround, bring it back on.
 	 */
 	if (__needs_ssbd_off(vcpu) &&
 	    __hyp_this_cpu_read(arm64_ssbd_callback_required))
 		arm_smccc_1_1_smc(ARM_SMCCC_ARCH_WORKAROUND_2, 1, NULL);
-#endif
 }
 
 /* Switch to the guest for VHE systems running in EL2 */

From patchwork Fri Jan 25 18:07:03 2019
X-Patchwork-Submitter: Jeremy Linton
X-Patchwork-Id: 156622
From: Jeremy Linton
To: linux-arm-kernel@lists.infradead.org
Cc: catalin.marinas@arm.com, will.deacon@arm.com, marc.zyngier@arm.com,
    suzuki.poulose@arm.com, dave.martin@arm.com, shankerd@codeaurora.org,
    linux-kernel@vger.kernel.org, ykaukab@suse.de, julien.thierry@arm.com,
    mlangsdo@redhat.com, steven.price@arm.com, stefan.wahren@i2se.com,
    Jeremy Linton, Christoffer Dall, kvmarm@lists.cs.columbia.edu
Subject: [PATCH v4 04/12] arm64: remove the ability to build a kernel without hardened branch predictors
Date: Fri, 25 Jan 2019 12:07:03 -0600
Message-Id: <20190125180711.1970973-5-jeremy.linton@arm.com>
In-Reply-To: <20190125180711.1970973-1-jeremy.linton@arm.com>
References: <20190125180711.1970973-1-jeremy.linton@arm.com>

Buried behind EXPERT is the ability to build a kernel without hardened
branch predictors; this needlessly clutters up the code and creates the
opportunity for bugs. It also removes the kernel's ability to determine
whether the machine it's running on is vulnerable. Since it's also
possible to disable the mitigation at boot time, let's remove the
config option.

Signed-off-by: Jeremy Linton
Cc: Christoffer Dall
Cc: kvmarm@lists.cs.columbia.edu
---
 arch/arm64/Kconfig               | 17 -----------------
 arch/arm64/include/asm/kvm_mmu.h | 12 ------------
 arch/arm64/include/asm/mmu.h     | 12 ------------
 arch/arm64/kernel/cpu_errata.c   | 19 -------------------
 arch/arm64/kernel/entry.S        |  2 --
 arch/arm64/kvm/Kconfig           |  3 ---
 arch/arm64/kvm/hyp/hyp-entry.S   |  2 --
 7 files changed, 67 deletions(-)
-- 
2.17.2

diff --git a/arch/arm64/Kconfig b/arch/arm64/Kconfig
index 0baa632bf0a8..6b4c6d3fdf4d 100644
--- a/arch/arm64/Kconfig
+++ b/arch/arm64/Kconfig
@@ -1005,23 +1005,6 @@ config UNMAP_KERNEL_AT_EL0
 
 	  If unsure, say Y.
-config HARDEN_BRANCH_PREDICTOR
-	bool "Harden the branch predictor against aliasing attacks" if EXPERT
-	default y
-	help
-	  Speculation attacks against some high-performance processors rely on
-	  being able to manipulate the branch predictor for a victim context by
-	  executing aliasing branches in the attacker context. Such attacks
-	  can be partially mitigated against by clearing internal branch
-	  predictor state and limiting the prediction logic in some situations.
-
-	  This config option will take CPU-specific actions to harden the
-	  branch predictor against aliasing attacks and may rely on specific
-	  instruction sequences or control bits being set by the system
-	  firmware.
-
-	  If unsure, say Y.
-
 config HARDEN_EL2_VECTORS
 	bool "Harden EL2 vector mapping against system register leak" if EXPERT
 	default y

diff --git a/arch/arm64/include/asm/kvm_mmu.h b/arch/arm64/include/asm/kvm_mmu.h
index a5c152d79820..9dd680194db9 100644
--- a/arch/arm64/include/asm/kvm_mmu.h
+++ b/arch/arm64/include/asm/kvm_mmu.h
@@ -444,7 +444,6 @@ static inline int kvm_read_guest_lock(struct kvm *kvm,
 	return ret;
 }
 
-#ifdef CONFIG_KVM_INDIRECT_VECTORS
 /*
  * EL2 vectors can be mapped and rerouted in a number of ways,
  * depending on the kernel configuration and CPU present:
@@ -529,17 +528,6 @@ static inline int kvm_map_vectors(void)
 	return 0;
 }
-#else
-static inline void *kvm_get_hyp_vector(void)
-{
-	return kern_hyp_va(kvm_ksym_ref(__kvm_hyp_vector));
-}
-
-static inline int kvm_map_vectors(void)
-{
-	return 0;
-}
-#endif
 
 DECLARE_PER_CPU_READ_MOSTLY(u64, arm64_ssbd_callback_required);

diff --git a/arch/arm64/include/asm/mmu.h b/arch/arm64/include/asm/mmu.h
index 3e8063f4f9d3..20fdf71f96c3 100644
--- a/arch/arm64/include/asm/mmu.h
+++ b/arch/arm64/include/asm/mmu.h
@@ -95,13 +95,9 @@ struct bp_hardening_data {
 	bp_hardening_cb_t	fn;
 };
 
-#if (defined(CONFIG_HARDEN_BRANCH_PREDICTOR) ||	\
-     defined(CONFIG_HARDEN_EL2_VECTORS))
 extern char __bp_harden_hyp_vecs_start[], __bp_harden_hyp_vecs_end[];
 extern atomic_t arm64_el2_vector_last_slot;
-#endif	/* CONFIG_HARDEN_BRANCH_PREDICTOR || CONFIG_HARDEN_EL2_VECTORS */
 
-#ifdef CONFIG_HARDEN_BRANCH_PREDICTOR
 DECLARE_PER_CPU_READ_MOSTLY(struct bp_hardening_data, bp_hardening_data);
 
 static inline struct bp_hardening_data *arm64_get_bp_hardening_data(void)
@@ -120,14 +116,6 @@ static inline void arm64_apply_bp_hardening(void)
 	if (d->fn)
 		d->fn();
 }
-#else
-static inline struct bp_hardening_data *arm64_get_bp_hardening_data(void)
-{
-	return NULL;
-}
-
-static inline void arm64_apply_bp_hardening(void)	{ }
-#endif	/* CONFIG_HARDEN_BRANCH_PREDICTOR */
 
 extern void paging_init(void);
 extern void bootmem_init(void);

diff --git a/arch/arm64/kernel/cpu_errata.c b/arch/arm64/kernel/cpu_errata.c
index 934d50788ca3..de09a3537cd4 100644
--- a/arch/arm64/kernel/cpu_errata.c
+++ b/arch/arm64/kernel/cpu_errata.c
@@ -109,13 +109,11 @@ cpu_enable_trap_ctr_access(const struct arm64_cpu_capabilities *__unused)
 
 atomic_t arm64_el2_vector_last_slot = ATOMIC_INIT(-1);
 
-#ifdef CONFIG_HARDEN_BRANCH_PREDICTOR
 #include <linux/arm-smccc.h>
 #include <linux/psci.h>
 
 DEFINE_PER_CPU_READ_MOSTLY(struct bp_hardening_data, bp_hardening_data);
 
-#ifdef CONFIG_KVM_INDIRECT_VECTORS
 extern char __smccc_workaround_1_smc_start[];
 extern char __smccc_workaround_1_smc_end[];
 
@@ -165,17 +163,6 @@ static void __install_bp_hardening_cb(bp_hardening_cb_t fn,
 	__this_cpu_write(bp_hardening_data.fn, fn);
 	raw_spin_unlock(&bp_lock);
 }
-#else
-#define __smccc_workaround_1_smc_start		NULL
-#define __smccc_workaround_1_smc_end		NULL
-
-static void __install_bp_hardening_cb(bp_hardening_cb_t fn,
-				      const char *hyp_vecs_start,
-				      const char *hyp_vecs_end)
-{
-	__this_cpu_write(bp_hardening_data.fn, fn);
-}
-#endif	/* CONFIG_KVM_INDIRECT_VECTORS */
 
 static void install_bp_hardening_cb(const struct arm64_cpu_capabilities *entry,
 				    bp_hardening_cb_t fn,
@@ -279,7 +266,6 @@ enable_smccc_arch_workaround_1(const struct arm64_cpu_capabilities *entry)
 
 	return;
 }
-#endif	/* CONFIG_HARDEN_BRANCH_PREDICTOR */
 
 DEFINE_PER_CPU_READ_MOSTLY(u64, arm64_ssbd_callback_required);
 
@@ -516,7 +502,6 @@ cpu_enable_cache_maint_trap(const struct arm64_cpu_capabilities *__unused)
 	.type = ARM64_CPUCAP_LOCAL_CPU_ERRATUM,			\
 	CAP_MIDR_RANGE_LIST(midr_list)
 
-#ifdef CONFIG_HARDEN_BRANCH_PREDICTOR
 
 /*
  * List of CPUs where we need to issue a psci call to
@@ -535,8 +520,6 @@ static const struct midr_range arm64_bp_harden_smccc_cpus[] = {
 	{},
 };
 
-#endif
-
 #ifdef CONFIG_HARDEN_EL2_VECTORS
 
 static const struct midr_range arm64_harden_el2_vectors[] = {
@@ -710,13 +693,11 @@ const struct arm64_cpu_capabilities arm64_errata[] = {
 		ERRATA_MIDR_ALL_VERSIONS(MIDR_CORTEX_A73),
 	},
 #endif
-#ifdef CONFIG_HARDEN_BRANCH_PREDICTOR
 	{
 		.capability = ARM64_HARDEN_BRANCH_PREDICTOR,
 		.cpu_enable = enable_smccc_arch_workaround_1,
 		ERRATA_MIDR_RANGE_LIST(arm64_bp_harden_smccc_cpus),
 	},
-#endif
 #ifdef CONFIG_HARDEN_EL2_VECTORS
 	{
 		.desc = "EL2 vector hardening",

diff --git a/arch/arm64/kernel/entry.S b/arch/arm64/kernel/entry.S
index bee54b7d17b9..3f0eaaf704c8 100644
--- a/arch/arm64/kernel/entry.S
+++ b/arch/arm64/kernel/entry.S
@@ -842,11 +842,9 @@ el0_irq_naked:
 #endif
 
 	ct_user_exit
-#ifdef CONFIG_HARDEN_BRANCH_PREDICTOR
 	tbz	x22, #55, 1f
 	bl	do_el0_irq_bp_hardening
 1:
-#endif
 	irq_handler
 
 #ifdef CONFIG_TRACE_IRQFLAGS

diff --git a/arch/arm64/kvm/Kconfig b/arch/arm64/kvm/Kconfig
index a3f85624313e..402bcfb85f25 100644
--- a/arch/arm64/kvm/Kconfig
+++ b/arch/arm64/kvm/Kconfig
@@ -58,9 +58,6 @@ config KVM_ARM_PMU
 	  Adds support for a virtual Performance Monitoring Unit (PMU) in
 	  virtual machines.
-config KVM_INDIRECT_VECTORS
-       def_bool KVM && (HARDEN_BRANCH_PREDICTOR || HARDEN_EL2_VECTORS)
-
 source "drivers/vhost/Kconfig"

 endif # VIRTUALIZATION
diff --git a/arch/arm64/kvm/hyp/hyp-entry.S b/arch/arm64/kvm/hyp/hyp-entry.S
index 53c9344968d4..e02ddf40f113 100644
--- a/arch/arm64/kvm/hyp/hyp-entry.S
+++ b/arch/arm64/kvm/hyp/hyp-entry.S
@@ -272,7 +272,6 @@ ENTRY(__kvm_hyp_vector)
 	valid_vect	el1_error		// Error 32-bit EL1
 ENDPROC(__kvm_hyp_vector)

-#ifdef CONFIG_KVM_INDIRECT_VECTORS
 .macro hyp_ventry
 	.align 7
 1:	.rept 27
@@ -331,4 +330,3 @@ ENTRY(__smccc_workaround_1_smc_start)
 	ldp	x0, x1, [sp, #(8 * 2)]
 	add	sp, sp, #(8 * 4)
 ENTRY(__smccc_workaround_1_smc_end)
-#endif

From patchwork Fri Jan 25 18:07:04 2019
X-Patchwork-Submitter: Jeremy Linton
X-Patchwork-Id: 156629
From: Jeremy Linton
To: linux-arm-kernel@lists.infradead.org
Cc: catalin.marinas@arm.com, will.deacon@arm.com, marc.zyngier@arm.com, suzuki.poulose@arm.com,
    dave.martin@arm.com, shankerd@codeaurora.org, linux-kernel@vger.kernel.org,
    ykaukab@suse.de, julien.thierry@arm.com, mlangsdo@redhat.com,
    steven.price@arm.com, stefan.wahren@i2se.com, Jeremy Linton
Subject: [PATCH v4 05/12] arm64: remove the ability to build a kernel without kpti
Date: Fri, 25 Jan 2019 12:07:04 -0600
Message-Id: <20190125180711.1970973-6-jeremy.linton@arm.com>
In-Reply-To: <20190125180711.1970973-1-jeremy.linton@arm.com>
References: <20190125180711.1970973-1-jeremy.linton@arm.com>

Buried behind EXPERT is the ability to build a kernel without hardened
branch predictors. This needlessly clutters up the code and creates the
opportunity for bugs. It also removes the kernel's ability to determine
whether the machine it is running on is vulnerable. Since it is also
possible to disable the mitigation at boot time, let's remove the config
option.

Signed-off-by: Jeremy Linton
---
 arch/arm64/Kconfig              | 12 ------------
 arch/arm64/include/asm/fixmap.h |  2 --
 arch/arm64/include/asm/mmu.h    |  7 +------
 arch/arm64/include/asm/sdei.h   |  2 +-
 arch/arm64/kernel/asm-offsets.c |  2 --
 arch/arm64/kernel/cpufeature.c  |  4 ----
 arch/arm64/kernel/entry.S       | 11 +----------
 arch/arm64/kernel/sdei.c        |  2 --
 arch/arm64/kernel/vmlinux.lds.S |  8 --------
 arch/arm64/mm/context.c         |  6 ------
 arch/arm64/mm/mmu.c             |  2 --
 arch/arm64/mm/proc.S            |  2 --
 12 files changed, 3 insertions(+), 57 deletions(-)

diff --git a/arch/arm64/Kconfig b/arch/arm64/Kconfig
index 6b4c6d3fdf4d..09a85410d814 100644
--- a/arch/arm64/Kconfig
+++ b/arch/arm64/Kconfig
@@ -993,18 +993,6 @@ config FORCE_MAX_ZONEORDER
 	  However for 4K, we choose a higher default value, 11 as opposed to 10,
 	  giving us 4M allocations matching the default size used by generic code.
-config UNMAP_KERNEL_AT_EL0
-	bool "Unmap kernel when running in userspace (aka \"KAISER\")" if EXPERT
-	default y
-	help
-	  Speculation attacks against some high-performance processors can
-	  be used to bypass MMU permission checks and leak kernel data to
-	  userspace. This can be defended against by unmapping the kernel
-	  when running in userspace, mapping it back in on exception entry
-	  via a trampoline page in the vector table.
-
-	  If unsure, say Y.
-
 config HARDEN_EL2_VECTORS
 	bool "Harden EL2 vector mapping against system register leak" if EXPERT
 	default y
diff --git a/arch/arm64/include/asm/fixmap.h b/arch/arm64/include/asm/fixmap.h
index ec1e6d6fa14c..62371f07d4ce 100644
--- a/arch/arm64/include/asm/fixmap.h
+++ b/arch/arm64/include/asm/fixmap.h
@@ -58,11 +58,9 @@ enum fixed_addresses {
 	FIX_APEI_GHES_NMI,
 #endif /* CONFIG_ACPI_APEI_GHES */

-#ifdef CONFIG_UNMAP_KERNEL_AT_EL0
 	FIX_ENTRY_TRAMP_DATA,
 	FIX_ENTRY_TRAMP_TEXT,
 #define TRAMP_VALIAS		(__fix_to_virt(FIX_ENTRY_TRAMP_TEXT))
-#endif /* CONFIG_UNMAP_KERNEL_AT_EL0 */
 	__end_of_permanent_fixed_addresses,

 	/*
diff --git a/arch/arm64/include/asm/mmu.h b/arch/arm64/include/asm/mmu.h
index 20fdf71f96c3..9d689661471c 100644
--- a/arch/arm64/include/asm/mmu.h
+++ b/arch/arm64/include/asm/mmu.h
@@ -42,18 +42,13 @@ typedef struct {

 static inline bool arm64_kernel_unmapped_at_el0(void)
 {
-	return IS_ENABLED(CONFIG_UNMAP_KERNEL_AT_EL0) &&
-	       cpus_have_const_cap(ARM64_UNMAP_KERNEL_AT_EL0);
+	return cpus_have_const_cap(ARM64_UNMAP_KERNEL_AT_EL0);
 }

 static inline bool arm64_kernel_use_ng_mappings(void)
 {
 	bool tx1_bug;

-	/* What's a kpti? Use global mappings if we don't know. */
-	if (!IS_ENABLED(CONFIG_UNMAP_KERNEL_AT_EL0))
-		return false;
-
 	/*
 	 * Note: this function is called before the CPU capabilities have
 	 * been configured, so our early mappings will be global. If we
diff --git a/arch/arm64/include/asm/sdei.h b/arch/arm64/include/asm/sdei.h
index ffe47d766c25..82c3e9b6a4b0 100644
--- a/arch/arm64/include/asm/sdei.h
+++ b/arch/arm64/include/asm/sdei.h
@@ -23,7 +23,7 @@ extern unsigned long sdei_exit_mode;
 asmlinkage void __sdei_asm_handler(unsigned long event_num, unsigned long arg,
 				   unsigned long pc, unsigned long pstate);

-/* and its CONFIG_UNMAP_KERNEL_AT_EL0 trampoline */
+/* and its unmap kernel at el0 trampoline */
 asmlinkage void __sdei_asm_entry_trampoline(unsigned long event_num,
 					    unsigned long arg,
 					    unsigned long pc,
diff --git a/arch/arm64/kernel/asm-offsets.c b/arch/arm64/kernel/asm-offsets.c
index 65b8afc84466..6a6f83de91b8 100644
--- a/arch/arm64/kernel/asm-offsets.c
+++ b/arch/arm64/kernel/asm-offsets.c
@@ -165,9 +165,7 @@ int main(void)
   DEFINE(HIBERN_PBE_NEXT,	offsetof(struct pbe, next));
   DEFINE(ARM64_FTR_SYSVAL,	offsetof(struct arm64_ftr_reg, sys_val));
   BLANK();
-#ifdef CONFIG_UNMAP_KERNEL_AT_EL0
   DEFINE(TRAMP_VALIAS,		TRAMP_VALIAS);
-#endif
 #ifdef CONFIG_ARM_SDE_INTERFACE
   DEFINE(SDEI_EVENT_INTREGS,	offsetof(struct sdei_registered_event, interrupted_regs));
   DEFINE(SDEI_EVENT_PRIORITY,	offsetof(struct sdei_registered_event, priority));
diff --git a/arch/arm64/kernel/cpufeature.c b/arch/arm64/kernel/cpufeature.c
index d1a7fd7972f9..a9e18b9cdc1e 100644
--- a/arch/arm64/kernel/cpufeature.c
+++ b/arch/arm64/kernel/cpufeature.c
@@ -944,7 +944,6 @@ has_useable_cnp(const struct arm64_cpu_capabilities *entry, int scope)
 	return has_cpuid_feature(entry, scope);
 }

-#ifdef CONFIG_UNMAP_KERNEL_AT_EL0
 static int __kpti_forced; /* 0: not forced, >0: forced on, <0: forced off */

 static bool unmap_kernel_at_el0(const struct arm64_cpu_capabilities *entry,
@@ -1035,7 +1034,6 @@ static int __init parse_kpti(char *str)
 	return 0;
 }
 early_param("kpti", parse_kpti);
-#endif	/* CONFIG_UNMAP_KERNEL_AT_EL0 */

 #ifdef CONFIG_ARM64_HW_AFDBM
 static inline void __cpu_enable_hw_dbm(void)
@@ -1284,7 +1282,6 @@ static const struct arm64_cpu_capabilities arm64_features[] = {
 		.field_pos = ID_AA64PFR0_EL0_SHIFT,
 		.min_field_value = ID_AA64PFR0_EL0_32BIT_64BIT,
 	},
-#ifdef CONFIG_UNMAP_KERNEL_AT_EL0
 	{
 		.desc = "Kernel page table isolation (KPTI)",
 		.capability = ARM64_UNMAP_KERNEL_AT_EL0,
@@ -1300,7 +1297,6 @@ static const struct arm64_cpu_capabilities arm64_features[] = {
 		.matches = unmap_kernel_at_el0,
 		.cpu_enable = kpti_install_ng_mappings,
 	},
-#endif
 	{
 		/* FP/SIMD is not implemented */
 		.capability = ARM64_HAS_NO_FPSIMD,
diff --git a/arch/arm64/kernel/entry.S b/arch/arm64/kernel/entry.S
index 3f0eaaf704c8..1d8efc144b04 100644
--- a/arch/arm64/kernel/entry.S
+++ b/arch/arm64/kernel/entry.S
@@ -70,7 +70,6 @@

 	.macro kernel_ventry, el, label, regsize = 64
 	.align 7
-#ifdef CONFIG_UNMAP_KERNEL_AT_EL0
 alternative_if ARM64_UNMAP_KERNEL_AT_EL0
 	.if	\el == 0
 	.if	\regsize == 64
@@ -81,7 +80,6 @@ alternative_if ARM64_UNMAP_KERNEL_AT_EL0
 	.endif
 	.endif
 alternative_else_nop_endif
-#endif

 	sub	sp, sp, #S_FRAME_SIZE
 #ifdef CONFIG_VMAP_STACK
@@ -345,7 +343,6 @@ alternative_else_nop_endif

 	.if	\el == 0
 alternative_insn eret, nop, ARM64_UNMAP_KERNEL_AT_EL0
-#ifdef CONFIG_UNMAP_KERNEL_AT_EL0
 	bne	4f
 	msr	far_el1, x30
 	tramp_alias	x30, tramp_exit_native
@@ -353,7 +350,7 @@ alternative_insn eret, nop, ARM64_UNMAP_KERNEL_AT_EL0
 4:
 	tramp_alias	x30, tramp_exit_compat
 	br	x30
-#endif
+
 	.else
 	eret
 	.endif
@@ -913,7 +910,6 @@ ENDPROC(el0_svc)

 	.popsection				// .entry.text

-#ifdef CONFIG_UNMAP_KERNEL_AT_EL0
 /*
  * Exception vectors trampoline.
  */
@@ -1023,7 +1019,6 @@ __entry_tramp_data_start:
 	.quad	vectors
 	.popsection				// .rodata
 #endif /* CONFIG_RANDOMIZE_BASE */
-#endif /* CONFIG_UNMAP_KERNEL_AT_EL0 */

 /*
  * Register switch for AArch64. The callee-saved registers need to be saved
@@ -1086,7 +1081,6 @@ NOKPROBE(ret_from_fork)
 	b	.
 .endm

-#ifdef CONFIG_UNMAP_KERNEL_AT_EL0
 /*
  * The regular SDEI entry point may have been unmapped along with the rest of
  * the kernel. This trampoline restores the kernel mapping to make the x1 memory
@@ -1146,7 +1140,6 @@ __sdei_asm_trampoline_next_handler:
 	.quad	__sdei_asm_handler
 	.popsection		// .rodata
 #endif /* CONFIG_RANDOMIZE_BASE */
-#endif /* CONFIG_UNMAP_KERNEL_AT_EL0 */

 /*
  * Software Delegated Exception entry point.
@@ -1240,10 +1233,8 @@ alternative_if_not ARM64_UNMAP_KERNEL_AT_EL0
 	sdei_handler_exit exit_mode=x2
 alternative_else_nop_endif

-#ifdef CONFIG_UNMAP_KERNEL_AT_EL0
 	tramp_alias	dst=x5, sym=__sdei_asm_exit_trampoline
 	br	x5
-#endif
 ENDPROC(__sdei_asm_handler)
 NOKPROBE(__sdei_asm_handler)
 #endif /* CONFIG_ARM_SDE_INTERFACE */
diff --git a/arch/arm64/kernel/sdei.c b/arch/arm64/kernel/sdei.c
index 5ba4465e44f0..a0dbdb962019 100644
--- a/arch/arm64/kernel/sdei.c
+++ b/arch/arm64/kernel/sdei.c
@@ -157,7 +157,6 @@ unsigned long sdei_arch_get_entry_point(int conduit)

 	sdei_exit_mode = (conduit == CONDUIT_HVC) ? SDEI_EXIT_HVC : SDEI_EXIT_SMC;

-#ifdef CONFIG_UNMAP_KERNEL_AT_EL0
 	if (arm64_kernel_unmapped_at_el0()) {
 		unsigned long offset;

@@ -165,7 +164,6 @@ unsigned long sdei_arch_get_entry_point(int conduit)
 			 (unsigned long)__entry_tramp_text_start;
 		return TRAMP_VALIAS + offset;
 	} else
-#endif /* CONFIG_UNMAP_KERNEL_AT_EL0 */
 		return (unsigned long)__sdei_asm_handler;
 }

diff --git a/arch/arm64/kernel/vmlinux.lds.S b/arch/arm64/kernel/vmlinux.lds.S
index 7fa008374907..a4dbee11bcb5 100644
--- a/arch/arm64/kernel/vmlinux.lds.S
+++ b/arch/arm64/kernel/vmlinux.lds.S
@@ -57,16 +57,12 @@ jiffies = jiffies_64;
 #define HIBERNATE_TEXT
 #endif

-#ifdef CONFIG_UNMAP_KERNEL_AT_EL0
 #define TRAMP_TEXT					\
 	. = ALIGN(PAGE_SIZE);				\
 	__entry_tramp_text_start = .;			\
 	*(.entry.tramp.text)				\
 	. = ALIGN(PAGE_SIZE);				\
 	__entry_tramp_text_end = .;
-#else
-#define TRAMP_TEXT
-#endif

 /*
  * The size of the PE/COFF section that covers the kernel image, which
@@ -143,10 +139,8 @@ SECTIONS
 	idmap_pg_dir = .;
 	. += IDMAP_DIR_SIZE;

-#ifdef CONFIG_UNMAP_KERNEL_AT_EL0
 	tramp_pg_dir = .;
 	. += PAGE_SIZE;
-#endif

 #ifdef CONFIG_ARM64_SW_TTBR0_PAN
 	reserved_ttbr0 = .;
@@ -257,10 +251,8 @@ ASSERT(__idmap_text_end - (__idmap_text_start & ~(SZ_4K - 1)) <= SZ_4K,
 ASSERT(__hibernate_exit_text_end - (__hibernate_exit_text_start & ~(SZ_4K - 1))
 	<= SZ_4K, "Hibernate exit text too big or misaligned")
 #endif
-#ifdef CONFIG_UNMAP_KERNEL_AT_EL0
 ASSERT((__entry_tramp_text_end - __entry_tramp_text_start) == PAGE_SIZE,
 	"Entry trampoline text too big")
-#endif
 /*
  * If padding is applied before .head.text, virt<->phys conversions will fail.
  */
diff --git a/arch/arm64/mm/context.c b/arch/arm64/mm/context.c
index 1f0ea2facf24..e99f3e645e06 100644
--- a/arch/arm64/mm/context.c
+++ b/arch/arm64/mm/context.c
@@ -40,15 +40,9 @@ static cpumask_t tlb_flush_pending;
 #define ASID_MASK		(~GENMASK(asid_bits - 1, 0))
 #define ASID_FIRST_VERSION	(1UL << asid_bits)

-#ifdef CONFIG_UNMAP_KERNEL_AT_EL0
 #define NUM_USER_ASIDS		(ASID_FIRST_VERSION >> 1)
 #define asid2idx(asid)		(((asid) & ~ASID_MASK) >> 1)
 #define idx2asid(idx)		(((idx) << 1) & ~ASID_MASK)
-#else
-#define NUM_USER_ASIDS		(ASID_FIRST_VERSION)
-#define asid2idx(asid)		((asid) & ~ASID_MASK)
-#define idx2asid(idx)		asid2idx(idx)
-#endif

 /* Get the ASIDBits supported by the current CPU */
 static u32 get_cpu_asid_bits(void)
diff --git a/arch/arm64/mm/mmu.c b/arch/arm64/mm/mmu.c
index b6f5aa52ac67..97252baf4700 100644
--- a/arch/arm64/mm/mmu.c
+++ b/arch/arm64/mm/mmu.c
@@ -570,7 +570,6 @@ static int __init parse_rodata(char *arg)
 }
 early_param("rodata", parse_rodata);

-#ifdef CONFIG_UNMAP_KERNEL_AT_EL0
 static int __init map_entry_trampoline(void)
 {
 	pgprot_t prot = rodata_enabled ? PAGE_KERNEL_ROX : PAGE_KERNEL_EXEC;
@@ -597,7 +596,6 @@ static int __init map_entry_trampoline(void)
 	return 0;
 }
 core_initcall(map_entry_trampoline);
-#endif

 /*
  * Create fine-grained mappings for the kernel.
diff --git a/arch/arm64/mm/proc.S b/arch/arm64/mm/proc.S
index 73886a5f1f30..e9ca5cbb93bc 100644
--- a/arch/arm64/mm/proc.S
+++ b/arch/arm64/mm/proc.S
@@ -217,7 +217,6 @@ ENTRY(idmap_cpu_replace_ttbr1)
 ENDPROC(idmap_cpu_replace_ttbr1)
 	.popsection

-#ifdef CONFIG_UNMAP_KERNEL_AT_EL0
 	.pushsection ".idmap.text", "awx"

 	.macro	__idmap_kpti_get_pgtable_ent, type
@@ -406,7 +405,6 @@ __idmap_kpti_secondary:
 	.unreq	pte
ENDPROC(idmap_kpti_install_ng_mappings)
 	.popsection
-#endif

 /*
  * __cpu_setup

From patchwork Fri Jan 25 18:07:05 2019
X-Patchwork-Submitter: Jeremy Linton
X-Patchwork-Id: 156630
From: Jeremy Linton
To: linux-arm-kernel@lists.infradead.org
Cc: catalin.marinas@arm.com, will.deacon@arm.com, marc.zyngier@arm.com,
    suzuki.poulose@arm.com, dave.martin@arm.com, shankerd@codeaurora.org,
    linux-kernel@vger.kernel.org, ykaukab@suse.de, julien.thierry@arm.com,
    mlangsdo@redhat.com, steven.price@arm.com, stefan.wahren@i2se.com,
    Jeremy Linton
Subject: [PATCH v4 06/12] arm64: add sysfs vulnerability show for spectre v1
Date: Fri, 25 Jan 2019 12:07:05 -0600
Message-Id: <20190125180711.1970973-7-jeremy.linton@arm.com>
In-Reply-To: <20190125180711.1970973-1-jeremy.linton@arm.com>
References: <20190125180711.1970973-1-jeremy.linton@arm.com>

From: Mian Yousaf Kaukab

Spectre v1 has been mitigated, and the mitigation is always active.

Signed-off-by: Mian Yousaf Kaukab
Signed-off-by: Jeremy Linton
Reviewed-by: Andre Przywara
---
 arch/arm64/kernel/cpu_errata.c | 10 ++++++++++
 1 file changed, 10 insertions(+)

diff --git a/arch/arm64/kernel/cpu_errata.c b/arch/arm64/kernel/cpu_errata.c
index de09a3537cd4..ef636acf5604 100644
--- a/arch/arm64/kernel/cpu_errata.c
+++ b/arch/arm64/kernel/cpu_errata.c
@@ -730,3 +730,13 @@ const struct arm64_cpu_capabilities arm64_errata[] = {
 	{
 	}
 };
+
+#ifdef CONFIG_GENERIC_CPU_VULNERABILITIES
+
+ssize_t cpu_show_spectre_v1(struct device *dev, struct device_attribute *attr,
+		char *buf)
+{
+	return sprintf(buf, "Mitigation: __user pointer sanitization\n");
+}
+
+#endif

From patchwork Fri Jan 25 18:07:06 2019
X-Patchwork-Submitter: Jeremy Linton
X-Patchwork-Id: 156628
From: Jeremy Linton
To: linux-arm-kernel@lists.infradead.org
Cc: catalin.marinas@arm.com, will.deacon@arm.com, marc.zyngier@arm.com,
    suzuki.poulose@arm.com, dave.martin@arm.com, shankerd@codeaurora.org,
    linux-kernel@vger.kernel.org, ykaukab@suse.de, julien.thierry@arm.com,
    mlangsdo@redhat.com, steven.price@arm.com, stefan.wahren@i2se.com,
    Jeremy Linton
Subject: [PATCH v4 07/12] arm64: add sysfs vulnerability show for meltdown
Date: Fri, 25 Jan 2019 12:07:06 -0600
Message-Id: <20190125180711.1970973-8-jeremy.linton@arm.com>
In-Reply-To: <20190125180711.1970973-1-jeremy.linton@arm.com>
References: <20190125180711.1970973-1-jeremy.linton@arm.com>
Display the mitigation status if active; otherwise assume the CPU is safe
unless it doesn't have CSV3 and isn't in our whitelist.

Signed-off-by: Jeremy Linton
Reviewed-by: Julien Thierry
---
 arch/arm64/kernel/cpufeature.c | 33 +++++++++++++++++++++++++++------
 1 file changed, 27 insertions(+), 6 deletions(-)

diff --git a/arch/arm64/kernel/cpufeature.c b/arch/arm64/kernel/cpufeature.c
index a9e18b9cdc1e..624dfe0b5cdd 100644
--- a/arch/arm64/kernel/cpufeature.c
+++ b/arch/arm64/kernel/cpufeature.c
@@ -944,6 +944,8 @@ has_useable_cnp(const struct arm64_cpu_capabilities *entry, int scope)
 	return has_cpuid_feature(entry, scope);
 }

+/* default value is invalid until unmap_kernel_at_el0() runs */
+static bool __meltdown_safe = true;
 static int __kpti_forced; /* 0: not forced, >0: forced on, <0: forced off */

 static bool unmap_kernel_at_el0(const struct arm64_cpu_capabilities *entry,
@@ -962,6 +964,16 @@ static bool unmap_kernel_at_el0(const struct arm64_cpu_capabilities *entry,
 		{ /* sentinel */ }
 	};
 	char const *str = "command line option";
+	bool meltdown_safe;
+
+	meltdown_safe = is_midr_in_range_list(read_cpuid_id(), kpti_safe_list);
+
+	/* Defer to CPU feature registers */
+	if (has_cpuid_feature(entry, scope))
+		meltdown_safe = true;
+
+	if (!meltdown_safe)
+		__meltdown_safe = false;

 	/*
 	 * For reasons that aren't entirely clear, enabling KPTI on Cavium
@@ -984,12 +996,7 @@ static bool unmap_kernel_at_el0(const struct arm64_cpu_capabilities *entry,
 	if (IS_ENABLED(CONFIG_RANDOMIZE_BASE))
 		return kaslr_offset() > 0;

-	/* Don't force KPTI for CPUs that are not vulnerable */
-	if (is_midr_in_range_list(read_cpuid_id(), kpti_safe_list))
-		return false;
-
-	/* Defer to CPU feature registers */
-	return !has_cpuid_feature(entry, scope);
+	return !meltdown_safe;
 }

 static void
@@ -2055,3 +2062,17 @@ static int __init enable_mrs_emulation(void)
 }

 core_initcall(enable_mrs_emulation);
+
+#ifdef CONFIG_GENERIC_CPU_VULNERABILITIES
+ssize_t cpu_show_meltdown(struct device *dev, struct device_attribute *attr,
+		char *buf)
+{
+	if (arm64_kernel_unmapped_at_el0())
+		return sprintf(buf, "Mitigation: KPTI\n");
+
+	if (__meltdown_safe)
+		return sprintf(buf, "Not affected\n");
+
+	return sprintf(buf, "Vulnerable\n");
+}
+#endif

From patchwork Fri Jan 25 18:07:07 2019
X-Patchwork-Submitter: Jeremy Linton
X-Patchwork-Id: 156623
From: Jeremy Linton
To: linux-arm-kernel@lists.infradead.org
Cc: catalin.marinas@arm.com, will.deacon@arm.com, marc.zyngier@arm.com,
    suzuki.poulose@arm.com, dave.martin@arm.com, shankerd@codeaurora.org,
    linux-kernel@vger.kernel.org, ykaukab@suse.de, julien.thierry@arm.com,
    mlangsdo@redhat.com, steven.price@arm.com, stefan.wahren@i2se.com,
    Jeremy Linton
Subject: [PATCH v4 08/12] arm64: Advertise mitigation of Spectre-v2, or lack thereof
Date: Fri, 25 Jan 2019 12:07:07 -0600
Message-Id: <20190125180711.1970973-9-jeremy.linton@arm.com>
In-Reply-To: <20190125180711.1970973-1-jeremy.linton@arm.com>
References: <20190125180711.1970973-1-jeremy.linton@arm.com>

From: Marc Zyngier

We currently have a list of CPUs affected by Spectre-v2, for which we check
that the firmware implements ARCH_WORKAROUND_1. It turns out that not all
firmwares do implement the required mitigation, and that we fail to let the
user know about it.

Instead, let's slightly revamp our checks, and rely on a whitelist of cores
that are known to be non-vulnerable, and let the user know the status of the
mitigation in the kernel log.

Signed-off-by: Marc Zyngier
[This makes more sense in front of the sysfs patch]
[Pick pieces of that patch into this and move it earlier]
Signed-off-by: Jeremy Linton
Reviewed-by: Andre Przywara
---
 arch/arm64/kernel/cpu_errata.c | 104 +++++++++++++++++----------------
 1 file changed, 54 insertions(+), 50 deletions(-)

diff --git a/arch/arm64/kernel/cpu_errata.c b/arch/arm64/kernel/cpu_errata.c
index ef636acf5604..4d23b4d4cfa8 100644
--- a/arch/arm64/kernel/cpu_errata.c
+++ b/arch/arm64/kernel/cpu_errata.c
@@ -129,9 +129,9 @@ static void __copy_hyp_vect_bpi(int slot, const char *hyp_vecs_start,
 	__flush_icache_range((uintptr_t)dst, (uintptr_t)dst + SZ_2K);
 }

-static void __install_bp_hardening_cb(bp_hardening_cb_t fn,
-				      const char *hyp_vecs_start,
-				      const char *hyp_vecs_end)
+static void install_bp_hardening_cb(bp_hardening_cb_t fn,
+				    const char *hyp_vecs_start,
+				    const char *hyp_vecs_end)
 {
 	static DEFINE_RAW_SPINLOCK(bp_lock);
 	int cpu, slot = -1;
@@ -164,23 +164,6 @@ static void __install_bp_hardening_cb(bp_hardening_cb_t fn,
 	raw_spin_unlock(&bp_lock);
 }

-static void install_bp_hardening_cb(const struct arm64_cpu_capabilities *entry,
-				    bp_hardening_cb_t fn,
-				    const char *hyp_vecs_start,
-				    const char *hyp_vecs_end)
-{
-	u64 pfr0;
-
-	if (!entry->matches(entry, SCOPE_LOCAL_CPU))
-		return;
-
-	pfr0 = read_cpuid(ID_AA64PFR0_EL1);
-	if (cpuid_feature_extract_unsigned_field(pfr0, ID_AA64PFR0_CSV2_SHIFT))
-		return;
-
-	__install_bp_hardening_cb(fn, hyp_vecs_start, hyp_vecs_end);
-}
-
 #include
 #include
 #include

@@ -215,29 +198,27 @@ static int __init parse_nospectre_v2(char *str)
 }
 early_param("nospectre_v2", parse_nospectre_v2);

-static void
-enable_smccc_arch_workaround_1(const struct arm64_cpu_capabilities *entry)
+/*
+ * -1: No workaround
+ *  0: No workaround required
+ *  1: Workaround installed
+ */
+static int detect_harden_bp_fw(void)
 {
 	bp_hardening_cb_t cb;
 	void *smccc_start, *smccc_end;
 	struct arm_smccc_res res;
 	u32 midr = read_cpuid_id();

-	if (!entry->matches(entry, SCOPE_LOCAL_CPU))
-		return;
-
-	if (__nospectre_v2)
-		return;
-
 	if (psci_ops.smccc_version == SMCCC_VERSION_1_0)
-		return;
+		return -1;

 	switch (psci_ops.conduit) {
 	case PSCI_CONDUIT_HVC:
 		arm_smccc_1_1_hvc(ARM_SMCCC_ARCH_FEATURES_FUNC_ID,
 				  ARM_SMCCC_ARCH_WORKAROUND_1, &res);
 		if ((int)res.a0 < 0)
-			return;
+			return -1;
 		cb = call_hvc_arch_workaround_1;
 		/* This is a guest, no need to patch KVM vectors */
 		smccc_start = NULL;
@@ -248,23 +229,23 @@ enable_smccc_arch_workaround_1(const struct arm64_cpu_capabilities *entry)
 		arm_smccc_1_1_smc(ARM_SMCCC_ARCH_FEATURES_FUNC_ID,
 				  ARM_SMCCC_ARCH_WORKAROUND_1, &res);
 		if ((int)res.a0 < 0)
-			return;
+			return -1;
 		cb = call_smc_arch_workaround_1;
 		smccc_start = __smccc_workaround_1_smc_start;
 		smccc_end = __smccc_workaround_1_smc_end;
 		break;

 	default:
-		return;
+		return -1;
 	}

 	if (((midr & MIDR_CPU_MODEL_MASK) == MIDR_QCOM_FALKOR) ||
 	    ((midr & MIDR_CPU_MODEL_MASK) == MIDR_QCOM_FALKOR_V1))
 		cb = qcom_link_stack_sanitization;

-	install_bp_hardening_cb(entry, cb, smccc_start, smccc_end);
+	install_bp_hardening_cb(cb, smccc_start, smccc_end);

-	return;
+	return 1;
 }

 DEFINE_PER_CPU_READ_MOSTLY(u64, arm64_ssbd_callback_required);

@@ -502,24 +483,47 @@ cpu_enable_cache_maint_trap(const struct arm64_cpu_capabilities *__unused)
 	.type = ARM64_CPUCAP_LOCAL_CPU_ERRATUM,			\
 	CAP_MIDR_RANGE_LIST(midr_list)

 /*
- * List of CPUs where we need to issue a psci call to
- * harden the branch predictor.
+ * List of CPUs that do not need any Spectre-v2 mitigation at all.
  */
-static const struct midr_range arm64_bp_harden_smccc_cpus[] = {
-	MIDR_ALL_VERSIONS(MIDR_CORTEX_A57),
-	MIDR_ALL_VERSIONS(MIDR_CORTEX_A72),
-	MIDR_ALL_VERSIONS(MIDR_CORTEX_A73),
-	MIDR_ALL_VERSIONS(MIDR_CORTEX_A75),
-	MIDR_ALL_VERSIONS(MIDR_BRCM_VULCAN),
-	MIDR_ALL_VERSIONS(MIDR_CAVIUM_THUNDERX2),
-	MIDR_ALL_VERSIONS(MIDR_QCOM_FALKOR_V1),
-	MIDR_ALL_VERSIONS(MIDR_QCOM_FALKOR),
-	MIDR_ALL_VERSIONS(MIDR_NVIDIA_DENVER),
-	{},
+static const struct midr_range spectre_v2_safe_list[] = {
+	MIDR_ALL_VERSIONS(MIDR_CORTEX_A35),
+	MIDR_ALL_VERSIONS(MIDR_CORTEX_A53),
+	MIDR_ALL_VERSIONS(MIDR_CORTEX_A55),
+	{ /* sentinel */ }
 };

+static bool __maybe_unused
+check_branch_predictor(const struct arm64_cpu_capabilities *entry, int scope)
+{
+	int need_wa;
+
+	WARN_ON(scope != SCOPE_LOCAL_CPU || preemptible());
+
+	/* If the CPU has CSV2 set, we're safe */
+	if (cpuid_feature_extract_unsigned_field(read_cpuid(ID_AA64PFR0_EL1),
+						 ID_AA64PFR0_CSV2_SHIFT))
+		return false;
+
+	/* Alternatively, we have a list of unaffected CPUs */
+	if (is_midr_in_range_list(read_cpuid_id(), spectre_v2_safe_list))
+		return false;
+
+	/* Fallback to firmware detection */
+	need_wa = detect_harden_bp_fw();
+	if (!need_wa)
+		return false;
+
+	if (need_wa < 0)
+		pr_warn_once("ARM_SMCCC_ARCH_WORKAROUND_1 missing from firmware\n");
+
+	/* forced off */
+	if (__nospectre_v2)
+		return false;
+
+	return (need_wa > 0);
+}
+
 #ifdef CONFIG_HARDEN_EL2_VECTORS

 static const struct midr_range arm64_harden_el2_vectors[] = {
@@ -695,8 +699,8 @@ const struct arm64_cpu_capabilities arm64_errata[] = {
 #endif
 	{
 		.capability = ARM64_HARDEN_BRANCH_PREDICTOR,
-		.cpu_enable = enable_smccc_arch_workaround_1,
-		ERRATA_MIDR_RANGE_LIST(arm64_bp_harden_smccc_cpus),
+		.type = ARM64_CPUCAP_LOCAL_CPU_ERRATUM,
+		.matches = check_branch_predictor,
}, #ifdef CONFIG_HARDEN_EL2_VECTORS { From patchwork Fri Jan 25 18:07:08 2019 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Jeremy Linton X-Patchwork-Id: 156627 Delivered-To: patch@linaro.org Received: by 2002:a02:48:0:0:0:0:0 with SMTP id 69csp652942jaa; Fri, 25 Jan 2019 10:08:02 -0800 (PST) X-Google-Smtp-Source: ALg8bN7bjjzjuTKEgCEjUK2G5SAjo/uGFH6uuUcDdeqaalN9MIydxJoFyd4JxrKNfniGSoqSHFbt X-Received: by 2002:a17:902:7044:: with SMTP id h4mr11876356plt.35.1548439682557; Fri, 25 Jan 2019 10:08:02 -0800 (PST) ARC-Seal: i=1; a=rsa-sha256; t=1548439682; cv=none; d=google.com; s=arc-20160816; b=Q4oa6IwuNrzh93Xi/Z3U7p4yTRP9EbSapJ2wPeiEIYiAcmktHEFqUoqs5nszE75ivS nND2VSCdJAPzxD/wQ4sILoacSLYHoQRwGhVGGdk6N9tjA0NBTEt+nHZ1dzvPg42icTh2 mHkggbbBqvhFA62lHOd6RE+HL9+w5u31NwtDYSpPps3Ezyq7lzoJqiWvfb3F4YhWr8dj UXnWlh5Py4yNkp1y/5DLRShcGKNUSWo+nVzp9mz3xk1fKv1Otp90Yt6Bx5kBAt+Id3o7 1z6Ghh0DMqvxApv0GtltB4p1+ifZZCjSUNoDgglnqi29037K7tc4V8yOrqZgiSiA1KRx W/Cw== ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=google.com; s=arc-20160816; h=list-id:precedence:sender:references:in-reply-to:message-id:date :subject:cc:to:from; bh=isEWo00ekroD+hHx+cCcZtCCnan/N/Bz67pHPEmyP9s=; b=Szsj9N0qsjRLkB4Vh/MFegkaBrP6ncVX2TAtQoGy8qLkmyKCerCpCfO5uyC7LeCOlb 0wlAZ1dL/xWHE5cm4S9SNtRg6o+3AjntBNgUqbxNjS2tnevTRmmI+ErXNciVv3QAnADP mD+kYQ4a23YrH/f1scgGUFbeejtDUq/u7izGVgjlzd1jLmd0t2VDNkbYQplETpEyeAdK LBLon2OaA5JiIA9+oqJY3YFFC8uMFyZlo+opMi1DCT9mkbix75KYmu3uZkdypCj4vqYs OtES4hWZouST/IkF6DvuB9ZQylytQFzqohdrVrCCzJJIGQsePnNsbUzgk0ns1JQQHIYm 4nCA== ARC-Authentication-Results: i=1; mx.google.com; spf=pass (google.com: best guess record for domain of linux-kernel-owner@vger.kernel.org designates 209.132.180.67 as permitted sender) smtp.mailfrom=linux-kernel-owner@vger.kernel.org Return-Path: Received: from vger.kernel.org (vger.kernel.org. 
From: Jeremy Linton
To: linux-arm-kernel@lists.infradead.org
Subject: [PATCH v4 09/12] arm64: Use firmware to detect CPUs that are not affected by Spectre-v2
Date: Fri, 25 Jan 2019 12:07:08 -0600
Message-Id: <20190125180711.1970973-10-jeremy.linton@arm.com>
From: Marc Zyngier

The SMCCC ARCH_WORKAROUND_1 service can indicate that although the
firmware knows about the Spectre-v2 mitigation, this particular CPU is
not vulnerable, and it is thus not necessary to call the firmware on
this CPU.

Let's use this information to our benefit.

Signed-off-by: Marc Zyngier
Signed-off-by: Jeremy Linton
---
 arch/arm64/kernel/cpu_errata.c | 32 +++++++++++++++++++++++---------
 1 file changed, 23 insertions(+), 9 deletions(-)
--
2.17.2

Reviewed-by: Andre Przywara

diff --git a/arch/arm64/kernel/cpu_errata.c b/arch/arm64/kernel/cpu_errata.c
index 4d23b4d4cfa8..024c83ffff99 100644
--- a/arch/arm64/kernel/cpu_errata.c
+++ b/arch/arm64/kernel/cpu_errata.c
@@ -217,22 +217,36 @@ static int detect_harden_bp_fw(void)
 	case PSCI_CONDUIT_HVC:
 		arm_smccc_1_1_hvc(ARM_SMCCC_ARCH_FEATURES_FUNC_ID,
 				  ARM_SMCCC_ARCH_WORKAROUND_1, &res);
-		if ((int)res.a0 < 0)
+		switch ((int)res.a0) {
+		case 1:
+			/* Firmware says we're just fine */
+			return 0;
+		case 0:
+			cb = call_hvc_arch_workaround_1;
+			/* This is a guest, no need to patch KVM vectors */
+			smccc_start = NULL;
+			smccc_end = NULL;
+			break;
+		default:
 			return -1;
-		cb = call_hvc_arch_workaround_1;
-		/* This is a guest, no need to patch KVM vectors */
-		smccc_start = NULL;
-		smccc_end = NULL;
+		}
 		break;

 	case PSCI_CONDUIT_SMC:
 		arm_smccc_1_1_smc(ARM_SMCCC_ARCH_FEATURES_FUNC_ID,
 				  ARM_SMCCC_ARCH_WORKAROUND_1, &res);
-		if ((int)res.a0 < 0)
+		switch ((int)res.a0) {
+		case 1:
+			/* Firmware says we're just fine */
+			return 0;
+		case 0:
+			cb = call_smc_arch_workaround_1;
+			smccc_start = __smccc_workaround_1_smc_start;
+			smccc_end = __smccc_workaround_1_smc_end;
+			break;
+		default:
 			return -1;
-		cb = call_smc_arch_workaround_1;
-		smccc_start = __smccc_workaround_1_smc_start;
-		smccc_end = __smccc_workaround_1_smc_end;
+		}
 		break;

 	default:

From patchwork Fri Jan 25 18:07:09 2019
From: Jeremy Linton
To: linux-arm-kernel@lists.infradead.org
Subject: [PATCH v4 10/12] arm64: add sysfs vulnerability show for spectre v2
Date: Fri, 25 Jan 2019 12:07:09 -0600
Message-Id: <20190125180711.1970973-11-jeremy.linton@arm.com>
Add code to track whether all the cores in the machine are vulnerable,
and whether all the vulnerable cores have been mitigated. Once we have
that information we can add the sysfs stub and provide an accurate view
of what is known about the machine.

Signed-off-by: Jeremy Linton
---
 arch/arm64/kernel/cpu_errata.c | 31 +++++++++++++++++++++++++++++--
 1 file changed, 29 insertions(+), 2 deletions(-)
--
2.17.2

Reviewed-by: Andre Przywara

diff --git a/arch/arm64/kernel/cpu_errata.c b/arch/arm64/kernel/cpu_errata.c
index 024c83ffff99..caedf268c972 100644
--- a/arch/arm64/kernel/cpu_errata.c
+++ b/arch/arm64/kernel/cpu_errata.c
@@ -497,6 +497,10 @@ cpu_enable_cache_maint_trap(const struct arm64_cpu_capabilities *__unused)
 	.type = ARM64_CPUCAP_LOCAL_CPU_ERRATUM,			\
 	CAP_MIDR_RANGE_LIST(midr_list)

+/* Track overall mitigation state. We are only mitigated if all cores are ok */
+static bool __hardenbp_enab = true;
+static bool __spectrev2_safe = true;
+
 /*
  * List of CPUs that do not need any Spectre-v2 mitigation at all.
  */
@@ -507,6 +511,10 @@ static const struct midr_range spectre_v2_safe_list[] = {
 	{ /* sentinel */ }
 };

+/*
+ * Track overall bp hardening for all heterogeneous cores in the machine.
+ * We are only considered "safe" if all booted cores are known safe.
+ */
 static bool __maybe_unused
 check_branch_predictor(const struct arm64_cpu_capabilities *entry, int scope)
 {
@@ -528,12 +536,19 @@ check_branch_predictor(const struct arm64_cpu_capabilities *entry, int scope)
 	if (!need_wa)
 		return false;

-	if (need_wa < 0)
+	__spectrev2_safe = false;
+
+	if (need_wa < 0) {
 		pr_warn_once("ARM_SMCCC_ARCH_WORKAROUND_1 missing from firmware\n");
+		__hardenbp_enab = false;
+	}

 	/* forced off */
-	if (__nospectre_v2)
+	if (__nospectre_v2) {
+		pr_info_once("spectrev2 mitigation disabled by command line option\n");
+		__hardenbp_enab = false;
 		return false;
+	}

 	return (need_wa > 0);
 }
@@ -757,4 +772,16 @@ ssize_t cpu_show_spectre_v1(struct device *dev, struct device_attribute *attr,
 	return sprintf(buf, "Mitigation: __user pointer sanitization\n");
 }

+ssize_t cpu_show_spectre_v2(struct device *dev, struct device_attribute *attr,
+		char *buf)
+{
+	if (__spectrev2_safe)
+		return sprintf(buf, "Not affected\n");
+
+	if (__hardenbp_enab)
+		return sprintf(buf, "Mitigation: Branch predictor hardening\n");
+
+	return sprintf(buf, "Vulnerable\n");
+}
+
 #endif

From patchwork Fri Jan 25 18:07:10 2019
From: Jeremy Linton
To: linux-arm-kernel@lists.infradead.org
Subject: [PATCH v4 11/12] arm64: add sysfs vulnerability show for speculative store bypass
Date: Fri, 25 Jan 2019 12:07:10 -0600
Message-Id: <20190125180711.1970973-12-jeremy.linton@arm.com>

Return status based on ssbd_state and the arm64 SSBS feature. If the
mitigation is disabled, or the firmware isn't responding, then return
the expected machine state based on a new blacklist of known vulnerable
cores.
Signed-off-by: Jeremy Linton
---
 arch/arm64/kernel/cpu_errata.c | 45 ++++++++++++++++++++++++++++++++++
 1 file changed, 45 insertions(+)
--
2.17.2

diff --git a/arch/arm64/kernel/cpu_errata.c b/arch/arm64/kernel/cpu_errata.c
index caedf268c972..e9ae8e5fd7e1 100644
--- a/arch/arm64/kernel/cpu_errata.c
+++ b/arch/arm64/kernel/cpu_errata.c
@@ -265,6 +265,7 @@ static int detect_harden_bp_fw(void)
 DEFINE_PER_CPU_READ_MOSTLY(u64, arm64_ssbd_callback_required);

 int ssbd_state __read_mostly = ARM64_SSBD_KERNEL;
+static bool __ssb_safe = true;

 static const struct ssbd_options {
 	const char	*str;
@@ -362,10 +363,16 @@ static bool has_ssbd_mitigation(const struct arm64_cpu_capabilities *entry,
 {
 	struct arm_smccc_res res;
 	bool required = true;
+	bool is_vul;
 	s32 val;

 	WARN_ON(scope != SCOPE_LOCAL_CPU || preemptible());

+	is_vul = is_midr_in_range_list(read_cpuid_id(), entry->midr_range_list);
+
+	if (is_vul)
+		__ssb_safe = false;
+
 	if (this_cpu_has_cap(ARM64_SSBS)) {
 		required = false;
 		goto out_printmsg;
@@ -399,6 +406,7 @@ static bool has_ssbd_mitigation(const struct arm64_cpu_capabilities *entry,
 		ssbd_state = ARM64_SSBD_UNKNOWN;
 		return false;

+	/* machines with mixed mitigation requirements must not return this */
 	case SMCCC_RET_NOT_REQUIRED:
 		pr_info_once("%s mitigation not required\n", entry->desc);
 		ssbd_state = ARM64_SSBD_MITIGATED;
@@ -454,6 +462,16 @@ static bool has_ssbd_mitigation(const struct arm64_cpu_capabilities *entry,
 	return required;
 }

+/* known vulnerable cores */
+static const struct midr_range arm64_ssb_cpus[] = {
+	MIDR_ALL_VERSIONS(MIDR_CORTEX_A57),
+	MIDR_ALL_VERSIONS(MIDR_CORTEX_A72),
+	MIDR_ALL_VERSIONS(MIDR_CORTEX_A73),
+	MIDR_ALL_VERSIONS(MIDR_CORTEX_A75),
+	MIDR_ALL_VERSIONS(MIDR_CORTEX_A76),
+	{},
+};
+
 static void __maybe_unused
 cpu_enable_cache_maint_trap(const struct arm64_cpu_capabilities *__unused)
 {
@@ -743,6 +761,7 @@ const struct arm64_cpu_capabilities arm64_errata[] = {
 		.capability = ARM64_SSBD,
 		.type = ARM64_CPUCAP_LOCAL_CPU_ERRATUM,
 		.matches = has_ssbd_mitigation,
+		.midr_range_list = arm64_ssb_cpus,
 	},
 #ifdef CONFIG_ARM64_ERRATUM_1188873
 	{
@@ -784,4 +803,30 @@ ssize_t cpu_show_spectre_v2(struct device *dev, struct device_attribute *attr,
 	return sprintf(buf, "Vulnerable\n");
 }

+ssize_t cpu_show_spec_store_bypass(struct device *dev,
+		struct device_attribute *attr, char *buf)
+{
+	/*
+	 * Two assumptions: first, arm64_get_ssbd_state() reflects the worst
+	 * case for heterogeneous machines; second, if SSBS is supported, it
+	 * is supported by all cores.
+	 */
+	switch (arm64_get_ssbd_state()) {
+	case ARM64_SSBD_MITIGATED:
+		return sprintf(buf, "Not affected\n");
+
+	case ARM64_SSBD_KERNEL:
+	case ARM64_SSBD_FORCE_ENABLE:
+		if (cpus_have_cap(ARM64_SSBS))
+			return sprintf(buf, "Not affected\n");
+		return sprintf(buf,
+			"Mitigation: Speculative Store Bypass disabled\n");
+	}
+
+	if (__ssb_safe)
+		return sprintf(buf, "Not affected\n");
+
+	return sprintf(buf, "Vulnerable\n");
+}
+
 #endif

From patchwork Fri Jan 25 18:07:11 2019
From: Jeremy Linton
To: linux-arm-kernel@lists.infradead.org
Subject: [PATCH v4 12/12] arm64: enable generic CPU vulnerabilities support
Date: Fri, 25 Jan 2019 12:07:11 -0600
Message-Id: <20190125180711.1970973-13-jeremy.linton@arm.com>

From: Mian Yousaf Kaukab

Enable the CPU vulnerability show functions for spectre_v1, spectre_v2,
meltdown and store-bypass.

Signed-off-by: Mian Yousaf Kaukab
Signed-off-by: Jeremy Linton
---
 arch/arm64/Kconfig | 1 +
 1 file changed, 1 insertion(+)
--
2.17.2

Reviewed-by: Andre Przywara

diff --git a/arch/arm64/Kconfig b/arch/arm64/Kconfig
index 09a85410d814..36a7cfbbfbb3 100644
--- a/arch/arm64/Kconfig
+++ b/arch/arm64/Kconfig
@@ -88,6 +88,7 @@ config ARM64
 	select GENERIC_CLOCKEVENTS
 	select GENERIC_CLOCKEVENTS_BROADCAST
 	select GENERIC_CPU_AUTOPROBE
+	select GENERIC_CPU_VULNERABILITIES
 	select GENERIC_EARLY_IOREMAP
 	select GENERIC_IDLE_POLL_SETUP
 	select GENERIC_IRQ_MULTI_HANDLER