From patchwork Fri Jan 25 18:07:02 2019
Content-Type: text/plain; charset="utf-8"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
X-Patchwork-Submitter: Jeremy Linton
X-Patchwork-Id: 156632
From: Jeremy Linton
To: linux-arm-kernel@lists.infradead.org
Cc: catalin.marinas@arm.com, will.deacon@arm.com, marc.zyngier@arm.com,
	suzuki.poulose@arm.com, dave.martin@arm.com, shankerd@codeaurora.org,
	linux-kernel@vger.kernel.org, ykaukab@suse.de, julien.thierry@arm.com,
	mlangsdo@redhat.com, steven.price@arm.com, stefan.wahren@i2se.com,
	Jeremy Linton, Christoffer Dall, kvmarm@lists.cs.columbia.edu
Subject: [PATCH v4 03/12] arm64: Remove the ability to build a kernel without ssbd
Date: Fri, 25 Jan 2019 12:07:02 -0600
Message-Id: <20190125180711.1970973-4-jeremy.linton@arm.com>
X-Mailer: git-send-email 2.17.2
In-Reply-To: <20190125180711.1970973-1-jeremy.linton@arm.com>
References: <20190125180711.1970973-1-jeremy.linton@arm.com>
X-Mailing-List: linux-kernel@vger.kernel.org

Buried behind EXPERT is the ability to build a kernel without SSBD. This
needlessly clutters up the code and creates the opportunity for bugs. It
also removes the kernel's ability to determine whether the machine it is
running on is vulnerable. Since the mitigation can also be disabled at
boot time, let's remove the config option.

Signed-off-by: Jeremy Linton
Cc: Christoffer Dall
Cc: kvmarm@lists.cs.columbia.edu
---
 arch/arm64/Kconfig                  | 9 ---------
 arch/arm64/include/asm/cpufeature.h | 8 --------
 arch/arm64/include/asm/kvm_mmu.h    | 7 -------
 arch/arm64/kernel/Makefile          | 3 +--
 arch/arm64/kernel/cpu_errata.c      | 4 ----
 arch/arm64/kernel/cpufeature.c      | 4 ----
 arch/arm64/kernel/entry.S           | 2 --
 arch/arm64/kvm/hyp/hyp-entry.S      | 2 --
 arch/arm64/kvm/hyp/switch.c         | 4 ----
 9 files changed, 1 insertion(+), 42 deletions(-)

-- 
2.17.2

diff --git a/arch/arm64/Kconfig b/arch/arm64/Kconfig
index a4168d366127..0baa632bf0a8 100644
--- a/arch/arm64/Kconfig
+++ b/arch/arm64/Kconfig
@@ -1038,15 +1038,6 @@ config HARDEN_EL2_VECTORS
 
 	  If unsure, say Y.
 
-config ARM64_SSBD
-	bool "Speculative Store Bypass Disable" if EXPERT
-	default y
-	help
-	  This enables mitigation of the bypassing of previous stores
-	  by speculative loads.
-
-	  If unsure, say Y.
-
 config RODATA_FULL_DEFAULT_ENABLED
 	bool "Apply r/o permissions of VM areas also to their linear aliases"
 	default y
diff --git a/arch/arm64/include/asm/cpufeature.h b/arch/arm64/include/asm/cpufeature.h
index dfcfba725d72..bbed2067a1a4 100644
--- a/arch/arm64/include/asm/cpufeature.h
+++ b/arch/arm64/include/asm/cpufeature.h
@@ -620,19 +620,11 @@ static inline bool system_supports_generic_auth(void)
 
 static inline int arm64_get_ssbd_state(void)
 {
-#ifdef CONFIG_ARM64_SSBD
 	extern int ssbd_state;
 	return ssbd_state;
-#else
-	return ARM64_SSBD_UNKNOWN;
-#endif
 }
 
-#ifdef CONFIG_ARM64_SSBD
 void arm64_set_ssbd_mitigation(bool state);
-#else
-static inline void arm64_set_ssbd_mitigation(bool state) {}
-#endif
 
 extern int do_emulate_mrs(struct pt_regs *regs, u32 sys_reg, u32 rt);
 
diff --git a/arch/arm64/include/asm/kvm_mmu.h b/arch/arm64/include/asm/kvm_mmu.h
index 8af4b1befa42..a5c152d79820 100644
--- a/arch/arm64/include/asm/kvm_mmu.h
+++ b/arch/arm64/include/asm/kvm_mmu.h
@@ -541,7 +541,6 @@ static inline int kvm_map_vectors(void)
 }
 #endif
 
-#ifdef CONFIG_ARM64_SSBD
 DECLARE_PER_CPU_READ_MOSTLY(u64, arm64_ssbd_callback_required);
 
 static inline int hyp_map_aux_data(void)
@@ -558,12 +557,6 @@ static inline int hyp_map_aux_data(void)
 	}
 	return 0;
 }
-#else
-static inline int hyp_map_aux_data(void)
-{
-	return 0;
-}
-#endif
 
 #define kvm_phys_to_vttbr(addr)		phys_to_ttbr(addr)
 
diff --git a/arch/arm64/kernel/Makefile b/arch/arm64/kernel/Makefile
index cd434d0719c1..306336a2fa34 100644
--- a/arch/arm64/kernel/Makefile
+++ b/arch/arm64/kernel/Makefile
@@ -19,7 +19,7 @@ obj-y := debug-monitors.o entry.o irq.o fpsimd.o \
 			   return_address.o cpuinfo.o cpu_errata.o		\
 			   cpufeature.o alternative.o cacheinfo.o		\
 			   smp.o smp_spin_table.o topology.o smccc-call.o	\
-			   syscall.o
+			   syscall.o ssbd.o
 
 extra-$(CONFIG_EFI)			:= efi-entry.o
 
@@ -57,7 +57,6 @@ arm64-reloc-test-y := reloc_test_core.o reloc_test_syms.o
 obj-$(CONFIG_CRASH_DUMP)		+= crash_dump.o
 obj-$(CONFIG_CRASH_CORE)		+= crash_core.o
 obj-$(CONFIG_ARM_SDE_INTERFACE)		+= sdei.o
-obj-$(CONFIG_ARM64_SSBD)		+= ssbd.o
 obj-$(CONFIG_ARM64_PTR_AUTH)		+= pointer_auth.o
 
 obj-y					+= vdso/ probes/
diff --git a/arch/arm64/kernel/cpu_errata.c b/arch/arm64/kernel/cpu_errata.c
index 9a7b5fca51a0..934d50788ca3 100644
--- a/arch/arm64/kernel/cpu_errata.c
+++ b/arch/arm64/kernel/cpu_errata.c
@@ -281,7 +281,6 @@ enable_smccc_arch_workaround_1(const struct arm64_cpu_capabilities *entry)
 }
 #endif /* CONFIG_HARDEN_BRANCH_PREDICTOR */
 
-#ifdef CONFIG_ARM64_SSBD
 DEFINE_PER_CPU_READ_MOSTLY(u64, arm64_ssbd_callback_required);
 
 int ssbd_state __read_mostly = ARM64_SSBD_KERNEL;
@@ -473,7 +472,6 @@ static bool has_ssbd_mitigation(const struct arm64_cpu_capabilities *entry,
 
 	return required;
 }
-#endif /* CONFIG_ARM64_SSBD */
 
 static void __maybe_unused
 cpu_enable_cache_maint_trap(const struct arm64_cpu_capabilities *__unused)
@@ -726,14 +724,12 @@ const struct arm64_cpu_capabilities arm64_errata[] = {
 		ERRATA_MIDR_RANGE_LIST(arm64_harden_el2_vectors),
 	},
 #endif
-#ifdef CONFIG_ARM64_SSBD
 	{
 		.desc = "Speculative Store Bypass Disable",
 		.capability = ARM64_SSBD,
 		.type = ARM64_CPUCAP_LOCAL_CPU_ERRATUM,
 		.matches = has_ssbd_mitigation,
 	},
-#endif
 #ifdef CONFIG_ARM64_ERRATUM_1188873
 	{
 		/* Cortex-A76 r0p0 to r2p0 */
diff --git a/arch/arm64/kernel/cpufeature.c b/arch/arm64/kernel/cpufeature.c
index f6d84e2c92fe..d1a7fd7972f9 100644
--- a/arch/arm64/kernel/cpufeature.c
+++ b/arch/arm64/kernel/cpufeature.c
@@ -1131,7 +1131,6 @@ static void cpu_has_fwb(const struct arm64_cpu_capabilities *__unused)
 	WARN_ON(val & (7 << 27 | 7 << 21));
 }
 
-#ifdef CONFIG_ARM64_SSBD
 static int ssbs_emulation_handler(struct pt_regs *regs, u32 instr)
 {
 	if (user_mode(regs))
@@ -1171,7 +1170,6 @@ static void cpu_enable_ssbs(const struct arm64_cpu_capabilities *__unused)
 		arm64_set_ssbd_mitigation(true);
 	}
 }
-#endif /* CONFIG_ARM64_SSBD */
 
 #ifdef CONFIG_ARM64_PAN
 static void cpu_enable_pan(const struct arm64_cpu_capabilities *__unused)
@@ -1400,7 +1398,6 @@ static const struct arm64_cpu_capabilities arm64_features[] = {
 		.field_pos = ID_AA64ISAR0_CRC32_SHIFT,
 		.min_field_value = 1,
 	},
-#ifdef CONFIG_ARM64_SSBD
 	{
 		.desc = "Speculative Store Bypassing Safe (SSBS)",
 		.capability = ARM64_SSBS,
@@ -1412,7 +1409,6 @@ static const struct arm64_cpu_capabilities arm64_features[] = {
 		.min_field_value = ID_AA64PFR1_SSBS_PSTATE_ONLY,
 		.cpu_enable = cpu_enable_ssbs,
 	},
-#endif
 #ifdef CONFIG_ARM64_CNP
 	{
 		.desc = "Common not Private translations",
diff --git a/arch/arm64/kernel/entry.S b/arch/arm64/kernel/entry.S
index 0ec0c46b2c0c..bee54b7d17b9 100644
--- a/arch/arm64/kernel/entry.S
+++ b/arch/arm64/kernel/entry.S
@@ -137,7 +137,6 @@ alternative_else_nop_endif
 	// This macro corrupts x0-x3. It is the caller's duty
 	// to save/restore them if required.
 	.macro	apply_ssbd, state, tmp1, tmp2
-#ifdef CONFIG_ARM64_SSBD
alternative_cb	arm64_enable_wa2_handling
 	b	.L__asm_ssbd_skip\@
alternative_cb_end
@@ -151,7 +150,6 @@ alternative_cb arm64_update_smccc_conduit
 	nop					// Patched to SMC/HVC #0
alternative_cb_end
 .L__asm_ssbd_skip\@:
-#endif
 	.endm
 
 	.macro	kernel_entry, el, regsize = 64
diff --git a/arch/arm64/kvm/hyp/hyp-entry.S b/arch/arm64/kvm/hyp/hyp-entry.S
index 73c1b483ec39..53c9344968d4 100644
--- a/arch/arm64/kvm/hyp/hyp-entry.S
+++ b/arch/arm64/kvm/hyp/hyp-entry.S
@@ -114,7 +114,6 @@ el1_hvc_guest:
 			  ARM_SMCCC_ARCH_WORKAROUND_2)
 	cbnz	w1, el1_trap
 
-#ifdef CONFIG_ARM64_SSBD
alternative_cb	arm64_enable_wa2_handling
 	b	wa2_end
alternative_cb_end
@@ -141,7 +140,6 @@ alternative_cb_end
 wa2_end:
 	mov	x2, xzr
 	mov	x1, xzr
-#endif
 
 wa_epilogue:
 	mov	x0, xzr
diff --git a/arch/arm64/kvm/hyp/switch.c b/arch/arm64/kvm/hyp/switch.c
index b0b1478094b4..9ce43ae6fc13 100644
--- a/arch/arm64/kvm/hyp/switch.c
+++ b/arch/arm64/kvm/hyp/switch.c
@@ -436,7 +436,6 @@ static inline bool __hyp_text __needs_ssbd_off(struct kvm_vcpu *vcpu)
 
 static void __hyp_text __set_guest_arch_workaround_state(struct kvm_vcpu *vcpu)
 {
-#ifdef CONFIG_ARM64_SSBD
 	/*
 	 * The host runs with the workaround always present. If the
 	 * guest wants it disabled, so be it...
@@ -444,19 +443,16 @@ static void __hyp_text __set_guest_arch_workaround_state(struct kvm_vcpu *vcpu)
 	if (__needs_ssbd_off(vcpu) &&
 	    __hyp_this_cpu_read(arm64_ssbd_callback_required))
 		arm_smccc_1_1_smc(ARM_SMCCC_ARCH_WORKAROUND_2, 0, NULL);
-#endif
 }
 
 static void __hyp_text __set_host_arch_workaround_state(struct kvm_vcpu *vcpu)
 {
-#ifdef CONFIG_ARM64_SSBD
 	/*
 	 * If the guest has disabled the workaround, bring it back on.
 	 */
 	if (__needs_ssbd_off(vcpu) &&
 	    __hyp_this_cpu_read(arm64_ssbd_callback_required))
 		arm_smccc_1_1_smc(ARM_SMCCC_ARCH_WORKAROUND_2, 1, NULL);
-#endif
 }
 
 /* Switch to the guest for VHE systems running in EL2 */
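
[For reference, not part of the patch itself: the boot-time control the commit
message refers to is the ssbd= kernel command-line parameter, documented in
Documentation/admin-guide/kernel-parameters.txt, which at this point accepts
roughly the following values:

	ssbd=force-on	unconditionally enable the mitigation
	ssbd=force-off	unconditionally disable the mitigation
	ssbd=kernel	enable the mitigation in the kernel, letting userspace
			opt in/out via prctl(); this matches the
			ARM64_SSBD_KERNEL default of ssbd_state seen in
			cpu_errata.c above]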