
[ARM64,v4.4,V3,26/44] arm64: Move BP hardening to check_and_switch_context

Message ID d2f9dccdd85950989be37aebdaece9aef0a6a9b5.1567077734.git.viresh.kumar@linaro.org
State New
Series V4.4 backport of arm64 Spectre patches

Commit Message

Viresh Kumar Aug. 29, 2019, 11:34 a.m. UTC
From: Marc Zyngier <marc.zyngier@arm.com>


commit a8e4c0a919ae310944ed2c9ace11cf3ccd8a609b upstream.

We call arm64_apply_bp_hardening() from post_ttbr_update_workaround,
which has the unexpected consequence of being triggered on every
exception return to userspace when ARM64_SW_TTBR0_PAN is selected,
even if no context switch actually occurred.

This is a bit suboptimal, and it would be more logical to only
invalidate the branch predictor when we actually switch to
a different mm.

In order to solve this, move the call to arm64_apply_bp_hardening()
into check_and_switch_context(), where we're guaranteed to pick
a different mm context.

Acked-by: Will Deacon <will.deacon@arm.com>
Signed-off-by: Marc Zyngier <marc.zyngier@arm.com>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
Signed-off-by: Viresh Kumar <viresh.kumar@linaro.org>

---
 arch/arm64/mm/context.c | 4 ++--
 1 file changed, 2 insertions(+), 2 deletions(-)

-- 
2.21.0.rc0.269.g1a574e7a288b
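
The sketch below is a minimal, stand-alone C model of the behavioural change
described in the commit message. It is not kernel code: names such as
apply_bp_hardening(), return_to_user_old() and switch_to_mm_new() are
illustrative stand-ins for the real arm64 paths
(post_ttbr_update_workaround() and check_and_switch_context()).

	/*
	 * Toy user-space model of the change; not kernel code.
	 * All identifiers here are stand-ins, not real arm64 symbols.
	 */
	#include <stdio.h>

	static unsigned int hardening_calls;

	static void apply_bp_hardening(void)
	{
		hardening_calls++;
	}

	/* Old placement: hardening ran on every exception return with SW PAN. */
	static void return_to_user_old(void)
	{
		apply_bp_hardening();
	}

	/* New placement: hardening runs only when a different mm is installed. */
	static void switch_to_mm_new(int prev_mm, int next_mm)
	{
		if (prev_mm != next_mm)
			apply_bp_hardening();
	}

	int main(void)
	{
		int i;

		/* 1000 returns to userspace, no context switch: old code hardens 1000x. */
		for (i = 0; i < 1000; i++)
			return_to_user_old();
		printf("old placement: %u hardening calls\n", hardening_calls);

		/* Same workload after the patch: only a genuine mm switch hardens. */
		hardening_calls = 0;
		for (i = 0; i < 1000; i++)
			switch_to_mm_new(1, 1);	/* same mm, no call */
		switch_to_mm_new(1, 2);		/* different mm, one call */
		printf("new placement: %u hardening calls\n", hardening_calls);

		return 0;
	}

With ARM64_SW_TTBR0_PAN, the old placement corresponds to the first loop
(the branch predictor is invalidated on every return to userspace), while
after the patch only a real switch to a different mm triggers it.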

Patch

diff --git a/arch/arm64/mm/context.c b/arch/arm64/mm/context.c
index be42bd3dca5c..de5afc27b4e6 100644
--- a/arch/arm64/mm/context.c
+++ b/arch/arm64/mm/context.c
@@ -183,6 +183,8 @@  void check_and_switch_context(struct mm_struct *mm, unsigned int cpu)
 	raw_spin_unlock_irqrestore(&cpu_asid_lock, flags);
 
 switch_mm_fastpath:
+	arm64_apply_bp_hardening();
+
 	cpu_switch_mm(mm->pgd, mm);
 }
 
@@ -193,8 +195,6 @@  asmlinkage void post_ttbr_update_workaround(void)
 			"ic iallu; dsb nsh; isb",
 			ARM64_WORKAROUND_CAVIUM_27456,
 			CONFIG_CAVIUM_ERRATUM_27456));
-
-	arm64_apply_bp_hardening();
 }
 
 static int asids_init(void)