[v2,07/11] arm64: Add skeleton to harden the branch predictor against aliasing attacks

Message ID: 1515157961-20963-8-git-send-email-will.deacon@arm.com
State: Superseded
Series: arm64 kpti hardening and variant 2 workarounds

Commit Message

Will Deacon Jan. 5, 2018, 1:12 p.m. UTC
Aliasing attacks against CPU branch predictors can allow an attacker to
redirect speculative control flow on some CPUs and potentially divulge
information from one context to another.

This patch adds initial skeleton code behind a new Kconfig option to
enable implementation-specific mitigations against these attacks for
CPUs that are affected.

Signed-off-by: Marc Zyngier <marc.zyngier@arm.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>

---
 arch/arm64/Kconfig               | 17 +++++++++
 arch/arm64/include/asm/cpucaps.h |  3 +-
 arch/arm64/include/asm/mmu.h     | 37 ++++++++++++++++++++
 arch/arm64/include/asm/sysreg.h  |  1 +
 arch/arm64/kernel/Makefile       |  4 +++
 arch/arm64/kernel/bpi.S          | 55 +++++++++++++++++++++++++++++
 arch/arm64/kernel/cpu_errata.c   | 74 ++++++++++++++++++++++++++++++++++++++++
 arch/arm64/kernel/cpufeature.c   |  1 +
 arch/arm64/mm/context.c          |  2 ++
 arch/arm64/mm/fault.c            |  1 +
 10 files changed, 194 insertions(+), 1 deletion(-)
 create mode 100644 arch/arm64/kernel/bpi.S

-- 
2.1.4

Comments

Jon Masters Jan. 8, 2018, 12:15 a.m. UTC | #1
On 01/05/2018 08:12 AM, Will Deacon wrote:

> Aliasing attacks against CPU branch predictors can allow an attacker to
> redirect speculative control flow on some CPUs and potentially divulge
> information from one context to another.
> 
> This patch adds initial skeleton code behind a new Kconfig option to
> enable implementation-specific mitigations against these attacks for
> CPUs that are affected.


Thanks to Qualcomm for the (typically prompt and immediate) followup. As
a reminder to the other usual server suspects (all of whom we've spoken
with about mitigations for this, so we know there are things coming), I'm
expecting to see your patches for this hit the list within the next 48
hours. You know who you are, and I'll be doing the rounds over the next
24 hours to check your status as to when you'll be posting these.

Thanks,

Jon.

-- 
Computer Architect | Sent from my Fedora powered laptop
James Morse Jan. 8, 2018, 12:16 p.m. UTC | #2
Hi Will, Marc,

On 05/01/18 13:12, Will Deacon wrote:
> Aliasing attacks against CPU branch predictors can allow an attacker to
> redirect speculative control flow on some CPUs and potentially divulge
> information from one context to another.
>
> This patch adds initial skeleton code behind a new Kconfig option to
> enable implementation-specific mitigations against these attacks for
> CPUs that are affected.
[...]

> diff --git a/arch/arm64/include/asm/mmu.h b/arch/arm64/include/asm/mmu.h
> index 6f7bdb89817f..6dd83d75b82a 100644
> --- a/arch/arm64/include/asm/mmu.h
> +++ b/arch/arm64/include/asm/mmu.h
> @@ -41,6 +41,43 @@ static inline bool arm64_kernel_unmapped_at_el0(void)

> +static inline struct bp_hardening_data *arm64_get_bp_hardening_data(void)
> +{
> +	return this_cpu_ptr(&bp_hardening_data);
> +}
> +
> +static inline void arm64_apply_bp_hardening(void)
> +{
> +	struct bp_hardening_data *d;
> +
> +	if (!cpus_have_const_cap(ARM64_HARDEN_BRANCH_PREDICTOR))
> +		return;
> +
> +	d = arm64_get_bp_hardening_data();
> +	if (d->fn)
> +		d->fn();
> +}


> diff --git a/arch/arm64/mm/fault.c b/arch/arm64/mm/fault.c
> index 22168cd0dde7..5203b6040cb6 100644
> --- a/arch/arm64/mm/fault.c
> +++ b/arch/arm64/mm/fault.c
> @@ -318,6 +318,7 @@ static void __do_user_fault(struct task_struct *tsk, unsigned long addr,
>  		lsb = PAGE_SHIFT;
>  	si.si_addr_lsb = lsb;
>  
> +	arm64_apply_bp_hardening();


Due to the this_cpu_ptr() call:

| BUG: using smp_processor_id() in preemptible [00000000] code: print_my_pa/2093
| caller is debug_smp_processor_id+0x1c/0x24
| CPU: 0 PID: 2093 Comm: print_my_pa Tainted: G        W 4.15.0-rc3-00044-g7f0aaec94f27-dirty #8950
| Call trace:
|  dump_backtrace+0x0/0x164
|  show_stack+0x14/0x1c
|  dump_stack+0xa4/0xdc
|  check_preemption_disabled+0xfc/0x100
|  debug_smp_processor_id+0x1c/0x24
|  __do_user_fault+0xcc/0x180
|  do_page_fault+0x14c/0x364
|  do_translation_fault+0x40/0x48
|  do_mem_abort+0x40/0xb8
|  el0_da+0x20/0x24

Make it a TIF flag?

(Seen with arm64's kpti-base tag and this series)
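
(For reference: this_cpu_ptr() needs a stable CPU binding, so the debug
check trips on any preemptible path. A hypothetical variant that disables
preemption around the access would silence the splat, but the task may
already have migrated since the fault was taken, so the callback could run
on a CPU whose predictor was never dirtied:

	static inline void arm64_apply_bp_hardening(void)
	{
		struct bp_hardening_data *d;

		if (!cpus_have_const_cap(ARM64_HARDEN_BRANCH_PREDICTOR))
			return;

		preempt_disable();	/* pin to this CPU's per-CPU data */
		d = this_cpu_ptr(&bp_hardening_data);
		if (d->fn)
			d->fn();
		preempt_enable();
	}

hence pinning preemption alone isn't a complete fix.)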


>  		force_sig_info(sig, &si, tsk);
>  }



Thanks,

James
Will Deacon Jan. 8, 2018, 2:26 p.m. UTC | #3
Hi James,

Thanks for having a look.

On Mon, Jan 08, 2018 at 12:16:28PM +0000, James Morse wrote:
> On 05/01/18 13:12, Will Deacon wrote:
> > diff --git a/arch/arm64/mm/fault.c b/arch/arm64/mm/fault.c
> > index 22168cd0dde7..5203b6040cb6 100644
> > --- a/arch/arm64/mm/fault.c
> > +++ b/arch/arm64/mm/fault.c
> > @@ -318,6 +318,7 @@ static void __do_user_fault(struct task_struct *tsk, unsigned long addr,
> >  		lsb = PAGE_SHIFT;
> >  	si.si_addr_lsb = lsb;
> >  
> > +	arm64_apply_bp_hardening();
> 
> Due to the this_cpu_ptr() call:
> 
> | BUG: using smp_processor_id() in preemptible [00000000] code: print_my_pa/2093
> | caller is debug_smp_processor_id+0x1c/0x24
> | CPU: 0 PID: 2093 Comm: print_my_pa Tainted: G        W 4.15.0-rc3-00044-g7f0aaec94f27-dirty #8950
> | Call trace:
> |  dump_backtrace+0x0/0x164
> |  show_stack+0x14/0x1c
> |  dump_stack+0xa4/0xdc
> |  check_preemption_disabled+0xfc/0x100
> |  debug_smp_processor_id+0x1c/0x24
> |  __do_user_fault+0xcc/0x180
> |  do_page_fault+0x14c/0x364
> |  do_translation_fault+0x40/0x48
> |  do_mem_abort+0x40/0xb8
> |  el0_da+0x20/0x24


Ah bugger, yes, we re-enabled interrupts in the entry code when we took the
fault initially.

> Make it a TIF flag?
> 
> (Seen with arm64's kpti-base tag and this series)


A TIF flag is still a bit fiddly, because we need to track that the
predictor could be dirty on this CPU. Instead, I'll postpone the re-enabling
of IRQs on el0_ia until we're in C code -- we can quickly do a check on the
address before doing that. See below.

Will

--->8

diff --git a/arch/arm64/kernel/entry.S b/arch/arm64/kernel/entry.S
index 80b539845da6..07a7d4db8ec4 100644
--- a/arch/arm64/kernel/entry.S
+++ b/arch/arm64/kernel/entry.S
@@ -721,12 +721,15 @@ el0_ia:
 	 * Instruction abort handling
 	 */
 	mrs	x26, far_el1
-	enable_daif
+	enable_da_f
+#ifdef CONFIG_TRACE_IRQFLAGS
+	bl	trace_hardirqs_off
+#endif
 	ct_user_exit
 	mov	x0, x26
 	mov	x1, x25
 	mov	x2, sp
-	bl	do_mem_abort
+	bl	do_el0_ia_bp_hardening
 	b	ret_to_user
 el0_fpsimd_acc:
 	/*
diff --git a/arch/arm64/mm/fault.c b/arch/arm64/mm/fault.c
index 5203b6040cb6..0e671ddf4855 100644
--- a/arch/arm64/mm/fault.c
+++ b/arch/arm64/mm/fault.c
@@ -318,7 +318,6 @@ static void __do_user_fault(struct task_struct *tsk, unsigned long addr,
 		lsb = PAGE_SHIFT;
 	si.si_addr_lsb = lsb;
 
-	arm64_apply_bp_hardening();
 	force_sig_info(sig, &si, tsk);
 }
 
@@ -709,6 +708,23 @@ asmlinkage void __exception do_mem_abort(unsigned long addr, unsigned int esr,
 	arm64_notify_die("", regs, &info, esr);
 }
 
+asmlinkage void __exception do_el0_ia_bp_hardening(unsigned long addr,
+						   unsigned int esr,
+						   struct pt_regs *regs)
+{
+	/*
+	 * We've taken an instruction abort from userspace and not yet
+	 * re-enabled IRQs. If the address is a kernel address, apply
+	 * BP hardening prior to enabling IRQs and pre-emption.
+	 */
+	if (addr > TASK_SIZE)
+		arm64_apply_bp_hardening();
+
+	local_irq_enable();
+	do_mem_abort(addr, esr, regs);
+}
+
+
 asmlinkage void __exception do_sp_pc_abort(unsigned long addr,
 					   unsigned int esr,
 					   struct pt_regs *regs)
Yisheng Xie Jan. 17, 2018, 4:10 a.m. UTC | #4
Hi Will,

On 2018/1/5 21:12, Will Deacon wrote:
> diff --git a/arch/arm64/mm/context.c b/arch/arm64/mm/context.c
> index 5f7097d0cd12..d99b36555a16 100644
> --- a/arch/arm64/mm/context.c
> +++ b/arch/arm64/mm/context.c
> @@ -246,6 +246,8 @@ asmlinkage void post_ttbr_update_workaround(void)
>  			"ic iallu; dsb nsh; isb",
>  			ARM64_WORKAROUND_CAVIUM_27456,
>  			CONFIG_CAVIUM_ERRATUM_27456));
> +
> +	arm64_apply_bp_hardening();
>  }


post_ttbr_update_workaround was used to fix Cavium erratum 27456, so does
that mean that, if we do not have this erratum, we do not need
arm64_apply_bp_hardening() on switch_mm and kernel_exit?

From the code logic, it seems this is not only related to erratum 27456
anymore; should it be renamed?

Thanks
Yisheng
Will Deacon Jan. 17, 2018, 10:07 a.m. UTC | #5
On Wed, Jan 17, 2018 at 12:10:33PM +0800, Yisheng Xie wrote:
> Hi Will,
> 
> On 2018/1/5 21:12, Will Deacon wrote:
> > diff --git a/arch/arm64/mm/context.c b/arch/arm64/mm/context.c
> > index 5f7097d0cd12..d99b36555a16 100644
> > --- a/arch/arm64/mm/context.c
> > +++ b/arch/arm64/mm/context.c
> > @@ -246,6 +246,8 @@ asmlinkage void post_ttbr_update_workaround(void)
> >  			"ic iallu; dsb nsh; isb",
> >  			ARM64_WORKAROUND_CAVIUM_27456,
> >  			CONFIG_CAVIUM_ERRATUM_27456));
> > +
> > +	arm64_apply_bp_hardening();
> >  }
> 
> post_ttbr_update_workaround was used to fix Cavium erratum 27456, so does
> that mean that, if we do not have this erratum, we do not need
> arm64_apply_bp_hardening() on switch_mm and kernel_exit?
> 
> From the code logic, it seems this is not only related to erratum 27456
> anymore; should it be renamed?


post_ttbr_update_workaround just runs code after a TTBR update, which
includes mitigations against variant 2 of "spectre" and also a workaround
for a Cavium erratum. These are separate issues.
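
For reference, folding the quoted hunk into its surrounding context, the
function after this patch looks roughly like this (reconstructed from the
diff above, so treat it as a sketch):

	asmlinkage void post_ttbr_update_workaround(void)
	{
		/* Cavium erratum 27456: I-cache invalidation, patched in
		 * by the alternatives framework only on affected CPUs. */
		asm(ALTERNATIVE("nop; nop; nop",
				"ic iallu; dsb nsh; isb",
				ARM64_WORKAROUND_CAVIUM_27456,
				CONFIG_CAVIUM_ERRATUM_27456));

		/* Spectre v2: invoke this CPU's hardening callback, a no-op
		 * unless ARM64_HARDEN_BRANCH_PREDICTOR is detected. */
		arm64_apply_bp_hardening();
	}

Each piece is patched in or enabled only on CPUs that need it, which is why
the two unrelated workarounds can share the same post-TTBR-update hook.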

Will
Yisheng Xie Jan. 18, 2018, 8:37 a.m. UTC | #6
Hi Will,

On 2018/1/17 18:07, Will Deacon wrote:
> On Wed, Jan 17, 2018 at 12:10:33PM +0800, Yisheng Xie wrote:
>> Hi Will,
>>
>> On 2018/1/5 21:12, Will Deacon wrote:
>>> diff --git a/arch/arm64/mm/context.c b/arch/arm64/mm/context.c
>>> index 5f7097d0cd12..d99b36555a16 100644
>>> --- a/arch/arm64/mm/context.c
>>> +++ b/arch/arm64/mm/context.c
>>> @@ -246,6 +246,8 @@ asmlinkage void post_ttbr_update_workaround(void)
>>>  			"ic iallu; dsb nsh; isb",
>>>  			ARM64_WORKAROUND_CAVIUM_27456,
>>>  			CONFIG_CAVIUM_ERRATUM_27456));
>>> +
>>> +	arm64_apply_bp_hardening();
>>>  }
>>
>> post_ttbr_update_workaround was used to fix Cavium erratum 27456, so does
>> that mean that, if we do not have this erratum, we do not need
>> arm64_apply_bp_hardening() on switch_mm and kernel_exit?
>>
>> From the code logic, it seems this is not only related to erratum 27456
>> anymore; should it be renamed?
> 
> post_ttbr_update_workaround just runs code after a TTBR update, which
> includes mitigations against variant 2 of "spectre" and also a workaround
> for a Cavium erratum. These are separate issues.


Got it, thanks for your kind explanation.

Thanks
Yisheng
> Will
Li Kun Jan. 19, 2018, 3:37 a.m. UTC | #7
Hi Will,


On 2018/1/17 18:07, Will Deacon wrote:
> On Wed, Jan 17, 2018 at 12:10:33PM +0800, Yisheng Xie wrote:
>> Hi Will,
>>
>> On 2018/1/5 21:12, Will Deacon wrote:
>>> diff --git a/arch/arm64/mm/context.c b/arch/arm64/mm/context.c
>>> index 5f7097d0cd12..d99b36555a16 100644
>>> --- a/arch/arm64/mm/context.c
>>> +++ b/arch/arm64/mm/context.c
>>> @@ -246,6 +246,8 @@ asmlinkage void post_ttbr_update_workaround(void)
>>>  			"ic iallu; dsb nsh; isb",
>>>  			ARM64_WORKAROUND_CAVIUM_27456,
>>>  			CONFIG_CAVIUM_ERRATUM_27456));
>>> +
>>> +	arm64_apply_bp_hardening();
>>>  }
>> post_ttbr_update_workaround was used to fix Cavium erratum 27456, so does
>> that mean that, if we do not have this erratum, we do not need
>> arm64_apply_bp_hardening() on switch_mm and kernel_exit?
>>
>> From the code logic, it seems this is not only related to erratum 27456
>> anymore; should it be renamed?
> post_ttbr_update_workaround just runs code after a TTBR update, which
> includes mitigations against variant 2 of "spectre" and also a workaround
> for a Cavium erratum. These are separate issues.

But AFAIU, according to the theory of spectre, we don't need to clear the
BTB every time we return to user?
If we enable CONFIG_ARM64_SW_TTBR0_PAN, there will be a call to
arm64_apply_bp_hardening every time the kernel exits to EL0:
kernel_exit
    post_ttbr_update_workaround
        arm64_apply_bp_hardening
> Will


-- 
Best Regards
Li Kun
Will Deacon Jan. 19, 2018, 2:28 p.m. UTC | #8
On Fri, Jan 19, 2018 at 11:37:24AM +0800, Li Kun wrote:
> On 2018/1/17 18:07, Will Deacon wrote:
> > On Wed, Jan 17, 2018 at 12:10:33PM +0800, Yisheng Xie wrote:
> >> On 2018/1/5 21:12, Will Deacon wrote:
> >>> diff --git a/arch/arm64/mm/context.c b/arch/arm64/mm/context.c
> >>> index 5f7097d0cd12..d99b36555a16 100644
> >>> --- a/arch/arm64/mm/context.c
> >>> +++ b/arch/arm64/mm/context.c
> >>> @@ -246,6 +246,8 @@ asmlinkage void post_ttbr_update_workaround(void)
> >>>  			"ic iallu; dsb nsh; isb",
> >>>  			ARM64_WORKAROUND_CAVIUM_27456,
> >>>  			CONFIG_CAVIUM_ERRATUM_27456));
> >>> +
> >>> +	arm64_apply_bp_hardening();
> >>>  }
> >> post_ttbr_update_workaround was used to fix Cavium erratum 27456, so does
> >> that mean that, if we do not have this erratum, we do not need
> >> arm64_apply_bp_hardening() on switch_mm and kernel_exit?
> >>
> >> From the code logic, it seems this is not only related to erratum 27456
> >> anymore; should it be renamed?
> > post_ttbr_update_workaround just runs code after a TTBR update, which
> > includes mitigations against variant 2 of "spectre" and also a workaround
> > for a Cavium erratum. These are separate issues.
> But AFAIU, according to the theory of spectre, we don't need to clear the
> BTB every time we return to user?
> If we enable CONFIG_ARM64_SW_TTBR0_PAN, there will be a call to
> arm64_apply_bp_hardening every time the kernel exits to EL0:
> kernel_exit
>     post_ttbr_update_workaround
>         arm64_apply_bp_hardening


That's a really good point, thanks. What it means is that
post_ttbr_update_workaround is actually the wrong place for this, and we
should be doing it more directly on the switch_mm path -- probably in
check_and_switch_context.

Will
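
--->8

(Untested sketch of the above -- hunk context approximate:)

diff --git a/arch/arm64/mm/context.c b/arch/arm64/mm/context.c
--- a/arch/arm64/mm/context.c
+++ b/arch/arm64/mm/context.c
@@ ... @@ void check_and_switch_context(struct mm_struct *mm, unsigned int cpu)
 switch_mm_fastpath:
+	/*
+	 * A new user context will run on this CPU, so apply BP hardening
+	 * once per switch_mm rather than on every exception return to EL0.
+	 */
+	arm64_apply_bp_hardening();
+
 	cpu_switch_mm(mm->pgd, mm);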
Li Kun Jan. 22, 2018, 6:52 a.m. UTC | #9
On 2018/1/19 22:28, Will Deacon wrote:
> On Fri, Jan 19, 2018 at 11:37:24AM +0800, Li Kun wrote:
>> On 2018/1/17 18:07, Will Deacon wrote:
>>> On Wed, Jan 17, 2018 at 12:10:33PM +0800, Yisheng Xie wrote:
>>>> On 2018/1/5 21:12, Will Deacon wrote:
>>>>> diff --git a/arch/arm64/mm/context.c b/arch/arm64/mm/context.c
>>>>> index 5f7097d0cd12..d99b36555a16 100644
>>>>> --- a/arch/arm64/mm/context.c
>>>>> +++ b/arch/arm64/mm/context.c
>>>>> @@ -246,6 +246,8 @@ asmlinkage void post_ttbr_update_workaround(void)
>>>>>  			"ic iallu; dsb nsh; isb",
>>>>>  			ARM64_WORKAROUND_CAVIUM_27456,
>>>>>  			CONFIG_CAVIUM_ERRATUM_27456));
>>>>> +
>>>>> +	arm64_apply_bp_hardening();
>>>>>  }
>>>> post_ttbr_update_workaround was used to fix Cavium erratum 27456, so does
>>>> that mean that, if we do not have this erratum, we do not need
>>>> arm64_apply_bp_hardening() on switch_mm and kernel_exit?
>>>>
>>>> From the code logic, it seems this is not only related to erratum 27456
>>>> anymore; should it be renamed?
>>> post_ttbr_update_workaround just runs code after a TTBR update, which
>>> includes mitigations against variant 2 of "spectre" and also a workaround
>>> for a Cavium erratum. These are separate issues.
>> But AFAIU, according to the theory of spectre, we don't need to clear the
>> BTB every time we return to user?
>> If we enable CONFIG_ARM64_SW_TTBR0_PAN, there will be a call to
>> arm64_apply_bp_hardening every time the kernel exits to EL0:
>> kernel_exit
>>      post_ttbr_update_workaround
>>          arm64_apply_bp_hardening
> That's a really good point, thanks. What it means is that
> post_ttbr_update_workaround is actually the wrong place for this, and we
> should be doing it more directly on the switch_mm path -- probably in
> check_and_switch_context.

Yes, that's exactly what I mean. :-)
> Will


-- 
Best Regards
Li Kun
Patch

diff --git a/arch/arm64/Kconfig b/arch/arm64/Kconfig
index efaaa3a66b95..cea44b95187c 100644
--- a/arch/arm64/Kconfig
+++ b/arch/arm64/Kconfig
@@ -845,6 +845,23 @@ config UNMAP_KERNEL_AT_EL0
 
 	  If unsure, say Y.
 
+config HARDEN_BRANCH_PREDICTOR
+	bool "Harden the branch predictor against aliasing attacks" if EXPERT
+	default y
+	help
+	  Speculation attacks against some high-performance processors rely on
+	  being able to manipulate the branch predictor for a victim context by
+	  executing aliasing branches in the attacker context.  Such attacks
+	  can be partially mitigated against by clearing internal branch
+	  predictor state and limiting the prediction logic in some situations.
+
+	  This config option will take CPU-specific actions to harden the
+	  branch predictor against aliasing attacks and may rely on specific
+	  instruction sequences or control bits being set by the system
+	  firmware.
+
+	  If unsure, say Y.
+
 menuconfig ARMV8_DEPRECATED
 	bool "Emulate deprecated/obsolete ARMv8 instructions"
 	depends on COMPAT
diff --git a/arch/arm64/include/asm/cpucaps.h b/arch/arm64/include/asm/cpucaps.h
index b4537ffd1018..51616e77fe6b 100644
--- a/arch/arm64/include/asm/cpucaps.h
+++ b/arch/arm64/include/asm/cpucaps.h
@@ -42,7 +42,8 @@ 
 #define ARM64_HAS_DCPOP				21
 #define ARM64_SVE				22
 #define ARM64_UNMAP_KERNEL_AT_EL0		23
+#define ARM64_HARDEN_BRANCH_PREDICTOR		24
 
-#define ARM64_NCAPS				24
+#define ARM64_NCAPS				25
 
 #endif /* __ASM_CPUCAPS_H */
diff --git a/arch/arm64/include/asm/mmu.h b/arch/arm64/include/asm/mmu.h
index 6f7bdb89817f..6dd83d75b82a 100644
--- a/arch/arm64/include/asm/mmu.h
+++ b/arch/arm64/include/asm/mmu.h
@@ -41,6 +41,43 @@ static inline bool arm64_kernel_unmapped_at_el0(void)
 	       cpus_have_const_cap(ARM64_UNMAP_KERNEL_AT_EL0);
 }
 
+typedef void (*bp_hardening_cb_t)(void);
+
+struct bp_hardening_data {
+	int			hyp_vectors_slot;
+	bp_hardening_cb_t	fn;
+};
+
+#ifdef CONFIG_HARDEN_BRANCH_PREDICTOR
+extern char __bp_harden_hyp_vecs_start[], __bp_harden_hyp_vecs_end[];
+
+DECLARE_PER_CPU_READ_MOSTLY(struct bp_hardening_data, bp_hardening_data);
+
+static inline struct bp_hardening_data *arm64_get_bp_hardening_data(void)
+{
+	return this_cpu_ptr(&bp_hardening_data);
+}
+
+static inline void arm64_apply_bp_hardening(void)
+{
+	struct bp_hardening_data *d;
+
+	if (!cpus_have_const_cap(ARM64_HARDEN_BRANCH_PREDICTOR))
+		return;
+
+	d = arm64_get_bp_hardening_data();
+	if (d->fn)
+		d->fn();
+}
+#else
+static inline struct bp_hardening_data *arm64_get_bp_hardening_data(void)
+{
+	return NULL;
+}
+
+static inline void arm64_apply_bp_hardening(void)	{ }
+#endif	/* CONFIG_HARDEN_BRANCH_PREDICTOR */
+
 extern void paging_init(void);
 extern void bootmem_init(void);
 extern void __iomem *early_io_map(phys_addr_t phys, unsigned long virt);
diff --git a/arch/arm64/include/asm/sysreg.h b/arch/arm64/include/asm/sysreg.h
index ae519bbd3f9e..871744973ece 100644
--- a/arch/arm64/include/asm/sysreg.h
+++ b/arch/arm64/include/asm/sysreg.h
@@ -438,6 +438,7 @@ 
 
 /* id_aa64pfr0 */
 #define ID_AA64PFR0_CSV3_SHIFT		60
+#define ID_AA64PFR0_CSV2_SHIFT		56
 #define ID_AA64PFR0_SVE_SHIFT		32
 #define ID_AA64PFR0_GIC_SHIFT		24
 #define ID_AA64PFR0_ASIMD_SHIFT		20
diff --git a/arch/arm64/kernel/Makefile b/arch/arm64/kernel/Makefile
index 067baace74a0..0c760db04858 100644
--- a/arch/arm64/kernel/Makefile
+++ b/arch/arm64/kernel/Makefile
@@ -53,6 +53,10 @@ arm64-obj-$(CONFIG_ARM64_RELOC_TEST)	+= arm64-reloc-test.o
 arm64-reloc-test-y := reloc_test_core.o reloc_test_syms.o
 arm64-obj-$(CONFIG_CRASH_DUMP)		+= crash_dump.o
 
+ifeq ($(CONFIG_KVM),y)
+arm64-obj-$(CONFIG_HARDEN_BRANCH_PREDICTOR)	+= bpi.o
+endif
+
 obj-y					+= $(arm64-obj-y) vdso/ probes/
 obj-m					+= $(arm64-obj-m)
 head-y					:= head.o
diff --git a/arch/arm64/kernel/bpi.S b/arch/arm64/kernel/bpi.S
new file mode 100644
index 000000000000..06a931eb2673
--- /dev/null
+++ b/arch/arm64/kernel/bpi.S
@@ -0,0 +1,55 @@ 
+/*
+ * Contains CPU specific branch predictor invalidation sequences
+ *
+ * Copyright (C) 2018 ARM Ltd.
+ *
+ * This program is free software; you can redistribute it and/or modify
+ * it under the terms of the GNU General Public License version 2 as
+ * published by the Free Software Foundation.
+ *
+ * This program is distributed in the hope that it will be useful,
+ * but WITHOUT ANY WARRANTY; without even the implied warranty of
+ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
+ * GNU General Public License for more details.
+ *
+ * You should have received a copy of the GNU General Public License
+ * along with this program.  If not, see <http://www.gnu.org/licenses/>.
+ */
+
+#include <linux/linkage.h>
+
+.macro ventry target
+	.rept 31
+	nop
+	.endr
+	b	\target
+.endm
+
+.macro vectors target
+	ventry \target + 0x000
+	ventry \target + 0x080
+	ventry \target + 0x100
+	ventry \target + 0x180
+
+	ventry \target + 0x200
+	ventry \target + 0x280
+	ventry \target + 0x300
+	ventry \target + 0x380
+
+	ventry \target + 0x400
+	ventry \target + 0x480
+	ventry \target + 0x500
+	ventry \target + 0x580
+
+	ventry \target + 0x600
+	ventry \target + 0x680
+	ventry \target + 0x700
+	ventry \target + 0x780
+.endm
+
+	.align	11
+ENTRY(__bp_harden_hyp_vecs_start)
+	.rept 4
+	vectors __kvm_hyp_vector
+	.endr
+ENTRY(__bp_harden_hyp_vecs_end)
diff --git a/arch/arm64/kernel/cpu_errata.c b/arch/arm64/kernel/cpu_errata.c
index 0e27f86ee709..16ea5c6f314e 100644
--- a/arch/arm64/kernel/cpu_errata.c
+++ b/arch/arm64/kernel/cpu_errata.c
@@ -46,6 +46,80 @@ static int cpu_enable_trap_ctr_access(void *__unused)
 	return 0;
 }
 
+#ifdef CONFIG_HARDEN_BRANCH_PREDICTOR
+#include <asm/mmu_context.h>
+#include <asm/cacheflush.h>
+
+DEFINE_PER_CPU_READ_MOSTLY(struct bp_hardening_data, bp_hardening_data);
+
+#ifdef CONFIG_KVM
+static void __copy_hyp_vect_bpi(int slot, const char *hyp_vecs_start,
+				const char *hyp_vecs_end)
+{
+	void *dst = lm_alias(__bp_harden_hyp_vecs_start + slot * SZ_2K);
+	int i;
+
+	for (i = 0; i < SZ_2K; i += 0x80)
+		memcpy(dst + i, hyp_vecs_start, hyp_vecs_end - hyp_vecs_start);
+
+	flush_icache_range((uintptr_t)dst, (uintptr_t)dst + SZ_2K);
+}
+
+static void __install_bp_hardening_cb(bp_hardening_cb_t fn,
+				      const char *hyp_vecs_start,
+				      const char *hyp_vecs_end)
+{
+	static int last_slot = -1;
+	static DEFINE_SPINLOCK(bp_lock);
+	int cpu, slot = -1;
+
+	spin_lock(&bp_lock);
+	for_each_possible_cpu(cpu) {
+		if (per_cpu(bp_hardening_data.fn, cpu) == fn) {
+			slot = per_cpu(bp_hardening_data.hyp_vectors_slot, cpu);
+			break;
+		}
+	}
+
+	if (slot == -1) {
+		last_slot++;
+		BUG_ON(((__bp_harden_hyp_vecs_end - __bp_harden_hyp_vecs_start)
+			/ SZ_2K) <= last_slot);
+		slot = last_slot;
+		__copy_hyp_vect_bpi(slot, hyp_vecs_start, hyp_vecs_end);
+	}
+
+	__this_cpu_write(bp_hardening_data.hyp_vectors_slot, slot);
+	__this_cpu_write(bp_hardening_data.fn, fn);
+	spin_unlock(&bp_lock);
+}
+#else
+static void __install_bp_hardening_cb(bp_hardening_cb_t fn,
+				      const char *hyp_vecs_start,
+				      const char *hyp_vecs_end)
+{
+	__this_cpu_write(bp_hardening_data.fn, fn);
+}
+#endif	/* CONFIG_KVM */
+
+static void  install_bp_hardening_cb(const struct arm64_cpu_capabilities *entry,
+				     bp_hardening_cb_t fn,
+				     const char *hyp_vecs_start,
+				     const char *hyp_vecs_end)
+{
+	u64 pfr0;
+
+	if (!entry->matches(entry, SCOPE_LOCAL_CPU))
+		return;
+
+	pfr0 = read_cpuid(ID_AA64PFR0_EL1);
+	if (cpuid_feature_extract_unsigned_field(pfr0, ID_AA64PFR0_CSV2_SHIFT))
+		return;
+
+	__install_bp_hardening_cb(fn, hyp_vecs_start, hyp_vecs_end);
+}
+#endif	/* CONFIG_HARDEN_BRANCH_PREDICTOR */
+
 #define MIDR_RANGE(model, min, max) \
 	.def_scope = SCOPE_LOCAL_CPU, \
 	.matches = is_affected_midr_range, \
diff --git a/arch/arm64/kernel/cpufeature.c b/arch/arm64/kernel/cpufeature.c
index 55712ab4e3bf..9d4d82c11528 100644
--- a/arch/arm64/kernel/cpufeature.c
+++ b/arch/arm64/kernel/cpufeature.c
@@ -146,6 +146,7 @@ static const struct arm64_ftr_bits ftr_id_aa64isar1[] = {
 
 static const struct arm64_ftr_bits ftr_id_aa64pfr0[] = {
 	ARM64_FTR_BITS(FTR_HIDDEN, FTR_NONSTRICT, FTR_LOWER_SAFE, ID_AA64PFR0_CSV3_SHIFT, 4, 0),
+	ARM64_FTR_BITS(FTR_HIDDEN, FTR_NONSTRICT, FTR_LOWER_SAFE, ID_AA64PFR0_CSV2_SHIFT, 4, 0),
 	ARM64_FTR_BITS(FTR_VISIBLE, FTR_STRICT, FTR_LOWER_SAFE, ID_AA64PFR0_SVE_SHIFT, 4, 0),
 	ARM64_FTR_BITS(FTR_HIDDEN, FTR_STRICT, FTR_LOWER_SAFE, ID_AA64PFR0_GIC_SHIFT, 4, 0),
 	S_ARM64_FTR_BITS(FTR_VISIBLE, FTR_STRICT, FTR_LOWER_SAFE, ID_AA64PFR0_ASIMD_SHIFT, 4, ID_AA64PFR0_ASIMD_NI),
diff --git a/arch/arm64/mm/context.c b/arch/arm64/mm/context.c
index 5f7097d0cd12..d99b36555a16 100644
--- a/arch/arm64/mm/context.c
+++ b/arch/arm64/mm/context.c
@@ -246,6 +246,8 @@ asmlinkage void post_ttbr_update_workaround(void)
 			"ic iallu; dsb nsh; isb",
 			ARM64_WORKAROUND_CAVIUM_27456,
 			CONFIG_CAVIUM_ERRATUM_27456));
+
+	arm64_apply_bp_hardening();
 }
 
 static int asids_init(void)
diff --git a/arch/arm64/mm/fault.c b/arch/arm64/mm/fault.c
index 22168cd0dde7..5203b6040cb6 100644
--- a/arch/arm64/mm/fault.c
+++ b/arch/arm64/mm/fault.c
@@ -318,6 +318,7 @@ static void __do_user_fault(struct task_struct *tsk, unsigned long addr,
 		lsb = PAGE_SHIFT;
 	si.si_addr_lsb = lsb;
 
+	arm64_apply_bp_hardening();
 	force_sig_info(sig, &si, tsk);
 }