From patchwork Wed Nov 7 16:43:49 2018
X-Patchwork-Submitter: David Long
X-Patchwork-Id: 150426
Delivered-To: patches@linaro.org
From: David Long
To: stable@vger.kernel.org, Russell King - ARM Linux, Florian Fainelli, Tony Lindgren, Marc Zyngier, Mark Rutland
Cc: Greg KH, Mark Brown
Subject: [PATCH 4.9 V2 11/24] ARM: KVM: invalidate BTB on guest exit for Cortex-A12/A17
Date: Wed, 7 Nov 2018 11:43:49 -0500
Message-Id: <20181107164402.9380-12-dave.long@linaro.org>
X-Mailer: git-send-email 2.17.1
In-Reply-To: <20181107164402.9380-1-dave.long@linaro.org>
References: <20181107164402.9380-1-dave.long@linaro.org>

From: Marc Zyngier

Commit 3f7e8e2e1ebda787f156ce46e3f0a9ce2833fa4f upstream.

In order to avoid aliasing attacks against the branch predictor, let's
invalidate the BTB on guest exit. This is made complicated by the fact
that we cannot take a branch before invalidating the BTB.

We only apply this to A12 and A17, which are the only two ARM cores on
which this is useful.

Signed-off-by: Marc Zyngier
Signed-off-by: Russell King
Boot-tested-by: Tony Lindgren
Reviewed-by: Tony Lindgren
Signed-off-by: David A. Long
---
 arch/arm/include/asm/kvm_asm.h |  2 -
 arch/arm/include/asm/kvm_mmu.h | 17 +++++++-
 arch/arm/kvm/hyp/hyp-entry.S   | 71 +++++++++++++++++++++++++++++++++-
 3 files changed, 85 insertions(+), 5 deletions(-)

-- 
2.17.1

diff --git a/arch/arm/include/asm/kvm_asm.h b/arch/arm/include/asm/kvm_asm.h
index 8ef05381984b..24f3ec7c9fbe 100644
--- a/arch/arm/include/asm/kvm_asm.h
+++ b/arch/arm/include/asm/kvm_asm.h
@@ -61,8 +61,6 @@ struct kvm_vcpu;
 extern char __kvm_hyp_init[];
 extern char __kvm_hyp_init_end[];
 
-extern char __kvm_hyp_vector[];
-
 extern void __kvm_flush_vm_context(void);
 extern void __kvm_tlb_flush_vmid_ipa(struct kvm *kvm, phys_addr_t ipa);
 extern void __kvm_tlb_flush_vmid(struct kvm *kvm);
diff --git a/arch/arm/include/asm/kvm_mmu.h b/arch/arm/include/asm/kvm_mmu.h
index e2f05cedaf97..625edef2a54f 100644
--- a/arch/arm/include/asm/kvm_mmu.h
+++ b/arch/arm/include/asm/kvm_mmu.h
@@ -248,7 +248,22 @@ static inline int kvm_read_guest_lock(struct kvm *kvm,
 
 static inline void *kvm_get_hyp_vector(void)
 {
-	return kvm_ksym_ref(__kvm_hyp_vector);
+	switch(read_cpuid_part()) {
+#ifdef CONFIG_HARDEN_BRANCH_PREDICTOR
+	case ARM_CPU_PART_CORTEX_A12:
+	case ARM_CPU_PART_CORTEX_A17:
+	{
+		extern char __kvm_hyp_vector_bp_inv[];
+		return kvm_ksym_ref(__kvm_hyp_vector_bp_inv);
+	}
+
+#endif
+	default:
+	{
+		extern char __kvm_hyp_vector[];
+		return kvm_ksym_ref(__kvm_hyp_vector);
+	}
+	}
 }
 
 static inline int kvm_map_vectors(void)
diff --git a/arch/arm/kvm/hyp/hyp-entry.S b/arch/arm/kvm/hyp/hyp-entry.S
index 96beb53934c9..58ec002721a1 100644
--- a/arch/arm/kvm/hyp/hyp-entry.S
+++ b/arch/arm/kvm/hyp/hyp-entry.S
@@ -71,6 +71,66 @@ __kvm_hyp_vector:
 	W(b)	hyp_irq
 	W(b)	hyp_fiq
 
+#ifdef CONFIG_HARDEN_BRANCH_PREDICTOR
+	.align 5
+__kvm_hyp_vector_bp_inv:
+	.global __kvm_hyp_vector_bp_inv
+
+	/*
+	 * We encode the exception entry in the bottom 3 bits of
+	 * SP, and we have to guarantee to be 8 bytes aligned.
+	 */
+	W(add)	sp, sp, #1	/* Reset		7 */
+	W(add)	sp, sp, #1	/* Undef		6 */
+	W(add)	sp, sp, #1	/* Syscall		5 */
+	W(add)	sp, sp, #1	/* Prefetch abort	4 */
+	W(add)	sp, sp, #1	/* Data abort		3 */
+	W(add)	sp, sp, #1	/* HVC			2 */
+	W(add)	sp, sp, #1	/* IRQ			1 */
+	W(nop)			/* FIQ			0 */
+
+	mcr	p15, 0, r0, c7, c5, 6	/* BPIALL */
+	isb
+
+#ifdef CONFIG_THUMB2_KERNEL
+	/*
+	 * Yet another silly hack: Use VPIDR as a temp register.
+	 * Thumb2 is really a pain, as SP cannot be used with most
+	 * of the bitwise instructions. The vect_br macro ensures
+	 * things gets cleaned-up.
+	 */
+	mcr	p15, 4, r0, c0, c0, 0	/* VPIDR */
+	mov	r0, sp
+	and	r0, r0, #7
+	sub	sp, sp, r0
+	push	{r1, r2}
+	mov	r1, r0
+	mrc	p15, 4, r0, c0, c0, 0	/* VPIDR */
+	mrc	p15, 0, r2, c0, c0, 0	/* MIDR */
+	mcr	p15, 4, r2, c0, c0, 0	/* VPIDR */
+#endif
+
+.macro vect_br val, targ
+ARM(	eor	sp, sp, #\val	)
+ARM(	tst	sp, #7		)
+ARM(	eorne	sp, sp, #\val	)
+
+THUMB(	cmp	r1, #\val	)
+THUMB(	popeq	{r1, r2}	)
+
+	beq	\targ
+.endm
+
+	vect_br	0, hyp_fiq
+	vect_br	1, hyp_irq
+	vect_br	2, hyp_hvc
+	vect_br	3, hyp_dabt
+	vect_br	4, hyp_pabt
+	vect_br	5, hyp_svc
+	vect_br	6, hyp_undef
+	vect_br	7, hyp_reset
+#endif
+
 .macro invalid_vector label, cause
 	.align
 \label:	mov	r0, #\cause
@@ -131,7 +191,14 @@ hyp_hvc:
 	mrceq	p15, 4, r0, c12, c0, 0	@ get HVBAR
 	beq	1f
 
-	push	{lr}
+	/*
+	 * Pushing r2 here is just a way of keeping the stack aligned to
+	 * 8 bytes on any path that can trigger a HYP exception. Here,
+	 * we may well be about to jump into the guest, and the guest
+	 * exit would otherwise be badly decoded by our fancy
+	 * "decode-exception-without-a-branch" code...
+	 */
+	push	{r2, lr}
 	mov	lr, r0
 	mov	r0, r1
@@ -141,7 +208,7 @@ hyp_hvc:
 THUMB(	orr	lr, #1)
 	blx	lr			@ Call the HYP function
 
-	pop	{lr}
+	pop	{r2, lr}
 1:	eret
 
 guest_trap: