From patchwork Tue Oct 3 03:10:47 2017
X-Patchwork-Submitter: Jintack Lim
X-Patchwork-Id: 114643
From: Jintack Lim <jintack.lim@linaro.org>
To: christoffer.dall@linaro.org, marc.zyngier@arm.com,
	kvmarm@lists.cs.columbia.edu
Cc: jintack@cs.columbia.edu, pbonzini@redhat.com, rkrcmar@redhat.com,
	catalin.marinas@arm.com, will.deacon@arm.com, linux@armlinux.org.uk,
	mark.rutland@arm.com, linux-arm-kernel@lists.infradead.org,
	kvm@vger.kernel.org, linux-kernel@vger.kernel.org,
	Jintack Lim <jintack.lim@linaro.org>
Subject: [RFC PATCH v2 05/31] KVM: arm/arm64: Support mmu for the virtual EL2 execution
Date: Mon, 2 Oct 2017 22:10:47 -0500
Message-Id: <1507000273-3735-3-git-send-email-jintack.lim@linaro.org>
X-Mailer: git-send-email 1.9.1
In-Reply-To: <1507000273-3735-1-git-send-email-jintack.lim@linaro.org>
References: <1507000273-3735-1-git-send-email-jintack.lim@linaro.org>

From: Christoffer Dall <christoffer.dall@linaro.org>

When running a guest hypervisor in virtual EL2, the translation context
has to be separate from the rest of the system, including the guest
EL1/0 translation regime, so we allocate a separate VMID for this mode.

Considering that we have two different vttbr values due to the separate
VMIDs, it's racy to keep a vttbr value in a struct (kvm_s2_mmu) and
share it between multiple vcpus. So, remove the shared vttbr field and
set up a per-vcpu hw_vttbr field instead.

Hypercalls to flush the TLB now take a vttbr as a parameter instead of
an mmu pointer, since the mmu structure no longer holds a vttbr.

Signed-off-by: Christoffer Dall <christoffer.dall@linaro.org>
Signed-off-by: Jintack Lim <jintack.lim@linaro.org>
---

Notes:
    v1-->v2:
    Fixed a bug where hw_vttbr was not initialized correctly in
    kvm_arch_vcpu_init(), where the vmid is not yet allocated. This
    prevented the guest from booting on 32-bit arm; hw_vttbr is set on
    each entry on aarch64, so it was fine there.
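For illustration only (not part of the patch): the value the new
kvm_get_vttbr() helper builds is simply the stage-2 pgd physical address
OR'ed with the VMID field. A stand-alone userspace sketch of that
composition, with made-up values and a hypothetical make_vttbr() standing
in for the helper (shift/mask mirror the arm64 VTTBR_EL2 layout, VMID in
bits [63:48]):

#include <stdint.h>
#include <stdio.h>

/*
 * Illustrative model of the value kvm_get_vttbr() builds; the shift and
 * mask mirror the arm64 VTTBR_EL2 layout (VMID in bits [63:48]).
 */
#define VTTBR_VMID_SHIFT	48
#define VTTBR_VMID_MASK(bits)	((((uint64_t)1 << (bits)) - 1) << VTTBR_VMID_SHIFT)

/*
 * Hypothetical stand-in for kvm_get_vttbr(): 'vmid' plays the role of
 * vmid->vmid, 'pgd_phys' that of virt_to_phys(mmu->pgd).
 */
static uint64_t make_vttbr(uint32_t vmid, uint64_t pgd_phys, unsigned int vmid_bits)
{
	uint64_t vmid_field = ((uint64_t)vmid << VTTBR_VMID_SHIFT) &
			      VTTBR_VMID_MASK(vmid_bits);

	return pgd_phys | vmid_field;
}

int main(void)
{
	/*
	 * One stage-2 pgd, two VMIDs: the guest EL1/0 and virtual EL2
	 * regimes share the pgd but must never share TLB entries, hence
	 * the distinct VMIDs (values here are made up).
	 */
	printf("EL1/0 vttbr: 0x%016llx\n",
	       (unsigned long long)make_vttbr(5, 0x40000000ULL, 16));
	printf("vEL2  vttbr: 0x%016llx\n",
	       (unsigned long long)make_vttbr(6, 0x40000000ULL, 16));
	return 0;
}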
 arch/arm/include/asm/kvm_asm.h       |  6 ++--
 arch/arm/include/asm/kvm_emulate.h   |  4 +++
 arch/arm/include/asm/kvm_host.h      | 14 +++++---
 arch/arm/include/asm/kvm_mmu.h       | 11 ++++++
 arch/arm/kvm/hyp/switch.c            |  4 +--
 arch/arm/kvm/hyp/tlb.c               | 15 ++++-----
 arch/arm64/include/asm/kvm_asm.h     |  6 ++--
 arch/arm64/include/asm/kvm_emulate.h |  8 +++++
 arch/arm64/include/asm/kvm_host.h    | 14 +++++---
 arch/arm64/include/asm/kvm_mmu.h     | 11 ++++++
 arch/arm64/kvm/hyp/switch.c          |  4 +--
 arch/arm64/kvm/hyp/tlb.c             | 34 +++++++++----------
 virt/kvm/arm/arm.c                   | 65 +++++++++++++++++++++---------------
 virt/kvm/arm/mmu.c                   |  9 +++--
 14 files changed, 128 insertions(+), 77 deletions(-)

-- 
1.9.1

diff --git a/arch/arm/include/asm/kvm_asm.h b/arch/arm/include/asm/kvm_asm.h
index 71b7255..23a79bd 100644
--- a/arch/arm/include/asm/kvm_asm.h
+++ b/arch/arm/include/asm/kvm_asm.h
@@ -65,9 +65,9 @@ extern char __kvm_hyp_vector[];
 
 extern void __kvm_flush_vm_context(void);
 
-extern void __kvm_tlb_flush_vmid_ipa(struct kvm_s2_mmu *mmu, phys_addr_t ipa);
-extern void __kvm_tlb_flush_vmid(struct kvm_s2_mmu *mmu);
-extern void __kvm_tlb_flush_local_vmid(struct kvm_s2_mmu *mmu);
+extern void __kvm_tlb_flush_vmid_ipa(u64 vttbr, phys_addr_t ipa);
+extern void __kvm_tlb_flush_vmid(u64 vttbr);
+extern void __kvm_tlb_flush_local_vmid(u64 vttbr);
 
 extern int __kvm_vcpu_run(struct kvm_vcpu *vcpu);
 
diff --git a/arch/arm/include/asm/kvm_emulate.h b/arch/arm/include/asm/kvm_emulate.h
index 29a4dec..24a3fbf 100644
--- a/arch/arm/include/asm/kvm_emulate.h
+++ b/arch/arm/include/asm/kvm_emulate.h
@@ -293,4 +293,8 @@ static inline unsigned long vcpu_data_host_to_guest(struct kvm_vcpu *vcpu,
 	}
 }
 
+static inline struct kvm_s2_vmid *vcpu_get_active_vmid(struct kvm_vcpu *vcpu)
+{
+	return &vcpu->kvm->arch.mmu.vmid;
+}
 #endif /* __ARM_KVM_EMULATE_H__ */
diff --git a/arch/arm/include/asm/kvm_host.h b/arch/arm/include/asm/kvm_host.h
index 78d826e..33ccdbe 100644
--- a/arch/arm/include/asm/kvm_host.h
+++ b/arch/arm/include/asm/kvm_host.h
@@ -53,16 +53,18 @@ int kvm_reset_vcpu(struct kvm_vcpu *vcpu);
 void kvm_reset_coprocs(struct kvm_vcpu *vcpu);
 
-struct kvm_s2_mmu {
+struct kvm_s2_vmid {
 	/* The VMID generation used for the virt. memory system */
 	u64    vmid_gen;
 	u32    vmid;
+};
+
+struct kvm_s2_mmu {
+	struct kvm_s2_vmid vmid;
+	struct kvm_s2_vmid el2_vmid;
 
 	/* Stage-2 page table */
 	pgd_t *pgd;
-
-	/* VTTBR value associated with above pgd and vmid */
-	u64    vttbr;
 };
 
 struct kvm_arch {
@@ -193,6 +195,9 @@ struct kvm_vcpu_arch {
 
 	/* Stage 2 paging state used by the hardware on next switch */
 	struct kvm_s2_mmu *hw_mmu;
+
+	/* VTTBR value used by the hardware on next switch */
+	u64 hw_vttbr;
 };
 
 struct kvm_vm_stat {
@@ -239,6 +244,7 @@ static inline void kvm_arch_mmu_notifier_invalidate_page(struct kvm *kvm,
 {
 }
 
+unsigned int get_kvm_vmid_bits(void);
 struct kvm_vcpu *kvm_arm_get_running_vcpu(void);
 struct kvm_vcpu __percpu **kvm_get_running_vcpus(void);
 void kvm_arm_halt_guest(struct kvm *kvm);
diff --git a/arch/arm/include/asm/kvm_mmu.h b/arch/arm/include/asm/kvm_mmu.h
index fa6f217..86fdc70 100644
--- a/arch/arm/include/asm/kvm_mmu.h
+++ b/arch/arm/include/asm/kvm_mmu.h
@@ -221,6 +221,17 @@ static inline unsigned int kvm_get_vmid_bits(void)
 	return 8;
 }
 
+static inline u64 kvm_get_vttbr(struct kvm_s2_vmid *vmid,
+				struct kvm_s2_mmu *mmu)
+{
+	u64 vmid_field, baddr;
+
+	baddr = virt_to_phys(mmu->pgd);
+	vmid_field = ((u64)vmid->vmid << VTTBR_VMID_SHIFT) &
+		     VTTBR_VMID_MASK(get_kvm_vmid_bits());
+	return baddr | vmid_field;
+}
+
 #endif	/* !__ASSEMBLY__ */
 
 #endif /* __ARM_KVM_MMU_H__ */
diff --git a/arch/arm/kvm/hyp/switch.c b/arch/arm/kvm/hyp/switch.c
index 4814671..4798e39 100644
--- a/arch/arm/kvm/hyp/switch.c
+++ b/arch/arm/kvm/hyp/switch.c
@@ -75,9 +75,7 @@ static void __hyp_text __deactivate_traps(struct kvm_vcpu *vcpu)
 
 static void __hyp_text __activate_vm(struct kvm_vcpu *vcpu)
 {
-	struct kvm_s2_mmu *mmu = kern_hyp_va(vcpu->arch.hw_mmu);
-
-	write_sysreg(mmu->vttbr, VTTBR);
+	write_sysreg(vcpu->arch.hw_vttbr, VTTBR);
 	write_sysreg(vcpu->arch.midr, VPIDR);
 }
 
diff --git a/arch/arm/kvm/hyp/tlb.c b/arch/arm/kvm/hyp/tlb.c
index 56f0a49..562ad0b 100644
--- a/arch/arm/kvm/hyp/tlb.c
+++ b/arch/arm/kvm/hyp/tlb.c
@@ -34,13 +34,12 @@
  * As v7 does not support flushing per IPA, just nuke the whole TLB
  * instead, ignoring the ipa value.
  */
-void __hyp_text __kvm_tlb_flush_vmid(struct kvm_s2_mmu *mmu)
+void __hyp_text __kvm_tlb_flush_vmid(u64 vttbr)
 {
 	dsb(ishst);
 
 	/* Switch to requested VMID */
-	mmu = kern_hyp_va(mmu);
-	write_sysreg(mmu->vttbr, VTTBR);
+	write_sysreg(vttbr, VTTBR);
 	isb();
 
 	write_sysreg(0, TLBIALLIS);
@@ -50,17 +49,15 @@ void __hyp_text __kvm_tlb_flush_vmid(struct kvm_s2_mmu *mmu)
 	write_sysreg(0, VTTBR);
 }
 
-void __hyp_text __kvm_tlb_flush_vmid_ipa(struct kvm_s2_mmu *mmu,
-					 phys_addr_t ipa)
+void __hyp_text __kvm_tlb_flush_vmid_ipa(u64 vttbr, phys_addr_t ipa)
 {
-	__kvm_tlb_flush_vmid(mmu);
+	__kvm_tlb_flush_vmid(vttbr);
 }
 
-void __hyp_text __kvm_tlb_flush_local_vmid(struct kvm_s2_mmu *mmu)
+void __hyp_text __kvm_tlb_flush_local_vmid(u64 vttbr)
 {
 	/* Switch to requested VMID */
-	mmu = kern_hyp_va(mmu);
-	write_sysreg(mmu->vttbr, VTTBR);
+	write_sysreg(vttbr, VTTBR);
 	isb();
 
 	write_sysreg(0, TLBIALL);
diff --git a/arch/arm64/include/asm/kvm_asm.h b/arch/arm64/include/asm/kvm_asm.h
index ff6244f..e492749 100644
--- a/arch/arm64/include/asm/kvm_asm.h
+++ b/arch/arm64/include/asm/kvm_asm.h
@@ -52,9 +52,9 @@ extern char __kvm_hyp_vector[];
 
 extern void __kvm_flush_vm_context(void);
 
-extern void __kvm_tlb_flush_vmid_ipa(struct kvm_s2_mmu *mmu, phys_addr_t ipa);
-extern void __kvm_tlb_flush_vmid(struct kvm_s2_mmu *mmu);
-extern void __kvm_tlb_flush_local_vmid(struct kvm_s2_mmu *mmu);
+extern void __kvm_tlb_flush_vmid_ipa(u64 vttbr, phys_addr_t ipa);
+extern void __kvm_tlb_flush_vmid(u64 vttbr);
+extern void __kvm_tlb_flush_local_vmid(u64 vttbr);
 
 extern int __kvm_vcpu_run(struct kvm_vcpu *vcpu);
 
diff --git a/arch/arm64/include/asm/kvm_emulate.h b/arch/arm64/include/asm/kvm_emulate.h
index 4776bfc..71a3a04 100644
--- a/arch/arm64/include/asm/kvm_emulate.h
+++ b/arch/arm64/include/asm/kvm_emulate.h
@@ -385,4 +385,12 @@ static inline unsigned long vcpu_data_host_to_guest(struct kvm_vcpu *vcpu,
 	return data;		/* Leave LE untouched */
 }
 
+static inline struct kvm_s2_vmid *vcpu_get_active_vmid(struct kvm_vcpu *vcpu)
+{
+	if (unlikely(is_hyp_ctxt(vcpu)))
+		return &vcpu->kvm->arch.mmu.el2_vmid;
+
+	return &vcpu->kvm->arch.mmu.vmid;
+}
+
 #endif /* __ARM64_KVM_EMULATE_H__ */
diff --git a/arch/arm64/include/asm/kvm_host.h b/arch/arm64/include/asm/kvm_host.h
index e7e9f70..a7edf0e 100644
--- a/arch/arm64/include/asm/kvm_host.h
+++ b/arch/arm64/include/asm/kvm_host.h
@@ -50,17 +50,19 @@ int kvm_arch_dev_ioctl_check_extension(struct kvm *kvm, long ext);
 void __extended_idmap_trampoline(phys_addr_t boot_pgd, phys_addr_t idmap_start);
 
-struct kvm_s2_mmu {
+struct kvm_s2_vmid {
 	/* The VMID generation used for the virt. memory system */
 	u64    vmid_gen;
 	u32    vmid;
+};
+
+struct kvm_s2_mmu {
+	struct kvm_s2_vmid vmid;
+	struct kvm_s2_vmid el2_vmid;
 
 	/* 1-level 2nd stage table and lock */
 	spinlock_t pgd_lock;
 	pgd_t *pgd;
-
-	/* VTTBR value associated with above pgd and vmid */
-	u64    vttbr;
 };
 
 struct kvm_arch {
@@ -337,6 +339,9 @@ struct kvm_vcpu_arch {
 
 	/* Stage 2 paging state used by the hardware on next switch */
 	struct kvm_s2_mmu *hw_mmu;
+
+	/* VTTBR value used by the hardware on next switch */
+	u64 hw_vttbr;
 };
 
 #define vcpu_gp_regs(v)		(&(v)->arch.ctxt.gp_regs)
@@ -394,6 +399,7 @@ static inline void kvm_arch_mmu_notifier_invalidate_page(struct kvm *kvm,
 {
 }
 
+unsigned int get_kvm_vmid_bits(void);
 struct kvm_vcpu *kvm_arm_get_running_vcpu(void);
 struct kvm_vcpu * __percpu *kvm_get_running_vcpus(void);
 void kvm_arm_halt_guest(struct kvm *kvm);
diff --git a/arch/arm64/include/asm/kvm_mmu.h b/arch/arm64/include/asm/kvm_mmu.h
index a89cc22..21c0299 100644
--- a/arch/arm64/include/asm/kvm_mmu.h
+++ b/arch/arm64/include/asm/kvm_mmu.h
@@ -312,5 +312,16 @@ static inline unsigned int kvm_get_vmid_bits(void)
 	return (cpuid_feature_extract_unsigned_field(reg, ID_AA64MMFR1_VMIDBITS_SHIFT) == 2) ? 16 : 8;
 }
 
+static inline u64 kvm_get_vttbr(struct kvm_s2_vmid *vmid,
+				struct kvm_s2_mmu *mmu)
+{
+	u64 vmid_field, baddr;
+
+	baddr = virt_to_phys(mmu->pgd);
+	vmid_field = ((u64)vmid->vmid << VTTBR_VMID_SHIFT) &
+		     VTTBR_VMID_MASK(get_kvm_vmid_bits());
+	return baddr | vmid_field;
+}
+
 #endif /* __ASSEMBLY__ */
 #endif /* __ARM64_KVM_MMU_H__ */
diff --git a/arch/arm64/kvm/hyp/switch.c b/arch/arm64/kvm/hyp/switch.c
index 8b1b3e9..3626e76 100644
--- a/arch/arm64/kvm/hyp/switch.c
+++ b/arch/arm64/kvm/hyp/switch.c
@@ -181,9 +181,7 @@ static void __hyp_text __deactivate_traps(struct kvm_vcpu *vcpu)
 
 static void __hyp_text __activate_vm(struct kvm_vcpu *vcpu)
 {
-	struct kvm_s2_mmu *mmu = kern_hyp_va(vcpu->arch.hw_mmu);
-
-	write_sysreg(mmu->vttbr, vttbr_el2);
+	write_sysreg(vcpu->arch.hw_vttbr, vttbr_el2);
 }
 
 static void __hyp_text __deactivate_vm(struct kvm_vcpu *vcpu)
diff --git a/arch/arm64/kvm/hyp/tlb.c b/arch/arm64/kvm/hyp/tlb.c
index 0897678..680b960 100644
--- a/arch/arm64/kvm/hyp/tlb.c
+++ b/arch/arm64/kvm/hyp/tlb.c
@@ -18,7 +18,7 @@
 #include <asm/kvm_hyp.h>
 #include <asm/tlbflush.h>
 
-static void __hyp_text __tlb_switch_to_guest_vhe(struct kvm_s2_mmu *mmu)
+static void __hyp_text __tlb_switch_to_guest_vhe(u64 vttbr)
 {
 	u64 val;
 
@@ -29,16 +29,16 @@ static void __hyp_text __tlb_switch_to_guest_vhe(struct kvm_s2_mmu *mmu)
 	 * bits. Changing E2H is impossible (goodbye TTBR1_EL2), so
 	 * let's flip TGE before executing the TLB operation.
 	 */
-	write_sysreg(mmu->vttbr, vttbr_el2);
+	write_sysreg(vttbr, vttbr_el2);
 	val = read_sysreg(hcr_el2);
 	val &= ~HCR_TGE;
 	write_sysreg(val, hcr_el2);
 	isb();
 }
 
-static void __hyp_text __tlb_switch_to_guest_nvhe(struct kvm_s2_mmu *mmu)
+static void __hyp_text __tlb_switch_to_guest_nvhe(u64 vttbr)
 {
-	write_sysreg(mmu->vttbr, vttbr_el2);
+	write_sysreg(vttbr, vttbr_el2);
 	isb();
 }
 
@@ -47,7 +47,7 @@ static hyp_alternate_select(__tlb_switch_to_guest,
 			    __tlb_switch_to_guest_vhe,
 			    ARM64_HAS_VIRT_HOST_EXTN);
 
-static void __hyp_text __tlb_switch_to_host_vhe(struct kvm_s2_mmu *mmu)
+static void __hyp_text __tlb_switch_to_host_vhe(void)
 {
 	/*
 	 * We're done with the TLB operation, let's restore the host's
@@ -57,7 +57,7 @@ static void __hyp_text __tlb_switch_to_host_vhe(struct kvm_s2_mmu *mmu)
 	write_sysreg(HCR_HOST_VHE_FLAGS, hcr_el2);
 }
 
-static void __hyp_text __tlb_switch_to_host_nvhe(struct kvm_s2_mmu *mmu)
+static void __hyp_text __tlb_switch_to_host_nvhe(void)
 {
 	write_sysreg(0, vttbr_el2);
 }
@@ -67,14 +67,12 @@ static hyp_alternate_select(__tlb_switch_to_host,
 			    __tlb_switch_to_host_vhe,
 			    ARM64_HAS_VIRT_HOST_EXTN);
 
-void __hyp_text __kvm_tlb_flush_vmid_ipa(struct kvm_s2_mmu *mmu,
-					 phys_addr_t ipa)
+void __hyp_text __kvm_tlb_flush_vmid_ipa(u64 vttbr, phys_addr_t ipa)
 {
 	dsb(ishst);
 
 	/* Switch to requested VMID */
-	mmu = kern_hyp_va(mmu);
-	__tlb_switch_to_guest()(mmu);
+	__tlb_switch_to_guest()(vttbr);
 
 	/*
 	 * We could do so much better if we had the VA as well.
@@ -117,35 +115,33 @@ void __hyp_text __kvm_tlb_flush_vmid_ipa(struct kvm_s2_mmu *mmu,
 	if (!has_vhe() && icache_is_vpipt())
 		__flush_icache_all();
 
-	__tlb_switch_to_host()(mmu);
+	__tlb_switch_to_host()();
 }
 
-void __hyp_text __kvm_tlb_flush_vmid(struct kvm_s2_mmu *mmu)
+void __hyp_text __kvm_tlb_flush_vmid(u64 vttbr)
 {
 	dsb(ishst);
 
 	/* Switch to requested VMID */
-	mmu = kern_hyp_va(mmu);
-	__tlb_switch_to_guest()(mmu);
+	__tlb_switch_to_guest()(vttbr);
 
 	__tlbi(vmalls12e1is);
 	dsb(ish);
 	isb();
 
-	__tlb_switch_to_host()(mmu);
+	__tlb_switch_to_host()();
 }
 
-void __hyp_text __kvm_tlb_flush_local_vmid(struct kvm_s2_mmu *mmu)
+void __hyp_text __kvm_tlb_flush_local_vmid(u64 vttbr)
 {
 	/* Switch to requested VMID */
-	mmu = kern_hyp_va(mmu);
-	__tlb_switch_to_guest()(mmu);
+	__tlb_switch_to_guest()(vttbr);
 
 	__tlbi(vmalle1);
 	dsb(nsh);
 	isb();
 
-	__tlb_switch_to_host()(mmu);
+	__tlb_switch_to_host()();
 }
 
 void __hyp_text __kvm_flush_vm_context(void)
diff --git a/virt/kvm/arm/arm.c b/virt/kvm/arm/arm.c
index bee27bb..41e0654 100644
--- a/virt/kvm/arm/arm.c
+++ b/virt/kvm/arm/arm.c
@@ -75,6 +75,11 @@ static void kvm_arm_set_running_vcpu(struct kvm_vcpu *vcpu)
 	__this_cpu_write(kvm_arm_running_vcpu, vcpu);
 }
 
+unsigned int get_kvm_vmid_bits(void)
+{
+	return kvm_vmid_bits;
+}
+
 /**
  * kvm_arm_get_running_vcpu - get the vcpu running on the current CPU.
  * Must be called from non-preemptible context
@@ -138,7 +143,8 @@ int kvm_arch_init_vm(struct kvm *kvm, unsigned long type)
 	kvm_vgic_early_init(kvm);
 
 	/* Mark the initial VMID generation invalid */
-	kvm->arch.mmu.vmid_gen = 0;
+	kvm->arch.mmu.vmid.vmid_gen = 0;
+	kvm->arch.mmu.el2_vmid.vmid_gen = 0;
 
 	/* The maximum number of VCPUs is limited by the host's GIC model */
 	kvm->arch.max_vcpus = vgic_present ?
				kvm_vgic_get_max_vcpus() : KVM_MAX_VCPUS;
@@ -325,6 +331,8 @@ void kvm_arch_vcpu_unblocking(struct kvm_vcpu *vcpu)
 
 int kvm_arch_vcpu_init(struct kvm_vcpu *vcpu)
 {
+	struct kvm_s2_mmu *mmu = &vcpu->kvm->arch.mmu;
+
 	/* Force users to call KVM_ARM_VCPU_INIT */
 	vcpu->arch.target = -1;
 	bitmap_zero(vcpu->arch.features, KVM_VCPU_MAX_FEATURES);
@@ -334,7 +342,7 @@ int kvm_arch_vcpu_init(struct kvm_vcpu *vcpu)
 
 	kvm_arm_reset_debug_ptr(vcpu);
 
-	vcpu->arch.hw_mmu = &vcpu->kvm->arch.mmu;
+	vcpu->arch.hw_mmu = mmu;
 
 	return kvm_vgic_vcpu_init(vcpu);
 }
@@ -350,7 +358,10 @@ void kvm_arch_vcpu_load(struct kvm_vcpu *vcpu, int cpu)
 	 * over-invalidation doesn't affect correctness.
 	 */
 	if (*last_ran != vcpu->vcpu_id) {
-		kvm_call_hyp(__kvm_tlb_flush_local_vmid, &vcpu->kvm->arch.mmu);
+		struct kvm_s2_mmu *mmu = &vcpu->kvm->arch.mmu;
+		u64 vttbr = kvm_get_vttbr(&mmu->vmid, mmu);
+
+		kvm_call_hyp(__kvm_tlb_flush_local_vmid, vttbr);
 		*last_ran = vcpu->vcpu_id;
 	}
 
@@ -434,36 +445,38 @@ void force_vm_exit(const cpumask_t *mask)
 
 /**
  * need_new_vmid_gen - check that the VMID is still valid
- * @kvm: The VM's VMID to check
+ * @vmid: The VMID to check
  *
  * return true if there is a new generation of VMIDs being used
 *
- * The hardware supports only 256 values with the value zero reserved for the
- * host, so we check if an assigned value belongs to a previous generation,
- * which which requires us to assign a new value. If we're the first to use a
- * VMID for the new generation, we must flush necessary caches and TLBs on all
- * CPUs.
+ * The hardware supports a limited set of values with the value zero reserved
+ * for the host, so we check if an assigned value belongs to a previous
+ * generation, which which requires us to assign a new value. If we're the
+ * first to use a VMID for the new generation, we must flush necessary caches
+ * and TLBs on all CPUs.
 */
-static bool need_new_vmid_gen(struct kvm_s2_mmu *mmu)
+static bool need_new_vmid_gen(struct kvm_s2_vmid *vmid)
 {
-	return unlikely(mmu->vmid_gen != atomic64_read(&kvm_vmid_gen));
+	return unlikely(vmid->vmid_gen != atomic64_read(&kvm_vmid_gen));
 }
 
 /**
  * update_vttbr - Update the VTTBR with a valid VMID before the guest runs
  * @kvm: The guest that we are about to run
- * @mmu: The stage-2 translation context to update
+ * @vmid: The stage-2 VMID information struct
  *
  * Called from kvm_arch_vcpu_ioctl_run before entering the guest to ensure the
  * VM has a valid VMID, otherwise assigns a new one and flushes corresponding
  * caches and TLBs.
  */
-static void update_vttbr(struct kvm *kvm, struct kvm_s2_mmu *mmu)
+static void update_vttbr(struct kvm *kvm, struct kvm_s2_vmid *vmid)
 {
-	phys_addr_t pgd_phys;
-	u64 vmid;
+	struct kvm_s2_mmu *mmu = &kvm->arch.mmu;
+	struct kvm_vcpu *vcpu;
+	int i = 0;
+	u64 new_vttbr;
 
-	if (!need_new_vmid_gen(mmu))
+	if (!need_new_vmid_gen(vmid))
 		return;
 
 	spin_lock(&kvm_vmid_lock);
@@ -473,7 +486,7 @@ static void update_vttbr(struct kvm *kvm, struct kvm_s2_mmu *mmu)
 	 * already allocated a valid vmid for this vm, then this vcpu should
 	 * use the same vmid.
 	 */
-	if (!need_new_vmid_gen(mmu)) {
+	if (!need_new_vmid_gen(vmid)) {
 		spin_unlock(&kvm_vmid_lock);
 		return;
 	}
@@ -497,17 +510,15 @@ static void update_vttbr(struct kvm *kvm, struct kvm_s2_mmu *mmu)
 		kvm_call_hyp(__kvm_flush_vm_context);
 	}
 
-	mmu->vmid_gen = atomic64_read(&kvm_vmid_gen);
-	mmu->vmid = kvm_next_vmid;
+	vmid->vmid_gen = atomic64_read(&kvm_vmid_gen);
+	vmid->vmid = kvm_next_vmid;
 	kvm_next_vmid++;
 	kvm_next_vmid &= (1 << kvm_vmid_bits) - 1;
 
-	/* update vttbr to be used with the new vmid */
-	pgd_phys = virt_to_phys(mmu->pgd);
-	BUG_ON(pgd_phys & ~VTTBR_BADDR_MASK);
-	vmid = ((u64)(mmu->vmid) << VTTBR_VMID_SHIFT) &
-		VTTBR_VMID_MASK(kvm_vmid_bits);
-	mmu->vttbr = pgd_phys | vmid;
+	new_vttbr = kvm_get_vttbr(&mmu->vmid, mmu);
+	kvm_for_each_vcpu(i, vcpu, kvm) {
+		vcpu->arch.hw_vttbr = new_vttbr;
+	}
 
 	spin_unlock(&kvm_vmid_lock);
 }
@@ -642,7 +653,7 @@ int kvm_arch_vcpu_ioctl_run(struct kvm_vcpu *vcpu, struct kvm_run *run)
 		 */
 		cond_resched();
 
-		update_vttbr(vcpu->kvm, vcpu->arch.hw_mmu);
+		update_vttbr(vcpu->kvm, vcpu_get_active_vmid(vcpu));
 
 		check_vcpu_requests(vcpu);
 
@@ -681,7 +692,7 @@ int kvm_arch_vcpu_ioctl_run(struct kvm_vcpu *vcpu, struct kvm_run *run)
 		 */
 		smp_store_mb(vcpu->mode, IN_GUEST_MODE);
 
-		if (ret <= 0 || need_new_vmid_gen(vcpu->arch.hw_mmu) ||
+		if (ret <= 0 || need_new_vmid_gen(vcpu_get_active_vmid(vcpu)) ||
 		    kvm_request_pending(vcpu)) {
 			vcpu->mode = OUTSIDE_GUEST_MODE;
 			local_irq_enable();
diff --git a/virt/kvm/arm/mmu.c b/virt/kvm/arm/mmu.c
index d8ea1f9..0edcf23 100644
--- a/virt/kvm/arm/mmu.c
+++ b/virt/kvm/arm/mmu.c
@@ -61,12 +61,17 @@ static bool memslot_is_logging(struct kvm_memory_slot *memslot)
  */
 void kvm_flush_remote_tlbs(struct kvm *kvm)
 {
-	kvm_call_hyp(__kvm_tlb_flush_vmid, kvm);
+	struct kvm_s2_mmu *mmu = &kvm->arch.mmu;
+	u64 vttbr = kvm_get_vttbr(&mmu->vmid, mmu);
+
+	kvm_call_hyp(__kvm_tlb_flush_vmid, vttbr);
 }
 
 static void kvm_tlb_flush_vmid_ipa(struct kvm_s2_mmu *mmu, phys_addr_t ipa)
 {
-	kvm_call_hyp(__kvm_tlb_flush_vmid_ipa, mmu, ipa);
+	u64 vttbr = kvm_get_vttbr(&mmu->vmid, mmu);
+
+	kvm_call_hyp(__kvm_tlb_flush_vmid_ipa, vttbr, ipa);
 }
 
 /*
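
For illustration only (not part of the patch): the generation check that
now operates on struct kvm_s2_vmid can be modeled in isolation. A
simplified userspace sketch, with hypothetical types standing in for the
kernel's kvm_s2_vmid and the global atomic64_t kvm_vmid_gen:

#include <stdatomic.h>
#include <stdbool.h>
#include <stdint.h>

/* Simplified stand-in for struct kvm_s2_vmid. */
struct s2_vmid {
	uint64_t vmid_gen;	/* generation this VMID was allocated in */
	uint32_t vmid;
};

/* Stand-in for the kernel's global atomic64_t kvm_vmid_gen. */
static _Atomic uint64_t vmid_gen = 1;

/*
 * Mirrors need_new_vmid_gen(): a VMID allocated under an older generation
 * may have been recycled for another VM after a rollover, so it must be
 * reallocated (and TLBs flushed) before the next guest entry.
 */
static bool need_new_vmid_gen(const struct s2_vmid *v)
{
	return v->vmid_gen != atomic_load(&vmid_gen);
}

int main(void)
{
	struct s2_vmid v = { .vmid_gen = 1, .vmid = 5 };

	atomic_fetch_add(&vmid_gen, 1);		/* simulate a rollover */
	return need_new_vmid_gen(&v) ? 0 : 1;	/* now stale: returns true */
}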