From patchwork Wed Jan 22 14:56:36 2014
X-Patchwork-Submitter: Marc Zyngier
X-Patchwork-Id: 23544
From: Marc Zyngier <marc.zyngier@arm.com>
To: kvmarm@lists.cs.columbia.edu, linux-arm-kernel@lists.infradead.org, kvm@vger.kernel.org
Cc: Catalin Marinas, Christoffer Dall
Subject: [PATCH v2 04/10] arm64: KVM: flush VM pages before letting the guest enable caches
Date: Wed, 22 Jan 2014 14:56:36 +0000
Message-Id: <1390402602-22777-5-git-send-email-marc.zyngier@arm.com>
X-Mailer: git-send-email 1.8.3.4
In-Reply-To: <1390402602-22777-1-git-send-email-marc.zyngier@arm.com>
References: <1390402602-22777-1-git-send-email-marc.zyngier@arm.com>

When the guest runs with caches disabled (like in an early boot
sequence, for example), all the writes go directly to RAM, bypassing
the caches altogether.

Once the MMU and caches are enabled, whatever sits in the cache
suddenly becomes visible, which isn't what the guest expects.

A way to avoid this potential disaster is to invalidate the cache
when the MMU is being turned on.
For this, we hook into the SCTLR_EL1 trapping code, and scan the
stage-2 page tables, invalidating the pages/sections that have already
been mapped in.

Signed-off-by: Marc Zyngier <marc.zyngier@arm.com>
Reviewed-by: Catalin Marinas
---
 arch/arm/include/asm/kvm_mmu.h   |  2 +
 arch/arm/kvm/mmu.c               | 83 ++++++++++++++++++++++++++++++++++++++++
 arch/arm64/include/asm/kvm_mmu.h |  1 +
 arch/arm64/kvm/sys_regs.c        |  5 ++-
 4 files changed, 90 insertions(+), 1 deletion(-)

diff --git a/arch/arm/include/asm/kvm_mmu.h b/arch/arm/include/asm/kvm_mmu.h
index f997b9e..cbab9ba 100644
--- a/arch/arm/include/asm/kvm_mmu.h
+++ b/arch/arm/include/asm/kvm_mmu.h
@@ -141,6 +141,8 @@ static inline void coherent_cache_guest_page(struct kvm_vcpu *vcpu, hva_t hva,
 
 #define kvm_flush_dcache_to_poc(a,l)	__cpuc_flush_dcache_area((a), (l))
 
+void stage2_flush_vm(struct kvm *kvm);
+
 #endif	/* !__ASSEMBLY__ */
 
 #endif /* __ARM_KVM_MMU_H__ */
diff --git a/arch/arm/kvm/mmu.c b/arch/arm/kvm/mmu.c
index 415fd63..52f8b7d 100644
--- a/arch/arm/kvm/mmu.c
+++ b/arch/arm/kvm/mmu.c
@@ -187,6 +187,89 @@ static void unmap_range(struct kvm *kvm, pgd_t *pgdp,
 	}
 }
 
+void stage2_flush_ptes(struct kvm *kvm, pmd_t *pmd,
+		       unsigned long addr, unsigned long end)
+{
+	pte_t *pte;
+
+	pte = pte_offset_kernel(pmd, addr);
+	do {
+		if (!pte_none(*pte)) {
+			hva_t hva = gfn_to_hva(kvm, addr >> PAGE_SHIFT);
+			kvm_flush_dcache_to_poc((void*)hva, PAGE_SIZE);
+		}
+	} while(pte++, addr += PAGE_SIZE, addr != end);
+}
+
+void stage2_flush_pmds(struct kvm *kvm, pud_t *pud,
+		       unsigned long addr, unsigned long end)
+{
+	pmd_t *pmd;
+	unsigned long next;
+
+	pmd = pmd_offset(pud, addr);
+	do {
+		next = pmd_addr_end(addr, end);
+		if (!pmd_none(*pmd)) {
+			if (kvm_pmd_huge(*pmd)) {
+				hva_t hva = gfn_to_hva(kvm, addr >> PAGE_SHIFT);
+				kvm_flush_dcache_to_poc((void*)hva, PMD_SIZE);
+			} else {
+				stage2_flush_ptes(kvm, pmd, addr, next);
+			}
+		}
+	} while(pmd++, addr = next, addr != end);
+}
+
+void stage2_flush_puds(struct kvm *kvm, pgd_t *pgd,
+		       unsigned long addr, unsigned long end)
+{
+	pud_t *pud;
+	unsigned long next;
+
+	pud = pud_offset(pgd, addr);
+	do {
+		next = pud_addr_end(addr, end);
+		if (!pud_none(*pud)) {
+			if (pud_huge(*pud)) {
+				hva_t hva = gfn_to_hva(kvm, addr >> PAGE_SHIFT);
+				kvm_flush_dcache_to_poc((void*)hva, PUD_SIZE);
+			} else {
+				stage2_flush_pmds(kvm, pud, addr, next);
+			}
+		}
+	} while(pud++, addr = next, addr != end);
+}
+
+static void stage2_flush_memslot(struct kvm *kvm,
+				 struct kvm_memory_slot *memslot)
+{
+	unsigned long addr = memslot->base_gfn << PAGE_SHIFT;
+	unsigned long end = addr + PAGE_SIZE * memslot->npages;
+	unsigned long next;
+	pgd_t *pgd;
+
+	pgd = kvm->arch.pgd + pgd_index(addr);
+	do {
+		next = pgd_addr_end(addr, end);
+		stage2_flush_puds(kvm, pgd, addr, next);
+	} while(pgd++, addr = next, addr != end);
+}
+
+void stage2_flush_vm(struct kvm *kvm)
+{
+	struct kvm_memslots *slots;
+	struct kvm_memory_slot *memslot;
+
+	spin_lock(&kvm->mmu_lock);
+
+	slots = kvm_memslots(kvm);
+	kvm_for_each_memslot(memslot, slots)
+		stage2_flush_memslot(kvm, memslot);
+
+	spin_unlock(&kvm->mmu_lock);
+}
+
 /**
  * free_boot_hyp_pgd - free HYP boot page tables
  *
diff --git a/arch/arm64/include/asm/kvm_mmu.h b/arch/arm64/include/asm/kvm_mmu.h
index 2232dd0..b7b2ca3 100644
--- a/arch/arm64/include/asm/kvm_mmu.h
+++ b/arch/arm64/include/asm/kvm_mmu.h
@@ -139,6 +139,7 @@ static inline void coherent_cache_guest_page(struct kvm_vcpu *vcpu, hva_t hva,
 	}
 }
 
+void stage2_flush_vm(struct kvm *kvm);
 
 #endif /* __ASSEMBLY__ */
 #endif /* __ARM64_KVM_MMU_H__ */
diff --git a/arch/arm64/kvm/sys_regs.c b/arch/arm64/kvm/sys_regs.c
index 1a4731b..3d27bb8 100644
--- a/arch/arm64/kvm/sys_regs.c
+++ b/arch/arm64/kvm/sys_regs.c
@@ -27,6 +27,7 @@
 #include
 #include
 #include
+#include <asm/kvm_mmu.h>
 #include
 #include
 #include
@@ -163,8 +164,10 @@ static bool access_sctlr(struct kvm_vcpu *vcpu,
 	else
 		vcpu_cp15(vcpu, r->reg) = val & 0xffffffffUL;
 
-	if ((val & (0b101)) == 0b101)	/* MMU+Caches enabled? */
+	if ((val & (0b101)) == 0b101) {	/* MMU+Caches enabled? */
 		vcpu->arch.hcr_el2 &= ~HCR_TVM;
+		stage2_flush_vm(vcpu->kvm);
+	}
 
 	return true;
 }
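
For reference, the 0b101 mask in access_sctlr() covers SCTLR_EL1.M (bit 0,
MMU enable) and SCTLR_EL1.C (bit 2, data cache enable), so the stage-2 flush
is only triggered once the guest turns both on. A minimal standalone sketch
of that predicate, using a hypothetical helper name purely for illustration
(not part of the patch), would be:

#include <stdbool.h>
#include <stdint.h>

#define SCTLR_EL1_M	(UINT64_C(1) << 0)	/* MMU enable */
#define SCTLR_EL1_C	(UINT64_C(1) << 2)	/* data cache enable */

/*
 * Hypothetical helper: returns true once the guest has set both the MMU
 * and data cache enable bits in SCTLR_EL1, i.e. the point at which
 * whatever sits in the D-cache becomes visible to the guest and the
 * stage-2 flush must already have been performed.
 */
static bool guest_enables_mmu_and_caches(uint64_t sctlr_el1)
{
	return (sctlr_el1 & (SCTLR_EL1_M | SCTLR_EL1_C)) ==
	       (SCTLR_EL1_M | SCTLR_EL1_C);
}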