From patchwork Fri Apr 24 05:27:19 2015
X-Patchwork-Submitter: Shannon Zhao
X-Patchwork-Id: 47506
From: shannon.zhao@linaro.org
To: stable@vger.kernel.org
Cc: jslaby@suse.cz, christoffer.dall@linaro.org, shannon.zhao@linaro.org, Marc Zyngier
Subject: [PATCH for 3.12.y stable 21/63] arm64: KVM: flush VM pages before letting the guest enable caches
Date: Fri, 24 Apr 2015 13:27:19 +0800
Message-Id: <1429853281-6136-22-git-send-email-shannon.zhao@linaro.org>
X-Mailer: git-send-email 1.9.5.msysgit.1
In-Reply-To: <1429853281-6136-1-git-send-email-shannon.zhao@linaro.org>
References: <1429853281-6136-1-git-send-email-shannon.zhao@linaro.org>

From: Marc Zyngier

commit 9d218a1fcf4c6b759d442ef702842fae92e1ea61 upstream.

When the guest runs with caches disabled (like in an early boot
sequence, for example), all the writes are directly going to RAM,
bypassing the caches altogether.

Once the MMU and caches are enabled, whatever sits in the cache
becomes suddenly visible, which isn't what the guest expects.

A way to avoid this potential disaster is to invalidate the cache
when the MMU is being turned on. For this, we hook into the SCTLR_EL1
trapping code, and scan the stage-2 page tables, invalidating the
pages/sections that have already been mapped in.
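The flow the change implements can be summarised as: keep trapping guest
writes to SCTLR_EL1 while the caches are off (HCR_EL2.TVM set), and on the
write that turns the caches on, walk the stage-2 tables, flush every page
already mapped, and stop trapping. The user-space sketch below is purely
illustrative and not part of the patch: toy_vm, toy_sctlr_write() and
toy_flush_vm() are made-up names mirroring access_sctlr() and
stage2_flush_vm() in miniature, and the real vcpu_has_cache_enabled() check
also requires the MMU bit (SCTLR_EL1.M), which is omitted here.

/* Illustrative user-space model only; names are hypothetical. */
#include <stdbool.h>
#include <stdio.h>

#define TOY_NR_PAGES	8
#define SCTLR_C		(1u << 2)	/* cache-enable bit, as in SCTLR_EL1.C */

struct toy_vm {
	bool mapped[TOY_NR_PAGES];	/* stands in for the stage-2 tables */
	bool trap_vm_regs;		/* stands in for HCR_EL2.TVM */
	unsigned int sctlr;		/* emulated guest SCTLR_EL1 */
};

/* Walk every "mapped" page and pretend to clean+invalidate it to PoC. */
static void toy_flush_vm(struct toy_vm *vm)
{
	for (int i = 0; i < TOY_NR_PAGES; i++)
		if (vm->mapped[i])
			printf("flush page %d to PoC\n", i);
}

/* Trapped guest write to SCTLR_EL1, analogous to access_sctlr(). */
static void toy_sctlr_write(struct toy_vm *vm, unsigned int val)
{
	vm->sctlr = val;
	if (vm->sctlr & SCTLR_C) {		/* caches just turned on? */
		vm->trap_vm_regs = false;	/* stop trapping VM registers */
		toy_flush_vm(vm);		/* make the RAM contents visible */
	}
}

int main(void)
{
	struct toy_vm vm = { .mapped = { [1] = true, [3] = true },
			     .trap_vm_regs = true };

	toy_sctlr_write(&vm, SCTLR_C);		/* guest enables the caches */
	return 0;
}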
Signed-off-by: Marc Zyngier
Reviewed-by: Catalin Marinas
Reviewed-by: Christoffer Dall
Signed-off-by: Shannon Zhao
---
 arch/arm/include/asm/kvm_mmu.h   |  2 +
 arch/arm/kvm/mmu.c               | 83 ++++++++++++++++++++++++++++++++++++++++
 arch/arm64/include/asm/kvm_mmu.h |  2 +
 arch/arm64/kvm/sys_regs.c        |  5 ++-
 4 files changed, 91 insertions(+), 1 deletion(-)

diff --git a/arch/arm/include/asm/kvm_mmu.h b/arch/arm/include/asm/kvm_mmu.h
index 5c946df..0de650f 100644
--- a/arch/arm/include/asm/kvm_mmu.h
+++ b/arch/arm/include/asm/kvm_mmu.h
@@ -143,6 +143,8 @@ static inline void coherent_cache_guest_page(struct kvm_vcpu *vcpu, hva_t hva,
 
 #define kvm_flush_dcache_to_poc(a,l)	__cpuc_flush_dcache_area((a), (l))
 
+void stage2_flush_vm(struct kvm *kvm);
+
 #endif /* !__ASSEMBLY__ */
 
 #endif /* __ARM_KVM_MMU_H__ */
diff --git a/arch/arm/kvm/mmu.c b/arch/arm/kvm/mmu.c
index e747dc1..61c5a92 100644
--- a/arch/arm/kvm/mmu.c
+++ b/arch/arm/kvm/mmu.c
@@ -162,6 +162,89 @@ static void unmap_range(struct kvm *kvm, pgd_t *pgdp,
 	}
 }
 
+static void stage2_flush_ptes(struct kvm *kvm, pmd_t *pmd,
+			      phys_addr_t addr, phys_addr_t end)
+{
+	pte_t *pte;
+
+	pte = pte_offset_kernel(pmd, addr);
+	do {
+		if (!pte_none(*pte)) {
+			hva_t hva = gfn_to_hva(kvm, addr >> PAGE_SHIFT);
+			kvm_flush_dcache_to_poc((void*)hva, PAGE_SIZE);
+		}
+	} while (pte++, addr += PAGE_SIZE, addr != end);
+}
+
+static void stage2_flush_pmds(struct kvm *kvm, pud_t *pud,
+			      phys_addr_t addr, phys_addr_t end)
+{
+	pmd_t *pmd;
+	phys_addr_t next;
+
+	pmd = pmd_offset(pud, addr);
+	do {
+		next = kvm_pmd_addr_end(addr, end);
+		if (!pmd_none(*pmd)) {
+			stage2_flush_ptes(kvm, pmd, addr, next);
+		}
+	} while (pmd++, addr = next, addr != end);
+}
+
+static void stage2_flush_puds(struct kvm *kvm, pgd_t *pgd,
+			      phys_addr_t addr, phys_addr_t end)
+{
+	pud_t *pud;
+	phys_addr_t next;
+
+	pud = pud_offset(pgd, addr);
+	do {
+		next = kvm_pud_addr_end(addr, end);
+		if (!pud_none(*pud)) {
+			stage2_flush_pmds(kvm, pud, addr, next);
+		}
+	} while (pud++, addr = next, addr != end);
+}
+
+static void stage2_flush_memslot(struct kvm *kvm,
+				 struct kvm_memory_slot *memslot)
+{
+	phys_addr_t addr = memslot->base_gfn << PAGE_SHIFT;
+	phys_addr_t end = addr + PAGE_SIZE * memslot->npages;
+	phys_addr_t next;
+	pgd_t *pgd;
+
+	pgd = kvm->arch.pgd + pgd_index(addr);
+	do {
+		next = kvm_pgd_addr_end(addr, end);
+		stage2_flush_puds(kvm, pgd, addr, next);
+	} while (pgd++, addr = next, addr != end);
+}
+
+/**
+ * stage2_flush_vm - Invalidate cache for pages mapped in stage 2
+ * @kvm: The struct kvm pointer
+ *
+ * Go through the stage 2 page tables and invalidate any cache lines
+ * backing memory already mapped to the VM.
+ */
+void stage2_flush_vm(struct kvm *kvm)
+{
+	struct kvm_memslots *slots;
+	struct kvm_memory_slot *memslot;
+	int idx;
+
+	idx = srcu_read_lock(&kvm->srcu);
+	spin_lock(&kvm->mmu_lock);
+
+	slots = kvm_memslots(kvm);
+	kvm_for_each_memslot(memslot, slots)
+		stage2_flush_memslot(kvm, memslot);
+
+	spin_unlock(&kvm->mmu_lock);
+	srcu_read_unlock(&kvm->srcu, idx);
+}
+
 /**
  * free_boot_hyp_pgd - free HYP boot page tables
  *
diff --git a/arch/arm64/include/asm/kvm_mmu.h b/arch/arm64/include/asm/kvm_mmu.h
index 802bd97..3b038b3 100644
--- a/arch/arm64/include/asm/kvm_mmu.h
+++ b/arch/arm64/include/asm/kvm_mmu.h
@@ -142,5 +142,7 @@ static inline void coherent_cache_guest_page(struct kvm_vcpu *vcpu, hva_t hva,
 	}
 }
 
+void stage2_flush_vm(struct kvm *kvm);
+
 #endif /* __ASSEMBLY__ */
 #endif /* __ARM64_KVM_MMU_H__ */
diff --git a/arch/arm64/kvm/sys_regs.c b/arch/arm64/kvm/sys_regs.c
index 2097e5e..0324458 100644
--- a/arch/arm64/kvm/sys_regs.c
+++ b/arch/arm64/kvm/sys_regs.c
@@ -27,6 +27,7 @@
 #include 
 #include 
 #include 
+#include <asm/kvm_mmu.h>
 #include 
 #include 
 #include 
@@ -154,8 +155,10 @@ static bool access_sctlr(struct kvm_vcpu *vcpu,
 {
 	access_vm_reg(vcpu, p, r);
 
-	if (vcpu_has_cache_enabled(vcpu))	/* MMU+Caches enabled? */
+	if (vcpu_has_cache_enabled(vcpu)) {	/* MMU+Caches enabled? */
 		vcpu->arch.hcr_el2 &= ~HCR_TVM;
+		stage2_flush_vm(vcpu->kvm);
+	}
 
 	return true;
 }
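
The stage2_flush_* helpers above all use the same range-walk idiom: compute
the end of the current table entry with kvm_pgd/pud/pmd_addr_end(), handle
that sub-range, then advance the table pointer and the address together.
The stand-alone sketch below shows the idiom on a hypothetical two-level
table; toy_level1_end() and the constants are invented for illustration and
are deliberately simpler than the kernel's overflow-safe addr_end macros.

/* Illustrative user-space model only; names are hypothetical. */
#include <stdint.h>
#include <stdio.h>

#define L2_SHIFT	12			/* 4 KiB "pages" */
#define L1_SHIFT	21			/* 2 MiB per level-1 entry */
#define L1_SIZE		(1ULL << L1_SHIFT)
#define L1_MASK		(~(L1_SIZE - 1))

/* Clamp the walk to the end of the current level-1 entry (or to 'end'). */
static uint64_t toy_level1_end(uint64_t addr, uint64_t end)
{
	uint64_t boundary = (addr + L1_SIZE) & L1_MASK;

	return boundary < end ? boundary : end;
}

/* Leaf level: visit each page in [addr, end). */
static void flush_level2(uint64_t addr, uint64_t end)
{
	do {
		printf("  flush page at 0x%llx\n", (unsigned long long)addr);
	} while (addr += (1ULL << L2_SHIFT), addr != end);
}

int main(void)
{
	uint64_t addr = 0x1ff000, end = 0x202000, next;

	/* Upper level: one iteration per level-1 entry covered by the range. */
	do {
		next = toy_level1_end(addr, end);
		printf("level-1 entry covering 0x%llx..0x%llx\n",
		       (unsigned long long)addr, (unsigned long long)next);
		flush_level2(addr, next);
	} while (addr = next, addr != end);

	return 0;
}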