From patchwork Fri Jan 17 15:03:11 2014
X-Patchwork-Submitter: Marc Zyngier
X-Patchwork-Id: 23339
From: Marc Zyngier <marc.zyngier@arm.com>
To: kvmarm@lists.cs.columbia.edu, linux-arm-kernel@lists.infradead.org, kvm@vger.kernel.org
Cc: Catalin Marinas, Christoffer Dall
Subject: [RFC PATCH 1/3] arm64: KVM: force cache clean on page fault when caches are off
Date: Fri, 17 Jan 2014 15:03:11 +0000
Message-Id: <1389970993-19371-2-git-send-email-marc.zyngier@arm.com>
In-Reply-To: <1389970993-19371-1-git-send-email-marc.zyngier@arm.com>
References: <1389970993-19371-1-git-send-email-marc.zyngier@arm.com>

In order for a guest running with caches off to observe data written to
a given page, we need to make sure that page is committed to memory, and
not just sitting in the cache (guest accesses completely bypass the cache
until the guest decides to enable it).

For this purpose, hook into the coherent_icache_guest_page function and
flush the region if the guest SCTLR_EL1 register doesn't show the MMU
and caches as being enabled.
The function also gets renamed to coherent_cache_guest_page.

Signed-off-by: Marc Zyngier <marc.zyngier@arm.com>
Reviewed-by: Catalin Marinas
---
 arch/arm/include/asm/kvm_mmu.h   |  4 ++--
 arch/arm/kvm/mmu.c               |  4 ++--
 arch/arm64/include/asm/kvm_mmu.h | 11 +++++++----
 3 files changed, 11 insertions(+), 8 deletions(-)

diff --git a/arch/arm/include/asm/kvm_mmu.h b/arch/arm/include/asm/kvm_mmu.h
index 77de4a4..f997b9e 100644
--- a/arch/arm/include/asm/kvm_mmu.h
+++ b/arch/arm/include/asm/kvm_mmu.h
@@ -116,8 +116,8 @@ static inline void kvm_set_s2pmd_writable(pmd_t *pmd)
 
 struct kvm;
 
-static inline void coherent_icache_guest_page(struct kvm *kvm, hva_t hva,
-                                              unsigned long size)
+static inline void coherent_cache_guest_page(struct kvm_vcpu *vcpu, hva_t hva,
+                                             unsigned long size)
 {
         /*
          * If we are going to insert an instruction page and the icache is
diff --git a/arch/arm/kvm/mmu.c b/arch/arm/kvm/mmu.c
index 5809069..415fd63 100644
--- a/arch/arm/kvm/mmu.c
+++ b/arch/arm/kvm/mmu.c
@@ -713,7 +713,7 @@ static int user_mem_abort(struct kvm_vcpu *vcpu, phys_addr_t fault_ipa,
                         kvm_set_s2pmd_writable(&new_pmd);
                         kvm_set_pfn_dirty(pfn);
                 }
-                coherent_icache_guest_page(kvm, hva & PMD_MASK, PMD_SIZE);
+                coherent_cache_guest_page(vcpu, hva & PMD_MASK, PMD_SIZE);
                 ret = stage2_set_pmd_huge(kvm, memcache, fault_ipa, &new_pmd);
         } else {
                 pte_t new_pte = pfn_pte(pfn, PAGE_S2);
@@ -721,7 +721,7 @@ static int user_mem_abort(struct kvm_vcpu *vcpu, phys_addr_t fault_ipa,
                         kvm_set_s2pte_writable(&new_pte);
                         kvm_set_pfn_dirty(pfn);
                 }
-                coherent_icache_guest_page(kvm, hva, PAGE_SIZE);
+                coherent_cache_guest_page(vcpu, hva, PAGE_SIZE);
                 ret = stage2_set_pte(kvm, memcache, fault_ipa, &new_pte, false);
         }
 
diff --git a/arch/arm64/include/asm/kvm_mmu.h b/arch/arm64/include/asm/kvm_mmu.h
index 680f74e..2232dd0 100644
--- a/arch/arm64/include/asm/kvm_mmu.h
+++ b/arch/arm64/include/asm/kvm_mmu.h
@@ -106,7 +106,6 @@ static inline bool kvm_is_write_fault(unsigned long esr)
         return true;
 }
 
-static inline void kvm_clean_dcache_area(void *addr, size_t size) {}
 static inline void kvm_clean_pgd(pgd_t *pgd) {}
 static inline void kvm_clean_pmd_entry(pmd_t *pmd) {}
 static inline void kvm_clean_pte(pte_t *pte) {}
@@ -124,9 +123,14 @@ static inline void kvm_set_s2pmd_writable(pmd_t *pmd)
 
 struct kvm;
 
-static inline void coherent_icache_guest_page(struct kvm *kvm, hva_t hva,
-                                              unsigned long size)
+#define kvm_flush_dcache_to_poc(a,l)    __flush_dcache_area((a), (l))
+
+static inline void coherent_cache_guest_page(struct kvm_vcpu *vcpu, hva_t hva,
+                                             unsigned long size)
 {
+        if ((vcpu_sys_reg(vcpu, SCTLR_EL1) & 0b101) != 0b101)
+                kvm_flush_dcache_to_poc((void *)hva, size);
+
         if (!icache_is_aliasing()) {            /* PIPT */
                 flush_icache_range(hva, hva + size);
         } else if (!icache_is_aivivt()) {       /* non ASID-tagged VIVT */
@@ -135,7 +139,6 @@ static inline void coherent_icache_guest_page(struct kvm *kvm, hva_t hva,
         }
 }
 
-#define kvm_flush_dcache_to_poc(a,l)    __flush_dcache_area((a), (l))
 
 #endif /* __ASSEMBLY__ */
 #endif /* __ARM64_KVM_MMU_H__ */
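
A note on the new check for readers less familiar with SCTLR_EL1: bit 0
(M) enables the guest's stage-1 MMU and bit 2 (C) enables its data cache,
which is where the 0b101 mask-and-compare in the hunk above comes from.
Below is a minimal user-space sketch of the same predicate; the kernel
reads the real register through vcpu_sys_reg(vcpu, SCTLR_EL1), and
needs_flush_to_poc() here is a hypothetical stand-in used purely for
illustration, not a function from the patch:

#include <stdint.h>
#include <stdio.h>

/* SCTLR_EL1 bits relevant to the check in coherent_cache_guest_page():
 * bit 0 (M) = stage-1 MMU enable, bit 2 (C) = data cache enable.
 * (M | C) == 0b101, the constant used by the patch. */
#define SCTLR_EL1_M     (1u << 0)
#define SCTLR_EL1_C     (1u << 2)
#define MMU_AND_CACHES  (SCTLR_EL1_M | SCTLR_EL1_C)     /* == 0b101 */

/* Returns nonzero when the host must clean the page to the Point of
 * Coherency: with either the MMU or the data cache off, guest accesses
 * go straight to RAM and would miss data still sitting in the cache. */
static int needs_flush_to_poc(uint64_t sctlr_el1)
{
        return (sctlr_el1 & MMU_AND_CACHES) != MMU_AND_CACHES;
}

int main(void)
{
        printf("caches+MMU off -> flush: %d\n", needs_flush_to_poc(0));
        printf("MMU only       -> flush: %d\n",
               needs_flush_to_poc(SCTLR_EL1_M));
        printf("MMU + caches   -> flush: %d\n",
               needs_flush_to_poc(SCTLR_EL1_M | SCTLR_EL1_C));
        return 0;
}

When the predicate fires, the patch performs the actual clean with
kvm_flush_dcache_to_poc(), i.e. __flush_dcache_area(), pushing the data
out to the Point of Coherency where a non-cacheable guest access can
observe it.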