From patchwork Fri Apr 24 05:27:15 2015
X-Patchwork-Submitter: Shannon Zhao
X-Patchwork-Id: 47502
From:
shannon.zhao@linaro.org
To: stable@vger.kernel.org
Cc: jslaby@suse.cz, christoffer.dall@linaro.org, shannon.zhao@linaro.org, Marc Zyngier
Subject: [PATCH for 3.12.y stable 17/63] arm64: KVM: force cache clean on page fault when caches are off
Date: Fri, 24 Apr 2015 13:27:15 +0800
Message-Id: <1429853281-6136-18-git-send-email-shannon.zhao@linaro.org>
X-Mailer: git-send-email 1.9.5.msysgit.1
In-Reply-To: <1429853281-6136-1-git-send-email-shannon.zhao@linaro.org>
References: <1429853281-6136-1-git-send-email-shannon.zhao@linaro.org>

From: Marc Zyngier

commit 2d58b733c87689d3d5144e4ac94ea861cc729145 upstream.

In order for a guest with caches off to observe data written to a given
page, we need to make sure that page is committed to memory and not just
sitting in the cache, as guest accesses bypass the cache entirely until
the guest decides to enable it.

For this purpose, hook into the coherent_icache_guest_page function and
flush the region if the guest SCTLR_EL1 register doesn't show the MMU
and caches as enabled. The function also gets renamed to
coherent_cache_guest_page.
Signed-off-by: Marc Zyngier
Reviewed-by: Catalin Marinas
Reviewed-by: Christoffer Dall
Signed-off-by: Shannon Zhao
---
 arch/arm/include/asm/kvm_mmu.h   |  6 +++---
 arch/arm/kvm/mmu.c               |  3 ++-
 arch/arm64/include/asm/kvm_mmu.h | 19 +++++++++++++------
 3 files changed, 18 insertions(+), 10 deletions(-)

diff --git a/arch/arm/include/asm/kvm_mmu.h b/arch/arm/include/asm/kvm_mmu.h
index 9b28c41..ba285d7 100644
--- a/arch/arm/include/asm/kvm_mmu.h
+++ b/arch/arm/include/asm/kvm_mmu.h
@@ -105,7 +105,8 @@ static inline void kvm_set_s2pte_writable(pte_t *pte)
 
 struct kvm;
 
-static inline void coherent_icache_guest_page(struct kvm *kvm, gfn_t gfn)
+static inline void coherent_cache_guest_page(struct kvm_vcpu *vcpu, hva_t hva,
+					     unsigned long size)
 {
 	/*
 	 * If we are going to insert an instruction page and the icache is
@@ -120,8 +121,7 @@ static inline void coherent_icache_guest_page(struct kvm *kvm, gfn_t gfn)
 	 * need any kind of flushing (DDI 0406C.b - Page B3-1392).
 	 */
 	if (icache_is_pipt()) {
-		unsigned long hva = gfn_to_hva(kvm, gfn);
-		__cpuc_coherent_user_range(hva, hva + PAGE_SIZE);
+		__cpuc_coherent_user_range(hva, hva + size);
 	} else if (!icache_is_vivt_asid_tagged()) {
 		/* any kind of VIPT cache */
 		__flush_icache_all();
diff --git a/arch/arm/kvm/mmu.c b/arch/arm/kvm/mmu.c
index fe59e4a..9e92601 100644
--- a/arch/arm/kvm/mmu.c
+++ b/arch/arm/kvm/mmu.c
@@ -540,6 +540,7 @@ static int user_mem_abort(struct kvm_vcpu *vcpu, phys_addr_t fault_ipa,
 	int ret;
 	bool write_fault, writable;
 	unsigned long mmu_seq;
+	unsigned long hva = gfn_to_hva(vcpu->kvm, gfn);
 	struct kvm_mmu_memory_cache *memcache = &vcpu->arch.mmu_page_cache;
 
 	write_fault = kvm_is_write_fault(kvm_vcpu_get_hsr(vcpu));
@@ -570,7 +571,7 @@ static int user_mem_abort(struct kvm_vcpu *vcpu, phys_addr_t fault_ipa,
 		return -EFAULT;
 
 	new_pte = pfn_pte(pfn, PAGE_S2);
-	coherent_icache_guest_page(vcpu->kvm, gfn);
+	coherent_cache_guest_page(vcpu, hva, PAGE_SIZE);
 
 	spin_lock(&vcpu->kvm->mmu_lock);
 	if (mmu_notifier_retry(vcpu->kvm, mmu_seq))
diff --git a/arch/arm64/include/asm/kvm_mmu.h b/arch/arm64/include/asm/kvm_mmu.h
index efe609c..99229a61 100644
--- a/arch/arm64/include/asm/kvm_mmu.h
+++ b/arch/arm64/include/asm/kvm_mmu.h
@@ -105,7 +105,6 @@ static inline bool kvm_is_write_fault(unsigned long esr)
 	return true;
 }
 
-static inline void kvm_clean_dcache_area(void *addr, size_t size) {}
 static inline void kvm_clean_pgd(pgd_t *pgd) {}
 static inline void kvm_clean_pmd_entry(pmd_t *pmd) {}
 static inline void kvm_clean_pte(pte_t *pte) {}
@@ -118,18 +117,26 @@ static inline void kvm_set_s2pte_writable(pte_t *pte)
 
 struct kvm;
 
-static inline void coherent_icache_guest_page(struct kvm *kvm, gfn_t gfn)
+#define kvm_flush_dcache_to_poc(a,l)	__flush_dcache_area((a), (l))
+
+static inline bool vcpu_has_cache_enabled(struct kvm_vcpu *vcpu)
+{
+	return (vcpu_sys_reg(vcpu, SCTLR_EL1) & 0b101) == 0b101;
+}
+
+static inline void coherent_cache_guest_page(struct kvm_vcpu *vcpu, hva_t hva,
+					     unsigned long size)
 {
+	if (!vcpu_has_cache_enabled(vcpu))
+		kvm_flush_dcache_to_poc((void *)hva, size);
+
 	if (!icache_is_aliasing()) {		/* PIPT */
-		unsigned long hva = gfn_to_hva(kvm, gfn);
-		flush_icache_range(hva, hva + PAGE_SIZE);
+		flush_icache_range(hva, hva + size);
 	} else if (!icache_is_aivivt()) {	/* non ASID-tagged VIVT */
 		/* any kind of VIPT cache */
 		__flush_icache_all();
 	}
 }
 
-#define kvm_flush_dcache_to_poc(a,l)	__flush_dcache_area((a), (l))
-
 #endif /* __ASSEMBLY__ */
 
 #endif /* __ARM64_KVM_MMU_H__ */