From patchwork Tue Jun 2 14:48:06 2015
X-Patchwork-Submitter: Shannon Zhao
X-Patchwork-Id: 49386
From: shannon.zhao@linaro.org
To: stable@vger.kernel.org
Cc: gregkh@linuxfoundation.org, christoffer.dall@linaro.org,
	shannon.zhao@linaro.org, Ard Biesheuvel, Marc Zyngier
Subject: [PATCH for 3.14.y stable 11/32] ARM/arm64: KVM: fix use of WnR bit in kvm_is_write_fault()
Date: Tue, 2 Jun 2015 22:48:06 +0800
Message-Id: <1433256507-7856-12-git-send-email-shannon.zhao@linaro.org>
X-Mailer: git-send-email 1.9.5.msysgit.1
In-Reply-To: <1433256507-7856-1-git-send-email-shannon.zhao@linaro.org>
References: <1433256507-7856-1-git-send-email-shannon.zhao@linaro.org>
X-Mailing-List: stable@vger.kernel.org

From: Ard Biesheuvel

commit a7d079cea2dffb112e26da2566dd84c0ef1fce97 upstream.

The ISS encoding for an exception from a Data Abort has a WnR bit[6] that
indicates whether the Data Abort was caused by a read or a write
instruction. While there are several fields in the encoding that are only
valid if the ISV bit[24] is set, WnR is not one of them, so we can read it
unconditionally.

Instead of fixing both implementations of kvm_is_write_fault() in place,
reimplement it just once using kvm_vcpu_dabt_iswrite(), which already does
the right thing with respect to the WnR bit.
Also fix up the callers to pass 'vcpu'

Acked-by: Laszlo Ersek
Acked-by: Marc Zyngier
Acked-by: Christoffer Dall
Signed-off-by: Ard Biesheuvel
Signed-off-by: Marc Zyngier
Signed-off-by: Shannon Zhao
---
 arch/arm/include/asm/kvm_mmu.h   | 11 -----------
 arch/arm/kvm/mmu.c               | 10 +++++++++-
 arch/arm64/include/asm/kvm_mmu.h | 13 -------------
 3 files changed, 9 insertions(+), 25 deletions(-)

diff --git a/arch/arm/include/asm/kvm_mmu.h b/arch/arm/include/asm/kvm_mmu.h
index 0cbdb8e..630869e 100644
--- a/arch/arm/include/asm/kvm_mmu.h
+++ b/arch/arm/include/asm/kvm_mmu.h
@@ -78,17 +78,6 @@ static inline void kvm_set_pte(pte_t *pte, pte_t new_pte)
 	flush_pmd_entry(pte);
 }
 
-static inline bool kvm_is_write_fault(unsigned long hsr)
-{
-	unsigned long hsr_ec = hsr >> HSR_EC_SHIFT;
-	if (hsr_ec == HSR_EC_IABT)
-		return false;
-	else if ((hsr & HSR_ISV) && !(hsr & HSR_WNR))
-		return false;
-	else
-		return true;
-}
-
 static inline void kvm_clean_pgd(pgd_t *pgd)
 {
 	clean_dcache_area(pgd, PTRS_PER_S2_PGD * sizeof(pgd_t));
diff --git a/arch/arm/kvm/mmu.c b/arch/arm/kvm/mmu.c
index 70ed2c1..049c56e 100644
--- a/arch/arm/kvm/mmu.c
+++ b/arch/arm/kvm/mmu.c
@@ -746,6 +746,14 @@ static bool transparent_hugepage_adjust(pfn_t *pfnp, phys_addr_t *ipap)
 	return false;
 }
 
+static bool kvm_is_write_fault(struct kvm_vcpu *vcpu)
+{
+	if (kvm_vcpu_trap_is_iabt(vcpu))
+		return false;
+
+	return kvm_vcpu_dabt_iswrite(vcpu);
+}
+
 static int user_mem_abort(struct kvm_vcpu *vcpu, phys_addr_t fault_ipa,
 			  struct kvm_memory_slot *memslot,
 			  unsigned long fault_status)
@@ -761,7 +769,7 @@ static int user_mem_abort(struct kvm_vcpu *vcpu, phys_addr_t fault_ipa,
 	pfn_t pfn;
 	pgprot_t mem_type = PAGE_S2;
 
-	write_fault = kvm_is_write_fault(kvm_vcpu_get_hsr(vcpu));
+	write_fault = kvm_is_write_fault(vcpu);
 	if (fault_status == FSC_PERM && !write_fault) {
 		kvm_err("Unexpected L2 read permission error\n");
 		return -EFAULT;
diff --git a/arch/arm64/include/asm/kvm_mmu.h b/arch/arm64/include/asm/kvm_mmu.h
index 8e138c7..737da74 100644
--- a/arch/arm64/include/asm/kvm_mmu.h
+++ b/arch/arm64/include/asm/kvm_mmu.h
@@ -93,19 +93,6 @@ void kvm_clear_hyp_idmap(void);
 #define kvm_set_pte(ptep, pte)	set_pte(ptep, pte)
 #define kvm_set_pmd(pmdp, pmd)	set_pmd(pmdp, pmd)
 
-static inline bool kvm_is_write_fault(unsigned long esr)
-{
-	unsigned long esr_ec = esr >> ESR_EL2_EC_SHIFT;
-
-	if (esr_ec == ESR_EL2_EC_IABT)
-		return false;
-
-	if ((esr & ESR_EL2_ISV) && !(esr & ESR_EL2_WNR))
-		return false;
-
-	return true;
-}
-
 static inline void kvm_clean_pgd(pgd_t *pgd) {}
 static inline void kvm_clean_pmd_entry(pmd_t *pmd) {}
 static inline void kvm_clean_pte(pte_t *pte) {}
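
[Archive note] The short standalone program below is an illustrative sketch of the
point the commit message makes about the ISS encoding; it is not part of the patch.
Only the bit positions (WnR = ISS bit[6], ISV = ISS bit[24]) come from the commit
message; the macro and function names are local to this example, not kernel symbols.
It shows how gating WnR on ISV misreports a read fault that has no valid instruction
syndrome as a write, while reading WnR unconditionally classifies it correctly.

/*
 * Illustrative sketch only -- not part of the patch. ISS_ISV, ISS_WNR and the
 * two helpers are example-local names; only the bit positions are from the
 * commit message (WnR bit[6], ISV bit[24]).
 */
#include <stdbool.h>
#include <stdio.h>

#define ISS_ISV	(1UL << 24)	/* Instruction Syndrome Valid */
#define ISS_WNR	(1UL << 6)	/* Write-not-Read */

/* Old behaviour for a data abort: WnR is only trusted when ISV is set,
 * and everything else is reported as a write. */
static bool old_is_write_fault(unsigned long iss)
{
	if ((iss & ISS_ISV) && !(iss & ISS_WNR))
		return false;
	return true;
}

/* New behaviour: WnR is valid on every data abort, so read it directly. */
static bool new_is_write_fault(unsigned long iss)
{
	return iss & ISS_WNR;
}

int main(void)
{
	/* A data abort caused by a read, with no valid instruction syndrome:
	 * both ISV and WnR are clear. */
	unsigned long read_abort_without_isv = 0;

	printf("old: %d, new: %d\n",
	       old_is_write_fault(read_abort_without_isv),
	       new_is_write_fault(read_abort_without_isv));
	/* Prints "old: 1, new: 0": the ISV-gated check misreports the read
	 * fault as a write, the unconditional WnR read does not. */
	return 0;
}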