From patchwork Thu Apr 3 15:17:46 2014
X-Patchwork-Submitter: Auger Eric
X-Patchwork-Id: 27691
From: Eric Auger <eric.auger@linaro.org>
To: eric.auger@st.com, christoffer.dall@linaro.org, marc.zyngier@arm.com,
	linux-arm-kernel@lists.infradead.org, kvmarm@lists.cs.columbia.edu,
	kvm@vger.kernel.org
Cc: linux-kernel@vger.kernel.org, patches@linaro.org,
	christophe.barnichon@st.com, Eric Auger <eric.auger@linaro.org>
Subject: [PATCH] ARM: KVM: Handle IPA unmapping on memory region deletion
Date: Thu, 3 Apr 2014 17:17:46 +0200
Message-Id: <1396538266-13245-1-git-send-email-eric.auger@linaro.org>
X-Mailer: git-send-email 1.9.1

Currently, when a KVM memory region is removed using
kvm_vm_ioctl_set_memory_region (with a memory region size equal to 0),
the corresponding intermediate physical address (IPA) range is not
unmapped.

This patch unmaps the region's IPA range in
kvm_arch_commit_memory_region using unmap_stage2_range.

The patch was tested on a QEMU VFIO based use case where RAM memory
region creation/deletion happens frequently for IRQ handling.

Notes:
- the KVM_MR_MOVE case likely requires a similar addition, but I cannot
  test it currently.

Signed-off-by: Eric Auger <eric.auger@linaro.org>
---
 arch/arm/include/asm/kvm_mmu.h | 2 ++
 arch/arm/kvm/arm.c             | 8 ++++++++
 arch/arm/kvm/mmu.c             | 2 +-
 3 files changed, 11 insertions(+), 1 deletion(-)

diff --git a/arch/arm/include/asm/kvm_mmu.h b/arch/arm/include/asm/kvm_mmu.h
index 2d122ad..a91c863 100644
--- a/arch/arm/include/asm/kvm_mmu.h
+++ b/arch/arm/include/asm/kvm_mmu.h
@@ -52,6 +52,8 @@ void kvm_free_stage2_pgd(struct kvm *kvm);
 int kvm_phys_addr_ioremap(struct kvm *kvm, phys_addr_t guest_ipa,
			  phys_addr_t pa, unsigned long size);
 
+void unmap_stage2_range(struct kvm *kvm, phys_addr_t guest_ipa, u64 size);
+
 int kvm_handle_guest_abort(struct kvm_vcpu *vcpu, struct kvm_run *run);
 
 void kvm_mmu_free_memory_caches(struct kvm_vcpu *vcpu);
diff --git a/arch/arm/kvm/arm.c b/arch/arm/kvm/arm.c
index bd18bb8..9a4bc10 100644
--- a/arch/arm/kvm/arm.c
+++ b/arch/arm/kvm/arm.c
@@ -241,6 +241,14 @@ void kvm_arch_commit_memory_region(struct kvm *kvm,
				   const struct kvm_memory_slot *old,
				   enum kvm_mr_change change)
 {
+	if (change == KVM_MR_DELETE) {
+		gpa_t gpa = old->base_gfn << PAGE_SHIFT;
+		u64 size = old->npages << PAGE_SHIFT;
+
+		spin_lock(&kvm->mmu_lock);
+		unmap_stage2_range(kvm, gpa, size);
+		spin_unlock(&kvm->mmu_lock);
+	}
 }
 
 void kvm_arch_flush_shadow_all(struct kvm *kvm)
diff --git a/arch/arm/kvm/mmu.c b/arch/arm/kvm/mmu.c
index 7789857..e8580e2 100644
--- a/arch/arm/kvm/mmu.c
+++ b/arch/arm/kvm/mmu.c
@@ -443,7 +443,7 @@ int kvm_alloc_stage2_pgd(struct kvm *kvm)
  * destroying the VM), otherwise another faulting VCPU may come in and mess
  * with things behind our backs.
  */
-static void unmap_stage2_range(struct kvm *kvm, phys_addr_t start, u64 size)
+void unmap_stage2_range(struct kvm *kvm, phys_addr_t start, u64 size)
 {
	unmap_range(kvm, kvm->arch.pgd, start, size);
 }
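
For reference (not part of the patch), here is a minimal userspace sketch of
the sequence that exercises this path: the VMM deletes a previously registered
memory slot by calling KVM_SET_USER_MEMORY_REGION with memory_size set to 0,
which is what makes KVM reach kvm_arch_commit_memory_region() with
change == KVM_MR_DELETE. The helper name, the slot number, and the assumption
that the slot was registered earlier are purely illustrative.

#include <fcntl.h>
#include <stdio.h>
#include <string.h>
#include <sys/ioctl.h>
#include <linux/kvm.h>

/* Ask KVM to delete a memslot by registering it again with size 0. */
static int delete_memslot(int vm_fd, unsigned int slot)
{
	struct kvm_userspace_memory_region region;

	memset(&region, 0, sizeof(region));
	region.slot = slot;		/* slot previously created by the VMM */
	region.memory_size = 0;		/* size 0 => deletion (KVM_MR_DELETE) */

	return ioctl(vm_fd, KVM_SET_USER_MEMORY_REGION, &region);
}

int main(void)
{
	int kvm_fd = open("/dev/kvm", O_RDWR);
	int vm_fd = ioctl(kvm_fd, KVM_CREATE_VM, 0);

	/* Illustrative only: assumes slot 1 was set up earlier with a real size. */
	if (delete_memslot(vm_fd, 1) < 0)
		perror("KVM_SET_USER_MEMORY_REGION");

	return 0;
}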