From patchwork Thu Apr 30 09:25:05 2015
Content-Type: text/plain; charset="utf-8"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
X-Patchwork-Submitter: Jiri Slaby
X-Patchwork-Id: 47788
From: Jiri Slaby
To: stable@vger.kernel.org
Cc: Christoffer Dall, Shannon Zhao, Jiri Slaby
Subject: [patch added to the 3.12 stable tree] arm/arm64: KVM: Introduce stage2_unmap_vm
Date: Thu, 30 Apr 2015 11:25:05 +0200
Message-Id: <1430385911-20480-57-git-send-email-jslaby@suse.cz>
X-Mailer: git-send-email 2.3.5
In-Reply-To: <1430385911-20480-1-git-send-email-jslaby@suse.cz>
References: <1430385911-20480-1-git-send-email-jslaby@suse.cz>
Sender: stable-owner@vger.kernel.org
Precedence: list
X-Mailing-List: stable@vger.kernel.org

From: Christoffer Dall

This patch has been added to the 3.12 stable tree. If you have any
objections, please let us know.

===============

commit 957db105c99792ae8ef61ffc9ae77d910f6471da upstream.

Introduce a new function to unmap user RAM regions in the stage2 page
tables.  This is needed on reboot (or when the guest turns off the
MMU) to ensure we fault in pages again and make the dcache, RAM, and
icache coherent.

Using unmap_stage2_range for the whole guest physical range does not
work, because that unmaps IO regions (such as the GIC) which will not
be recreated or in the best case faulted in on a page-by-page basis.

Call this function on secondary and subsequent calls to the
KVM_ARM_VCPU_INIT ioctl so that a reset VCPU will detect the guest
Stage-1 MMU is off when faulting in pages and make the caches coherent.
Acked-by: Marc Zyngier
Signed-off-by: Christoffer Dall
Signed-off-by: Shannon Zhao
Signed-off-by: Jiri Slaby
---
(An illustrative userspace sketch of the KVM_ARM_VCPU_INIT reset path
this patch hooks into follows the diff below.)

 arch/arm/include/asm/kvm_mmu.h   |  1 +
 arch/arm/kvm/arm.c               |  7 +++++
 arch/arm/kvm/mmu.c               | 65 ++++++++++++++++++++++++++++++++++++++++
 arch/arm64/include/asm/kvm_mmu.h |  1 +
 4 files changed, 74 insertions(+)

diff --git a/arch/arm/include/asm/kvm_mmu.h b/arch/arm/include/asm/kvm_mmu.h
index 17b93071bb17..8cd885699420 100644
--- a/arch/arm/include/asm/kvm_mmu.h
+++ b/arch/arm/include/asm/kvm_mmu.h
@@ -47,6 +47,7 @@ int create_hyp_io_mappings(void *from, void *to, phys_addr_t);
 void free_boot_hyp_pgd(void);
 void free_hyp_pgds(void);
 
+void stage2_unmap_vm(struct kvm *kvm);
 int kvm_alloc_stage2_pgd(struct kvm *kvm);
 void kvm_free_stage2_pgd(struct kvm *kvm);
 int kvm_phys_addr_ioremap(struct kvm *kvm, phys_addr_t guest_ipa,
diff --git a/arch/arm/kvm/arm.c b/arch/arm/kvm/arm.c
index 8f4761b5af85..d1c5946e33a2 100644
--- a/arch/arm/kvm/arm.c
+++ b/arch/arm/kvm/arm.c
@@ -673,6 +673,13 @@ static int kvm_arch_vcpu_ioctl_vcpu_init(struct kvm_vcpu *vcpu,
 	if (ret)
 		return ret;
 
+	/*
+	 * Ensure a rebooted VM will fault in RAM pages and detect if the
+	 * guest MMU is turned off and flush the caches as needed.
+	 */
+	if (vcpu->arch.has_run_once)
+		stage2_unmap_vm(vcpu->kvm);
+
 	vcpu_reset_hcr(vcpu);
 
 	/*
diff --git a/arch/arm/kvm/mmu.c b/arch/arm/kvm/mmu.c
index 5c31e3fff597..a79baa59fe15 100644
--- a/arch/arm/kvm/mmu.c
+++ b/arch/arm/kvm/mmu.c
@@ -528,6 +528,71 @@ static void unmap_stage2_range(struct kvm *kvm, phys_addr_t start, u64 size)
 	unmap_range(kvm, kvm->arch.pgd, start, size);
 }
 
+static void stage2_unmap_memslot(struct kvm *kvm,
+				 struct kvm_memory_slot *memslot)
+{
+	hva_t hva = memslot->userspace_addr;
+	phys_addr_t addr = memslot->base_gfn << PAGE_SHIFT;
+	phys_addr_t size = PAGE_SIZE * memslot->npages;
+	hva_t reg_end = hva + size;
+
+	/*
+	 * A memory region could potentially cover multiple VMAs, and any holes
+	 * between them, so iterate over all of them to find out if we should
+	 * unmap any of them.
+	 *
+	 *     +--------------------------------------------+
+	 * +---------------+----------------+   +----------------+
+	 * |   : VMA 1     |      VMA 2     |   |    VMA 3  :    |
+	 * +---------------+----------------+   +----------------+
+	 *     |               memory region                |
+	 *     +--------------------------------------------+
+	 */
+	do {
+		struct vm_area_struct *vma = find_vma(current->mm, hva);
+		hva_t vm_start, vm_end;
+
+		if (!vma || vma->vm_start >= reg_end)
+			break;
+
+		/*
+		 * Take the intersection of this VMA with the memory region
+		 */
+		vm_start = max(hva, vma->vm_start);
+		vm_end = min(reg_end, vma->vm_end);
+
+		if (!(vma->vm_flags & VM_PFNMAP)) {
+			gpa_t gpa = addr + (vm_start - memslot->userspace_addr);
+			unmap_stage2_range(kvm, gpa, vm_end - vm_start);
+		}
+		hva = vm_end;
+	} while (hva < reg_end);
+}
+
+/**
+ * stage2_unmap_vm - Unmap Stage-2 RAM mappings
+ * @kvm: The struct kvm pointer
+ *
+ * Go through the memregions and unmap any regular RAM
+ * backing memory already mapped to the VM.
+ */
+void stage2_unmap_vm(struct kvm *kvm)
+{
+	struct kvm_memslots *slots;
+	struct kvm_memory_slot *memslot;
+	int idx;
+
+	idx = srcu_read_lock(&kvm->srcu);
+	spin_lock(&kvm->mmu_lock);
+
+	slots = kvm_memslots(kvm);
+	kvm_for_each_memslot(memslot, slots)
+		stage2_unmap_memslot(kvm, memslot);
+
+	spin_unlock(&kvm->mmu_lock);
+	srcu_read_unlock(&kvm->srcu, idx);
+}
+
 /**
  * kvm_free_stage2_pgd - free all stage-2 tables
  * @kvm: The KVM struct pointer for the VM.
diff --git a/arch/arm64/include/asm/kvm_mmu.h b/arch/arm64/include/asm/kvm_mmu.h
index 5966ad5a356f..6e127e7ca687 100644
--- a/arch/arm64/include/asm/kvm_mmu.h
+++ b/arch/arm64/include/asm/kvm_mmu.h
@@ -74,6 +74,7 @@ int create_hyp_io_mappings(void *from, void *to, phys_addr_t);
 void free_boot_hyp_pgd(void);
 void free_hyp_pgds(void);
 
+void stage2_unmap_vm(struct kvm *kvm);
 int kvm_alloc_stage2_pgd(struct kvm *kvm);
 void kvm_free_stage2_pgd(struct kvm *kvm);
 int kvm_phys_addr_ioremap(struct kvm *kvm, phys_addr_t guest_ipa,
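
As noted above, here is a minimal, hypothetical sketch of the userspace
side of this change: stage2_unmap_vm() is reached on the second and later
KVM_ARM_VCPU_INIT calls for a VCPU that has already run, e.g. a VMM
resetting the guest on reboot. The reset_vcpu() name and the surrounding
setup are assumptions for illustration; only the two ioctls and struct
kvm_vcpu_init come from the kernel's KVM ABI. This snippet is not part of
the patch.

#include <stdio.h>
#include <sys/ioctl.h>
#include <linux/kvm.h>

/*
 * Re-initialize a VCPU that has already run (hypothetical VMM helper).
 * On arm/arm64 this repeated KVM_ARM_VCPU_INIT is what makes the kernel
 * call stage2_unmap_vm(), so RAM pages are faulted in again and the
 * caches made coherent before the guest restarts.
 */
static int reset_vcpu(int vm_fd, int vcpu_fd)
{
	struct kvm_vcpu_init init;

	/* Ask KVM for the preferred CPU target to emulate on this host. */
	if (ioctl(vm_fd, KVM_ARM_PREFERRED_TARGET, &init) < 0) {
		perror("KVM_ARM_PREFERRED_TARGET");
		return -1;
	}

	/* Second and later calls reset the VCPU and unmap stage-2 RAM. */
	if (ioctl(vcpu_fd, KVM_ARM_VCPU_INIT, &init) < 0) {
		perror("KVM_ARM_VCPU_INIT");
		return -1;
	}
	return 0;
}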