From patchwork Tue Jun 30 10:49:04 2015
X-Patchwork-Submitter: Shannon Zhao
X-Patchwork-Id: 50464
From: shannon.zhao@linaro.org
To: stable@vger.kernel.org
Cc: gregkh@linuxfoundation.org, christoffer.dall@linaro.org,
 shannon.zhao@linaro.org
Subject: [PATCH for 3.14.y stable 16/22] arm/arm64: KVM: Introduce stage2_unmap_vm
Date: Tue, 30 Jun 2015 18:49:04 +0800
Message-Id: <1435661350-8060-17-git-send-email-shannon.zhao@linaro.org>
In-Reply-To: <1435661350-8060-1-git-send-email-shannon.zhao@linaro.org>
References: <1435661350-8060-1-git-send-email-shannon.zhao@linaro.org>
X-Mailer: git-send-email 1.9.5.msysgit.1

From: Christoffer Dall <christoffer.dall@linaro.org>

commit 957db105c99792ae8ef61ffc9ae77d910f6471da upstream.

Introduce a new function to unmap user RAM regions in the stage2 page
tables.  This is needed on reboot (or when the guest turns off the
MMU) to ensure we fault in pages again and make the dcache, RAM, and
icache coherent.

Using unmap_stage2_range for the whole guest physical range does not
work, because that unmaps IO regions (such as the GIC) which will not
be recreated or in the best case faulted in on a page-by-page basis.

Call this function on secondary and subsequent calls to the
KVM_ARM_VCPU_INIT ioctl so that a reset VCPU will detect the guest
Stage-1 MMU is off when faulting in pages and make the caches coherent.

Acked-by: Marc Zyngier <marc.zyngier@arm.com>
Signed-off-by: Christoffer Dall <christoffer.dall@linaro.org>
Signed-off-by: Shannon Zhao <shannon.zhao@linaro.org>
---
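Note: the path above is driven entirely from userspace; a VMM reboots a
guest by issuing KVM_ARM_VCPU_INIT again on each VCPU, and it is that
second call which now tears down the stage-2 RAM mappings. A minimal
sketch of the userspace side (hypothetical VMM code, shown only for
illustration; vm_fd and vcpu_fd are assumed to be already-open KVM
descriptors):

        #include <linux/kvm.h>
        #include <sys/ioctl.h>

        /* Hypothetical VMM helper: re-init a VCPU to reboot the guest. */
        static int reset_vcpu(int vm_fd, int vcpu_fd)
        {
                struct kvm_vcpu_init init;

                /* Let KVM pick the target CPU type for this host. */
                if (ioctl(vm_fd, KVM_ARM_PREFERRED_TARGET, &init) < 0)
                        return -1;

                /*
                 * A second (or later) KVM_ARM_VCPU_INIT hits the
                 * has_run_once check added below and calls
                 * stage2_unmap_vm() before resetting the VCPU.
                 */
                return ioctl(vcpu_fd, KVM_ARM_VCPU_INIT, &init);
        }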
 arch/arm/include/asm/kvm_mmu.h   |  1 +
 arch/arm/kvm/arm.c               |  7 +++++
 arch/arm/kvm/mmu.c               | 65 ++++++++++++++++++++++++++++++++++++++++
 arch/arm64/include/asm/kvm_mmu.h |  1 +
 4 files changed, 74 insertions(+)

diff --git a/arch/arm/include/asm/kvm_mmu.h b/arch/arm/include/asm/kvm_mmu.h
index 630869e..9f79231 100644
--- a/arch/arm/include/asm/kvm_mmu.h
+++ b/arch/arm/include/asm/kvm_mmu.h
@@ -47,6 +47,7 @@ int create_hyp_io_mappings(void *from, void *to, phys_addr_t);
 void free_boot_hyp_pgd(void);
 void free_hyp_pgds(void);
 
+void stage2_unmap_vm(struct kvm *kvm);
 int kvm_alloc_stage2_pgd(struct kvm *kvm);
 void kvm_free_stage2_pgd(struct kvm *kvm);
 int kvm_phys_addr_ioremap(struct kvm *kvm, phys_addr_t guest_ipa,
diff --git a/arch/arm/kvm/arm.c b/arch/arm/kvm/arm.c
index 077f82d0..039df03 100644
--- a/arch/arm/kvm/arm.c
+++ b/arch/arm/kvm/arm.c
@@ -675,6 +675,13 @@ static int kvm_arch_vcpu_ioctl_vcpu_init(struct kvm_vcpu *vcpu,
         if (ret)
                 return ret;
 
+        /*
+         * Ensure a rebooted VM will fault in RAM pages and detect if the
+         * guest MMU is turned off and flush the caches as needed.
+         */
+        if (vcpu->arch.has_run_once)
+                stage2_unmap_vm(vcpu->kvm);
+
         vcpu_reset_hcr(vcpu);
 
         /*
diff --git a/arch/arm/kvm/mmu.c b/arch/arm/kvm/mmu.c
index 5b12c49..524b4b5 100644
--- a/arch/arm/kvm/mmu.c
+++ b/arch/arm/kvm/mmu.c
@@ -556,6 +556,71 @@ static void unmap_stage2_range(struct kvm *kvm, phys_addr_t start, u64 size)
         unmap_range(kvm, kvm->arch.pgd, start, size);
 }
 
+static void stage2_unmap_memslot(struct kvm *kvm,
+                                 struct kvm_memory_slot *memslot)
+{
+        hva_t hva = memslot->userspace_addr;
+        phys_addr_t addr = memslot->base_gfn << PAGE_SHIFT;
+        phys_addr_t size = PAGE_SIZE * memslot->npages;
+        hva_t reg_end = hva + size;
+
+        /*
+         * A memory region could potentially cover multiple VMAs, and any holes
+         * between them, so iterate over all of them to find out if we should
+         * unmap any of them.
+         *
+         *     +--------------------------------------------+
+         * +---------------+----------------+   +----------------+
+         * |   : VMA 1     |      VMA 2     |   |    VMA 3  :    |
+         * +---------------+----------------+   +----------------+
+         *     |               memory region                |
+         *     +--------------------------------------------+
+         */
+        do {
+                struct vm_area_struct *vma = find_vma(current->mm, hva);
+                hva_t vm_start, vm_end;
+
+                if (!vma || vma->vm_start >= reg_end)
+                        break;
+
+                /*
+                 * Take the intersection of this VMA with the memory region
+                 */
+                vm_start = max(hva, vma->vm_start);
+                vm_end = min(reg_end, vma->vm_end);
+
+                if (!(vma->vm_flags & VM_PFNMAP)) {
+                        gpa_t gpa = addr + (vm_start - memslot->userspace_addr);
+                        unmap_stage2_range(kvm, gpa, vm_end - vm_start);
+                }
+                hva = vm_end;
+        } while (hva < reg_end);
+}
+
+/**
+ * stage2_unmap_vm - Unmap Stage-2 RAM mappings
+ * @kvm: The struct kvm pointer
+ *
+ * Go through the memregions and unmap any regular RAM
+ * backing memory already mapped to the VM.
+ */
+void stage2_unmap_vm(struct kvm *kvm)
+{
+        struct kvm_memslots *slots;
+        struct kvm_memory_slot *memslot;
+        int idx;
+
+        idx = srcu_read_lock(&kvm->srcu);
+        spin_lock(&kvm->mmu_lock);
+
+        slots = kvm_memslots(kvm);
+        kvm_for_each_memslot(memslot, slots)
+                stage2_unmap_memslot(kvm, memslot);
+
+        spin_unlock(&kvm->mmu_lock);
+        srcu_read_unlock(&kvm->srcu, idx);
+}
+
 /**
  * kvm_free_stage2_pgd - free all stage-2 tables
  * @kvm:        The KVM struct pointer for the VM.
diff --git a/arch/arm64/include/asm/kvm_mmu.h b/arch/arm64/include/asm/kvm_mmu.h
index a030d16..0d51874 100644
--- a/arch/arm64/include/asm/kvm_mmu.h
+++ b/arch/arm64/include/asm/kvm_mmu.h
@@ -74,6 +74,7 @@ int create_hyp_io_mappings(void *from, void *to, phys_addr_t);
 void free_boot_hyp_pgd(void);
 void free_hyp_pgds(void);
 
+void stage2_unmap_vm(struct kvm *kvm);
 int kvm_alloc_stage2_pgd(struct kvm *kvm);
 void kvm_free_stage2_pgd(struct kvm *kvm);
 int kvm_phys_addr_ioremap(struct kvm *kvm, phys_addr_t guest_ipa,
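
For reviewers: the core of stage2_unmap_memslot() above is a plain
interval walk; clip each VMA against the memslot's [hva, reg_end)
window, unmap only the non-PFNMAP overlaps, and step over holes. The
same logic as a standalone, compilable sketch (the vma_region and
find_region names are made-up stand-ins for struct vm_area_struct and
find_vma(), not kernel code):

        #include <stddef.h>
        #include <stdint.h>
        #include <stdio.h>

        /* Stand-in for struct vm_area_struct. */
        struct vma_region {
                uint64_t start, end;    /* [start, end), like vm_start/vm_end */
                int pfnmap;             /* models VM_PFNMAP (device mappings) */
        };

        /* Like find_vma(): first region with end > addr, from a sorted array. */
        static const struct vma_region *find_region(const struct vma_region *v,
                                                    size_t n, uint64_t addr)
        {
                for (size_t i = 0; i < n; i++)
                        if (v[i].end > addr)
                                return &v[i];
                return NULL;
        }

        /* Walk [hva, reg_end) the way stage2_unmap_memslot() does. */
        static void walk(const struct vma_region *v, size_t n,
                         uint64_t hva, uint64_t reg_end)
        {
                do {
                        const struct vma_region *vma = find_region(v, n, hva);

                        if (!vma || vma->start >= reg_end)
                                break;  /* nothing left of the region overlaps */

                        /* Intersection of this VMA with the memory region. */
                        uint64_t s = hva > vma->start ? hva : vma->start;
                        uint64_t e = reg_end < vma->end ? reg_end : vma->end;

                        if (!vma->pfnmap)       /* only RAM-backed pieces */
                                printf("unmap [%#llx, %#llx)\n",
                                       (unsigned long long)s,
                                       (unsigned long long)e);
                        hva = e;        /* skip past this VMA and any hole */
                } while (hva < reg_end);
        }

        int main(void)
        {
                /* Three VMAs with a hole, as in the diagram in the patch. */
                const struct vma_region v[] = {
                        { 0x1000, 0x5000, 0 },
                        { 0x5000, 0x9000, 0 },
                        { 0xa000, 0xe000, 1 },  /* PFNMAP: left mapped */
                };

                walk(v, 3, 0x2000, 0xd000);
                return 0;
        }

Running it prints the two clipped RAM ranges and skips the PFNMAP VMA,
mirroring why stage2_unmap_vm() can be used on reboot without destroying
IO mappings such as the GIC.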