From patchwork Thu Nov 27 18:40:59 2014
X-Patchwork-Submitter: Christoffer Dall
X-Patchwork-Id: 41647
From: Christoffer Dall <christoffer.dall@linaro.org>
To: kvmarm@lists.cs.columbia.edu, linux-arm-kernel@lists.infradead.org
Cc: Marc Zyngier, Peter Maydell, Christoffer Dall, kvm@vger.kernel.org,
	Ard Biesheuvel
Subject: [PATCH 4/5] arm/arm64: KVM: Introduce stage2_unmap_vm
Date: Thu, 27 Nov 2014 19:40:59 +0100
Message-Id: <1417113660-23610-5-git-send-email-christoffer.dall@linaro.org>
In-Reply-To: <1417113660-23610-1-git-send-email-christoffer.dall@linaro.org>
References: <1417113660-23610-1-git-send-email-christoffer.dall@linaro.org>

Introduce a new function to unmap user RAM regions in the stage2 page
tables.  This is needed on reboot (or when the guest turns off the MMU)
to ensure we fault in pages again and make the dcache, RAM, and icache
coherent.
Using unmap_stage2_range for the whole guest physical range does not
work, because that unmaps IO regions (such as the GIC) which will not be
recreated or, in the best case, will only be faulted back in on a
page-by-page basis.

Cc: Ard Biesheuvel
Signed-off-by: Christoffer Dall <christoffer.dall@linaro.org>
---
There is an alternative version with more code reuse available here:

http://git.linaro.org/people/christoffer.dall/linux-kvm-arm.git
vcpu_init_fixes-alternative

That version improves code reuse at the cost of reduced readability and
increased complexity.  I have not tested the alternative version or
spent a lot of time thinking about potentially cleaner versions of the
code, but I am including the pointer because I cannot make up my mind
about the preferred approach.  Input is welcome.
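Not part of the patch, but to make the intent above concrete: the
snippet below is a small, self-contained user-space model of what
stage2_unmap_vm() is meant to achieve across a guest reset.  RAM-backed
slots lose their stage-2 mapping so the next access faults back in
(with the caches made coherent on the fault path), while PFNMAP/IO
slots such as the GIC keep theirs.  The fake_slot structure, the
per-slot pfnmap flag, and all values are invented for illustration; the
real code walks the kernel's memslots and VMAs instead of a static
array.

/*
 * Illustration only (user-space model, not kernel code).  A tiny
 * stage-2 "map" with one RAM slot and one PFNMAP/IO slot (the GIC):
 * the reset path unmaps only the RAM slot, so the next RAM access has
 * to fault back in, while the IO mapping survives.
 */
#include <stdbool.h>
#include <stdint.h>
#include <stdio.h>

struct fake_slot {
	const char *name;
	uint64_t gpa;	/* guest physical base of the slot */
	uint64_t size;
	bool pfnmap;	/* simplification: PFNMAP tracked per slot */
	bool mapped;	/* does stage 2 currently map this slot? */
};

/* Model of stage2_unmap_vm(): drop RAM mappings, keep IO mappings. */
static void model_stage2_unmap_vm(struct fake_slot *slots, int n)
{
	for (int i = 0; i < n; i++)
		if (!slots[i].pfnmap)
			slots[i].mapped = false;
}

static void dump(const struct fake_slot *slots, int n, const char *when)
{
	printf("%s:\n", when);
	for (int i = 0; i < n; i++)
		printf("  %-4s gpa 0x%llx+0x%llx %s\n", slots[i].name,
		       (unsigned long long)slots[i].gpa,
		       (unsigned long long)slots[i].size,
		       slots[i].mapped ? "mapped" : "unmapped (faults in)");
}

int main(void)
{
	struct fake_slot slots[] = {
		{ "ram", 0x40000000, 0x20000000, false, true },
		{ "gic", 0x08000000, 0x00020000, true,  true },
	};

	dump(slots, 2, "before guest reset");
	model_stage2_unmap_vm(slots, 2);
	dump(slots, 2, "after stage2_unmap_vm");
	return 0;
}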
 arch/arm/include/asm/kvm_mmu.h   |  1 +
 arch/arm/kvm/mmu.c               | 65 ++++++++++++++++++++++++++++++++++++++++
 arch/arm64/include/asm/kvm_mmu.h |  1 +
 3 files changed, 67 insertions(+)

diff --git a/arch/arm/include/asm/kvm_mmu.h b/arch/arm/include/asm/kvm_mmu.h
index acb0d57..4654c42 100644
--- a/arch/arm/include/asm/kvm_mmu.h
+++ b/arch/arm/include/asm/kvm_mmu.h
@@ -52,6 +52,7 @@ int create_hyp_io_mappings(void *from, void *to, phys_addr_t);
 void free_boot_hyp_pgd(void);
 void free_hyp_pgds(void);
 
+void stage2_unmap_vm(struct kvm *kvm);
 int kvm_alloc_stage2_pgd(struct kvm *kvm);
 void kvm_free_stage2_pgd(struct kvm *kvm);
 int kvm_phys_addr_ioremap(struct kvm *kvm, phys_addr_t guest_ipa,
diff --git a/arch/arm/kvm/mmu.c b/arch/arm/kvm/mmu.c
index 57a403a..b1f3c9a 100644
--- a/arch/arm/kvm/mmu.c
+++ b/arch/arm/kvm/mmu.c
@@ -611,6 +611,71 @@ static void unmap_stage2_range(struct kvm *kvm, phys_addr_t start, u64 size)
 	unmap_range(kvm, kvm->arch.pgd, start, size);
 }
 
+static void stage2_unmap_memslot(struct kvm *kvm,
+				 struct kvm_memory_slot *memslot)
+{
+	hva_t hva = memslot->userspace_addr;
+	phys_addr_t addr = memslot->base_gfn << PAGE_SHIFT;
+	phys_addr_t size = PAGE_SIZE * memslot->npages;
+	hva_t reg_end = hva + size;
+
+	/*
+	 * A memory region could potentially cover multiple VMAs, and any holes
+	 * between them, so iterate over all of them to find out if we should
+	 * unmap any of them.
+	 *
+	 *     +--------------------------------------------+
+	 * +---------------+----------------+   +----------------+
+	 * |   : VMA 1     |      VMA 2     |   |    VMA 3  :    |
+	 * +---------------+----------------+   +----------------+
+	 *     |               memory region                |
+	 *     +--------------------------------------------+
+	 */
+	do {
+		struct vm_area_struct *vma = find_vma(current->mm, hva);
+		hva_t vm_start, vm_end;
+
+		if (!vma || vma->vm_start >= reg_end)
+			break;
+
+		/*
+		 * Take the intersection of this VMA with the memory region
+		 */
+		vm_start = max(hva, vma->vm_start);
+		vm_end = min(reg_end, vma->vm_end);
+
+		if (!(vma->vm_flags & VM_PFNMAP)) {
+			gpa_t gpa = addr + (vm_start - memslot->userspace_addr);
+			unmap_stage2_range(kvm, gpa, vm_end - vm_start);
+		}
+		hva = vm_end;
+	} while (hva < reg_end);
+}
+
+/**
+ * stage2_unmap_vm - Unmap Stage-2 RAM mappings
+ * @kvm: The struct kvm pointer
+ *
+ * Go through the memregions and unmap any regular RAM
+ * backing memory already mapped to the VM.
+ */
+void stage2_unmap_vm(struct kvm *kvm)
+{
+	struct kvm_memslots *slots;
+	struct kvm_memory_slot *memslot;
+	int idx;
+
+	idx = srcu_read_lock(&kvm->srcu);
+	spin_lock(&kvm->mmu_lock);
+
+	slots = kvm_memslots(kvm);
+	kvm_for_each_memslot(memslot, slots)
+		stage2_unmap_memslot(kvm, memslot);
+
+	spin_unlock(&kvm->mmu_lock);
+	srcu_read_unlock(&kvm->srcu, idx);
+}
+
 /**
  * kvm_free_stage2_pgd - free all stage-2 tables
  * @kvm: The KVM struct pointer for the VM.
diff --git a/arch/arm64/include/asm/kvm_mmu.h b/arch/arm64/include/asm/kvm_mmu.h
index 0caf7a5..061fed7 100644
--- a/arch/arm64/include/asm/kvm_mmu.h
+++ b/arch/arm64/include/asm/kvm_mmu.h
@@ -83,6 +83,7 @@ int create_hyp_io_mappings(void *from, void *to, phys_addr_t);
 void free_boot_hyp_pgd(void);
 void free_hyp_pgds(void);
 
+void stage2_unmap_vm(struct kvm *kvm);
 int kvm_alloc_stage2_pgd(struct kvm *kvm);
 void kvm_free_stage2_pgd(struct kvm *kvm);
 int kvm_phys_addr_ioremap(struct kvm *kvm, phys_addr_t guest_ipa,
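For illustration only (not part of the patch): the stand-alone C model
below mirrors the interval arithmetic of stage2_unmap_memslot() above.
It walks the VMAs covering the memslot's userspace range, clamps each
VMA to the region (max of the starts, min of the ends), translates the
resulting window to guest physical addresses, and skips VM_PFNMAP
areas.  The vma_desc and memslot_desc types, find_vma_model(), and the
example addresses are all hypothetical; the kernel version uses
find_vma() and unmap_stage2_range() instead.

/*
 * Stand-alone model of the stage2_unmap_memslot() walk above.
 * Types and the vma list are made up for illustration.
 */
#include <stdbool.h>
#include <stdint.h>
#include <stdio.h>

struct vma_desc {
	uint64_t start, end;	/* userspace virtual range [start, end) */
	bool pfnmap;		/* VM_PFNMAP set? */
};

struct memslot_desc {
	uint64_t userspace_addr;	/* hva of the region */
	uint64_t gpa;			/* guest physical base */
	uint64_t size;
};

/* Model of find_vma(): first VMA (sorted) whose end is above hva. */
static const struct vma_desc *find_vma_model(const struct vma_desc *vmas,
					     int n, uint64_t hva)
{
	for (int i = 0; i < n; i++)
		if (hva < vmas[i].end)
			return &vmas[i];
	return NULL;
}

static void model_unmap_memslot(const struct memslot_desc *slot,
				const struct vma_desc *vmas, int n)
{
	uint64_t hva = slot->userspace_addr;
	uint64_t reg_end = hva + slot->size;

	do {
		const struct vma_desc *vma = find_vma_model(vmas, n, hva);
		uint64_t vm_start, vm_end;

		if (!vma || vma->start >= reg_end)
			break;

		/* Intersection of this VMA with the memory region. */
		vm_start = hva > vma->start ? hva : vma->start;
		vm_end = reg_end < vma->end ? reg_end : vma->end;

		if (!vma->pfnmap) {
			uint64_t gpa = slot->gpa + (vm_start - slot->userspace_addr);
			printf("unmap gpa 0x%llx len 0x%llx\n",
			       (unsigned long long)gpa,
			       (unsigned long long)(vm_end - vm_start));
		}
		hva = vm_end;
	} while (hva < reg_end);
}

int main(void)
{
	/* Three VMAs with a hole, as in the diagram in the patch. */
	const struct vma_desc vmas[] = {
		{ 0x7f0000000000, 0x7f0000400000, false },
		{ 0x7f0000400000, 0x7f0000800000, true  },	/* PFNMAP */
		{ 0x7f0000900000, 0x7f0000c00000, false },
	};
	const struct memslot_desc slot = {
		.userspace_addr = 0x7f0000100000,
		.gpa = 0x40000000,
		.size = 0x00a00000,
	};

	model_unmap_memslot(&slot, vmas, 3);
	return 0;
}

Compiled as plain C this prints two ranges: the PFNMAP VMA in the
middle and the hole after it are skipped, and only the RAM-backed
intersections are reported for unmapping.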