From patchwork Sun Jan 11 14:00:03 2015
X-Patchwork-Submitter: Christoffer Dall
X-Patchwork-Id: 42952
Date: Sun, 11 Jan 2015 15:00:03 +0100
From: Christoffer Dall <christoffer.dall@linaro.org>
To: Mario Smarduch <m.smarduch@samsung.com>
Cc: marc.zyngier@arm.com, pbonzini@redhat.com, kvmarm@lists.cs.columbia.edu,
 kvm@vger.kernel.org, linux-arm-kernel@lists.infradead.org
Subject: Re: [PATCH RESEND v15 07/10] KVM: arm: page logging 2nd stage fault handling
Message-ID: <20150111140003.GA2722@cbox>
References: <1420863441-32592-1-git-send-email-m.smarduch@samsung.com>
In-Reply-To: <1420863441-32592-1-git-send-email-m.smarduch@samsung.com>
User-Agent: Mutt/1.5.21 (2010-09-15)

On Fri, Jan 09, 2015 at 08:17:20PM -0800, Mario Smarduch wrote:
> This patch adds support for 2nd stage page fault handling while dirty page
> logging. On huge page faults, huge pages are dissolved to normal pages, and
> rebuilding of 2nd stage huge pages is blocked. In case migration is
> canceled this restriction is removed and huge pages may be rebuilt again.
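(Not part of the patch, just for context: the path under discussion is driven
by the generic KVM dirty-log ioctls. A rough userspace sketch, assuming a VM
fd vmfd and a slot backed by uaddr/mem_size with a preallocated bitmap:)

	#include <linux/kvm.h>
	#include <sys/ioctl.h>

	struct kvm_userspace_memory_region region = {
		.slot = 0,
		.flags = KVM_MEM_LOG_DIRTY_PAGES,	/* turn logging on */
		.guest_phys_addr = 0x80000000,
		.memory_size = mem_size,
		.userspace_addr = (__u64)uaddr,
	};

	/*
	 * A flags-only change on an existing slot: the slot gets
	 * write-protected and subsequent guest writes fault into
	 * user_mem_abort().
	 */
	ioctl(vmfd, KVM_SET_USER_MEMORY_REGION, &region);

	struct kvm_dirty_log log = { .slot = 0, .dirty_bitmap = bitmap };

	/* retrieves and clears the bitmap populated via mark_page_dirty() */
	ioctl(vmfd, KVM_GET_DIRTY_LOG, &log);

Enabling KVM_MEM_LOG_DIRTY_PAGES on an existing slot is exactly the
KVM_MR_FLAGS_ONLY case discussed further down.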
>
> This patch applies cleanly on top of patch series posted Dec. 15'th:
> https://lists.cs.columbia.edu/pipermail/kvmarm/2014-December/012826.html

In the future such information should also go under the --- separator.

>
> Patch #11 has been dropped, and should not be applied.

this should go under the '---' separator too.

>
> Signed-off-by: Mario Smarduch <m.smarduch@samsung.com>
> ---
>
> Change Log since last RESEND v1 --> v2:
> - Disallow dirty page logging of IO region - fail for initial write protect
>   and disable logging code in 2nd stage page fault handler.
> - Fixed auto spell correction errors
>
> Change Log RESEND v0 --> v1:
> - fixed bug exposed by new generic __get_user_pages_fast(), when region is
>   writable, prevent write protection of pte on read fault
> - Removed marking entire huge page dirty on initial access
> - don't dissolve huge pages of non-writable regions
> - Made updates based on Christoffers comments
> - renamed logging status function to memslot_is_logging()
> - changed few values to bool from longs
> - streamlined user_mem_abort() to eliminate extra conditional checks
> ---
>  arch/arm/kvm/mmu.c | 113 ++++++++++++++++++++++++++++++++++++++++++++++++----
>  1 file changed, 105 insertions(+), 8 deletions(-)
>
> diff --git a/arch/arm/kvm/mmu.c b/arch/arm/kvm/mmu.c
> index 73d506f..b878236 100644
> --- a/arch/arm/kvm/mmu.c
> +++ b/arch/arm/kvm/mmu.c
> @@ -47,6 +47,18 @@ static phys_addr_t hyp_idmap_vector;
>  #define kvm_pmd_huge(_x)	(pmd_huge(_x) || pmd_trans_huge(_x))
>  #define kvm_pud_huge(_x)	pud_huge(_x)
>  
> +#define KVM_S2PTE_FLAG_IS_IOMAP		(1UL << 0)
> +#define KVM_S2PTE_FLAG_LOGGING_ACTIVE	(1UL << 1)
> +
> +static bool memslot_is_logging(struct kvm_memory_slot *memslot)
> +{
> +#ifdef CONFIG_ARM
> +	return !!memslot->dirty_bitmap;
> +#else
> +	return false;
> +#endif
> +}
> +
>  static void kvm_tlb_flush_vmid_ipa(struct kvm *kvm, phys_addr_t ipa)
>  {
>  	/*
> @@ -59,6 +71,25 @@ static void kvm_tlb_flush_vmid_ipa(struct kvm *kvm, phys_addr_t ipa)
>  	kvm_call_hyp(__kvm_tlb_flush_vmid_ipa, kvm, ipa);
>  }
>  
> +/**
> + * stage2_dissolve_pmd() - clear and flush huge PMD entry
> + * @kvm:	pointer to kvm structure.
> + * @addr:	IPA
> + * @pmd:	pmd pointer for IPA
> + *
> + * Function clears a PMD entry, flushes addr 1st and 2nd stage TLBs. Marks all
> + * pages in the range dirty.
> + */
> +static void stage2_dissolve_pmd(struct kvm *kvm, phys_addr_t addr, pmd_t *pmd)
> +{
> +	if (!kvm_pmd_huge(*pmd))
> +		return;
> +
> +	pmd_clear(pmd);
> +	kvm_tlb_flush_vmid_ipa(kvm, addr);
> +	put_page(virt_to_page(pmd));
> +}
> +
>  static int mmu_topup_memory_cache(struct kvm_mmu_memory_cache *cache,
>  				  int min, int max)
>  {
> @@ -703,10 +734,13 @@ static int stage2_set_pmd_huge(struct kvm *kvm, struct kvm_mmu_memory_cache
>  }
>  
>  static int stage2_set_pte(struct kvm *kvm, struct kvm_mmu_memory_cache *cache,
> -			  phys_addr_t addr, const pte_t *new_pte, bool iomap)
> +			  phys_addr_t addr, const pte_t *new_pte,
> +			  unsigned long flags)
>  {
>  	pmd_t *pmd;
>  	pte_t *pte, old_pte;
> +	bool iomap = flags & KVM_S2PTE_FLAG_IS_IOMAP;
> +	bool logging_active = flags & KVM_S2PTE_FLAG_LOGGING_ACTIVE;
>  
>  	/* Create stage-2 page table mapping - Levels 0 and 1 */
>  	pmd = stage2_get_pmd(kvm, cache, addr);
> @@ -718,6 +752,13 @@ static int stage2_set_pte(struct kvm *kvm, struct kvm_mmu_memory_cache *cache,
>  		return 0;
>  	}
>  
> +	/*
> +	 * While dirty page logging - dissolve huge PMD, then continue on to
> +	 * allocate page.
> +	 */
> +	if (logging_active)
> +		stage2_dissolve_pmd(kvm, addr, pmd);
> +
>  	/* Create stage-2 page mappings - Level 2 */
>  	if (pmd_none(*pmd)) {
>  		if (!cache)
> @@ -774,7 +815,8 @@ int kvm_phys_addr_ioremap(struct kvm *kvm, phys_addr_t guest_ipa,
>  		if (ret)
>  			goto out;
>  		spin_lock(&kvm->mmu_lock);
> -		ret = stage2_set_pte(kvm, &cache, addr, &pte, true);
> +		ret = stage2_set_pte(kvm, &cache, addr, &pte,
> +				     KVM_S2PTE_FLAG_IS_IOMAP);
>  		spin_unlock(&kvm->mmu_lock);
>  		if (ret)
>  			goto out;
> @@ -1002,6 +1044,8 @@ static int user_mem_abort(struct kvm_vcpu *vcpu, phys_addr_t fault_ipa,
>  	pfn_t pfn;
>  	pgprot_t mem_type = PAGE_S2;
>  	bool fault_ipa_uncached;
> +	bool can_set_pte_rw = true;
> +	unsigned long set_pte_flags = 0;
>  
>  	write_fault = kvm_is_write_fault(vcpu);
>  	if (fault_status == FSC_PERM && !write_fault) {
>  		return -EFAULT;
>  	}
>  
> +

stray whitespace change?

>  	/* Let's check if we will get back a huge page backed by hugetlbfs */
>  	down_read(&current->mm->mmap_sem);
>  	vma = find_vma_intersection(current->mm, hva, hva + 1);
> @@ -1059,12 +1104,35 @@ static int user_mem_abort(struct kvm_vcpu *vcpu, phys_addr_t fault_ipa,
>  	if (is_error_pfn(pfn))
>  		return -EFAULT;
>  
> -	if (kvm_is_device_pfn(pfn))
> +	if (kvm_is_device_pfn(pfn)) {
>  		mem_type = PAGE_S2_DEVICE;
> +		set_pte_flags = KVM_S2PTE_FLAG_IS_IOMAP;
> +	}
>  
>  	spin_lock(&kvm->mmu_lock);
>  	if (mmu_notifier_retry(kvm, mmu_seq))
>  		goto out_unlock;
> +
> +	/*
> +	 * When logging is enabled general page fault handling changes:
> +	 * - Writable huge pages are dissolved on a read or write fault.

why dissolve huge pages on a read fault?

> +	 * - pte's are not allowed write permission on a read fault to
> +	 *   writable region so future writes can be marked dirty

new line

> +	 * Access to non-writable region is unchanged, and logging of IO
> +	 * regions is not allowed.
> +	 */
> +	if (memslot_is_logging(memslot) && writable) {
> +		set_pte_flags = KVM_S2PTE_FLAG_LOGGING_ACTIVE;
> +		if (hugetlb) {
> +			gfn += pte_index(fault_ipa);
> +			pfn += pte_index(fault_ipa);
> +			hugetlb = false;
> +		}
> +		force_pte = true;

uh, no, this is not what I meant, see my example (untested, partial) patch at
the end of this mail.

> +		if (!write_fault)
> +			can_set_pte_rw = false;
> +	}
> +
>  	if (!hugetlb && !force_pte)
>  		hugetlb = transparent_hugepage_adjust(&pfn, &fault_ipa);
>  
> @@ -1082,16 +1150,23 @@ static int user_mem_abort(struct kvm_vcpu *vcpu, phys_addr_t fault_ipa,
>  		ret = stage2_set_pmd_huge(kvm, memcache, fault_ipa, &new_pmd);
>  	} else {
>  		pte_t new_pte = pfn_pte(pfn, mem_type);
> +
> +		/*
> +		 * Don't set write permission, for non-writable region, and
> +		 * for read fault to writable region while logging.
> +		 */
> +		if (writable && can_set_pte_rw) {
>  			kvm_set_s2pte_writable(&new_pte);
>  			kvm_set_pfn_dirty(pfn);
>  		}
>  		coherent_cache_guest_page(vcpu, hva, PAGE_SIZE,
>  					  fault_ipa_uncached);
>  		ret = stage2_set_pte(kvm, memcache, fault_ipa, &new_pte,
> -			pgprot_val(mem_type) == pgprot_val(PAGE_S2_DEVICE));
> +			set_pte_flags);
>  	}
>  
> +	if (write_fault)
> +		mark_page_dirty(kvm, gfn);
>  
>  out_unlock:
>  	spin_unlock(&kvm->mmu_lock);
> @@ -1242,7 +1317,14 @@ static void kvm_set_spte_handler(struct kvm *kvm, gpa_t gpa, void *data)
>  {
>  	pte_t *pte = (pte_t *)data;
>  
> -	stage2_set_pte(kvm, NULL, gpa, pte, false);
> +	/*
> +	 * We can always call stage2_set_pte with KVM_S2PTE_FLAG_LOGGING_ACTIVE
> +	 * flag clear because MMU notifiers will have unmapped a huge PMD before
> +	 * calling ->change_pte() (which in turn calls kvm_set_spte_hva()) and
> +	 * therefore stage2_set_pte() never needs to clear out a huge PMD
> +	 * through this calling path.
> +	 */
> +	stage2_set_pte(kvm, NULL, gpa, pte, 0);
>  }
>  
>  
> @@ -1396,7 +1478,13 @@ int kvm_arch_prepare_memory_region(struct kvm *kvm,
>  	bool writable = !(mem->flags & KVM_MEM_READONLY);
>  	int ret = 0;
>  
> -	if (change != KVM_MR_CREATE && change != KVM_MR_MOVE)
> +	/*
> +	 * Let - enable of dirty page logging through, later check if it's for
> +	 * an IO region and fail.
> +	 */

I don't understand this comment or find it helpful.

> +	if (change != KVM_MR_CREATE && change != KVM_MR_MOVE &&
> +	    change == KVM_MR_FLAGS_ONLY &&
> +	    !(memslot->flags & KVM_MEM_LOG_DIRTY_PAGES))

this looks wrong, because you can now remove all the other checks of
change != and you are not returning early for KVM_MR_DELETE.

I think you want to add a check simply for 'change != KVM_MR_FLAGS_ONLY'
and then after the 'return 0' check the subconditions for
change == KVM_MR_FLAGS_ONLY.
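Something along these lines (untested sketch, using the same names as your
patch):

	/* bail out early for anything that is neither CREATE, MOVE, nor a
	 * flags-only change; this keeps the early return for KVM_MR_DELETE */
	if (change != KVM_MR_CREATE && change != KVM_MR_MOVE &&
	    change != KVM_MR_FLAGS_ONLY)
		return 0;

	/*
	 * A flags-only change is only interesting here if it turns on dirty
	 * page logging; everything else can return early as before.
	 */
	if (change == KVM_MR_FLAGS_ONLY &&
	    !(memslot->flags & KVM_MEM_LOG_DIRTY_PAGES))
		return 0;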
>  		return 0;
>  
>  	/*
> @@ -1447,15 +1535,24 @@ int kvm_arch_prepare_memory_region(struct kvm *kvm,
>  			phys_addr_t pa = (vma->vm_pgoff << PAGE_SHIFT) +
>  					 vm_start - vma->vm_start;
>  
> -			ret = kvm_phys_addr_ioremap(kvm, gpa, pa,
> +			if (change != KVM_MR_FLAGS_ONLY)
> +				ret = kvm_phys_addr_ioremap(kvm, gpa, pa,
>  						    vm_end - vm_start,
>  						    writable);
> +			else
> +				/* IO region dirty page logging not allowed */
> +				return -EINVAL;
> +

this whole thing also looks weird.  I think you just need to add a check
before kvm_phys_addr_ioremap() for flags & KVM_MEM_LOG_DIRTY_PAGES and
return an error in that case (you've identified a user attempting to set
dirty page logging on something that points to device memory, it doesn't
matter at this point through which 'change' it is done).
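That is, roughly (untested sketch):

			/*
			 * Dirty page logging is not supported for device
			 * memory, no matter which 'change' got us here.
			 */
			if (mem->flags & KVM_MEM_LOG_DIRTY_PAGES)
				return -EINVAL;

			ret = kvm_phys_addr_ioremap(kvm, gpa, pa,
						    vm_end - vm_start,
						    writable);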
>  		if (ret)
>  			break;
>  	}
>  	hva = vm_end;
>  } while (hva < reg_end);
>  
> +	/* Anything after here doesn't apply to memslot flag changes */
> +	if (change == KVM_MR_FLAGS_ONLY)
> +		return ret;
> +
>  	spin_lock(&kvm->mmu_lock);
>  	if (ret)
>  		unmap_stage2_range(kvm, mem->guest_phys_addr, mem->memory_size);
> --

What I meant last time around concerning user_mem_abort was more something
like this:

Thanks,
-Christoffer

diff --git a/arch/arm/kvm/mmu.c b/arch/arm/kvm/mmu.c
index 1dc9778..38ea58e 100644
--- a/arch/arm/kvm/mmu.c
+++ b/arch/arm/kvm/mmu.c
@@ -935,7 +935,14 @@ static int user_mem_abort(struct kvm_vcpu *vcpu, phys_addr_t fault_ipa,
 		return -EFAULT;
 	}
 
-	if (is_vm_hugetlb_page(vma)) {
+	/*
+	 * Writes to pages in a memslot with logging enabled are always logged
+	 * on a single page-by-page basis.
+	 */
+	if (memslot_is_logging(memslot) && write_fault)
+		force_pte = true;
+
+	if (is_vm_hugetlb_page(vma) && !force_pte) {
 		hugetlb = true;
 		gfn = (fault_ipa & PMD_MASK) >> PAGE_SHIFT;
 	} else {
@@ -976,6 +983,9 @@ static int user_mem_abort(struct kvm_vcpu *vcpu, phys_addr_t fault_ipa,
 	if (is_error_pfn(pfn))
 		return -EFAULT;
 
+	if (memslot_is_logging(memslot) && !write_fault)
+		writable = false;
+
 	if (kvm_is_device_pfn(pfn))
 		mem_type = PAGE_S2_DEVICE;
 
@@ -998,15 +1008,23 @@ static int user_mem_abort(struct kvm_vcpu *vcpu, phys_addr_t fault_ipa,
 					  fault_ipa_uncached);
 		ret = stage2_set_pmd_huge(kvm, memcache, fault_ipa, &new_pmd);
 	} else {
+		unsigned long flags = 0;
 		pte_t new_pte = pfn_pte(pfn, mem_type);
+
 		if (writable) {
 			kvm_set_s2pte_writable(&new_pte);
 			kvm_set_pfn_dirty(pfn);
 		}
 		coherent_cache_guest_page(vcpu, hva, PAGE_SIZE,
 					  fault_ipa_uncached);
-		ret = stage2_set_pte(kvm, memcache, fault_ipa, &new_pte,
-				     pgprot_val(mem_type) == pgprot_val(PAGE_S2_DEVICE));
+
+		if (pgprot_val(mem_type) == pgprot_val(PAGE_S2_DEVICE))
+			flags |= KVM_S2PTE_FLAG_IS_IOMAP;
+
+		if (memslot_is_logging(memslot))
+			flags |= KVM_S2_FLAG_LOGGING_ACTIVE;
+
+		ret = stage2_set_pte(kvm, memcache, fault_ipa, &new_pte, flags);
 	}