From patchwork Fri Jan 23 10:02:36 2015
From: Christoffer Dall <christoffer.dall@linaro.org>
To: Paolo Bonzini, kvmarm@lists.cs.columbia.edu, linux-arm-kernel@lists.infradead.org
Cc: Marc Zyngier, kvm@vger.kernel.org, Mario Smarduch
Subject: [GIT PULL 07/36] KVM: x86: switch to kvm_get_dirty_log_protect
Date: Fri, 23 Jan 2015 11:02:36 +0100
Message-Id: <1422007385-14730-8-git-send-email-christoffer.dall@linaro.org>
In-Reply-To: <1422007385-14730-1-git-send-email-christoffer.dall@linaro.org>
References: <1422007385-14730-1-git-send-email-christoffer.dall@linaro.org>

From: Paolo Bonzini

We now have a generic function that does most of the work of
kvm_vm_ioctl_get_dirty_log; switch to using it.
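For context, kvm_get_dirty_log_protect() was added to generic code earlier in
this series and is essentially the open-coded loop this patch deletes from
x86.c, with the write-protect step routed through the
kvm_arch_mmu_write_protect_pt_masked() hook that mmu.c renames below. A
condensed, non-verbatim sketch of its assumed shape (error paths trimmed, not
the verbatim virt/kvm/kvm_main.c code):

int kvm_get_dirty_log_protect(struct kvm *kvm,
			      struct kvm_dirty_log *log, bool *is_dirty)
{
	struct kvm_memory_slot *memslot;
	unsigned long *dirty_bitmap, *dirty_bitmap_buffer;
	unsigned long n, i;

	if (log->slot >= KVM_USER_MEM_SLOTS)
		return -EINVAL;
	memslot = id_to_memslot(kvm->memslots, log->slot);
	dirty_bitmap = memslot->dirty_bitmap;
	if (!dirty_bitmap)
		return -ENOENT;

	/* The second half of the bitmap allocation is the snapshot buffer. */
	n = kvm_dirty_bitmap_bytes(memslot);
	dirty_bitmap_buffer = dirty_bitmap + n / sizeof(long);
	memset(dirty_bitmap_buffer, 0, n);

	spin_lock(&kvm->mmu_lock);
	*is_dirty = false;
	for (i = 0; i < n / sizeof(long); i++) {
		unsigned long mask;

		if (!dirty_bitmap[i])
			continue;
		*is_dirty = true;

		/* Step 1: atomically snapshot and clear one bitmap word. */
		mask = xchg(&dirty_bitmap[i], 0);
		dirty_bitmap_buffer[i] = mask;

		/* Step 2: write-protect the pages that were dirty. */
		kvm_arch_mmu_write_protect_pt_masked(kvm, memslot,
						     i * BITS_PER_LONG, mask);
	}
	spin_unlock(&kvm->mmu_lock);

	/* Step 3: copy the snapshot out; step 4 (TLB flush) is the caller's. */
	if (copy_to_user(log->dirty_bitmap, dirty_bitmap_buffer, n))
		return -EFAULT;
	return 0;
}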
Acked-by: Christoffer Dall
Signed-off-by: Mario Smarduch
---
 arch/x86/include/asm/kvm_host.h |  3 --
 arch/x86/kvm/Kconfig            |  1 +
 arch/x86/kvm/mmu.c              |  4 +--
 arch/x86/kvm/x86.c              | 72 ++++++++---------------------------------
 4 files changed, 16 insertions(+), 64 deletions(-)

diff --git a/arch/x86/include/asm/kvm_host.h b/arch/x86/include/asm/kvm_host.h
index cb19d05..3ceddf4 100644
--- a/arch/x86/include/asm/kvm_host.h
+++ b/arch/x86/include/asm/kvm_host.h
@@ -821,9 +821,6 @@ void kvm_mmu_set_mask_ptes(u64 user_mask, u64 accessed_mask,
 
 void kvm_mmu_reset_context(struct kvm_vcpu *vcpu);
 void kvm_mmu_slot_remove_write_access(struct kvm *kvm, int slot);
-void kvm_mmu_write_protect_pt_masked(struct kvm *kvm,
-				     struct kvm_memory_slot *slot,
-				     gfn_t gfn_offset, unsigned long mask);
 void kvm_mmu_zap_all(struct kvm *kvm);
 void kvm_mmu_invalidate_mmio_sptes(struct kvm *kvm);
 unsigned int kvm_mmu_calculate_mmu_pages(struct kvm *kvm);
diff --git a/arch/x86/kvm/Kconfig b/arch/x86/kvm/Kconfig
index f9d16ff..d073594 100644
--- a/arch/x86/kvm/Kconfig
+++ b/arch/x86/kvm/Kconfig
@@ -39,6 +39,7 @@ config KVM
 	select PERF_EVENTS
 	select HAVE_KVM_MSI
 	select HAVE_KVM_CPU_RELAX_INTERCEPT
+	select KVM_GENERIC_DIRTYLOG_READ_PROTECT
 	select KVM_VFIO
 	---help---
 	  Support hosting fully virtualized guest machines using hardware
diff --git a/arch/x86/kvm/mmu.c b/arch/x86/kvm/mmu.c
index a0985eb..3aa46aa 100644
--- a/arch/x86/kvm/mmu.c
+++ b/arch/x86/kvm/mmu.c
@@ -1203,7 +1203,7 @@ static bool __rmap_write_protect(struct kvm *kvm, unsigned long *rmapp,
 }
 
 /**
- * kvm_mmu_write_protect_pt_masked - write protect selected PT level pages
+ * kvm_arch_mmu_write_protect_pt_masked - write protect selected PT level pages
  * @kvm: kvm instance
  * @slot: slot to protect
  * @gfn_offset: start of the BITS_PER_LONG pages we care about
@@ -1212,7 +1212,7 @@ static bool __rmap_write_protect(struct kvm *kvm, unsigned long *rmapp,
  * Used when we do not need to care about huge page mappings: e.g. during dirty
  * logging we do not have any such mappings.
  */
-void kvm_mmu_write_protect_pt_masked(struct kvm *kvm,
+void kvm_arch_mmu_write_protect_pt_masked(struct kvm *kvm,
 				     struct kvm_memory_slot *slot,
 				     gfn_t gfn_offset, unsigned long mask)
 {
diff --git a/arch/x86/kvm/x86.c b/arch/x86/kvm/x86.c
index 49ecda7..556dfb4 100644
--- a/arch/x86/kvm/x86.c
+++ b/arch/x86/kvm/x86.c
@@ -3748,83 +3748,37 @@ static int kvm_vm_ioctl_reinject(struct kvm *kvm,
  * @kvm: kvm instance
  * @log: slot id and address to which we copy the log
  *
- * We need to keep it in mind that VCPU threads can write to the bitmap
- * concurrently. So, to avoid losing data, we keep the following order for
- * each bit:
+ * Steps 1-4 below provide general overview of dirty page logging. See
+ * kvm_get_dirty_log_protect() function description for additional details.
+ *
+ * We call kvm_get_dirty_log_protect() to handle steps 1-3, upon return we
+ * always flush the TLB (step 4) even if previous step failed and the dirty
+ * bitmap may be corrupt. Regardless of previous outcome the KVM logging API
+ * does not preclude user space subsequent dirty log read. Flushing TLB ensures
+ * writes will be marked dirty for next log read.
  *
  *   1. Take a snapshot of the bit and clear it if needed.
  *   2. Write protect the corresponding page.
- *   3. Flush TLB's if needed.
- *   4. Copy the snapshot to the userspace.
- *
- * Between 2 and 3, the guest may write to the page using the remaining TLB
- * entry. This is not a problem because the page will be reported dirty at
- * step 4 using the snapshot taken before and step 3 ensures that successive
- * writes will be logged for the next call.
+ *   3. Copy the snapshot to the userspace.
+ *   4. Flush TLB's if needed.
  */
 int kvm_vm_ioctl_get_dirty_log(struct kvm *kvm, struct kvm_dirty_log *log)
 {
-	int r;
-	struct kvm_memory_slot *memslot;
-	unsigned long n, i;
-	unsigned long *dirty_bitmap;
-	unsigned long *dirty_bitmap_buffer;
 	bool is_dirty = false;
+	int r;
 
 	mutex_lock(&kvm->slots_lock);
 
-	r = -EINVAL;
-	if (log->slot >= KVM_USER_MEM_SLOTS)
-		goto out;
-
-	memslot = id_to_memslot(kvm->memslots, log->slot);
-
-	dirty_bitmap = memslot->dirty_bitmap;
-	r = -ENOENT;
-	if (!dirty_bitmap)
-		goto out;
-
-	n = kvm_dirty_bitmap_bytes(memslot);
-
-	dirty_bitmap_buffer = dirty_bitmap + n / sizeof(long);
-	memset(dirty_bitmap_buffer, 0, n);
-
-	spin_lock(&kvm->mmu_lock);
-
-	for (i = 0; i < n / sizeof(long); i++) {
-		unsigned long mask;
-		gfn_t offset;
-
-		if (!dirty_bitmap[i])
-			continue;
-
-		is_dirty = true;
-
-		mask = xchg(&dirty_bitmap[i], 0);
-		dirty_bitmap_buffer[i] = mask;
-
-		offset = i * BITS_PER_LONG;
-		kvm_mmu_write_protect_pt_masked(kvm, memslot, offset, mask);
-	}
-
-	spin_unlock(&kvm->mmu_lock);
-
-	/* See the comments in kvm_mmu_slot_remove_write_access(). */
-	lockdep_assert_held(&kvm->slots_lock);
+	r = kvm_get_dirty_log_protect(kvm, log, &is_dirty);
 
 	/*
 	 * All the TLBs can be flushed out of mmu lock, see the comments in
 	 * kvm_mmu_slot_remove_write_access().
 	 */
+	lockdep_assert_held(&kvm->slots_lock);
 	if (is_dirty)
 		kvm_flush_remote_tlbs(kvm);
 
-	r = -EFAULT;
-	if (copy_to_user(log->dirty_bitmap, dirty_bitmap_buffer, n))
-		goto out;
-
-	r = 0;
-out:
 	mutex_unlock(&kvm->slots_lock);
 	return r;
 }
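
For completeness, a minimal sketch of the userspace side of this interface.
The KVM_GET_DIRTY_LOG ioctl and struct kvm_dirty_log are the real ABI; the
helper function and its parameters are illustrative only:

#include <linux/kvm.h>
#include <string.h>
#include <sys/ioctl.h>

/* Illustrative helper: fetch (and implicitly clear) the dirty bitmap for one
 * memslot. The kernel side runs steps 1-4 described in the comment above. */
static int get_dirty_log(int vm_fd, __u32 slot, void *bitmap)
{
	struct kvm_dirty_log log;

	memset(&log, 0, sizeof(log));
	log.slot = slot;		/* memslot id to query */
	log.dirty_bitmap = bitmap;	/* caller-allocated, one bit per page */

	return ioctl(vm_fd, KVM_GET_DIRTY_LOG, &log);
}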