From patchwork Tue Apr 28 20:05:09 2020
X-Patchwork-Submitter: Peter Xu <peterx@redhat.com>
X-Patchwork-Id: 283840
From: Peter Xu <peterx@redhat.com>
To: qemu-devel@nongnu.org
Cc: pbonzini@redhat.com, dgilbert@redhat.com, Peter Xu <peterx@redhat.com>
Subject: [PATCH RFC v2 9/9] KVM: Dirty ring support
Date: Tue, 28 Apr 2020 16:05:09 -0400
Message-Id: <20200428200509.13150-7-peterx@redhat.com>
In-Reply-To: <20200428195707.11980-1-peterx@redhat.com>
References: <20200428195707.11980-1-peterx@redhat.com>
X-Mailer: git-send-email 2.24.1

The KVM dirty ring is a new interface for passing dirty bits from the
kernel to userspace.  Instead of using a bitmap for each memory region,
the dirty ring contains an array of dirtied GPAs to fetch (in the form
of offsets within slots).  There is one dirty ring per vcpu, bound to
that vcpu.

kvm_dirty_ring_reap() is the main function for collecting dirty rings.
It can be called either by a standalone reaper thread that runs in the
background, collecting dirty pages for the whole VM, or directly by any
thread that holds the BQL.

Signed-off-by: Peter Xu <peterx@redhat.com>
---
 accel/kvm/kvm-all.c    | 341 ++++++++++++++++++++++++++++++++++++++++-
 accel/kvm/trace-events |   7 +
 include/hw/core/cpu.h  |   8 +
 3 files changed, 353 insertions(+), 3 deletions(-)
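Notes (for reviewers; not part of the commit message):

Each ring entry is a struct kvm_dirty_gfn as defined by the dirty-ring
kernel headers this series builds on: a flags word, a slot word that
packs the address space id in the high 16 bits and the slot id in the
low 16 bits, and a page offset within that slot.  To have the
consumption protocol in one place, below is a minimal illustrative
sketch of how one vcpu ring is drained.  It mirrors
kvm_dirty_ring_reap_one() in the patch; mark_page_dirty_in_slot() is a
hypothetical stand-in for the real bitmap bookkeeping (the patch uses
kvm_dirty_ring_mark_page() for that):

#include <stdbool.h>
#include <stdint.h>
#include <linux/kvm.h>  /* struct kvm_dirty_gfn, KVM_DIRTY_GFN_F_* */

/* Hypothetical helper -- stands in for kvm_dirty_ring_mark_page() */
void mark_page_dirty_in_slot(uint16_t as_id, uint16_t slot_id,
                             uint64_t offset);

/*
 * Drain one vcpu ring (illustration only).  `ring' is the vcpu's
 * mmap'ed array of entries, `gfn_count' the number of entries per
 * ring, `fetch' the vcpu's persistent fetch index.
 */
static uint32_t reap_one_ring_sketch(struct kvm_dirty_gfn *ring,
                                     uint32_t gfn_count, uint32_t *fetch)
{
    uint32_t collected = 0;
    struct kvm_dirty_gfn *cur;

    while (true) {
        cur = &ring[*fetch % gfn_count];
        /* Stop at the first entry the kernel hasn't published yet */
        if (cur->flags != KVM_DIRTY_GFN_F_DIRTY) {
            break;
        }
        /* The slot field packs the address space id and memslot id */
        mark_page_dirty_in_slot(cur->slot >> 16, cur->slot & 0xffff,
                                cur->offset);
        /* Mark the entry collected; the ring reset re-protects it */
        cur->flags = KVM_DIRTY_GFN_F_RESET;
        (*fetch)++;
        collected++;
    }

    return collected;
}

Once every ring has been drained this way, a single
KVM_RESET_DIRTY_RINGS ioctl on the VM fd hands all the collected
entries back to the kernel so the pages get write-protected again;
kvm_dirty_ring_reap() below does exactly that under the slots locks.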
diff --git a/accel/kvm/kvm-all.c b/accel/kvm/kvm-all.c
index fbb0a3b1e9..236dbcd536 100644
--- a/accel/kvm/kvm-all.c
+++ b/accel/kvm/kvm-all.c
@@ -15,6 +15,7 @@
 
 #include "qemu/osdep.h"
 #include <sys/ioctl.h>
+#include <sys/mman.h>
 
 #include <linux/kvm.h>
 
@@ -75,6 +76,25 @@ struct KVMParkedVcpu {
     QLIST_ENTRY(KVMParkedVcpu) node;
 };
 
+enum KVMDirtyRingReaperState {
+    KVM_DIRTY_RING_REAPER_NONE = 0,
+    /* The reaper is sleeping */
+    KVM_DIRTY_RING_REAPER_WAIT,
+    /* The reaper is reaping for dirty pages */
+    KVM_DIRTY_RING_REAPER_REAPING,
+};
+
+/*
+ * KVM reaper instance, responsible for collecting the KVM dirty bits
+ * via the dirty ring.
+ */
+struct KVMDirtyRingReaper {
+    /* The reaper thread */
+    QemuThread reaper_thr;
+    volatile uint64_t reaper_iteration; /* iteration number of reaper thr */
+    volatile enum KVMDirtyRingReaperState reaper_state; /* reap thr state */
+};
+
 struct KVMState
 {
     AccelState parent_obj;
@@ -121,7 +141,6 @@ struct KVMState
     void *memcrypt_handle;
     int (*memcrypt_encrypt_data)(void *handle, uint8_t *ptr, uint64_t len);
 
-    /* For "info mtree -f" to tell if an MR is registered in KVM */
     int nr_as;
     struct KVMAs {
         KVMMemoryListener *ml;
@@ -130,6 +149,7 @@ struct KVMState
     bool kvm_dirty_ring_enabled;    /* Whether KVM dirty ring is enabled */
     uint64_t kvm_dirty_ring_size;   /* Size of the per-vcpu dirty ring */
     uint32_t kvm_dirty_gfn_count;   /* Number of dirty GFNs per ring */
+    struct KVMDirtyRingReaper reaper;
 };
 
 KVMState *kvm_state;
@@ -359,6 +379,13 @@ int kvm_destroy_vcpu(CPUState *cpu)
         goto err;
     }
 
+    if (cpu->kvm_dirty_gfns) {
+        ret = munmap(cpu->kvm_dirty_gfns, s->kvm_dirty_ring_size);
+        if (ret < 0) {
+            goto err;
+        }
+    }
+
     vcpu = g_malloc0(sizeof(*vcpu));
     vcpu->vcpu_id = kvm_arch_vcpu_id(cpu);
     vcpu->kvm_fd = cpu->kvm_fd;
@@ -423,6 +450,19 @@ int kvm_init_vcpu(CPUState *cpu)
             (void *)cpu->kvm_run + s->coalesced_mmio * PAGE_SIZE;
     }
 
+    if (s->kvm_dirty_ring_enabled) {
+        /* Use MAP_SHARED to share pages with the kernel */
+        cpu->kvm_dirty_gfns = mmap(NULL, s->kvm_dirty_ring_size,
+                                   PROT_READ | PROT_WRITE, MAP_SHARED,
+                                   cpu->kvm_fd,
+                                   PAGE_SIZE * KVM_DIRTY_LOG_PAGE_OFFSET);
+        if (cpu->kvm_dirty_gfns == MAP_FAILED) {
+            ret = -errno;
+            DPRINTF("mmap'ing vcpu dirty gfns failed: %d\n", ret);
+            goto err;
+        }
+    }
+
     ret = kvm_arch_init_vcpu(cpu);
 err:
     return ret;
@@ -536,6 +576,11 @@ static void kvm_slot_sync_dirty_pages(KVMSlot *slot)
     cpu_physical_memory_set_dirty_lebitmap(slot->dirty_bmap, start, pages);
 }
 
+static void kvm_slot_reset_dirty_pages(KVMSlot *slot)
+{
+    memset(slot->dirty_bmap, 0, slot->dirty_bmap_size);
+}
+
 #define ALIGN(x, y)  (((x)+(y)-1) & ~((y)-1))
 
 /* Allocate the dirty bitmap for a slot */
@@ -579,6 +624,198 @@ static void kvm_slot_get_dirty_log(KVMState *s, KVMSlot *slot)
     }
 }
 
+/* Should be with all slots_lock held for the address spaces. */
+static void kvm_dirty_ring_mark_page(KVMState *s, uint32_t as_id,
+                                     uint32_t slot_id, uint64_t offset)
+{
+    KVMMemoryListener *kml;
+    KVMSlot *mem;
+
+    if (as_id >= s->nr_as) {
+        return;
+    }
+
+    kml = s->as[as_id].ml;
+    mem = &kml->slots[slot_id];
+
+    if (!mem->memory_size || offset >= (mem->memory_size / TARGET_PAGE_SIZE)) {
+        return;
+    }
+
+    set_bit(offset, mem->dirty_bmap);
+}
+
+static bool dirty_gfn_is_dirtied(struct kvm_dirty_gfn *gfn)
+{
+    return gfn->flags == KVM_DIRTY_GFN_F_DIRTY;
+}
+
+static void dirty_gfn_set_collected(struct kvm_dirty_gfn *gfn)
+{
+    gfn->flags = KVM_DIRTY_GFN_F_RESET;
+}
+
+/*
+ * Should be with all slots_lock held for the address spaces.  It returns
+ * the number of dirty pages we've collected on this dirty ring.
+ */
+static uint32_t kvm_dirty_ring_reap_one(KVMState *s, CPUState *cpu)
+{
+    struct kvm_dirty_gfn *dirty_gfns = cpu->kvm_dirty_gfns, *cur;
+    uint32_t gfn_count = s->kvm_dirty_gfn_count;
+    uint32_t count = 0, fetch = cpu->kvm_fetch_index;
+
+    assert(dirty_gfns && gfn_count);
+    trace_kvm_dirty_ring_reap_vcpu(cpu->cpu_index);
+
+    while (true) {
+        cur = &dirty_gfns[fetch % gfn_count];
+        if (!dirty_gfn_is_dirtied(cur)) {
+            break;
+        }
+        kvm_dirty_ring_mark_page(s, cur->slot >> 16, cur->slot & 0xffff,
+                                 cur->offset);
+        dirty_gfn_set_collected(cur);
+        trace_kvm_dirty_ring_page(cpu->cpu_index, fetch, cur->offset);
+        fetch++;
+        count++;
+    }
+    cpu->kvm_fetch_index = fetch;
+
+    return count;
+}
+
+/*
+ * Currently for simplicity, we must hold BQL before calling this.  We can
+ * consider dropping the BQL if we're sure about all the race conditions.
+ */
+static uint64_t kvm_dirty_ring_reap(KVMState *s)
+{
+    KVMMemoryListener *kml;
+    int ret, i, locked_count = s->nr_as;
+    CPUState *cpu;
+    uint64_t total = 0;
+    int64_t stamp;
+
+    /*
+     * We need to lock all kvm slots for all address spaces here,
+     * because:
+     *
+     * (1) We need to mark dirty for dirty bitmaps in multiple slots
+     *     and for tons of pages, so it's better to take the lock here
+     *     once rather than once per page.  And more importantly,
+     *
+     * (2) We must _NOT_ publish dirty bits to the other threads
+     *     (e.g., the migration thread) via the kvm memory slot dirty
+     *     bitmaps before correctly re-protecting those dirtied pages.
+     *     Otherwise we risk potential data corruption if the page
+     *     data is read in the other thread before we do the reset
+     *     below.
+     */
+    for (i = 0; i < s->nr_as; i++) {
+        kml = s->as[i].ml;
+        if (!kml) {
+            /*
+             * This is tricky - we grow s->as[] dynamically now.  Take
+             * care of that case.  We also assume the as[] will fill
+             * up one by one starting from zero.  Without this, we race
+             * with register_smram_listener.
+             *
+             * TODO: make all these prettier...
+             */
+            locked_count = i;
+            break;
+        }
+        kvm_slots_lock(kml);
+    }
+
+    stamp = get_clock();
+
+    CPU_FOREACH(cpu) {
+        total += kvm_dirty_ring_reap_one(s, cpu);
+    }
+
+    if (total) {
+        ret = kvm_vm_ioctl(s, KVM_RESET_DIRTY_RINGS);
+        assert(ret == total);
+    }
+
+    stamp = get_clock() - stamp;
+
+    if (total) {
+        trace_kvm_dirty_ring_reap(total, stamp / 1000);
+    }
+
+    /* Unlock whatever locks we have taken */
+    for (i = 0; i < locked_count; i++) {
+        kvm_slots_unlock(s->as[i].ml);
+    }
+
+    return total;
+}
+
+static void do_kvm_cpu_synchronize_kick(CPUState *cpu, run_on_cpu_data arg)
+{
+    /* No need to do anything */
+}
+
+/*
+ * Kick all vcpus out in a synchronized way.  When this returns, we
+ * guarantee that every vcpu has been kicked and has at least returned
+ * to userspace once.
+ */
+static void kvm_cpu_synchronize_kick_all(void)
+{
+    CPUState *cpu;
+
+    CPU_FOREACH(cpu) {
+        run_on_cpu(cpu, do_kvm_cpu_synchronize_kick, RUN_ON_CPU_NULL);
+    }
+}
+
+/*
+ * Flush all the existing dirty pages to the KVM slot buffers.  When
+ * this call returns, we guarantee that all the touched dirty pages
+ * before calling this function have been put into the per-kvmslot
+ * dirty bitmap.
+ *
+ * To achieve this, we need to:
+ *
+ * (1) Kick all vcpus out; this will make sure that we flush all
+ *     the dirty buffers that are potentially in the hardware (PML)
+ *     into the dirty rings.  After that,
+ *
+ * (2) Kick the reaper thread and make sure it reaps all the dirty
+ *     pages that are in the dirty rings.
+ *
+ * This function must be called with BQL held.
+ */
+static void kvm_dirty_ring_flush(struct KVMDirtyRingReaper *r)
+{
+    trace_kvm_dirty_ring_flush(0);
+
+    /*
+     * The function needs to be serialized.  Since this function
+     * should always be called with BQL held, serialization is
+     * guaranteed.  However, let's be sure of it.
+     */
+    assert(qemu_mutex_iothread_locked());
+
+    /*
+     * First make sure to flush the hardware buffers by kicking all
+     * vcpus out in a synchronous way.
+     */
+    kvm_cpu_synchronize_kick_all();
+
+    /*
+     * Recycle the dirty bits outside the reaper thread.  We're safe
+     * because kvm_dirty_ring_reap() is internally protected by a mutex.
+     */
+    kvm_dirty_ring_reap(kvm_state);
+
+    trace_kvm_dirty_ring_flush(1);
+}
+
 /**
  * kvm_physical_sync_dirty_bitmap - Sync dirty bitmap from kernel space
  *
@@ -1111,6 +1348,51 @@ out:
     kvm_slots_unlock(kml);
 }
 
+static void *kvm_dirty_ring_reaper_thread(void *data)
+{
+    KVMState *s = data;
+    struct KVMDirtyRingReaper *r = &s->reaper;
+
+    rcu_register_thread();
+
+    trace_kvm_dirty_ring_reaper("init");
+
+    while (true) {
+        r->reaper_state = KVM_DIRTY_RING_REAPER_WAIT;
+        trace_kvm_dirty_ring_reaper("wait");
+        /*
+         * TODO: provide a smarter timeout rather than a constant?
+         */
+        sleep(1);
+
+        trace_kvm_dirty_ring_reaper("wakeup");
+        r->reaper_state = KVM_DIRTY_RING_REAPER_REAPING;
+
+        qemu_mutex_lock_iothread();
+        kvm_dirty_ring_reap(s);
+        qemu_mutex_unlock_iothread();
+
+        r->reaper_iteration++;
+    }
+
+    trace_kvm_dirty_ring_reaper("exit");
+
+    rcu_unregister_thread();
+
+    return NULL;
+}
+
+static int kvm_dirty_ring_reaper_init(KVMState *s)
+{
+    struct KVMDirtyRingReaper *r = &s->reaper;
+
+    qemu_thread_create(&r->reaper_thr, "kvm-reaper",
+                       kvm_dirty_ring_reaper_thread,
+                       s, QEMU_THREAD_JOINABLE);
+
+    return 0;
+}
+
 static void kvm_region_add(MemoryListener *listener,
                            MemoryRegionSection *section)
 {
@@ -1139,6 +1421,36 @@ static void kvm_log_sync(MemoryListener *listener,
     kvm_slots_unlock(kml);
 }
 
+static void kvm_log_sync_global(MemoryListener *l)
+{
+    KVMMemoryListener *kml = container_of(l, KVMMemoryListener, listener);
+    KVMState *s = kvm_state;
+    KVMSlot *mem;
+    int i;
+
+    /* Flush all kernel dirty addresses into KVMSlot dirty bitmap */
+    kvm_dirty_ring_flush(&s->reaper);
+
+    /*
+     * TODO: make this faster when nr_slots is big while there are
+     * only a few used slots (small VMs).
+     */
+    kvm_slots_lock(kml);
+    for (i = 0; i < s->nr_slots; i++) {
+        mem = &kml->slots[i];
+        if (mem->memory_size && mem->flags & KVM_MEM_LOG_DIRTY_PAGES) {
+            kvm_slot_sync_dirty_pages(mem);
+            /*
+             * This is not needed by KVM_GET_DIRTY_LOG because the
+             * ioctl will unconditionally overwrite the whole region.
+             * However the KVM dirty ring has no such side effect.
+             */
+            kvm_slot_reset_dirty_pages(mem);
+        }
+    }
+    kvm_slots_unlock(kml);
+}
+
 static void kvm_log_clear(MemoryListener *listener,
                           MemoryRegionSection *section)
 {
@@ -1245,10 +1557,15 @@ void kvm_memory_listener_register(KVMState *s, KVMMemoryListener *kml,
     kml->listener.region_del = kvm_region_del;
     kml->listener.log_start = kvm_log_start;
     kml->listener.log_stop = kvm_log_stop;
-    kml->listener.log_sync = kvm_log_sync;
-    kml->listener.log_clear = kvm_log_clear;
     kml->listener.priority = 10;
 
+    if (s->kvm_dirty_ring_enabled) {
+        kml->listener.log_sync_global = kvm_log_sync_global;
+    } else {
+        kml->listener.log_sync = kvm_log_sync;
+        kml->listener.log_clear = kvm_log_clear;
+    }
+
     memory_listener_register(&kml->listener, as);
 
     for (i = 0; i < s->nr_as; ++i) {
@@ -2138,6 +2455,13 @@ static int kvm_init(MachineState *ms)
         qemu_balloon_inhibit(true);
     }
 
+    if (s->kvm_dirty_ring_enabled) {
+        ret = kvm_dirty_ring_reaper_init(s);
+        if (ret) {
+            goto err;
+        }
+    }
+
     return 0;
 
 err:
@@ -2445,6 +2769,17 @@ int kvm_cpu_exec(CPUState *cpu)
         case KVM_EXIT_INTERNAL_ERROR:
             ret = kvm_handle_internal_error(cpu, run);
             break;
+        case KVM_EXIT_DIRTY_RING_FULL:
+            /*
+             * We shouldn't continue if the dirty ring of this vcpu is
+             * still full.  Got kicked by KVM_RESET_DIRTY_RINGS.
+             */
+            trace_kvm_dirty_ring_full(cpu->cpu_index);
+            qemu_mutex_lock_iothread();
+            kvm_dirty_ring_reap(kvm_state);
+            qemu_mutex_unlock_iothread();
+            ret = 0;
+            break;
         case KVM_EXIT_SYSTEM_EVENT:
             switch (run->system_event.type) {
             case KVM_SYSTEM_EVENT_SHUTDOWN:
diff --git a/accel/kvm/trace-events b/accel/kvm/trace-events
index 4fb6e59d19..89ef99569f 100644
--- a/accel/kvm/trace-events
+++ b/accel/kvm/trace-events
@@ -16,4 +16,11 @@
 kvm_set_ioeventfd_mmio(int fd, uint64_t addr, uint32_t val, bool assign, uint32_t size, bool datamatch) "fd: %d @0x%" PRIx64 " val=0x%x assign: %d size: %d match: %d"
 kvm_set_ioeventfd_pio(int fd, uint16_t addr, uint32_t val, bool assign, uint32_t size, bool datamatch) "fd: %d @0x%x val=0x%x assign: %d size: %d match: %d"
 kvm_set_user_memory(uint32_t slot, uint32_t flags, uint64_t guest_phys_addr, uint64_t memory_size, uint64_t userspace_addr, int ret) "Slot#%d flags=0x%x gpa=0x%"PRIx64 " size=0x%"PRIx64 " ua=0x%"PRIx64 " ret=%d"
 kvm_clear_dirty_log(uint32_t slot, uint64_t start, uint32_t size) "slot#%"PRId32" start 0x%"PRIx64" size 0x%"PRIx32
+kvm_dirty_ring_full(int id) "vcpu %d"
+kvm_dirty_ring_reap_vcpu(int id) "vcpu %d"
+kvm_dirty_ring_page(int vcpu, uint32_t fetch, uint64_t offset) "vcpu %d fetch %"PRIu32" offset 0x%"PRIx64
+kvm_dirty_ring_reaper(const char *s) "%s"
+kvm_dirty_ring_reap(uint64_t count, int64_t t) "reaped %"PRIu64" pages (took %"PRIi64" us)"
+kvm_dirty_ring_reaper_kick(const char *reason) "%s"
+kvm_dirty_ring_flush(int finished) "%d"
diff --git a/include/hw/core/cpu.h b/include/hw/core/cpu.h
index b26fdb5ab8..2d2bf61629 100644
--- a/include/hw/core/cpu.h
+++ b/include/hw/core/cpu.h
@@ -340,6 +340,11 @@ struct qemu_work_item;
  * @ignore_memory_transaction_failures: Cached copy of the MachineState
  *    flag of the same name: allows the board to suppress calling of the
  *    CPU do_transaction_failed hook function.
+ * @kvm_dirty_ring_full:
+ *   Whether the kvm dirty ring of this vcpu is soft-full.
+ * @kvm_dirty_ring_avail:
+ *   Semaphore to be posted when the kvm dirty ring of the vcpu is
+ *   available again.
  *
  * State of one CPU core or thread.
 */
@@ -407,9 +412,12 @@ struct CPUState {
      */
     uintptr_t mem_io_pc;
 
+    /* Only used in KVM */
     int kvm_fd;
     struct KVMState *kvm_state;
     struct kvm_run *kvm_run;
+    struct kvm_dirty_gfn *kvm_dirty_gfns;
+    uint32_t kvm_fetch_index;
 
     /* Used for events with 'vcpu' and *without* the 'disabled' properties */
     DECLARE_BITMAP(trace_dstate_delayed, CPU_TRACE_DSTATE_MAX_EVENTS);