From patchwork Tue Sep 15 11:35:15 2020
X-Patchwork-Submitter: Daniel P. Berrangé
X-Patchwork-Id: 305407
From: Daniel P. Berrangé
To: qemu-devel@nongnu.org
Cc: Kevin Wolf, Peter Krempa, Daniel P. Berrangé, "Denis V. Lunev",
 qemu-block@nongnu.org, Juan Quintela, "Dr. David Alan Gilbert",
 Markus Armbruster, Pavel Dovgalyuk, Paolo Bonzini, Max Reitz, John Snow
Subject: [PATCH v4 1/9] block: push error reporting into bdrv_all_*_snapshot
 functions
Date: Tue, 15 Sep 2020 12:35:15 +0100
Message-Id: <20200915113523.2520317-2-berrange@redhat.com>
In-Reply-To: <20200915113523.2520317-1-berrange@redhat.com>
References: <20200915113523.2520317-1-berrange@redhat.com>

The bdrv_all_*_snapshot functions return a BlockDriverState pointer
for the invalid backend, which the callers then use to report an
error message. In some cases multiple callers are reporting the same
error message, but with slightly different text. In the future there
will be more error scenarios for some of these methods, which will
benefit from fine grained error message reporting. So it is helpful
to push error reporting down a level.

Signed-off-by: Daniel P. Berrangé
---
 block/monitor/block-hmp-cmds.c |  7 ++--
 block/snapshot.c               | 77 +++++++++++++++++-----------------
 include/block/snapshot.h       | 14 +++----
 migration/savevm.c             | 37 +++++-----------
 monitor/hmp-cmds.c             |  7 +---
 tests/qemu-iotests/267.out     | 10 ++---
 6 files changed, 65 insertions(+), 87 deletions(-)

diff --git a/block/monitor/block-hmp-cmds.c b/block/monitor/block-hmp-cmds.c
index 4d3db5ed3c..bec794253d 100644
--- a/block/monitor/block-hmp-cmds.c
+++ b/block/monitor/block-hmp-cmds.c
@@ -898,10 +898,11 @@ void hmp_info_snapshots(Monitor *mon, const QDict *qdict)
     ImageEntry *image_entry, *next_ie;
     SnapshotEntry *snapshot_entry;
+    Error *err = NULL;

-    bs = bdrv_all_find_vmstate_bs();
+    bs = bdrv_all_find_vmstate_bs(&err);
     if (!bs) {
-        monitor_printf(mon, "No available block device supports snapshots\n");
+        error_report_err(err);
         return;
     }
     aio_context = bdrv_get_aio_context(bs);
@@ -951,7 +952,7 @@ void hmp_info_snapshots(Monitor *mon, const QDict *qdict)
     total = 0;
     for (i = 0; i < nb_sns; i++) {
         SnapshotEntry *next_sn;
-        if (bdrv_all_find_snapshot(sn_tab[i].name, &bs1) == 0) {
+        if (bdrv_all_find_snapshot(sn_tab[i].name, NULL) == 0) {
             global_snapshots[total] = i;
             total++;
             QTAILQ_FOREACH(image_entry, &image_list, next) {
diff --git a/block/snapshot.c b/block/snapshot.c
index a2bf3a54eb..50e35bb9fa 100644
--- a/block/snapshot.c
+++ b/block/snapshot.c
@@ -462,14 +462,14 @@ static bool bdrv_all_snapshots_includes_bs(BlockDriverState *bs)
  * These functions will properly handle dataplane (take aio_context_acquire
  * when appropriate for appropriate block drivers)
  */
-bool bdrv_all_can_snapshot(BlockDriverState **first_bad_bs)
+bool bdrv_all_can_snapshot(Error **errp)
 {
-    bool ok = true;
     BlockDriverState *bs;
     BdrvNextIterator it;

     for (bs = bdrv_first(&it); bs; bs = bdrv_next(&it)) {
         AioContext *ctx = bdrv_get_aio_context(bs);
+        bool ok;

         aio_context_acquire(ctx);
         if (bdrv_all_snapshots_includes_bs(bs)) {
@@ -477,26 +477,25 @@ bool bdrv_all_can_snapshot(BlockDriverState **first_bad_bs)
         }
         aio_context_release(ctx);
         if (!ok) {
+            error_setg(errp, "Device '%s' is writable but does not support "
+                       "snapshots", bdrv_get_device_or_node_name(bs));
             bdrv_next_cleanup(&it);
-            goto fail;
+            return false;
         }
     }

-fail:
-    *first_bad_bs = bs;
-    return ok;
+    return true;
 }

-int bdrv_all_delete_snapshot(const char *name, BlockDriverState **first_bad_bs,
-                             Error **errp)
+int bdrv_all_delete_snapshot(const char *name, Error **errp)
 {
-    int ret = 0;
     BlockDriverState *bs;
     BdrvNextIterator it;
     QEMUSnapshotInfo sn1, *snapshot = &sn1;

     for (bs = bdrv_first(&it); bs; bs = bdrv_next(&it)) {
         AioContext *ctx = bdrv_get_aio_context(bs);
+        int ret;

         aio_context_acquire(ctx);
         if (bdrv_all_snapshots_includes_bs(bs) &&
@@ -507,26 +506,25 @@ int bdrv_all_delete_snapshot(const char *name, BlockDriverState **first_bad_bs,
         }
         aio_context_release(ctx);
         if (ret < 0) {
+            error_prepend(errp, "Could not delete snapshot '%s' on '%s': ",
+                          name, bdrv_get_device_or_node_name(bs));
             bdrv_next_cleanup(&it);
-            goto fail;
+            return -1;
         }
     }

-fail:
-    *first_bad_bs = bs;
-    return ret;
+    return 0;
 }

-int bdrv_all_goto_snapshot(const char *name, BlockDriverState **first_bad_bs,
-                           Error **errp)
+int bdrv_all_goto_snapshot(const char *name, Error **errp)
 {
-    int ret = 0;
     BlockDriverState *bs;
     BdrvNextIterator it;

     for (bs = bdrv_first(&it); bs; bs = bdrv_next(&it)) {
         AioContext *ctx = bdrv_get_aio_context(bs);
+        int ret;

         aio_context_acquire(ctx);
         if (bdrv_all_snapshots_includes_bs(bs)) {
@@ -534,75 +532,75 @@ int bdrv_all_goto_snapshot(const char *name, BlockDriverState **first_bad_bs,
         }
         aio_context_release(ctx);
         if (ret < 0) {
+            error_prepend(errp, "Could not load snapshot '%s' on '%s': ",
+                          name, bdrv_get_device_or_node_name(bs));
             bdrv_next_cleanup(&it);
-            goto fail;
+            return -1;
         }
     }

-fail:
-    *first_bad_bs = bs;
-    return ret;
+    return 0;
 }

-int bdrv_all_find_snapshot(const char *name, BlockDriverState **first_bad_bs)
+int bdrv_all_find_snapshot(const char *name, Error **errp)
 {
     QEMUSnapshotInfo sn;
-    int err = 0;
     BlockDriverState *bs;
     BdrvNextIterator it;

     for (bs = bdrv_first(&it); bs; bs = bdrv_next(&it)) {
         AioContext *ctx = bdrv_get_aio_context(bs);
+        int ret;

         aio_context_acquire(ctx);
         if (bdrv_all_snapshots_includes_bs(bs)) {
-            err = bdrv_snapshot_find(bs, &sn, name);
+            ret = bdrv_snapshot_find(bs, &sn, name);
         }
         aio_context_release(ctx);
-        if (err < 0) {
+        if (ret < 0) {
+            error_setg(errp, "Could not find snapshot '%s' on '%s'",
+                       name, bdrv_get_device_or_node_name(bs));
             bdrv_next_cleanup(&it);
-            goto fail;
+            return -1;
         }
     }

-fail:
-    *first_bad_bs = bs;
-    return err;
+    return 0;
 }

 int bdrv_all_create_snapshot(QEMUSnapshotInfo *sn,
                              BlockDriverState *vm_state_bs,
                              uint64_t vm_state_size,
-                             BlockDriverState **first_bad_bs)
+                             Error **errp)
 {
-    int err = 0;
     BlockDriverState *bs;
     BdrvNextIterator it;

     for (bs = bdrv_first(&it); bs; bs = bdrv_next(&it)) {
         AioContext *ctx = bdrv_get_aio_context(bs);
+        int ret;

         aio_context_acquire(ctx);
         if (bs == vm_state_bs) {
             sn->vm_state_size = vm_state_size;
-            err = bdrv_snapshot_create(bs, sn);
+            ret = bdrv_snapshot_create(bs, sn);
         } else if (bdrv_all_snapshots_includes_bs(bs)) {
             sn->vm_state_size = 0;
-            err = bdrv_snapshot_create(bs, sn);
+            ret = bdrv_snapshot_create(bs, sn);
         }
         aio_context_release(ctx);
-        if (err < 0) {
+        if (ret < 0) {
+            error_setg(errp, "Could not create snapshot '%s' on '%s'",
+                       sn->name, bdrv_get_device_or_node_name(bs));
             bdrv_next_cleanup(&it);
-            goto fail;
+            return -1;
         }
     }

-fail:
-    *first_bad_bs = bs;
-    return err;
+    return 0;
 }

-BlockDriverState *bdrv_all_find_vmstate_bs(void)
+BlockDriverState *bdrv_all_find_vmstate_bs(Error **errp)
 {
     BlockDriverState *bs;
     BdrvNextIterator it;
@@ -620,5 +618,8 @@ BlockDriverState *bdrv_all_find_vmstate_bs(void)
             break;
         }
     }
+    if (!bs) {
+        error_setg(errp, "No block device supports snapshots");
+    }
     return bs;
 }
diff --git a/include/block/snapshot.h b/include/block/snapshot.h
index 2bfcd57578..ba1528eee0 100644
--- a/include/block/snapshot.h
+++ b/include/block/snapshot.h
@@ -76,17 +76,15 @@ int bdrv_snapshot_load_tmp_by_id_or_name(BlockDriverState *bs,
  * These functions will properly handle dataplane (take aio_context_acquire
  * when appropriate for appropriate block drivers
  */
-bool bdrv_all_can_snapshot(BlockDriverState **first_bad_bs);
-int bdrv_all_delete_snapshot(const char *name, BlockDriverState **first_bsd_bs,
-                             Error **errp);
-int bdrv_all_goto_snapshot(const char *name, BlockDriverState **first_bad_bs,
-                           Error **errp);
-int bdrv_all_find_snapshot(const char *name, BlockDriverState **first_bad_bs);
+bool bdrv_all_can_snapshot(Error **errp);
+int bdrv_all_delete_snapshot(const char *name, Error **errp);
+int bdrv_all_goto_snapshot(const char *name, Error **errp);
+int bdrv_all_find_snapshot(const char *name, Error **errp);
 int bdrv_all_create_snapshot(QEMUSnapshotInfo *sn,
                              BlockDriverState *vm_state_bs,
                              uint64_t vm_state_size,
-                             BlockDriverState **first_bad_bs);
+                             Error **errp);

-BlockDriverState *bdrv_all_find_vmstate_bs(void);
+BlockDriverState *bdrv_all_find_vmstate_bs(Error **errp);

 #endif
diff --git a/migration/savevm.c b/migration/savevm.c
index 304d98ff78..3826c437cc 100644
--- a/migration/savevm.c
+++ b/migration/savevm.c
@@ -2660,7 +2660,7 @@ int qemu_load_device_state(QEMUFile *f)
 int save_snapshot(const char *name, Error **errp)
 {
-    BlockDriverState *bs, *bs1;
+    BlockDriverState *bs;
     QEMUSnapshotInfo sn1, *sn = &sn1, old_sn1, *old_sn = &old_sn1;
     int ret = -1, ret2;
     QEMUFile *f;
@@ -2680,25 +2680,19 @@ int save_snapshot(const char *name, Error **errp)
         return ret;
     }

-    if (!bdrv_all_can_snapshot(&bs)) {
-        error_setg(errp, "Device '%s' is writable but does not support "
-                   "snapshots", bdrv_get_device_or_node_name(bs));
+    if (!bdrv_all_can_snapshot(errp)) {
         return ret;
     }

     /* Delete old snapshots of the same name */
     if (name) {
-        ret = bdrv_all_delete_snapshot(name, &bs1, errp);
-        if (ret < 0) {
-            error_prepend(errp, "Error while deleting snapshot on device "
-                          "'%s': ", bdrv_get_device_or_node_name(bs1));
+        if (bdrv_all_delete_snapshot(name, errp) < 0) {
             return ret;
         }
     }

-    bs = bdrv_all_find_vmstate_bs();
+    bs = bdrv_all_find_vmstate_bs(errp);
     if (bs == NULL) {
-        error_setg(errp, "No block device can accept snapshots");
         return ret;
     }
     aio_context = bdrv_get_aio_context(bs);
@@ -2763,10 +2757,8 @@ int save_snapshot(const char *name, Error **errp)
     aio_context_release(aio_context);
     aio_context = NULL;

-    ret = bdrv_all_create_snapshot(sn, bs, vm_state_size, &bs);
+    ret = bdrv_all_create_snapshot(sn, bs, vm_state_size, errp);
     if (ret < 0) {
-        error_setg(errp, "Error while creating snapshot on '%s'",
-                   bdrv_get_device_or_node_name(bs));
         goto the_end;
     }

@@ -2868,7 +2860,7 @@ void qmp_xen_load_devices_state(const char *filename, Error **errp)
 int load_snapshot(const char *name, Error **errp)
 {
-    BlockDriverState *bs, *bs_vm_state;
+    BlockDriverState *bs_vm_state;
     QEMUSnapshotInfo sn;
     QEMUFile *f;
     int ret;
@@ -2881,23 +2873,16 @@ int load_snapshot(const char *name, Error **errp)
         return -EINVAL;
     }

-    if (!bdrv_all_can_snapshot(&bs)) {
-        error_setg(errp,
-                   "Device '%s' is writable but does not support snapshots",
-                   bdrv_get_device_or_node_name(bs));
+    if (!bdrv_all_can_snapshot(errp)) {
         return -ENOTSUP;
     }
-    ret = bdrv_all_find_snapshot(name, &bs);
+    ret = bdrv_all_find_snapshot(name, errp);
     if (ret < 0) {
-        error_setg(errp,
-                   "Device '%s' does not have the requested snapshot '%s'",
-                   bdrv_get_device_or_node_name(bs), name);
         return ret;
     }

-    bs_vm_state = bdrv_all_find_vmstate_bs();
+    bs_vm_state = bdrv_all_find_vmstate_bs(errp);
     if (!bs_vm_state) {
-        error_setg(errp, "No block device supports snapshots");
         return -ENOTSUP;
     }
     aio_context = bdrv_get_aio_context(bs_vm_state);
@@ -2917,10 +2902,8 @@ int load_snapshot(const char *name, Error **errp)
     /* Flush all IO requests so they don't interfere with the new state.  */
     bdrv_drain_all_begin();

-    ret = bdrv_all_goto_snapshot(name, &bs, errp);
+    ret = bdrv_all_goto_snapshot(name, errp);
     if (ret < 0) {
-        error_prepend(errp, "Could not load snapshot '%s' on '%s': ",
-                      name, bdrv_get_device_or_node_name(bs));
         goto err_drain;
     }

diff --git a/monitor/hmp-cmds.c b/monitor/hmp-cmds.c
index 7711726fd2..9bb50b9abf 100644
--- a/monitor/hmp-cmds.c
+++ b/monitor/hmp-cmds.c
@@ -1137,15 +1137,10 @@ void hmp_savevm(Monitor *mon, const QDict *qdict)
 void hmp_delvm(Monitor *mon, const QDict *qdict)
 {
-    BlockDriverState *bs;
     Error *err = NULL;
     const char *name = qdict_get_str(qdict, "name");

-    if (bdrv_all_delete_snapshot(name, &bs, &err) < 0) {
-        error_prepend(&err,
-                      "deleting snapshot on device '%s': ",
-                      bdrv_get_device_name(bs));
-    }
+    bdrv_all_delete_snapshot(name, &err);
     hmp_handle_error(mon, err);
 }

diff --git a/tests/qemu-iotests/267.out b/tests/qemu-iotests/267.out
index 215902b3ad..c65cce893a 100644
--- a/tests/qemu-iotests/267.out
+++ b/tests/qemu-iotests/267.out
@@ -6,9 +6,9 @@ Formatting 'TEST_DIR/t.IMGFMT', fmt=IMGFMT size=134217728
 Testing:
 QEMU X.Y.Z monitor - type 'help' for more information
 (qemu) savevm snap0
-Error: No block device can accept snapshots
+Error: No block device supports snapshots
 (qemu) info snapshots
-No available block device supports snapshots
+No block device supports snapshots
 (qemu) loadvm snap0
 Error: No block device supports snapshots
 (qemu) quit
@@ -22,7 +22,7 @@ QEMU X.Y.Z monitor - type 'help' for more information
 (qemu) savevm snap0
 Error: Device 'none0' is writable but does not support snapshots
 (qemu) info snapshots
-No available block device supports snapshots
+No block device supports snapshots
 (qemu) loadvm snap0
 Error: Device 'none0' is writable but does not support snapshots
 (qemu) quit
@@ -58,7 +58,7 @@ QEMU X.Y.Z monitor - type 'help' for more information
 (qemu) savevm snap0
 Error: Device 'virtio0' is writable but does not support snapshots
 (qemu) info snapshots
-No available block device supports snapshots
+No block device supports snapshots
 (qemu) loadvm snap0
 Error: Device 'virtio0' is writable but does not support snapshots
 (qemu) quit
@@ -83,7 +83,7 @@ QEMU X.Y.Z monitor - type 'help' for more information
 (qemu) savevm snap0
 Error: Device 'file' is writable but does not support snapshots
 (qemu) info snapshots
-No available block device supports snapshots
+No block device supports snapshots
 (qemu) loadvm snap0
 Error: Device 'file' is writable but does not support snapshots
 (qemu) quit
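
For context only, a caller-side sketch (not part of the patch) of the error-handling
pattern this series moves to: the bdrv_all_* helper now fills in a complete Error,
and the caller merely propagates or prints it. The wrapper function below is
hypothetical; error_report_err() and bdrv_all_delete_snapshot() are the real
post-patch APIs.

#include "qapi/error.h"
#include "block/snapshot.h"

/* Hypothetical caller: no BlockDriverState comes back for error formatting
 * any more; the helper already produced a fine-grained message. */
static int example_delete_snapshot(const char *name)
{
    Error *local_err = NULL;

    if (bdrv_all_delete_snapshot(name, &local_err) < 0) {
        error_report_err(local_err);    /* prints and frees the Error */
        return -1;
    }
    return 0;
}
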
From patchwork Tue Sep 15 11:35:16 2020
X-Patchwork-Id: 305408
From: Daniel P. Berrangé
To: qemu-devel@nongnu.org
Subject: [PATCH v4 2/9] migration: stop returning errno from load_snapshot()
Date: Tue, 15 Sep 2020 12:35:16 +0100
Message-Id: <20200915113523.2520317-3-berrange@redhat.com>

None of the callers care about the errno value since there is a full
Error object populated. This gives consistency with save_snapshot()
which already just returns -1.

Reviewed-by: Dr. David Alan Gilbert
Signed-off-by: Daniel P. Berrangé
---
 migration/savevm.c | 17 ++++++++---------
 1 file changed, 8 insertions(+), 9 deletions(-)

diff --git a/migration/savevm.c b/migration/savevm.c
index 3826c437cc..711137bcbe 100644
--- a/migration/savevm.c
+++ b/migration/savevm.c
@@ -2870,20 +2870,20 @@ int load_snapshot(const char *name, Error **errp)
     if (!replay_can_snapshot()) {
         error_setg(errp, "Record/replay does not allow loading snapshot "
                    "right now. Try once more later.");
-        return -EINVAL;
+        return -1;
     }

     if (!bdrv_all_can_snapshot(errp)) {
-        return -ENOTSUP;
+        return -1;
     }
     ret = bdrv_all_find_snapshot(name, errp);
     if (ret < 0) {
-        return ret;
+        return -1;
     }

     bs_vm_state = bdrv_all_find_vmstate_bs(errp);
     if (!bs_vm_state) {
-        return -ENOTSUP;
+        return -1;
     }
     aio_context = bdrv_get_aio_context(bs_vm_state);
@@ -2892,11 +2892,11 @@ int load_snapshot(const char *name, Error **errp)
     ret = bdrv_snapshot_find(bs_vm_state, &sn, name);
     aio_context_release(aio_context);
     if (ret < 0) {
-        return ret;
+        return -1;
     } else if (sn.vm_state_size == 0) {
         error_setg(errp, "This is a disk-only snapshot. Revert to it "
                    " offline using qemu-img");
-        return -EINVAL;
+        return -1;
     }

     /* Flush all IO requests so they don't interfere with the new state.  */
@@ -2911,7 +2911,6 @@ int load_snapshot(const char *name, Error **errp)
     f = qemu_fopen_bdrv(bs_vm_state, 0);
     if (!f) {
         error_setg(errp, "Could not open VM state file");
-        ret = -EINVAL;
         goto err_drain;
     }

@@ -2927,14 +2926,14 @@ int load_snapshot(const char *name, Error **errp)
     if (ret < 0) {
         error_setg(errp, "Error %d while loading VM state", ret);
-        return ret;
+        return -1;
     }

     return 0;

 err_drain:
     bdrv_drain_all_end();
-    return ret;
+    return -1;
 }

 void vmstate_register_ram(MemoryRegion *mr, DeviceState *dev)
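
Illustrative caller sketch (not from the patch): after this change only the sign of
the return value matters, and the human-readable detail comes from the populated
Error object, so dropping the errno values loses nothing. The wrapper function is
hypothetical.

static void example_load(const char *name)
{
    Error *err = NULL;

    if (load_snapshot(name, &err) < 0) {   /* any failure is now just -1 */
        error_report_err(err);
    }
}
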
From patchwork Tue Sep 15 11:35:17 2020
X-Patchwork-Id: 273739
From: Daniel P. Berrangé
To: qemu-devel@nongnu.org
Subject: [PATCH v4 3/9] block: add ability to specify list of blockdevs during snapshot
Date: Tue, 15 Sep 2020 12:35:17 +0100
Message-Id: <20200915113523.2520317-4-berrange@redhat.com>

When running snapshot operations, there are various rules for which
blockdevs are included/excluded. While this provides reasonable
default behaviour, there are scenarios that are not well handled by
the default logic. Some of the conditions do not have a single
correct answer. Thus there needs to be a way for the mgmt app to
provide an explicit list of blockdevs to perform snapshots across.
This can be achieved by passing a list of node names that should be
used.

Signed-off-by: Daniel P. Berrangé
---
 block/monitor/block-hmp-cmds.c |   4 +-
 block/snapshot.c               | 180 ++++++++++++++++++++++++---------
 include/block/snapshot.h       |  22 ++--
 migration/savevm.c             |  16 +--
 monitor/hmp-cmds.c             |   2 +-
 5 files changed, 160 insertions(+), 64 deletions(-)

diff --git a/block/monitor/block-hmp-cmds.c b/block/monitor/block-hmp-cmds.c
index bec794253d..30279bd15a 100644
--- a/block/monitor/block-hmp-cmds.c
+++ b/block/monitor/block-hmp-cmds.c
@@ -900,7 +900,7 @@ void hmp_info_snapshots(Monitor *mon, const QDict *qdict)
     SnapshotEntry *snapshot_entry;
     Error *err = NULL;

-    bs = bdrv_all_find_vmstate_bs(&err);
+    bs = bdrv_all_find_vmstate_bs(false, NULL, &err);
     if (!bs) {
         error_report_err(err);
         return;
@@ -952,7 +952,7 @@ void hmp_info_snapshots(Monitor *mon, const QDict *qdict)
     total = 0;
     for (i = 0; i < nb_sns; i++) {
         SnapshotEntry *next_sn;
-        if (bdrv_all_find_snapshot(sn_tab[i].name, NULL) == 0) {
+        if (bdrv_all_find_snapshot(sn_tab[i].name, false, NULL, NULL) == 0) {
             global_snapshots[total] = i;
             total++;
             QTAILQ_FOREACH(image_entry, &image_list, next) {
diff --git a/block/snapshot.c b/block/snapshot.c
index 50e35bb9fa..155b8aad88 100644
--- a/block/snapshot.c
+++ b/block/snapshot.c
@@ -447,6 +447,41 @@ int bdrv_snapshot_load_tmp_by_id_or_name(BlockDriverState *bs,
     return ret;
 }

+
+static int bdrv_all_get_snapshot_devices(bool has_devices, strList *devices,
+                                         GList **all_bdrvs,
+                                         Error **errp)
+{
+    g_autoptr(GList) bdrvs = NULL;
+
+    if (has_devices) {
+        if (!devices) {
+            error_setg(errp, "At least one device is required for snapshot");
+            return -1;
+        }
+
+        while (devices) {
+            BlockDriverState *bs = bdrv_find_node(devices->value);
+            if (!bs) {
+                error_setg(errp, "No block device node '%s'", devices->value);
+                return -1;
+            }
+            bdrvs = g_list_append(bdrvs, bs);
+            devices = devices->next;
+        }
+    } else {
+        BlockDriverState *bs;
+        BdrvNextIterator it;
+        for (bs = bdrv_first(&it); bs; bs = bdrv_next(&it)) {
+            bdrvs = g_list_append(bdrvs, bs);
+        }
+    }
+
+    *all_bdrvs = g_steal_pointer(&bdrvs);
+    return 0;
+}
+
+
 static bool bdrv_all_snapshots_includes_bs(BlockDriverState *bs)
 {
     if (!bdrv_is_inserted(bs) || bdrv_is_read_only(bs)) {
@@ -462,43 +497,59 @@ static bool bdrv_all_snapshots_includes_bs(BlockDriverState *bs)
  * These functions will properly handle dataplane (take aio_context_acquire
  * when appropriate for appropriate block drivers)
  */
-bool bdrv_all_can_snapshot(Error **errp)
+bool bdrv_all_can_snapshot(bool has_devices, strList *devices,
+                           Error **errp)
 {
-    BlockDriverState *bs;
-    BdrvNextIterator it;
+    g_autoptr(GList) bdrvs = NULL;
+    GList *iterbdrvs;

-    for (bs = bdrv_first(&it); bs; bs = bdrv_next(&it)) {
+    if (bdrv_all_get_snapshot_devices(has_devices, devices, &bdrvs, errp) < 0) {
+        return false;
+    }
+
+    iterbdrvs = bdrvs;
+    while (iterbdrvs) {
+        BlockDriverState *bs = iterbdrvs->data;
         AioContext *ctx = bdrv_get_aio_context(bs);
         bool ok;

         aio_context_acquire(ctx);
-        if (bdrv_all_snapshots_includes_bs(bs)) {
+        if (devices || bdrv_all_snapshots_includes_bs(bs)) {
             ok = bdrv_can_snapshot(bs);
         }
         aio_context_release(ctx);
         if (!ok) {
             error_setg(errp, "Device '%s' is writable but does not support "
                        "snapshots", bdrv_get_device_or_node_name(bs));
-            bdrv_next_cleanup(&it);
             return false;
         }
+
+        iterbdrvs = iterbdrvs->next;
     }

     return true;
 }

-int bdrv_all_delete_snapshot(const char *name, Error **errp)
+int bdrv_all_delete_snapshot(const char *name,
+                             bool has_devices, strList *devices,
+                             Error **errp)
 {
-    BlockDriverState *bs;
-    BdrvNextIterator it;
-    QEMUSnapshotInfo sn1, *snapshot = &sn1;
+    g_autoptr(GList) bdrvs = NULL;
+    GList *iterbdrvs;
+
+    if (bdrv_all_get_snapshot_devices(has_devices, devices, &bdrvs, errp) < 0) {
+        return -1;
+    }

-    for (bs = bdrv_first(&it); bs; bs = bdrv_next(&it)) {
+    iterbdrvs = bdrvs;
+    while (iterbdrvs) {
+        BlockDriverState *bs = iterbdrvs->data;
         AioContext *ctx = bdrv_get_aio_context(bs);
-        int ret;
+        QEMUSnapshotInfo sn1, *snapshot = &sn1;
+        int ret = 0;

         aio_context_acquire(ctx);
-        if (bdrv_all_snapshots_includes_bs(bs) &&
+        if ((devices || bdrv_all_snapshots_includes_bs(bs)) &&
             bdrv_snapshot_find(bs, snapshot, name) >= 0)
         {
             ret = bdrv_snapshot_delete(bs, snapshot->id_str,
@@ -508,61 +559,80 @@ int bdrv_all_delete_snapshot(const char *name, Error **errp)
         if (ret < 0) {
             error_prepend(errp, "Could not delete snapshot '%s' on '%s': ",
                           name, bdrv_get_device_or_node_name(bs));
-            bdrv_next_cleanup(&it);
             return -1;
         }
+
+        iterbdrvs = iterbdrvs->next;
     }

     return 0;
 }

-int bdrv_all_goto_snapshot(const char *name, Error **errp)
+int bdrv_all_goto_snapshot(const char *name,
+                           bool has_devices, strList *devices,
+                           Error **errp)
 {
-    BlockDriverState *bs;
-    BdrvNextIterator it;
+    g_autoptr(GList) bdrvs = NULL;
+    GList *iterbdrvs;

-    for (bs = bdrv_first(&it); bs; bs = bdrv_next(&it)) {
+    if (bdrv_all_get_snapshot_devices(has_devices, devices, &bdrvs, errp) < 0) {
+        return -1;
+    }
+
+    iterbdrvs = bdrvs;
+    while (iterbdrvs) {
+        BlockDriverState *bs = iterbdrvs->data;
         AioContext *ctx = bdrv_get_aio_context(bs);
-        int ret;
+        int ret = 0;

         aio_context_acquire(ctx);
-        if (bdrv_all_snapshots_includes_bs(bs)) {
+        if (devices || bdrv_all_snapshots_includes_bs(bs)) {
             ret = bdrv_snapshot_goto(bs, name, errp);
         }
         aio_context_release(ctx);
         if (ret < 0) {
             error_prepend(errp, "Could not load snapshot '%s' on '%s': ",
                           name, bdrv_get_device_or_node_name(bs));
-            bdrv_next_cleanup(&it);
             return -1;
         }
+
+        iterbdrvs = iterbdrvs->next;
     }

     return 0;
 }

-int bdrv_all_find_snapshot(const char *name, Error **errp)
+int bdrv_all_find_snapshot(const char *name,
+                           bool has_devices, strList *devices,
+                           Error **errp)
 {
-    QEMUSnapshotInfo sn;
-    BlockDriverState *bs;
-    BdrvNextIterator it;
+    g_autoptr(GList) bdrvs = NULL;
+    GList *iterbdrvs;
+
+    if (bdrv_all_get_snapshot_devices(has_devices, devices, &bdrvs, errp) < 0) {
+        return -1;
+    }

-    for (bs = bdrv_first(&it); bs; bs = bdrv_next(&it)) {
+    iterbdrvs = bdrvs;
+    while (iterbdrvs) {
+        BlockDriverState *bs = iterbdrvs->data;
         AioContext *ctx = bdrv_get_aio_context(bs);
-        int ret;
+        QEMUSnapshotInfo sn;
+        int ret = 0;

         aio_context_acquire(ctx);
-        if (bdrv_all_snapshots_includes_bs(bs)) {
+        if (devices || bdrv_all_snapshots_includes_bs(bs)) {
             ret = bdrv_snapshot_find(bs, &sn, name);
         }
         aio_context_release(ctx);
         if (ret < 0) {
             error_setg(errp, "Could not find snapshot '%s' on '%s'",
                        name, bdrv_get_device_or_node_name(bs));
-            bdrv_next_cleanup(&it);
             return -1;
         }
+
+        iterbdrvs = iterbdrvs->next;
     }

     return 0;
@@ -571,20 +641,27 @@ int bdrv_all_find_snapshot(const char *name, Error **errp)
 int bdrv_all_create_snapshot(QEMUSnapshotInfo *sn,
                              BlockDriverState *vm_state_bs,
                              uint64_t vm_state_size,
+                             bool has_devices, strList *devices,
                              Error **errp)
 {
-    BlockDriverState *bs;
-    BdrvNextIterator it;
+    g_autoptr(GList) bdrvs = NULL;
+    GList *iterbdrvs;
+
+    if (bdrv_all_get_snapshot_devices(has_devices, devices, &bdrvs, errp) < 0) {
+        return -1;
+    }

-    for (bs = bdrv_first(&it); bs; bs = bdrv_next(&it)) {
+    iterbdrvs = bdrvs;
+    while (iterbdrvs) {
+        BlockDriverState *bs = iterbdrvs->data;
         AioContext *ctx = bdrv_get_aio_context(bs);
-        int ret;
+        int ret = 0;

         aio_context_acquire(ctx);
         if (bs == vm_state_bs) {
             sn->vm_state_size = vm_state_size;
             ret = bdrv_snapshot_create(bs, sn);
-        } else if (bdrv_all_snapshots_includes_bs(bs)) {
+        } else if (devices || bdrv_all_snapshots_includes_bs(bs)) {
             sn->vm_state_size = 0;
             ret = bdrv_snapshot_create(bs, sn);
         }
@@ -592,34 +669,43 @@ int bdrv_all_create_snapshot(QEMUSnapshotInfo *sn,
         if (ret < 0) {
             error_setg(errp, "Could not create snapshot '%s' on '%s'",
                        sn->name, bdrv_get_device_or_node_name(bs));
-            bdrv_next_cleanup(&it);
             return -1;
         }
+
+        iterbdrvs = iterbdrvs->next;
     }

     return 0;
 }

-BlockDriverState *bdrv_all_find_vmstate_bs(Error **errp)
+BlockDriverState *bdrv_all_find_vmstate_bs(bool has_devices, strList *devices,
+                                           Error **errp)
 {
-    BlockDriverState *bs;
-    BdrvNextIterator it;
+    g_autoptr(GList) bdrvs = NULL;
+    GList *iterbdrvs;

-    for (bs = bdrv_first(&it); bs; bs = bdrv_next(&it)) {
+    if (bdrv_all_get_snapshot_devices(has_devices, devices, &bdrvs, errp) < 0) {
+        return NULL;
+    }
+
+    iterbdrvs = bdrvs;
+    while (iterbdrvs) {
+        BlockDriverState *bs = iterbdrvs->data;
         AioContext *ctx = bdrv_get_aio_context(bs);
-        bool found;
+        bool found = false;

         aio_context_acquire(ctx);
-        found = bdrv_all_snapshots_includes_bs(bs) && bdrv_can_snapshot(bs);
+        found = (devices || bdrv_all_snapshots_includes_bs(bs)) &&
+            bdrv_can_snapshot(bs);
         aio_context_release(ctx);

         if (found) {
-            bdrv_next_cleanup(&it);
-            break;
+            return bs;
         }
+
+        iterbdrvs = iterbdrvs->next;
     }
-    if (!bs) {
-        error_setg(errp, "No block device supports snapshots");
-    }
-    return bs;
+
+    error_setg(errp, "No block device supports snapshots");
+    return NULL;
 }
diff --git a/include/block/snapshot.h b/include/block/snapshot.h
index ba1528eee0..bac3409669 100644
--- a/include/block/snapshot.h
+++ b/include/block/snapshot.h
@@ -25,7 +25,7 @@
 #ifndef SNAPSHOT_H
 #define SNAPSHOT_H

-
+#include "qapi/qapi-builtin-types.h"

 #define SNAPSHOT_OPT_BASE       "snapshot."
 #define SNAPSHOT_OPT_ID         "snapshot.id"
@@ -76,15 +76,25 @@ int bdrv_snapshot_load_tmp_by_id_or_name(BlockDriverState *bs,
  * These functions will properly handle dataplane (take aio_context_acquire
  * when appropriate for appropriate block drivers
  */
-bool bdrv_all_can_snapshot(Error **errp);
-int bdrv_all_delete_snapshot(const char *name, Error **errp);
-int bdrv_all_goto_snapshot(const char *name, Error **errp);
-int bdrv_all_find_snapshot(const char *name, Error **errp);
+bool bdrv_all_can_snapshot(bool has_devices, strList *devices,
+                           Error **errp);
+int bdrv_all_delete_snapshot(const char *name,
+                             bool has_devices, strList *devices,
+                             Error **errp);
+int bdrv_all_goto_snapshot(const char *name,
+                           bool has_devices, strList *devices,
+                           Error **errp);
+int bdrv_all_find_snapshot(const char *name,
+                           bool has_devices, strList *devices,
+                           Error **errp);
 int bdrv_all_create_snapshot(QEMUSnapshotInfo *sn,
                              BlockDriverState *vm_state_bs,
                              uint64_t vm_state_size,
+                             bool has_devices,
+                             strList *devices,
                              Error **errp);

-BlockDriverState *bdrv_all_find_vmstate_bs(Error **errp);
+BlockDriverState *bdrv_all_find_vmstate_bs(bool has_devices, strList *devices,
+                                           Error **errp);

 #endif
diff --git a/migration/savevm.c b/migration/savevm.c
index 711137bcbe..0b3454f2e8 100644
--- a/migration/savevm.c
+++ b/migration/savevm.c
@@ -2680,18 +2680,18 @@ int save_snapshot(const char *name, Error **errp)
         return ret;
     }

-    if (!bdrv_all_can_snapshot(errp)) {
+    if (!bdrv_all_can_snapshot(false, NULL, errp)) {
         return ret;
     }

     /* Delete old snapshots of the same name */
     if (name) {
-        if (bdrv_all_delete_snapshot(name, errp) < 0) {
+        if (bdrv_all_delete_snapshot(name, false, NULL, errp) < 0) {
             return ret;
         }
     }

-    bs = bdrv_all_find_vmstate_bs(errp);
+    bs = bdrv_all_find_vmstate_bs(false ,NULL, errp);
     if (bs == NULL) {
         return ret;
     }
@@ -2757,7 +2757,7 @@ int save_snapshot(const char *name, Error **errp)
     aio_context_release(aio_context);
     aio_context = NULL;

-    ret = bdrv_all_create_snapshot(sn, bs, vm_state_size, errp);
+    ret = bdrv_all_create_snapshot(sn, bs, vm_state_size, false, NULL, errp);
     if (ret < 0) {
         goto the_end;
     }
@@ -2873,15 +2873,15 @@ int load_snapshot(const char *name, Error **errp)
         return -1;
     }

-    if (!bdrv_all_can_snapshot(errp)) {
+    if (!bdrv_all_can_snapshot(false, NULL, errp)) {
         return -1;
     }
-    ret = bdrv_all_find_snapshot(name, errp);
+    ret = bdrv_all_find_snapshot(name, false, NULL, errp);
     if (ret < 0) {
         return -1;
     }

-    bs_vm_state = bdrv_all_find_vmstate_bs(errp);
+    bs_vm_state = bdrv_all_find_vmstate_bs(false, NULL, errp);
     if (!bs_vm_state) {
         return -1;
     }
@@ -2902,7 +2902,7 @@ int load_snapshot(const char *name, Error **errp)
     /* Flush all IO requests so they don't interfere with the new state.  */
     bdrv_drain_all_begin();

-    ret = bdrv_all_goto_snapshot(name, errp);
+    ret = bdrv_all_goto_snapshot(name, false, NULL, errp);
     if (ret < 0) {
         goto err_drain;
     }
diff --git a/monitor/hmp-cmds.c b/monitor/hmp-cmds.c
index 9bb50b9abf..98c98ae182 100644
--- a/monitor/hmp-cmds.c
+++ b/monitor/hmp-cmds.c
@@ -1140,7 +1140,7 @@ void hmp_delvm(Monitor *mon, const QDict *qdict)
     Error *err = NULL;
     const char *name = qdict_get_str(qdict, "name");

-    bdrv_all_delete_snapshot(name, &err);
+    bdrv_all_delete_snapshot(name, false, NULL, &err);
     hmp_handle_error(mon, err);
 }
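
Illustrative only, not part of the patch: how a caller could restrict a snapshot
operation to an explicit set of nodes with the new has_devices/devices arguments.
strList is the QAPI builtin string list; the node names below are made up and the
wrapper function is hypothetical.

static bool example_can_snapshot_selected(Error **errp)
{
    /* Hand-built two-element strList: "vda-node" -> "efi-node" */
    strList second = { .value = (char *)"efi-node", .next = NULL };
    strList first = { .value = (char *)"vda-node", .next = &second };

    /* has_devices=true: only the listed nodes are considered, and the
     * default include/exclude heuristics are bypassed for them. */
    return bdrv_all_can_snapshot(true, &first, errp);
}
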
From patchwork Tue Sep 15 11:35:18 2020
X-Patchwork-Id: 273740
From: Daniel P. Berrangé
To: qemu-devel@nongnu.org
Subject: [PATCH v4 4/9] block: allow specifying name of block device for vmstate storage
Date: Tue, 15 Sep 2020 12:35:18 +0100
Message-Id: <20200915113523.2520317-5-berrange@redhat.com>

Currently the vmstate will be stored in the first block device that
supports snapshots. Historically this would have usually been the
root device, but with UEFI it might be the variable store. There
needs to be a way to override the choice of block device to store
the state in.

Signed-off-by: Daniel P. Berrangé
---
 block/monitor/block-hmp-cmds.c |  2 +-
 block/snapshot.c               | 23 ++++++++++++++++++++---
 include/block/snapshot.h       |  3 ++-
 migration/savevm.c             |  4 ++--
 4 files changed, 25 insertions(+), 7 deletions(-)

diff --git a/block/monitor/block-hmp-cmds.c b/block/monitor/block-hmp-cmds.c
index 30279bd15a..4a1fc1d6b0 100644
--- a/block/monitor/block-hmp-cmds.c
+++ b/block/monitor/block-hmp-cmds.c
@@ -900,7 +900,7 @@ void hmp_info_snapshots(Monitor *mon, const QDict *qdict)
     SnapshotEntry *snapshot_entry;
     Error *err = NULL;

-    bs = bdrv_all_find_vmstate_bs(false, NULL, &err);
+    bs = bdrv_all_find_vmstate_bs(NULL, false, NULL, &err);
     if (!bs) {
         error_report_err(err);
         return;
diff --git a/block/snapshot.c b/block/snapshot.c
index 155b8aad88..d2caaa88b5 100644
--- a/block/snapshot.c
+++ b/block/snapshot.c
@@ -678,7 +678,9 @@ int bdrv_all_create_snapshot(QEMUSnapshotInfo *sn,
     return 0;
 }

-BlockDriverState *bdrv_all_find_vmstate_bs(bool has_devices, strList *devices,
+
+BlockDriverState *bdrv_all_find_vmstate_bs(const char *vmstate_bs,
+                                           bool has_devices, strList *devices,
                                            Error **errp)
 {
     g_autoptr(GList) bdrvs = NULL;
@@ -699,13 +701,28 @@ BlockDriverState *bdrv_all_find_vmstate_bs(bool has_devices, strList *devices,
             bdrv_can_snapshot(bs);
         aio_context_release(ctx);

-        if (found) {
+        if (vmstate_bs) {
+            if (g_str_equal(vmstate_bs,
+                            bdrv_get_node_name(bs))) {
+                if (found) {
+                    return bs;
+                } else {
+                    error_setg(errp, "vmstate block device '%s' does not support snapshots",
+                               vmstate_bs);
+                    return NULL;
+                }
+            }
+        } else if (found) {
             return bs;
         }

         iterbdrvs = iterbdrvs->next;
     }

-    error_setg(errp, "No block device supports snapshots");
+    if (vmstate_bs) {
+        error_setg(errp, "vmstate block device '%s' does not exist", vmstate_bs);
+    } else {
+        error_setg(errp, "no block device can store vmstate for snapshot");
+    }
     return NULL;
 }
diff --git a/include/block/snapshot.h b/include/block/snapshot.h
index bac3409669..4d25f17728 100644
--- a/include/block/snapshot.h
+++ b/include/block/snapshot.h
@@ -94,7 +94,8 @@ int bdrv_all_create_snapshot(QEMUSnapshotInfo *sn,
                              strList *devices,
                              Error **errp);

-BlockDriverState *bdrv_all_find_vmstate_bs(bool has_devices, strList *devices,
+BlockDriverState *bdrv_all_find_vmstate_bs(const char *vmstate_bs,
+                                           bool has_devices, strList *devices,
                                            Error **errp);

 #endif
diff --git a/migration/savevm.c b/migration/savevm.c
index 0b3454f2e8..76972d69b0 100644
--- a/migration/savevm.c
+++ b/migration/savevm.c
@@ -2691,7 +2691,7 @@ int save_snapshot(const char *name, Error **errp)
         }
     }

-    bs = bdrv_all_find_vmstate_bs(false ,NULL, errp);
+    bs = bdrv_all_find_vmstate_bs(NULL, false, NULL, errp);
     if (bs == NULL) {
         return ret;
     }
@@ -2881,7 +2881,7 @@ int load_snapshot(const char *name, Error **errp)
         return -1;
     }

-    bs_vm_state = bdrv_all_find_vmstate_bs(false, NULL, errp);
+    bs_vm_state = bdrv_all_find_vmstate_bs(NULL, false, NULL, errp);
     if (!bs_vm_state) {
         return -1;
     }
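
Illustrative only: after this patch the vmstate location can be pinned to a named
node instead of defaulting to the first snapshot-capable device. The wrapper
function and the node name are hypothetical.

static BlockDriverState *example_pick_vmstate_bs(Error **errp)
{
    /* false/NULL devices: walk the whole block graph as before, but insist
     * that the vmstate goes to the node named "varstore" (made-up name). */
    return bdrv_all_find_vmstate_bs("varstore", false, NULL, errp);
}
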
From patchwork Tue Sep 15 11:35:19 2020
X-Patchwork-Id: 305405
From: Daniel P. Berrangé
To: qemu-devel@nongnu.org
Subject: [PATCH v4 5/9] migration: control whether snapshots are overwritten
Date: Tue, 15 Sep 2020 12:35:19 +0100
Message-Id: <20200915113523.2520317-6-berrange@redhat.com>

The traditional HMP "savevm" command will overwrite an existing
snapshot if it already exists with the requested name. This new flag
allows this to be controlled, allowing for safer behaviour with a
future QMP command.

Signed-off-by: Daniel P. Berrangé
---
 include/migration/snapshot.h | 2 +-
 migration/savevm.c           | 4 ++--
 monitor/hmp-cmds.c           | 2 +-
 replay/replay-snapshot.c     | 2 +-
 4 files changed, 5 insertions(+), 5 deletions(-)

diff --git a/include/migration/snapshot.h b/include/migration/snapshot.h
index c85b6ec75b..d7db1174ef 100644
--- a/include/migration/snapshot.h
+++ b/include/migration/snapshot.h
@@ -15,7 +15,7 @@
 #ifndef QEMU_MIGRATION_SNAPSHOT_H
 #define QEMU_MIGRATION_SNAPSHOT_H

-int save_snapshot(const char *name, Error **errp);
+int save_snapshot(const char *name, bool overwrite, Error **errp);
 int load_snapshot(const char *name, Error **errp);

 #endif
diff --git a/migration/savevm.c b/migration/savevm.c
index 76972d69b0..2025e3e579 100644
--- a/migration/savevm.c
+++ b/migration/savevm.c
@@ -2658,7 +2658,7 @@ int qemu_load_device_state(QEMUFile *f)
     return 0;
 }

-int save_snapshot(const char *name, Error **errp)
+int save_snapshot(const char *name, bool overwrite, Error **errp)
 {
     BlockDriverState *bs;
     QEMUSnapshotInfo sn1, *sn = &sn1, old_sn1, *old_sn = &old_sn1;
@@ -2685,7 +2685,7 @@ int save_snapshot(const char *name, bool overwrite, Error **errp)
     }

     /* Delete old snapshots of the same name */
-    if (name) {
+    if (name && overwrite) {
         if (bdrv_all_delete_snapshot(name, false, NULL, errp) < 0) {
             return ret;
         }
diff --git a/monitor/hmp-cmds.c b/monitor/hmp-cmds.c
index 98c98ae182..c1b8533d0c 100644
--- a/monitor/hmp-cmds.c
+++ b/monitor/hmp-cmds.c
@@ -1131,7 +1131,7 @@ void hmp_savevm(Monitor *mon, const QDict *qdict)
 {
     Error *err = NULL;

-    save_snapshot(qdict_get_try_str(qdict, "name"), &err);
+    save_snapshot(qdict_get_try_str(qdict, "name"), true, &err);
     hmp_handle_error(mon, err);
 }

diff --git a/replay/replay-snapshot.c b/replay/replay-snapshot.c
index e26fa4c892..8e7ff97d11 100644
--- a/replay/replay-snapshot.c
+++ b/replay/replay-snapshot.c
@@ -77,7 +77,7 @@ void replay_vmstate_init(void)
     if (replay_snapshot) {
         if (replay_mode == REPLAY_MODE_RECORD) {
-            if (save_snapshot(replay_snapshot, &err) != 0) {
+            if (save_snapshot(replay_snapshot, true, &err) != 0) {
                 error_report_err(err);
                 error_report("Could not create snapshot for icount record");
                 exit(1);
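
Illustrative only: the new flag controls whether an existing snapshot with the same
name is deleted up front. The wrapper function and snapshot name handling are
hypothetical.

static void example_save(const char *name, bool overwrite)
{
    Error *err = NULL;

    /* overwrite=true keeps the historical HMP "savevm" semantics: any
     * existing snapshot with this name is deleted before the new one is
     * written; overwrite=false skips that deletion step, which a future
     * QMP command can build on for safer behaviour. */
    if (save_snapshot(name, overwrite, &err) < 0) {
        error_report_err(err);
    }
}
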
From: Daniel P. Berrangé <berrange@redhat.com>
To: qemu-devel@nongnu.org
Subject: [PATCH v4 6/9] migration: wire up support for snapshot device selection
Date: Tue, 15 Sep 2020 12:35:20 +0100
Message-Id: <20200915113523.2520317-7-berrange@redhat.com>
In-Reply-To: <20200915113523.2520317-1-berrange@redhat.com>
References:
<20200915113523.2520317-1-berrange@redhat.com> MIME-Version: 1.0 X-Scanned-By: MIMEDefang 2.79 on 10.5.11.15 Authentication-Results: relay.mimecast.com; auth=pass smtp.auth=CUSA124A263 smtp.mailfrom=berrange@redhat.com X-Mimecast-Spam-Score: 0.003 X-Mimecast-Originator: redhat.com Received-SPF: pass client-ip=205.139.110.61; envelope-from=berrange@redhat.com; helo=us-smtp-delivery-1.mimecast.com X-detected-operating-system: by eggs.gnu.org: First seen = 2020/09/15 03:21:13 X-ACL-Warn: Detected OS = Linux 2.2.x-3.x [generic] [fuzzy] X-Spam_score_int: -50 X-Spam_score: -5.1 X-Spam_bar: ----- X-Spam_report: (-5.1 / 5.0 requ) BAYES_00=-1.9, DKIMWL_WL_HIGH=-2.999, DKIM_SIGNED=0.1, DKIM_VALID=-0.1, DKIM_VALID_AU=-0.1, DKIM_VALID_EF=-0.1, RCVD_IN_DNSWL_NONE=-0.0001, RCVD_IN_MSPIKE_H2=-0.001, SPF_HELO_NONE=0.001, SPF_PASS=-0.001 autolearn=ham autolearn_force=no X-Spam_action: no action X-BeenThere: qemu-devel@nongnu.org X-Mailman-Version: 2.1.23 Precedence: list List-Id: List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , Cc: Kevin Wolf , Peter Krempa , =?utf-8?q?Daniel_P=2E_Berrang=C3=A9?= , "Denis V. Lunev" , qemu-block@nongnu.org, Juan Quintela , "Dr. David Alan Gilbert" , Markus Armbruster , Pavel Dovgalyuk , Paolo Bonzini , Max Reitz , John Snow Errors-To: qemu-devel-bounces+qemu-devel=archiver.kernel.org@nongnu.org Sender: "Qemu-devel" Modify load_snapshot/save_snapshot to accept the device list and vmstate node name parameters previously added to the block layer. Signed-off-by: Daniel P. Berrangé Reviewed-by: Dr. David Alan Gilbert --- include/migration/snapshot.h | 12 ++++++++++-- migration/savevm.c | 24 ++++++++++++++---------- monitor/hmp-cmds.c | 4 ++-- replay/replay-snapshot.c | 4 ++-- softmmu/vl.c | 2 +- 5 files changed, 29 insertions(+), 17 deletions(-) diff --git a/include/migration/snapshot.h b/include/migration/snapshot.h index d7db1174ef..b2c72e0a1b 100644 --- a/include/migration/snapshot.h +++ b/include/migration/snapshot.h @@ -15,7 +15,15 @@ #ifndef QEMU_MIGRATION_SNAPSHOT_H #define QEMU_MIGRATION_SNAPSHOT_H -int save_snapshot(const char *name, bool overwrite, Error **errp); -int load_snapshot(const char *name, Error **errp); +#include "qapi/qapi-builtin-types.h" + +int save_snapshot(const char *name, bool overwrite, + const char *vmstate, + bool has_devices, strList *devices, + Error **errp); +int load_snapshot(const char *name, + const char *vmstate, + bool has_devices, strList *devices, + Error **errp); #endif diff --git a/migration/savevm.c b/migration/savevm.c index 2025e3e579..b3d2ce7045 100644 --- a/migration/savevm.c +++ b/migration/savevm.c @@ -43,6 +43,8 @@ #include "qapi/error.h" #include "qapi/qapi-commands-migration.h" #include "qapi/qapi-commands-misc.h" +#include "qapi/clone-visitor.h" +#include "qapi/qapi-builtin-visit.h" #include "qapi/qmp/qerror.h" #include "qemu/error-report.h" #include "sysemu/cpus.h" @@ -2658,7 +2660,8 @@ int qemu_load_device_state(QEMUFile *f) return 0; } -int save_snapshot(const char *name, bool overwrite, Error **errp) +int save_snapshot(const char *name, bool overwrite, const char *vmstate, + bool has_devices, strList *devices, Error **errp) { BlockDriverState *bs; QEMUSnapshotInfo sn1, *sn = &sn1, old_sn1, *old_sn = &old_sn1; @@ -2680,18 +2683,18 @@ int save_snapshot(const char *name, bool overwrite, Error **errp) return ret; } - if (!bdrv_all_can_snapshot(false, NULL, errp)) { + if (!bdrv_all_can_snapshot(has_devices, devices, errp)) { return ret; } /* Delete old snapshots of the same name */ if (name && 
overwrite) { - if (bdrv_all_delete_snapshot(name, false, NULL, errp) < 0) { + if (bdrv_all_delete_snapshot(name, has_devices, devices, errp) < 0) { return ret; } } - bs = bdrv_all_find_vmstate_bs(NULL, false, NULL, errp); + bs = bdrv_all_find_vmstate_bs(vmstate, has_devices, devices, errp); if (bs == NULL) { return ret; } @@ -2757,7 +2760,7 @@ int save_snapshot(const char *name, bool overwrite, Error **errp) aio_context_release(aio_context); aio_context = NULL; - ret = bdrv_all_create_snapshot(sn, bs, vm_state_size, false, NULL, errp); + ret = bdrv_all_create_snapshot(sn, bs, vm_state_size, has_devices, devices, errp); if (ret < 0) { goto the_end; } @@ -2858,7 +2861,8 @@ void qmp_xen_load_devices_state(const char *filename, Error **errp) migration_incoming_state_destroy(); } -int load_snapshot(const char *name, Error **errp) +int load_snapshot(const char *name, const char *vmstate, + bool has_devices, strList *devices, Error **errp) { BlockDriverState *bs_vm_state; QEMUSnapshotInfo sn; @@ -2873,15 +2877,15 @@ int load_snapshot(const char *name, Error **errp) return -1; } - if (!bdrv_all_can_snapshot(false, NULL, errp)) { + if (!bdrv_all_can_snapshot(has_devices, devices, errp)) { return -1; } - ret = bdrv_all_find_snapshot(name, false, NULL, errp); + ret = bdrv_all_find_snapshot(name, has_devices, devices, errp); if (ret < 0) { return -1; } - bs_vm_state = bdrv_all_find_vmstate_bs(NULL, false, NULL, errp); + bs_vm_state = bdrv_all_find_vmstate_bs(vmstate, has_devices, devices, errp); if (!bs_vm_state) { return -1; } @@ -2902,7 +2906,7 @@ int load_snapshot(const char *name, Error **errp) /* Flush all IO requests so they don't interfere with the new state. */ bdrv_drain_all_begin(); - ret = bdrv_all_goto_snapshot(name, false, NULL, errp); + ret = bdrv_all_goto_snapshot(name, has_devices, devices, errp); if (ret < 0) { goto err_drain; } diff --git a/monitor/hmp-cmds.c b/monitor/hmp-cmds.c index c1b8533d0c..1191aac3ae 100644 --- a/monitor/hmp-cmds.c +++ b/monitor/hmp-cmds.c @@ -1121,7 +1121,7 @@ void hmp_loadvm(Monitor *mon, const QDict *qdict) vm_stop(RUN_STATE_RESTORE_VM); - if (load_snapshot(name, &err) == 0 && saved_vm_running) { + if (load_snapshot(name, NULL, false, NULL, &err) == 0 && saved_vm_running) { vm_start(); } hmp_handle_error(mon, err); @@ -1131,7 +1131,7 @@ void hmp_savevm(Monitor *mon, const QDict *qdict) { Error *err = NULL; - save_snapshot(qdict_get_try_str(qdict, "name"), true, &err); + save_snapshot(qdict_get_try_str(qdict, "name"), true, NULL, false, NULL, &err); hmp_handle_error(mon, err); } diff --git a/replay/replay-snapshot.c b/replay/replay-snapshot.c index 8e7ff97d11..8ed09177b5 100644 --- a/replay/replay-snapshot.c +++ b/replay/replay-snapshot.c @@ -77,13 +77,13 @@ void replay_vmstate_init(void) if (replay_snapshot) { if (replay_mode == REPLAY_MODE_RECORD) { - if (save_snapshot(replay_snapshot, true, &err) != 0) { + if (save_snapshot(replay_snapshot, true, NULL, false, NULL, &err) != 0) { error_report_err(err); error_report("Could not create snapshot for icount record"); exit(1); } } else if (replay_mode == REPLAY_MODE_PLAY) { - if (load_snapshot(replay_snapshot, &err) != 0) { + if (load_snapshot(replay_snapshot, NULL, false, NULL, &err) != 0) { error_report_err(err); error_report("Could not load snapshot for icount replay"); exit(1); diff --git a/softmmu/vl.c b/softmmu/vl.c index f7b103467c..820d380229 100644 --- a/softmmu/vl.c +++ b/softmmu/vl.c @@ -4459,7 +4459,7 @@ void qemu_init(int argc, char **argv, char **envp) register_global_state(); if (loadvm) { Error 
*local_err = NULL; - if (load_snapshot(loadvm, &local_err) < 0) { + if (load_snapshot(loadvm, NULL, false, NULL, &local_err) < 0) { error_report_err(local_err); autostart = 0; exit(1);

From patchwork Tue Sep 15 11:35:21 2020
X-Patchwork-Submitter: Daniel P. Berrangé
X-Patchwork-Id: 273737
From: Daniel P. Berrangé <berrange@redhat.com>
To: qemu-devel@nongnu.org
Subject: [PATCH v4 7/9] migration: introduce a delete_snapshot wrapper
Date: Tue, 15 Sep 2020 12:35:21 +0100
Message-Id: <20200915113523.2520317-8-berrange@redhat.com>
In-Reply-To: <20200915113523.2520317-1-berrange@redhat.com>
References: <20200915113523.2520317-1-berrange@redhat.com>
Cc: Kevin Wolf, Peter Krempa, Daniel P. Berrangé, "Denis V. Lunev",
    qemu-block@nongnu.org, Juan Quintela, "Dr. David Alan Gilbert",
    Markus Armbruster, Pavel Dovgalyuk, Paolo Bonzini, Max Reitz, John Snow
Sender: "Qemu-devel"

Make snapshot deletion consistent with the snapshot save and load
commands by using a wrapper around the blockdev layer. The main
difference is that we get upfront validation of the passed in device
list (if any).

Signed-off-by: Daniel P. Berrangé
---
 include/migration/snapshot.h |  4 +++-
 migration/savevm.c           | 13 +++++++++++++
 monitor/hmp-cmds.c           |  2 +-
 3 files changed, 17 insertions(+), 2 deletions(-)

diff --git a/include/migration/snapshot.h b/include/migration/snapshot.h
index b2c72e0a1b..f46f28745d 100644
--- a/include/migration/snapshot.h
+++ b/include/migration/snapshot.h
@@ -25,5 +25,7 @@ int load_snapshot(const char *name,
                   const char *vmstate,
                   bool has_devices, strList *devices,
                   Error **errp);
-
+int delete_snapshot(const char *name,
+                    bool has_devices, strList *devices,
+                    Error **errp);
 #endif

diff --git a/migration/savevm.c b/migration/savevm.c
index b3d2ce7045..56f85be250 100644
--- a/migration/savevm.c
+++ b/migration/savevm.c
@@ -2940,6 +2940,19 @@ err_drain:
     return -1;
 }
 
+int delete_snapshot(const char *name, bool has_devices, strList *devices, Error **errp)
+{
+    if (!bdrv_all_can_snapshot(has_devices, devices, errp)) {
+        return -1;
+    }
+
+    if (bdrv_all_delete_snapshot(name, has_devices, devices, errp) < 0) {
+        return -1;
+    }
+
+    return 0;
+}
+
 void vmstate_register_ram(MemoryRegion *mr, DeviceState *dev)
 {
     qemu_ram_set_idstr(mr->ram_block,

diff --git a/monitor/hmp-cmds.c b/monitor/hmp-cmds.c
index 1191aac3ae..278dca59b0 100644
--- a/monitor/hmp-cmds.c
+++ b/monitor/hmp-cmds.c
@@ -1140,7 +1140,7 @@ void hmp_delvm(Monitor *mon, const QDict *qdict)
     Error *err = NULL;
     const char *name = qdict_get_str(qdict, "name");
 
-    bdrv_all_delete_snapshot(name, false, NULL, &err);
+    delete_snapshot(name, false, NULL, &err);
     hmp_handle_error(mon, err);
 }

From patchwork Tue Sep 15 11:35:22 2020
X-Patchwork-Submitter: Daniel P. Berrangé
X-Patchwork-Id: 273738
Received: from us-smtp-delivery-1.mimecast.com ([207.211.31.120]:30803 helo=us-smtp-1.mimecast.com)
by eggs.gnu.org with esmtps (TLS1.2:ECDHE_RSA_AES_256_CBC_SHA1:256) (Exim 4.90_1) (envelope-from ) id 1kI9Fs-0002wd-SC for qemu-devel@nongnu.org; Tue, 15 Sep 2020 07:36:15 -0400 DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=redhat.com; s=mimecast20190719; t=1600169772; h=from:from:reply-to:subject:subject:date:date:message-id:message-id: to:to:cc:cc:mime-version:mime-version:content-type:content-type: content-transfer-encoding:content-transfer-encoding: in-reply-to:in-reply-to:references:references; bh=6MTm9GZUU94JUI9HFeqfI2EzkRE/Yym1IGbvVhzy8w4=; b=MB5lWX5QyyHnA/pJAKc24IpAhn0NOqTFmGdckOT0uf10Y8GOf/17uxYs0o4+9kPodlVT81 r2Hin8iZyHKS2G1XE0tcfwCUFd/oMRmqLebYiYLfZl3YlMMZKCXJAXlKAD797lcQe8U3Bp ugGdmZLfNNR3ZHAVlj4zzs77LlsyQTA= Received: from mimecast-mx01.redhat.com (mimecast-mx01.redhat.com [209.132.183.4]) (Using TLS) by relay.mimecast.com with ESMTP id us-mta-317-linSK4AIPuq6IMIwxc697Q-1; Tue, 15 Sep 2020 07:36:10 -0400 X-MC-Unique: linSK4AIPuq6IMIwxc697Q-1 Received: from smtp.corp.redhat.com (int-mx05.intmail.prod.int.phx2.redhat.com [10.5.11.15]) (using TLSv1.2 with cipher AECDH-AES256-SHA (256/256 bits)) (No client certificate requested) by mimecast-mx01.redhat.com (Postfix) with ESMTPS id 09F1B188C138; Tue, 15 Sep 2020 11:36:09 +0000 (UTC) Received: from localhost.localdomain.com (unknown [10.36.110.68]) by smtp.corp.redhat.com (Postfix) with ESMTP id 9FFDF75136; Tue, 15 Sep 2020 11:36:05 +0000 (UTC) From: =?utf-8?q?Daniel_P=2E_Berrang=C3=A9?= To: qemu-devel@nongnu.org Subject: [PATCH v4 8/9] iotests: add support for capturing and matching QMP events Date: Tue, 15 Sep 2020 12:35:22 +0100 Message-Id: <20200915113523.2520317-9-berrange@redhat.com> In-Reply-To: <20200915113523.2520317-1-berrange@redhat.com> References: <20200915113523.2520317-1-berrange@redhat.com> MIME-Version: 1.0 X-Scanned-By: MIMEDefang 2.79 on 10.5.11.15 Authentication-Results: relay.mimecast.com; auth=pass smtp.auth=CUSA124A263 smtp.mailfrom=berrange@redhat.com X-Mimecast-Spam-Score: 0.002 X-Mimecast-Originator: redhat.com Received-SPF: pass client-ip=207.211.31.120; envelope-from=berrange@redhat.com; helo=us-smtp-1.mimecast.com X-detected-operating-system: by eggs.gnu.org: First seen = 2020/09/15 02:29:39 X-ACL-Warn: Detected OS = Linux 2.2.x-3.x [generic] [fuzzy] X-Spam_score_int: -50 X-Spam_score: -5.1 X-Spam_bar: ----- X-Spam_report: (-5.1 / 5.0 requ) BAYES_00=-1.9, DKIMWL_WL_HIGH=-2.999, DKIM_SIGNED=0.1, DKIM_VALID=-0.1, DKIM_VALID_AU=-0.1, DKIM_VALID_EF=-0.1, RCVD_IN_DNSWL_NONE=-0.0001, RCVD_IN_MSPIKE_H2=-0.001, SPF_HELO_NONE=0.001, SPF_PASS=-0.001 autolearn=ham autolearn_force=no X-Spam_action: no action X-BeenThere: qemu-devel@nongnu.org X-Mailman-Version: 2.1.23 Precedence: list List-Id: List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , Cc: Kevin Wolf , Peter Krempa , =?utf-8?q?Daniel_P=2E_Berrang=C3=A9?= , "Denis V. Lunev" , qemu-block@nongnu.org, Juan Quintela , "Dr. David Alan Gilbert" , Markus Armbruster , Pavel Dovgalyuk , Paolo Bonzini , Max Reitz , John Snow Errors-To: qemu-devel-bounces+qemu-devel=archiver.kernel.org@nongnu.org Sender: "Qemu-devel" When using the _launch_qemu and _send_qemu_cmd functions from common.qemu, any QMP events get mixed in with the output from the commands and responses. This makes it difficult to write a test case as the ordering of events in the output is not stable. This introduces a variable 'capture_events' which can be set to a list of event names. 
Any events listed in this variable will not be printed, instead collected in the $QEMU_EVENTS environment variable. A new '_wait_event' function can be invoked to collect events at a fixed point in time. The function will first pull events cached in $QEMU_EVENTS variable, and if none are found, will then read more from QMP. Signed-off-by: Daniel P. Berrangé --- tests/qemu-iotests/common.qemu | 107 ++++++++++++++++++++++++++++++++- 1 file changed, 106 insertions(+), 1 deletion(-) diff --git a/tests/qemu-iotests/common.qemu b/tests/qemu-iotests/common.qemu index de680cf1c7..87d7a54001 100644 --- a/tests/qemu-iotests/common.qemu +++ b/tests/qemu-iotests/common.qemu @@ -53,6 +53,15 @@ _in_fd=4 # If $mismatch_only is set, only non-matching responses will # be echoed. # +# If $capture_events is non-empty, then any QMP event names it lists +# will not be echoed out, but instead collected in the $QEMU_EVENTS +# variable. The _wait_event function can later be used to received +# the cached events. +# +# If $only_capture_events is set to anything but an empty string, +# when an error will be raised if a QMP message is seen which is +# not an event listed in $capture_events. +# # If $success_or_failure is set, the meaning of the arguments is # changed as follows: # $2: A string to search for in the response; if found, this indicates @@ -78,6 +87,32 @@ _timed_wait_for() QEMU_STATUS[$h]=0 while IFS= read -t ${QEMU_COMM_TIMEOUT} resp <&${QEMU_OUT[$h]} do + if [ -n "$capture_events" ]; then + capture=0 + local evname + for evname in $capture_events + do + grep -q "\"event\": \"${evname}\"" < <(echo "${resp}") + if [ $? -eq 0 ]; then + capture=1 + fi + done + if [ $capture = 1 ]; + then + ev=$(echo "${resp}" | tr -d '\r' | tr % .) + QEMU_EVENTS="${QEMU_EVENTS:+${QEMU_EVENTS}%}${ev}" + if [ -n "$only_capture_events" ]; then + return + else + continue + fi + fi + fi + if [ -n "$only_capture_events" ]; then + echo "Only expected $capture_events but got ${resp}" + exit 1 + fi + if [ -z "${silent}" ] && [ -z "${mismatch_only}" ]; then echo "${resp}" | _filter_testdir | _filter_qemu \ | _filter_qemu_io | _filter_qmp | _filter_hmp @@ -177,12 +212,82 @@ _send_qemu_cmd() let count--; done if [ ${QEMU_STATUS[$h]} -ne 0 ] && [ -z "${qemu_error_no_exit}" ]; then - echo "Timeout waiting for ${1} on handle ${h}" + echo "Timeout waiting for command ${1} response on handle ${h}" exit 1 #Timeout means the test failed fi } +# Check event cache for a named QMP event +# +# Input parameters: +# $1: Name of the QMP event to check for +# +# Checks if the named QMP event that was previously captured +# into $QEMU_EVENTS. When matched, the QMP event will be echoed +# and the $matched variable set to 1. +# +# _wait_event is more suitable for test usage in most cases +_check_cached_events() +{ + local evname=${1} + + local match="\"event\": \"$evname\"" + + matched=0 + if [ -n "$QEMU_EVENTS" ]; then + CURRENT_QEMU_EVENTS=$QEMU_EVENTS + QEMU_EVENTS= + old_IFS=$IFS + IFS="%" + for ev in $CURRENT_QEMU_EVENTS + do + grep -q "$match" < <(echo "${ev}") + if [ $? -eq 0 -a $matched = 0 ]; then + echo "${ev}" | _filter_testdir | _filter_qemu \ + | _filter_qemu_io | _filter_qmp | _filter_hmp + matched=1 + else + QEMU_EVENTS="${QEMU_EVENTS:+${QEMU_EVENTS}%}${ev}" + fi + done + IFS=$old_IFS + fi +} + +# Wait for a named QMP event +# +# Input parameters: +# $1: QEMU handle to use +# $2: Name of the QMP event to wait for +# +# Checks if the named QMP event that was previously captured +# into $QEMU_EVENTS. 
If none are present, then waits for the +# event to arrive on the QMP channel. When matched, the QMP +# event will be echoed +_wait_event() +{ + local h=${1} + local evname=${2} + + while true + do + _check_cached_events $evname + + if [ $matched = 1 ]; + then + return + fi + + only_capture_events=1 qemu_error_no_exit=1 _timed_wait_for ${h} + + if [ ${QEMU_STATUS[$h]} -ne 0 ] ; then + echo "Timeout waiting for event ${evname} on handle ${h}" + exit 1 #Timeout means the test failed + fi + done +} + # Launch a QEMU process. # # Input parameters: From patchwork Tue Sep 15 11:35:23 2020 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 8bit X-Patchwork-Submitter: =?utf-8?q?Daniel_P=2E_Berrang=C3=A9?= X-Patchwork-Id: 273736 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org X-Spam-Level: X-Spam-Status: No, score=-9.6 required=3.0 tests=BAYES_00,DKIM_INVALID, DKIM_SIGNED, HEADER_FROM_DIFFERENT_DOMAINS, INCLUDES_PATCH, MAILING_LIST_MULTI, SIGNED_OFF_BY,SPF_HELO_NONE,SPF_PASS,URIBL_BLOCKED autolearn=ham autolearn_force=no version=3.4.0 Received: from mail.kernel.org (mail.kernel.org [198.145.29.99]) by smtp.lore.kernel.org (Postfix) with ESMTP id 77B9FC43461 for ; Tue, 15 Sep 2020 11:45:56 +0000 (UTC) Received: from lists.gnu.org (lists.gnu.org [209.51.188.17]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by mail.kernel.org (Postfix) with ESMTPS id 9107620684 for ; Tue, 15 Sep 2020 11:45:55 +0000 (UTC) Authentication-Results: mail.kernel.org; dkim=fail reason="signature verification failed" (1024-bit key) header.d=redhat.com header.i=@redhat.com header.b="EcaBt27Z" DMARC-Filter: OpenDMARC Filter v1.3.2 mail.kernel.org 9107620684 Authentication-Results: mail.kernel.org; dmarc=fail (p=none dis=none) header.from=redhat.com Authentication-Results: mail.kernel.org; spf=pass smtp.mailfrom=qemu-devel-bounces+qemu-devel=archiver.kernel.org@nongnu.org Received: from localhost ([::1]:47436 helo=lists1p.gnu.org) by lists.gnu.org with esmtp (Exim 4.90_1) (envelope-from ) id 1kI9PG-0003RO-N0 for qemu-devel@archiver.kernel.org; Tue, 15 Sep 2020 07:45:54 -0400 Received: from eggs.gnu.org ([2001:470:142:3::10]:35942) by lists.gnu.org with esmtps (TLS1.2:ECDHE_RSA_AES_256_GCM_SHA384:256) (Exim 4.90_1) (envelope-from ) id 1kI9G5-0001lY-7R for qemu-devel@nongnu.org; Tue, 15 Sep 2020 07:36:25 -0400 Received: from us-smtp-delivery-1.mimecast.com ([205.139.110.120]:38965 helo=us-smtp-1.mimecast.com) by eggs.gnu.org with esmtps (TLS1.2:ECDHE_RSA_AES_256_CBC_SHA1:256) (Exim 4.90_1) (envelope-from ) id 1kI9Fz-0002xX-2x for qemu-devel@nongnu.org; Tue, 15 Sep 2020 07:36:24 -0400 DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=redhat.com; s=mimecast20190719; t=1600169778; h=from:from:reply-to:subject:subject:date:date:message-id:message-id: to:to:cc:cc:mime-version:mime-version:content-type:content-type: content-transfer-encoding:content-transfer-encoding: in-reply-to:in-reply-to:references:references; bh=rRV5wlhixDSx8E6LibeWTI488W6QUTJbSbRg/SexRjg=; b=EcaBt27ZNQLLJhf9Rq+pRyZhefRtOAcVYDfIBiSalMOK6WaxUAGoWCBRlRUYhvCIwIqou8 C4BRrUS3iLsl3LCsvI/n//amR0+yFjPV4X70nQnMnQUUhDH3exR+OTTJEw2HOQ1szpIoXp rW81a5KiA2wBNf3N1mYRZ9FQMQlCvZ4= Received: from mimecast-mx01.redhat.com (mimecast-mx01.redhat.com [209.132.183.4]) (Using TLS) by relay.mimecast.com with ESMTP id us-mta-252-GfclKoYFMLq2u5M8CXAU8w-1; Tue, 15 Sep 2020 07:36:14 -0400 X-MC-Unique: GfclKoYFMLq2u5M8CXAU8w-1 
From: Daniel P. Berrangé <berrange@redhat.com>
To: qemu-devel@nongnu.org
Subject: [PATCH v4 9/9] migration: introduce snapshot-{save, load, delete} QMP commands
Date: Tue, 15 Sep 2020 12:35:23 +0100
Message-Id: <20200915113523.2520317-10-berrange@redhat.com>
In-Reply-To: <20200915113523.2520317-1-berrange@redhat.com>
References: <20200915113523.2520317-1-berrange@redhat.com>
Cc: Kevin Wolf, Peter Krempa, Daniel P. Berrangé, "Denis V. Lunev",
    qemu-block@nongnu.org, Juan Quintela, "Dr. David Alan Gilbert",
    Markus Armbruster, Pavel Dovgalyuk, Paolo Bonzini, Max Reitz, John Snow
Sender: "Qemu-devel"

savevm, loadvm and delvm are some of the few HMP commands that have
never been converted to use QMP. The reasons for the lack of conversion
are that they blocked execution of the event thread, and the semantics
around the choice of disks were ill-defined.

Despite this downside, however, libvirt and applications using libvirt
have used these commands for as long as QMP has existed, via the
"human-monitor-command" passthrough command. In other words, while it is
clearly desirable to be able to fix the problems, they are not a blocker
to all real world usage.

Meanwhile there is a need for other features which involve adding new
parameters to the commands. This is possible with HMP passthrough, but
it provides no reliable way for applications to introspect features, so
using QAPI modelling is highly desirable.

This patch thus introduces new snapshot-{load,save,delete} commands to
QMP that are intended to replace the old HMP counterparts. The new
commands are given different names, because they will be using the new
QEMU job framework and thus will have diverging behaviour from the HMP
originals. It would thus be misleading to keep the same name.

While this design uses the generic job framework, the current
implementation is still blocking. The intention is that the blocking
problem will be fixed later.
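For illustration only, the following is a hypothetical sketch (not part of the patch) of what an in-process caller of the new snapshot-save handler looks like; the node names "diskfmt0"/"diskfmt1", the job id "snapsave0" and the helper name are invented examples. Real management applications would instead send the QMP command over the monitor socket.

    #include "qemu/osdep.h"
    #include "qapi/error.h"
    #include "qapi/qapi-builtin-types.h"
    #include "qapi/qapi-commands-migration.h"

    /* Hypothetical helper: kick off a snapshot-save job for two format nodes. */
    static void start_snapshot_save_job(Error **errp)
    {
        strList *dev1 = g_new0(strList, 1);
        strList *dev0 = g_new0(strList, 1);
        strList *devices;

        dev1->value = g_strdup("diskfmt1");
        dev0->value = g_strdup("diskfmt0");
        dev0->next = dev1;
        devices = dev0;

        /*
         * Returns as soon as the "snapsave0" job is created; the save itself
         * runs under the job framework and its completion or failure is
         * reported through the job state machine, not through this call.
         */
        qmp_snapshot_save("snapsave0", "snap0", "diskfmt0", devices, errp);

        /* The handler clones the device list, so the caller frees its copy. */
        qapi_free_strList(devices);
    }

Over the wire, the equivalent flow is the snapshot-save example in the schema documentation below, followed by query-jobs and job-dismiss once JOB_STATUS_CHANGE reports the job as concluded (the jobs are created with JOB_MANUAL_DISMISS).
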
None the less applications using these new commands should assume that they are asynchronous and thus wait for the job status change event to indicate completion. In addition to using the job framework, the new commands require the caller to be explicit about all the block device nodes used in the snapshot operations, with no built-in default heuristics in use. Signed-off-by: Daniel P. Berrangé --- migration/savevm.c | 183 ++++++++++++++++ qapi/job.json | 9 +- qapi/migration.json | 120 +++++++++++ tests/qemu-iotests/310 | 338 ++++++++++++++++++++++++++++++ tests/qemu-iotests/310.out | 412 +++++++++++++++++++++++++++++++++++++ tests/qemu-iotests/group | 1 + 6 files changed, 1062 insertions(+), 1 deletion(-) create mode 100755 tests/qemu-iotests/310 create mode 100644 tests/qemu-iotests/310.out diff --git a/migration/savevm.c b/migration/savevm.c index 56f85be250..85b50953ad 100644 --- a/migration/savevm.c +++ b/migration/savevm.c @@ -2980,3 +2980,186 @@ bool vmstate_check_only_migratable(const VMStateDescription *vmsd) return !(vmsd && vmsd->unmigratable); } + +typedef struct SnapshotJob { + Job common; + char *tag; + char *vmstate; + strList *devices; + Coroutine *co; + Error **errp; + int ret; +} SnapshotJob; + +static void qmp_snapshot_job_free(SnapshotJob *s) +{ + g_free(s->tag); + g_free(s->vmstate); + qapi_free_strList(s->devices); +} + + +static void snapshot_load_job_bh(void *opaque) +{ + Job *job = opaque; + SnapshotJob *s = container_of(job, SnapshotJob, common); + int saved_vm_running; + + job_progress_set_remaining(&s->common, 1); + + saved_vm_running = runstate_is_running(); + vm_stop(RUN_STATE_RESTORE_VM); + + s->ret = load_snapshot(s->tag, s->vmstate, true, s->devices, s->errp); + if (s->ret == 0 && saved_vm_running) { + vm_start(); + } + + job_progress_update(&s->common, 1); + + qmp_snapshot_job_free(s); + aio_co_wake(s->co); +} + +static void snapshot_save_job_bh(void *opaque) +{ + Job *job = opaque; + SnapshotJob *s = container_of(job, SnapshotJob, common); + + job_progress_set_remaining(&s->common, 1); + s->ret = save_snapshot(s->tag, false, s->vmstate, true, s->devices, s->errp); + job_progress_update(&s->common, 1); + + qmp_snapshot_job_free(s); + aio_co_wake(s->co); +} + +static void snapshot_delete_job_bh(void *opaque) +{ + Job *job = opaque; + SnapshotJob *s = container_of(job, SnapshotJob, common); + + job_progress_set_remaining(&s->common, 1); + s->ret = delete_snapshot(s->tag, true, s->devices, s->errp); + job_progress_update(&s->common, 1); + + qmp_snapshot_job_free(s); + aio_co_wake(s->co); +} + +static int coroutine_fn snapshot_save_job_run(Job *job, Error **errp) +{ + SnapshotJob *s = container_of(job, SnapshotJob, common); + s->errp = errp; + s->co = qemu_coroutine_self(); + aio_bh_schedule_oneshot(qemu_get_aio_context(), + snapshot_save_job_bh, job); + qemu_coroutine_yield(); + return s->ret; +} + +static int coroutine_fn snapshot_load_job_run(Job *job, Error **errp) +{ + SnapshotJob *s = container_of(job, SnapshotJob, common); + s->errp = errp; + s->co = qemu_coroutine_self(); + aio_bh_schedule_oneshot(qemu_get_aio_context(), + snapshot_load_job_bh, job); + qemu_coroutine_yield(); + return s->ret; +} + +static int coroutine_fn snapshot_delete_job_run(Job *job, Error **errp) +{ + SnapshotJob *s = container_of(job, SnapshotJob, common); + s->errp = errp; + s->co = qemu_coroutine_self(); + aio_bh_schedule_oneshot(qemu_get_aio_context(), + snapshot_delete_job_bh, job); + qemu_coroutine_yield(); + return s->ret; +} + + +static const JobDriver 
snapshot_load_job_driver = { + .instance_size = sizeof(SnapshotJob), + .job_type = JOB_TYPE_SNAPSHOT_LOAD, + .run = snapshot_load_job_run, +}; + +static const JobDriver snapshot_save_job_driver = { + .instance_size = sizeof(SnapshotJob), + .job_type = JOB_TYPE_SNAPSHOT_SAVE, + .run = snapshot_save_job_run, +}; + +static const JobDriver snapshot_delete_job_driver = { + .instance_size = sizeof(SnapshotJob), + .job_type = JOB_TYPE_SNAPSHOT_DELETE, + .run = snapshot_delete_job_run, +}; + + +void qmp_snapshot_save(const char *job_id, + const char *tag, + const char *vmstate, + strList *devices, + Error **errp) +{ + SnapshotJob *s; + + s = job_create(job_id, &snapshot_save_job_driver, NULL, + qemu_get_aio_context(), JOB_MANUAL_DISMISS, + NULL, NULL, errp); + if (!s) { + return; + } + + s->tag = g_strdup(tag); + s->vmstate = g_strdup(vmstate); + s->devices = QAPI_CLONE(strList, devices); + + job_start(&s->common); +} + +void qmp_snapshot_load(const char *job_id, + const char *tag, + const char *vmstate, + strList *devices, + Error **errp) +{ + SnapshotJob *s; + + s = job_create(job_id, &snapshot_load_job_driver, NULL, + qemu_get_aio_context(), JOB_MANUAL_DISMISS, + NULL, NULL, errp); + if (!s) { + return; + } + + s->tag = g_strdup(tag); + s->vmstate = g_strdup(vmstate); + s->devices = QAPI_CLONE(strList, devices); + + job_start(&s->common); +} + +void qmp_snapshot_delete(const char *job_id, + const char *tag, + strList *devices, + Error **errp) +{ + SnapshotJob *s; + + s = job_create(job_id, &snapshot_delete_job_driver, NULL, + qemu_get_aio_context(), JOB_MANUAL_DISMISS, + NULL, NULL, errp); + if (!s) { + return; + } + + s->tag = g_strdup(tag); + s->devices = QAPI_CLONE(strList, devices); + + job_start(&s->common); +} diff --git a/qapi/job.json b/qapi/job.json index 280c2f76f1..b2cbb4fead 100644 --- a/qapi/job.json +++ b/qapi/job.json @@ -22,10 +22,17 @@ # # @amend: image options amend job type, see "x-blockdev-amend" (since 5.1) # +# @snapshot-load: snapshot load job type, see "snapshot-load" (since 5.2) +# +# @snapshot-save: snapshot save job type, see "snapshot-save" (since 5.2) +# +# @snapshot-delete: snapshot delete job type, see "snapshot-delete" (since 5.2) +# # Since: 1.7 ## { 'enum': 'JobType', - 'data': ['commit', 'stream', 'mirror', 'backup', 'create', 'amend'] } + 'data': ['commit', 'stream', 'mirror', 'backup', 'create', 'amend', + 'snapshot-load', 'snapshot-save', 'snapshot-delete'] } ## # @JobStatus: diff --git a/qapi/migration.json b/qapi/migration.json index 675f70bb67..b584c0be31 100644 --- a/qapi/migration.json +++ b/qapi/migration.json @@ -1720,3 +1720,123 @@ ## { 'event': 'UNPLUG_PRIMARY', 'data': { 'device-id': 'str' } } + +## +# @snapshot-save: +# +# Save a VM snapshot +# +# @job-id: identifier for the newly created job +# @tag: name of the snapshot to create +# @devices: list of block device node names to save a snapshot to +# @vmstate: block device node name to save vmstate to +# +# Applications should not assume that the snapshot save is complete +# when this command returns. The job commands / events must be used +# to determine completion and to fetch details of any errors that arise. +# +# Note that the VM CPUs will be paused during the time it takes to +# save the snapshot +# +# It is strongly recommended that @devices contain all writable +# block device nodes if a consistent snapshot is required. 
+# +# If @tag already exists, an error will be reported +# +# Returns: nothing +# +# Example: +# +# -> { "execute": "snapshot-save", +# "data": { +# "job-id": "snapsave0", +# "tag": "my-snap", +# "vmstate": "disk0", +# "devices": ["disk0", "disk1"] +# } +# } +# <- { "return": { } } +# +# Since: 5.2 +## +{ 'command': 'snapshot-save', + 'data': { 'job-id': 'str', + 'tag': 'str', + 'vmstate': 'str', + 'devices': ['str'] } } + +## +# @snapshot-load: +# +# Load a VM snapshot +# +# @job-id: identifier for the newly created job +# @tag: name of the snapshot to load. +# @devices: list of block device node names to load a snapshot from +# @vmstate: block device node name to load vmstate from +# +# Applications should not assume that the snapshot save is complete +# when this command returns. The job commands / events must be used +# to determine completion and to fetch details of any errors that arise. +# +# Note that the VM CPUs will be paused during the time it takes to +# save the snapshot +# +# It is strongly recommended that @devices contain all writable +# block device nodes that can have changed since the original +# @snapshot-save command execution. +# +# Returns: nothing +# +# Example: +# +# -> { "execute": "snapshot-load", +# "data": { +# "job-id": "snapload0", +# "tag": "my-snap", +# "vmstate": "disk0", +# "devices": ["disk0", "disk1"] +# } +# } +# <- { "return": { } } +# +# Since: 5.2 +## +{ 'command': 'snapshot-load', + 'data': { 'job-id': 'str', + 'tag': 'str', + 'vmstate': 'str', + 'devices': ['str'] } } + +## +# @snapshot-delete: +# +# Delete a VM snapshot +# +# @job-id: identifier for the newly created job +# @tag: name of the snapshot to delete. +# @devices: list of block device node names to delete a snapshot from +# +# Applications should not assume that the snapshot save is complete +# when this command returns. The job commands / events must be used +# to determine completion and to fetch details of any errors that arise. +# +# Returns: nothing +# +# Example: +# +# -> { "execute": "snapshot-delete", +# "data": { +# "job-id": "snapdelete0", +# "tag": "my-snap", +# "devices": ["disk0", "disk1"] +# } +# } +# <- { "return": { } } +# +# Since: 5.2 +## +{ 'command': 'snapshot-delete', + 'data': { 'job-id': 'str', + 'tag': 'str', + 'devices': ['str'] } } diff --git a/tests/qemu-iotests/310 b/tests/qemu-iotests/310 new file mode 100755 index 0000000000..622451c79f --- /dev/null +++ b/tests/qemu-iotests/310 @@ -0,0 +1,338 @@ +#!/usr/bin/env bash +# +# Test which nodes are involved in internal snapshots +# +# Copyright (C) 2020 Red Hat, Inc. +# +# This program is free software; you can redistribute it and/or modify +# it under the terms of the GNU General Public License as published by +# the Free Software Foundation; either version 2 of the License, or +# (at your option) any later version. +# +# This program is distributed in the hope that it will be useful, +# but WITHOUT ANY WARRANTY; without even the implied warranty of +# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the +# GNU General Public License for more details. +# +# You should have received a copy of the GNU General Public License +# along with this program. If not, see . +# + +# creator +owner=berrange@redhat.com + +seq=`basename $0` +echo "QA output created by $seq" + +status=1 # failure is the default! 
+ +_cleanup() +{ + _cleanup_qemu + _cleanup_test_img + TEST_IMG="$TEST_IMG.alt1" _cleanup_test_img + TEST_IMG="$TEST_IMG.alt2" _cleanup_test_img + rm -f "$SOCK_DIR/nbd" +} +trap "_cleanup; exit \$status" 0 1 2 3 15 + +# get standard environment, filters and checks +. ./common.rc +. ./common.filter +. ./common.qemu + +_supported_fmt qcow2 +_supported_proto file +_supported_os Linux +_require_drivers copy-on-read + +# Internal snapshots are (currently) impossible with refcount_bits=1, +# and generally impossible with external data files +_unsupported_imgopts 'refcount_bits=1[^0-9]' data_file + +_require_devices virtio-blk + + +size=128M + +if [ -n "$BACKING_FILE" ]; then + _make_test_img -b "$BACKING_FILE" -F $IMGFMT $size +else + _make_test_img $size +fi +TEST_IMG="$TEST_IMG.alt1" _make_test_img $size +IMGOPTS= IMGFMT=raw TEST_IMG="$TEST_IMG.alt2" _make_test_img $size + +export capture_events="JOB_STATUS_CHANGE STOP RESUME" + +wait_job() +{ + local job=$1 + shift + + # All jobs start with two events... + # + # created + _wait_event $QEMU_HANDLE "JOB_STATUS_CHANGE" + # running + _wait_event $QEMU_HANDLE "JOB_STATUS_CHANGE" + + # Next events vary depending on job type and + # whether it succeeds or not. + for evname in $@ + do + _wait_event $QEMU_HANDLE $evname + done + + # All jobs finish off with two more events... + # concluded + _wait_event $QEMU_HANDLE "JOB_STATUS_CHANGE" + _send_qemu_cmd $QEMU_HANDLE "{\"execute\": \"query-jobs\"}" "return" + _send_qemu_cmd $QEMU_HANDLE "{\"execute\": \"job-dismiss\", \"arguments\": {\"id\": \"$job\"}}" "return" + # null + _wait_event $QEMU_HANDLE "JOB_STATUS_CHANGE" +} + +run_save() +{ + local job=$1 + local vmstate=$2 + local devices=$3 + local fail=$4 + + _send_qemu_cmd $QEMU_HANDLE "{\"execute\": \"snapshot-save\", + \"arguments\": { + \"job-id\": \"$job\", + \"tag\": \"snap0\", + \"vmstate\": \"$vmstate\", + \"devices\": $devices}}" "return" + + if [ $fail = 0 ]; then + # job status: waiting, pending + wait_job $job "STOP" "RESUME" "JOB_STATUS_CHANGE" "JOB_STATUS_CHANGE" + else + # job status: aborting + wait_job $job "JOB_STATUS_CHANGE" + fi +} + +run_load() +{ + local job=$1 + local vmstate=$2 + local devices=$3 + local fail=$4 + + _send_qemu_cmd $QEMU_HANDLE "{\"execute\": \"snapshot-load\", + \"arguments\": { + \"job-id\": \"$job\", + \"tag\": \"snap0\", + \"vmstate\": \"$vmstate\", + \"devices\": $devices}}" "return" + if [ $fail = 0 ]; then + # job status: waiting, pending + wait_job $job "STOP" "RESUME" "JOB_STATUS_CHANGE" "JOB_STATUS_CHANGE" + else + # job status: aborting + wait_job $job "STOP" "JOB_STATUS_CHANGE" + fi +} + +run_delete() +{ + local job=$1 + local devices=$2 + local fail=$3 + + _send_qemu_cmd $QEMU_HANDLE "{\"execute\": \"snapshot-delete\", + \"arguments\": { + \"job-id\": \"$job\", + \"tag\": \"snap0\", + \"devices\": $devices}}" "return" + if [ $fail = 0 ]; then + # job status: waiting, pending + wait_job $job "JOB_STATUS_CHANGE" "JOB_STATUS_CHANGE" + else + # job status: aborting + wait_job $job "JOB_STATUS_CHANGE" + fi +} + +start_qemu() +{ + keep_stderr=y + _launch_qemu -nodefaults -nographic "$@" + + _send_qemu_cmd $QEMU_HANDLE '{"execute": "qmp_capabilities"}' 'return' +} + +stop_qemu() +{ + _send_qemu_cmd $QEMU_HANDLE '{"execute": "quit"}' 'return' + + wait=1 _cleanup_qemu +} + +run_test() +{ + local job=$1 + local vmstate=$2 + local devices=$3 + local savefail=$4 + local loadfail=$5 + local delfail=$6 + shift + shift + shift + shift + shift + shift + + start_qemu $@ + run_save "save-$job" "$vmstate" "$devices" 
"$savefail" + run_load "load-$job" "$vmstate" "$devices" "$loadfail" + run_delete "delete-$job" "$devices" "$delfail" + stop_qemu +} + + +echo +echo "===== Snapshot single qcow2 image =====" +echo + +run_test "simple" "diskfmt0" "[\"diskfmt0\"]" 0 0 0 \ + -blockdev "{'driver':'file','filename':'$TEST_IMG','node-name':'disk0'}" \ + -blockdev "{'driver':'qcow2','file':'disk0','node-name':'diskfmt0'}" + + +echo +echo "===== Snapshot no image =====" +echo + +# When snapshotting we need to pass at least one writable disk +# otherwise there's no work to do + +run_test "no-image" "diskfmt0" "[]" 1 1 1 \ + -blockdev "{'driver':'file','filename':'$TEST_IMG','node-name':'disk0'}" \ + -blockdev "{'driver':'qcow2','file':'disk0','node-name':'diskfmt0'}" + + +echo +echo "===== Snapshot missing image =====" +echo + +# The block node names we pass need to actually exist + +run_test "missing-image" "diskfmt1729" "[\"diskfmt1729\"]" 1 1 1 \ + -blockdev "{'driver':'file','filename':'$TEST_IMG','node-name':'disk0'}" \ + -blockdev "{'driver':'qcow2','file':'disk0','node-name':'diskfmt0'}" + + +echo +echo "===== Snapshot vmstate not in devices list =====" +echo + +# The node name referred to for vmstate must be one of the nodes +# being included in the snapshot, otherwise the vmstate that is +# captured is liable to be overwritten making subsequent load +# impossible + +run_test "excluded-vmstate" "diskfmt0" "[\"diskfmt1\"]" 1 1 0 \ + -blockdev "{'driver':'file','filename':'$TEST_IMG','node-name':'disk0'}" \ + -blockdev "{'driver':'qcow2','file':'disk0','node-name':'diskfmt0'}" \ + -blockdev "{'driver':'file','filename':'$TEST_IMG.alt1','node-name':'disk1'}" \ + -blockdev "{'driver':'qcow2','file':'disk1','node-name':'diskfmt1'}" + + +echo +echo "===== Snapshot protocol instead of format =====" +echo + +# The snapshot has to be done against the qcow2 format layer +# not the underlying file protocol layer + +run_test "proto-not-fmt" "disk0" "[\"disk0\"]" 1 1 1 \ + -blockdev "{'driver':'file','filename':'$TEST_IMG','node-name':'disk0'}" \ + -blockdev "{'driver':'qcow2','file':'disk0','node-name':'diskfmt0'}" + + +echo +echo "===== Snapshot dual qcow2 image =====" +echo + +# We can snapshot multiple qcow2 disks at the same time + +run_test "dual-image" "diskfmt0" "[\"diskfmt0\", \"diskfmt1\"]" 0 0 0 \ + -blockdev "{'driver':'file','filename':'$TEST_IMG','node-name':'disk0'}" \ + -blockdev "{'driver':'qcow2','file':'disk0','node-name':'diskfmt0'}" \ + -blockdev "{'driver':'file','filename':'$TEST_IMG.alt1','node-name':'disk1'}" \ + -blockdev "{'driver':'qcow2','file':'disk1','node-name':'diskfmt1'}" + + +echo +echo "===== Snapshot error with raw image =====" +echo + +# If we're snapshotting multiple disks, all must be capable +# of supporting snapshots. A raw disk in the list must cause +# an error. + +run_test "raw-fmt" "diskfmt0" "[\"diskfmt0\", \"diskfmt1\", \"diskfmt2\"]" 1 1 1 \ + -blockdev "{'driver':'file','filename':'$TEST_IMG','node-name':'disk0'}" \ + -blockdev "{'driver':'qcow2','file':'disk0','node-name':'diskfmt0'}" \ + -blockdev "{'driver':'file','filename':'$TEST_IMG.alt1','node-name':'disk1'}" \ + -blockdev "{'driver':'qcow2','file':'disk1','node-name':'diskfmt1'}" \ + -blockdev "{'driver':'file','filename':'$TEST_IMG.alt2','node-name':'disk2'}" \ + -blockdev "{'driver':'raw','file':'disk2','node-name':'diskfmt2'}" + + +echo +echo "===== Snapshot with raw image excluded =====" +echo + +# If we're snapshotting multiple disks, all must be capable +# of supporting snapshots. 
A writable raw disk can be excluded +# from the snapshot, though it means its data won't be restored +# by later snapshot load operation. + +run_test "skip-raw" "diskfmt0" "[\"diskfmt0\", \"diskfmt1\"]" 0 0 0 \ + -blockdev "{'driver':'file','filename':'$TEST_IMG','node-name':'disk0'}" \ + -blockdev "{'driver':'qcow2','file':'disk0','node-name':'diskfmt0'}" \ + -blockdev "{'driver':'file','filename':'$TEST_IMG.alt1','node-name':'disk1'}" \ + -blockdev "{'driver':'qcow2','file':'disk1','node-name':'diskfmt1'}" \ + -blockdev "{'driver':'file','filename':'$TEST_IMG.alt2','node-name':'disk2'}" \ + -blockdev "{'driver':'raw','file':'disk2','node-name':'diskfmt2'}" + + +echo +echo "===== Snapshot bad error reporting to stderr =====" +echo + +# This demonstrates that we're not capturing vmstate loading failures +# into QMP errors, they're ending up in stderr instead. vmstate needs +# to report errors via Error object but that is a major piece of work +# for the future. This test case's expected output log will need +# adjusting when that is done. + +start_qemu \ + -device virtio-rng \ + -blockdev "{'driver':'file','filename':'$TEST_IMG','node-name':'disk0'}" \ + -blockdev "{'driver':'qcow2','file':'disk0','node-name':'diskfmt0'}" + +run_save "save-err-stderr" "diskfmt0" "[\"diskfmt0\"]" 0 +stop_qemu + +# leave off virtio-rng to provoke vmstate failure +start_qemu \ + -blockdev "{'driver':'file','filename':'$TEST_IMG','node-name':'disk0'}" \ + -blockdev "{'driver':'qcow2','file':'disk0','node-name':'diskfmt0'}" + +run_load "load-err-stderr" "diskfmt0" "[\"diskfmt0\"]" 1 +run_delete "delete-err-stderr" "[\"diskfmt0\"]" 0 + +stop_qemu + +# success, all done +echo "*** done" +rm -f $seq.full +status=0 diff --git a/tests/qemu-iotests/310.out b/tests/qemu-iotests/310.out new file mode 100644 index 0000000000..b2001b2551 --- /dev/null +++ b/tests/qemu-iotests/310.out @@ -0,0 +1,412 @@ +QA output created by 310 +Formatting 'TEST_DIR/t.IMGFMT', fmt=IMGFMT size=134217728 +Formatting 'TEST_DIR/t.IMGFMT.alt1', fmt=IMGFMT size=134217728 +Formatting 'TEST_DIR/t.qcow2.alt2', fmt=IMGFMT size=134217728 + +===== Snapshot single qcow2 image ===== + +{"execute": "qmp_capabilities"} +{"return": {}} +{"execute": "snapshot-save", "arguments": {"job-id": "save-simple", "tag": "snap0", "vmstate": "diskfmt0", "devices": ["diskfmt0"]}} +{"return": {}} +{"timestamp": {"seconds": TIMESTAMP, "microseconds": TIMESTAMP}, "event": "JOB_STATUS_CHANGE", "data": {"status": "created", "id": "save-simple"}} +{"timestamp": {"seconds": TIMESTAMP, "microseconds": TIMESTAMP}, "event": "JOB_STATUS_CHANGE", "data": {"status": "running", "id": "save-simple"}} +{"timestamp": {"seconds": TIMESTAMP, "microseconds": TIMESTAMP}, "event": "STOP"} +{"timestamp": {"seconds": TIMESTAMP, "microseconds": TIMESTAMP}, "event": "RESUME"} +{"timestamp": {"seconds": TIMESTAMP, "microseconds": TIMESTAMP}, "event": "JOB_STATUS_CHANGE", "data": {"status": "waiting", "id": "save-simple"}} +{"timestamp": {"seconds": TIMESTAMP, "microseconds": TIMESTAMP}, "event": "JOB_STATUS_CHANGE", "data": {"status": "pending", "id": "save-simple"}} +{"timestamp": {"seconds": TIMESTAMP, "microseconds": TIMESTAMP}, "event": "JOB_STATUS_CHANGE", "data": {"status": "concluded", "id": "save-simple"}} +{"execute": "query-jobs"} +{"return": [{"current-progress": 1, "status": "concluded", "total-progress": 1, "type": "snapshot-save", "id": "save-simple"}]} +{"execute": "job-dismiss", "arguments": {"id": "save-simple"}} +{"return": {}} +{"timestamp": {"seconds": TIMESTAMP, 
"microseconds": TIMESTAMP}, "event": "JOB_STATUS_CHANGE", "data": {"status": "null", "id": "save-simple"}} +{"execute": "snapshot-load", "arguments": {"job-id": "load-simple", "tag": "snap0", "vmstate": "diskfmt0", "devices": ["diskfmt0"]}} +{"return": {}} +{"timestamp": {"seconds": TIMESTAMP, "microseconds": TIMESTAMP}, "event": "JOB_STATUS_CHANGE", "data": {"status": "created", "id": "load-simple"}} +{"timestamp": {"seconds": TIMESTAMP, "microseconds": TIMESTAMP}, "event": "JOB_STATUS_CHANGE", "data": {"status": "running", "id": "load-simple"}} +{"timestamp": {"seconds": TIMESTAMP, "microseconds": TIMESTAMP}, "event": "STOP"} +{"timestamp": {"seconds": TIMESTAMP, "microseconds": TIMESTAMP}, "event": "RESUME"} +{"timestamp": {"seconds": TIMESTAMP, "microseconds": TIMESTAMP}, "event": "JOB_STATUS_CHANGE", "data": {"status": "waiting", "id": "load-simple"}} +{"timestamp": {"seconds": TIMESTAMP, "microseconds": TIMESTAMP}, "event": "JOB_STATUS_CHANGE", "data": {"status": "pending", "id": "load-simple"}} +{"timestamp": {"seconds": TIMESTAMP, "microseconds": TIMESTAMP}, "event": "JOB_STATUS_CHANGE", "data": {"status": "concluded", "id": "load-simple"}} +{"execute": "query-jobs"} +{"return": [{"current-progress": 1, "status": "concluded", "total-progress": 1, "type": "snapshot-load", "id": "load-simple"}]} +{"execute": "job-dismiss", "arguments": {"id": "load-simple"}} +{"return": {}} +{"timestamp": {"seconds": TIMESTAMP, "microseconds": TIMESTAMP}, "event": "JOB_STATUS_CHANGE", "data": {"status": "null", "id": "load-simple"}} +{"execute": "snapshot-delete", "arguments": {"job-id": "delete-simple", "tag": "snap0", "devices": ["diskfmt0"]}} +{"return": {}} +{"timestamp": {"seconds": TIMESTAMP, "microseconds": TIMESTAMP}, "event": "JOB_STATUS_CHANGE", "data": {"status": "created", "id": "delete-simple"}} +{"timestamp": {"seconds": TIMESTAMP, "microseconds": TIMESTAMP}, "event": "JOB_STATUS_CHANGE", "data": {"status": "running", "id": "delete-simple"}} +{"timestamp": {"seconds": TIMESTAMP, "microseconds": TIMESTAMP}, "event": "JOB_STATUS_CHANGE", "data": {"status": "waiting", "id": "delete-simple"}} +{"timestamp": {"seconds": TIMESTAMP, "microseconds": TIMESTAMP}, "event": "JOB_STATUS_CHANGE", "data": {"status": "pending", "id": "delete-simple"}} +{"timestamp": {"seconds": TIMESTAMP, "microseconds": TIMESTAMP}, "event": "JOB_STATUS_CHANGE", "data": {"status": "concluded", "id": "delete-simple"}} +{"execute": "query-jobs"} +{"return": [{"current-progress": 1, "status": "concluded", "total-progress": 1, "type": "snapshot-delete", "id": "delete-simple"}]} +{"execute": "job-dismiss", "arguments": {"id": "delete-simple"}} +{"return": {}} +{"timestamp": {"seconds": TIMESTAMP, "microseconds": TIMESTAMP}, "event": "JOB_STATUS_CHANGE", "data": {"status": "null", "id": "delete-simple"}} +{"execute": "quit"} +{"return": {}} +{"timestamp": {"seconds": TIMESTAMP, "microseconds": TIMESTAMP}, "event": "SHUTDOWN", "data": {"guest": false, "reason": "host-qmp-quit"}} + +===== Snapshot no image ===== + +{"execute": "qmp_capabilities"} +{"return": {}} +{"execute": "snapshot-save", "arguments": {"job-id": "save-no-image", "tag": "snap0", "vmstate": "diskfmt0", "devices": []}} +{"return": {}} +{"timestamp": {"seconds": TIMESTAMP, "microseconds": TIMESTAMP}, "event": "JOB_STATUS_CHANGE", "data": {"status": "created", "id": "save-no-image"}} +{"timestamp": {"seconds": TIMESTAMP, "microseconds": TIMESTAMP}, "event": "JOB_STATUS_CHANGE", "data": {"status": "running", "id": "save-no-image"}} +{"timestamp": {"seconds": 
TIMESTAMP, "microseconds": TIMESTAMP}, "event": "JOB_STATUS_CHANGE", "data": {"status": "aborting", "id": "save-no-image"}} +{"timestamp": {"seconds": TIMESTAMP, "microseconds": TIMESTAMP}, "event": "JOB_STATUS_CHANGE", "data": {"status": "concluded", "id": "save-no-image"}} +{"execute": "query-jobs"} +{"return": [{"current-progress": 1, "status": "concluded", "total-progress": 1, "type": "snapshot-save", "id": "save-no-image", "error": "At least one device is required for snapshot"}]} +{"execute": "job-dismiss", "arguments": {"id": "save-no-image"}} +{"return": {}} +{"timestamp": {"seconds": TIMESTAMP, "microseconds": TIMESTAMP}, "event": "JOB_STATUS_CHANGE", "data": {"status": "null", "id": "save-no-image"}} +{"execute": "snapshot-load", "arguments": {"job-id": "load-no-image", "tag": "snap0", "vmstate": "diskfmt0", "devices": []}} +{"return": {}} +{"timestamp": {"seconds": TIMESTAMP, "microseconds": TIMESTAMP}, "event": "JOB_STATUS_CHANGE", "data": {"status": "created", "id": "load-no-image"}} +{"timestamp": {"seconds": TIMESTAMP, "microseconds": TIMESTAMP}, "event": "JOB_STATUS_CHANGE", "data": {"status": "running", "id": "load-no-image"}} +{"timestamp": {"seconds": TIMESTAMP, "microseconds": TIMESTAMP}, "event": "STOP"} +{"timestamp": {"seconds": TIMESTAMP, "microseconds": TIMESTAMP}, "event": "JOB_STATUS_CHANGE", "data": {"status": "aborting", "id": "load-no-image"}} +{"timestamp": {"seconds": TIMESTAMP, "microseconds": TIMESTAMP}, "event": "JOB_STATUS_CHANGE", "data": {"status": "concluded", "id": "load-no-image"}} +{"execute": "query-jobs"} +{"return": [{"current-progress": 1, "status": "concluded", "total-progress": 1, "type": "snapshot-load", "id": "load-no-image", "error": "At least one device is required for snapshot"}]} +{"execute": "job-dismiss", "arguments": {"id": "load-no-image"}} +{"return": {}} +{"timestamp": {"seconds": TIMESTAMP, "microseconds": TIMESTAMP}, "event": "JOB_STATUS_CHANGE", "data": {"status": "null", "id": "load-no-image"}} +{"execute": "snapshot-delete", "arguments": {"job-id": "delete-no-image", "tag": "snap0", "devices": []}} +{"return": {}} +{"timestamp": {"seconds": TIMESTAMP, "microseconds": TIMESTAMP}, "event": "JOB_STATUS_CHANGE", "data": {"status": "created", "id": "delete-no-image"}} +{"timestamp": {"seconds": TIMESTAMP, "microseconds": TIMESTAMP}, "event": "JOB_STATUS_CHANGE", "data": {"status": "running", "id": "delete-no-image"}} +{"timestamp": {"seconds": TIMESTAMP, "microseconds": TIMESTAMP}, "event": "JOB_STATUS_CHANGE", "data": {"status": "aborting", "id": "delete-no-image"}} +{"timestamp": {"seconds": TIMESTAMP, "microseconds": TIMESTAMP}, "event": "JOB_STATUS_CHANGE", "data": {"status": "concluded", "id": "delete-no-image"}} +{"execute": "query-jobs"} +{"return": [{"current-progress": 1, "status": "concluded", "total-progress": 1, "type": "snapshot-delete", "id": "delete-no-image", "error": "At least one device is required for snapshot"}]} +{"execute": "job-dismiss", "arguments": {"id": "delete-no-image"}} +{"return": {}} +{"timestamp": {"seconds": TIMESTAMP, "microseconds": TIMESTAMP}, "event": "JOB_STATUS_CHANGE", "data": {"status": "null", "id": "delete-no-image"}} +{"execute": "quit"} +{"return": {}} +{"timestamp": {"seconds": TIMESTAMP, "microseconds": TIMESTAMP}, "event": "SHUTDOWN", "data": {"guest": false, "reason": "host-qmp-quit"}} + +===== Snapshot missing image ===== + +{"execute": "qmp_capabilities"} +{"return": {}} +{"execute": "snapshot-save", "arguments": {"job-id": "save-missing-image", "tag": "snap0", "vmstate": 
"diskfmt1729", "devices": ["diskfmt1729"]}} +{"return": {}} +{"timestamp": {"seconds": TIMESTAMP, "microseconds": TIMESTAMP}, "event": "JOB_STATUS_CHANGE", "data": {"status": "created", "id": "save-missing-image"}} +{"timestamp": {"seconds": TIMESTAMP, "microseconds": TIMESTAMP}, "event": "JOB_STATUS_CHANGE", "data": {"status": "running", "id": "save-missing-image"}} +{"timestamp": {"seconds": TIMESTAMP, "microseconds": TIMESTAMP}, "event": "JOB_STATUS_CHANGE", "data": {"status": "aborting", "id": "save-missing-image"}} +{"timestamp": {"seconds": TIMESTAMP, "microseconds": TIMESTAMP}, "event": "JOB_STATUS_CHANGE", "data": {"status": "concluded", "id": "save-missing-image"}} +{"execute": "query-jobs"} +{"return": [{"current-progress": 1, "status": "concluded", "total-progress": 1, "type": "snapshot-save", "id": "save-missing-image", "error": "No block device node 'diskfmt1729'"}]} +{"execute": "job-dismiss", "arguments": {"id": "save-missing-image"}} +{"return": {}} +{"timestamp": {"seconds": TIMESTAMP, "microseconds": TIMESTAMP}, "event": "JOB_STATUS_CHANGE", "data": {"status": "null", "id": "save-missing-image"}} +{"execute": "snapshot-load", "arguments": {"job-id": "load-missing-image", "tag": "snap0", "vmstate": "diskfmt1729", "devices": ["diskfmt1729"]}} +{"return": {}} +{"timestamp": {"seconds": TIMESTAMP, "microseconds": TIMESTAMP}, "event": "JOB_STATUS_CHANGE", "data": {"status": "created", "id": "load-missing-image"}} +{"timestamp": {"seconds": TIMESTAMP, "microseconds": TIMESTAMP}, "event": "JOB_STATUS_CHANGE", "data": {"status": "running", "id": "load-missing-image"}} +{"timestamp": {"seconds": TIMESTAMP, "microseconds": TIMESTAMP}, "event": "STOP"} +{"timestamp": {"seconds": TIMESTAMP, "microseconds": TIMESTAMP}, "event": "JOB_STATUS_CHANGE", "data": {"status": "aborting", "id": "load-missing-image"}} +{"timestamp": {"seconds": TIMESTAMP, "microseconds": TIMESTAMP}, "event": "JOB_STATUS_CHANGE", "data": {"status": "concluded", "id": "load-missing-image"}} +{"execute": "query-jobs"} +{"return": [{"current-progress": 1, "status": "concluded", "total-progress": 1, "type": "snapshot-load", "id": "load-missing-image", "error": "No block device node 'diskfmt1729'"}]} +{"execute": "job-dismiss", "arguments": {"id": "load-missing-image"}} +{"return": {}} +{"timestamp": {"seconds": TIMESTAMP, "microseconds": TIMESTAMP}, "event": "JOB_STATUS_CHANGE", "data": {"status": "null", "id": "load-missing-image"}} +{"execute": "snapshot-delete", "arguments": {"job-id": "delete-missing-image", "tag": "snap0", "devices": ["diskfmt1729"]}} +{"return": {}} +{"timestamp": {"seconds": TIMESTAMP, "microseconds": TIMESTAMP}, "event": "JOB_STATUS_CHANGE", "data": {"status": "created", "id": "delete-missing-image"}} +{"timestamp": {"seconds": TIMESTAMP, "microseconds": TIMESTAMP}, "event": "JOB_STATUS_CHANGE", "data": {"status": "running", "id": "delete-missing-image"}} +{"timestamp": {"seconds": TIMESTAMP, "microseconds": TIMESTAMP}, "event": "JOB_STATUS_CHANGE", "data": {"status": "aborting", "id": "delete-missing-image"}} +{"timestamp": {"seconds": TIMESTAMP, "microseconds": TIMESTAMP}, "event": "JOB_STATUS_CHANGE", "data": {"status": "concluded", "id": "delete-missing-image"}} +{"execute": "query-jobs"} +{"return": [{"current-progress": 1, "status": "concluded", "total-progress": 1, "type": "snapshot-delete", "id": "delete-missing-image", "error": "No block device node 'diskfmt1729'"}]} +{"execute": "job-dismiss", "arguments": {"id": "delete-missing-image"}} +{"return": {}} +{"timestamp": {"seconds": 
TIMESTAMP, "microseconds": TIMESTAMP}, "event": "JOB_STATUS_CHANGE", "data": {"status": "null", "id": "delete-missing-image"}} +{"execute": "quit"} +{"return": {}} +{"timestamp": {"seconds": TIMESTAMP, "microseconds": TIMESTAMP}, "event": "SHUTDOWN", "data": {"guest": false, "reason": "host-qmp-quit"}} + +===== Snapshot vmstate not in devices list ===== + +{"execute": "qmp_capabilities"} +{"return": {}} +{"execute": "snapshot-save", "arguments": { "job-id": "save-excluded-vmstate", "tag": "snap0", "vmstate": "diskfmt0", "devices": ["diskfmt1"]}} +{"return": {}} +{"timestamp": {"seconds": TIMESTAMP, "microseconds": TIMESTAMP}, "event": "JOB_STATUS_CHANGE", "data": {"status": "created", "id": "save-excluded-vmstate"}} +{"timestamp": {"seconds": TIMESTAMP, "microseconds": TIMESTAMP}, "event": "JOB_STATUS_CHANGE", "data": {"status": "running", "id": "save-excluded-vmstate"}} +{"timestamp": {"seconds": TIMESTAMP, "microseconds": TIMESTAMP}, "event": "JOB_STATUS_CHANGE", "data": {"status": "aborting", "id": "save-excluded-vmstate"}} +{"timestamp": {"seconds": TIMESTAMP, "microseconds": TIMESTAMP}, "event": "JOB_STATUS_CHANGE", "data": {"status": "concluded", "id": "save-excluded-vmstate"}} +{"execute": "query-jobs"} +{"return": [{"current-progress": 1, "status": "concluded", "total-progress": 1, "type": "snapshot-save", "id": "save-excluded-vmstate", "error": "vmstate block device 'diskfmt0' does not exist"}]} +{"execute": "job-dismiss", "arguments": {"id": "save-excluded-vmstate"}} +{"return": {}} +{"timestamp": {"seconds": TIMESTAMP, "microseconds": TIMESTAMP}, "event": "JOB_STATUS_CHANGE", "data": {"status": "null", "id": "save-excluded-vmstate"}} +{"execute": "snapshot-load", "arguments": { "job-id": "load-excluded-vmstate", "tag": "snap0", "vmstate": "diskfmt0", "devices": ["diskfmt1"]}} +{"return": {}} +{"timestamp": {"seconds": TIMESTAMP, "microseconds": TIMESTAMP}, "event": "JOB_STATUS_CHANGE", "data": {"status": "created", "id": "load-excluded-vmstate"}} +{"timestamp": {"seconds": TIMESTAMP, "microseconds": TIMESTAMP}, "event": "JOB_STATUS_CHANGE", "data": {"status": "running", "id": "load-excluded-vmstate"}} +{"timestamp": {"seconds": TIMESTAMP, "microseconds": TIMESTAMP}, "event": "STOP"} +{"timestamp": {"seconds": TIMESTAMP, "microseconds": TIMESTAMP}, "event": "JOB_STATUS_CHANGE", "data": {"status": "aborting", "id": "load-excluded-vmstate"}} +{"timestamp": {"seconds": TIMESTAMP, "microseconds": TIMESTAMP}, "event": "JOB_STATUS_CHANGE", "data": {"status": "concluded", "id": "load-excluded-vmstate"}} +{"execute": "query-jobs"} +{"return": [{"current-progress": 1, "status": "concluded", "total-progress": 1, "type": "snapshot-load", "id": "load-excluded-vmstate", "error": "Could not find snapshot 'snap0' on 'diskfmt1'"}]} +{"execute": "job-dismiss", "arguments": {"id": "load-excluded-vmstate"}} +{"return": {}} +{"timestamp": {"seconds": TIMESTAMP, "microseconds": TIMESTAMP}, "event": "JOB_STATUS_CHANGE", "data": {"status": "null", "id": "load-excluded-vmstate"}} +{"execute": "snapshot-delete", "arguments": { "job-id": "delete-excluded-vmstate", "tag": "snap0", "devices": ["diskfmt1"]}} +{"return": {}} +{"timestamp": {"seconds": TIMESTAMP, "microseconds": TIMESTAMP}, "event": "JOB_STATUS_CHANGE", "data": {"status": "created", "id": "delete-excluded-vmstate"}} +{"timestamp": {"seconds": TIMESTAMP, "microseconds": TIMESTAMP}, "event": "JOB_STATUS_CHANGE", "data": {"status": "running", "id": "delete-excluded-vmstate"}} +{"timestamp": {"seconds": TIMESTAMP, "microseconds": TIMESTAMP}, 
"event": "JOB_STATUS_CHANGE", "data": {"status": "waiting", "id": "delete-excluded-vmstate"}} +{"timestamp": {"seconds": TIMESTAMP, "microseconds": TIMESTAMP}, "event": "JOB_STATUS_CHANGE", "data": {"status": "pending", "id": "delete-excluded-vmstate"}} +{"timestamp": {"seconds": TIMESTAMP, "microseconds": TIMESTAMP}, "event": "JOB_STATUS_CHANGE", "data": {"status": "concluded", "id": "delete-excluded-vmstate"}} +{"execute": "query-jobs"} +{"return": [{"current-progress": 1, "status": "concluded", "total-progress": 1, "type": "snapshot-delete", "id": "delete-excluded-vmstate"}]} +{"execute": "job-dismiss", "arguments": {"id": "delete-excluded-vmstate"}} +{"return": {}} +{"timestamp": {"seconds": TIMESTAMP, "microseconds": TIMESTAMP}, "event": "JOB_STATUS_CHANGE", "data": {"status": "null", "id": "delete-excluded-vmstate"}} +{"execute": "quit"} +{"return": {}} +{"timestamp": {"seconds": TIMESTAMP, "microseconds": TIMESTAMP}, "event": "SHUTDOWN", "data": {"guest": false, "reason": "host-qmp-quit"}} + +===== Snapshot protocol instead of format ===== + +{"execute": "qmp_capabilities"} +{"return": {}} +{"execute": "snapshot-save", "arguments": {"job-id": "save-proto-not-fmt", "tag": "snap0", "vmstate": "disk0", "devices": ["disk0"]}} +{"return": {}} +{"timestamp": {"seconds": TIMESTAMP, "microseconds": TIMESTAMP}, "event": "JOB_STATUS_CHANGE", "data": {"status": "created", "id": "save-proto-not-fmt"}} +{"timestamp": {"seconds": TIMESTAMP, "microseconds": TIMESTAMP}, "event": "JOB_STATUS_CHANGE", "data": {"status": "running", "id": "save-proto-not-fmt"}} +{"timestamp": {"seconds": TIMESTAMP, "microseconds": TIMESTAMP}, "event": "JOB_STATUS_CHANGE", "data": {"status": "aborting", "id": "save-proto-not-fmt"}} +{"timestamp": {"seconds": TIMESTAMP, "microseconds": TIMESTAMP}, "event": "JOB_STATUS_CHANGE", "data": {"status": "concluded", "id": "save-proto-not-fmt"}} +{"execute": "query-jobs"} +{"return": [{"current-progress": 1, "status": "concluded", "total-progress": 1, "type": "snapshot-save", "id": "save-proto-not-fmt", "error": "Device 'disk0' is writable but does not support snapshots"}]} +{"execute": "job-dismiss", "arguments": {"id": "save-proto-not-fmt"}} +{"return": {}} +{"timestamp": {"seconds": TIMESTAMP, "microseconds": TIMESTAMP}, "event": "JOB_STATUS_CHANGE", "data": {"status": "null", "id": "save-proto-not-fmt"}} +{"execute": "snapshot-load", "arguments": {"job-id": "load-proto-not-fmt", "tag": "snap0", "vmstate": "disk0", "devices": ["disk0"]}} +{"return": {}} +{"timestamp": {"seconds": TIMESTAMP, "microseconds": TIMESTAMP}, "event": "JOB_STATUS_CHANGE", "data": {"status": "created", "id": "load-proto-not-fmt"}} +{"timestamp": {"seconds": TIMESTAMP, "microseconds": TIMESTAMP}, "event": "JOB_STATUS_CHANGE", "data": {"status": "running", "id": "load-proto-not-fmt"}} +{"timestamp": {"seconds": TIMESTAMP, "microseconds": TIMESTAMP}, "event": "STOP"} +{"timestamp": {"seconds": TIMESTAMP, "microseconds": TIMESTAMP}, "event": "JOB_STATUS_CHANGE", "data": {"status": "aborting", "id": "load-proto-not-fmt"}} +{"timestamp": {"seconds": TIMESTAMP, "microseconds": TIMESTAMP}, "event": "JOB_STATUS_CHANGE", "data": {"status": "concluded", "id": "load-proto-not-fmt"}} +{"execute": "query-jobs"} +{"return": [{"current-progress": 1, "status": "concluded", "total-progress": 1, "type": "snapshot-load", "id": "load-proto-not-fmt", "error": "Device 'disk0' is writable but does not support snapshots"}]} +{"execute": "job-dismiss", "arguments": {"id": "load-proto-not-fmt"}} +{"return": {}} +{"timestamp": 
{"seconds": TIMESTAMP, "microseconds": TIMESTAMP}, "event": "JOB_STATUS_CHANGE", "data": {"status": "null", "id": "load-proto-not-fmt"}} +{"execute": "snapshot-delete", "arguments": {"job-id": "delete-proto-not-fmt", "tag": "snap0", "devices": ["disk0"]}} +{"return": {}} +{"timestamp": {"seconds": TIMESTAMP, "microseconds": TIMESTAMP}, "event": "JOB_STATUS_CHANGE", "data": {"status": "created", "id": "delete-proto-not-fmt"}} +{"timestamp": {"seconds": TIMESTAMP, "microseconds": TIMESTAMP}, "event": "JOB_STATUS_CHANGE", "data": {"status": "running", "id": "delete-proto-not-fmt"}} +{"timestamp": {"seconds": TIMESTAMP, "microseconds": TIMESTAMP}, "event": "JOB_STATUS_CHANGE", "data": {"status": "aborting", "id": "delete-proto-not-fmt"}} +{"timestamp": {"seconds": TIMESTAMP, "microseconds": TIMESTAMP}, "event": "JOB_STATUS_CHANGE", "data": {"status": "concluded", "id": "delete-proto-not-fmt"}} +{"execute": "query-jobs"} +{"return": [{"current-progress": 1, "status": "concluded", "total-progress": 1, "type": "snapshot-delete", "id": "delete-proto-not-fmt", "error": "Device 'disk0' is writable but does not support snapshots"}]} +{"execute": "job-dismiss", "arguments": {"id": "delete-proto-not-fmt"}} +{"return": {}} +{"timestamp": {"seconds": TIMESTAMP, "microseconds": TIMESTAMP}, "event": "JOB_STATUS_CHANGE", "data": {"status": "null", "id": "delete-proto-not-fmt"}} +{"execute": "quit"} +{"return": {}} +{"timestamp": {"seconds": TIMESTAMP, "microseconds": TIMESTAMP}, "event": "SHUTDOWN", "data": {"guest": false, "reason": "host-qmp-quit"}} + +===== Snapshot dual qcow2 image ===== + +{"execute": "qmp_capabilities"} +{"return": {}} +{"execute": "snapshot-save", "arguments": {"job-id": "save-dual-image", "tag": "snap0", "vmstate": "diskfmt0", "devices": ["diskfmt0", "diskfmt1"]}} +{"return": {}} +{"timestamp": {"seconds": TIMESTAMP, "microseconds": TIMESTAMP}, "event": "JOB_STATUS_CHANGE", "data": {"status": "created", "id": "save-dual-image"}} +{"timestamp": {"seconds": TIMESTAMP, "microseconds": TIMESTAMP}, "event": "JOB_STATUS_CHANGE", "data": {"status": "running", "id": "save-dual-image"}} +{"timestamp": {"seconds": TIMESTAMP, "microseconds": TIMESTAMP}, "event": "STOP"} +{"timestamp": {"seconds": TIMESTAMP, "microseconds": TIMESTAMP}, "event": "RESUME"} +{"timestamp": {"seconds": TIMESTAMP, "microseconds": TIMESTAMP}, "event": "JOB_STATUS_CHANGE", "data": {"status": "waiting", "id": "save-dual-image"}} +{"timestamp": {"seconds": TIMESTAMP, "microseconds": TIMESTAMP}, "event": "JOB_STATUS_CHANGE", "data": {"status": "pending", "id": "save-dual-image"}} +{"timestamp": {"seconds": TIMESTAMP, "microseconds": TIMESTAMP}, "event": "JOB_STATUS_CHANGE", "data": {"status": "concluded", "id": "save-dual-image"}} +{"execute": "query-jobs"} +{"return": [{"current-progress": 1, "status": "concluded", "total-progress": 1, "type": "snapshot-save", "id": "save-dual-image"}]} +{"execute": "job-dismiss", "arguments": {"id": "save-dual-image"}} +{"return": {}} +{"timestamp": {"seconds": TIMESTAMP, "microseconds": TIMESTAMP}, "event": "JOB_STATUS_CHANGE", "data": {"status": "null", "id": "save-dual-image"}} +{"execute": "snapshot-load", "arguments": {"job-id": "load-dual-image", "tag": "snap0", "vmstate": "diskfmt0", "devices": ["diskfmt0", "diskfmt1"]}} +{"return": {}} +{"timestamp": {"seconds": TIMESTAMP, "microseconds": TIMESTAMP}, "event": "JOB_STATUS_CHANGE", "data": {"status": "created", "id": "load-dual-image"}} +{"timestamp": {"seconds": TIMESTAMP, "microseconds": TIMESTAMP}, "event": "JOB_STATUS_CHANGE", 
"data": {"status": "running", "id": "load-dual-image"}} +{"timestamp": {"seconds": TIMESTAMP, "microseconds": TIMESTAMP}, "event": "STOP"} +{"timestamp": {"seconds": TIMESTAMP, "microseconds": TIMESTAMP}, "event": "RESUME"} +{"timestamp": {"seconds": TIMESTAMP, "microseconds": TIMESTAMP}, "event": "JOB_STATUS_CHANGE", "data": {"status": "waiting", "id": "load-dual-image"}} +{"timestamp": {"seconds": TIMESTAMP, "microseconds": TIMESTAMP}, "event": "JOB_STATUS_CHANGE", "data": {"status": "pending", "id": "load-dual-image"}} +{"timestamp": {"seconds": TIMESTAMP, "microseconds": TIMESTAMP}, "event": "JOB_STATUS_CHANGE", "data": {"status": "concluded", "id": "load-dual-image"}} +{"execute": "query-jobs"} +{"return": [{"current-progress": 1, "status": "concluded", "total-progress": 1, "type": "snapshot-load", "id": "load-dual-image"}]} +{"execute": "job-dismiss", "arguments": {"id": "load-dual-image"}} +{"return": {}} +{"timestamp": {"seconds": TIMESTAMP, "microseconds": TIMESTAMP}, "event": "JOB_STATUS_CHANGE", "data": {"status": "null", "id": "load-dual-image"}} +{"execute": "snapshot-delete", "arguments": {"job-id": "delete-dual-image", "tag": "snap0", "devices": ["diskfmt0", "diskfmt1"]}} +{"return": {}} +{"timestamp": {"seconds": TIMESTAMP, "microseconds": TIMESTAMP}, "event": "JOB_STATUS_CHANGE", "data": {"status": "created", "id": "delete-dual-image"}} +{"timestamp": {"seconds": TIMESTAMP, "microseconds": TIMESTAMP}, "event": "JOB_STATUS_CHANGE", "data": {"status": "running", "id": "delete-dual-image"}} +{"timestamp": {"seconds": TIMESTAMP, "microseconds": TIMESTAMP}, "event": "JOB_STATUS_CHANGE", "data": {"status": "waiting", "id": "delete-dual-image"}} +{"timestamp": {"seconds": TIMESTAMP, "microseconds": TIMESTAMP}, "event": "JOB_STATUS_CHANGE", "data": {"status": "pending", "id": "delete-dual-image"}} +{"timestamp": {"seconds": TIMESTAMP, "microseconds": TIMESTAMP}, "event": "JOB_STATUS_CHANGE", "data": {"status": "concluded", "id": "delete-dual-image"}} +{"execute": "query-jobs"} +{"return": [{"current-progress": 1, "status": "concluded", "total-progress": 1, "type": "snapshot-delete", "id": "delete-dual-image"}]} +{"execute": "job-dismiss", "arguments": {"id": "delete-dual-image"}} +{"return": {}} +{"timestamp": {"seconds": TIMESTAMP, "microseconds": TIMESTAMP}, "event": "JOB_STATUS_CHANGE", "data": {"status": "null", "id": "delete-dual-image"}} +{"execute": "quit"} +{"return": {}} +{"timestamp": {"seconds": TIMESTAMP, "microseconds": TIMESTAMP}, "event": "SHUTDOWN", "data": {"guest": false, "reason": "host-qmp-quit"}} + +===== Snapshot error with raw image ===== + +{"execute": "qmp_capabilities"} +{"return": {}} +{"execute": "snapshot-save", "arguments": {"job-id": "save-raw-fmt", "tag": "snap0", "vmstate": "diskfmt0", "devices": ["diskfmt0", "diskfmt1", "diskfmt2"]}} +{"return": {}} +{"timestamp": {"seconds": TIMESTAMP, "microseconds": TIMESTAMP}, "event": "JOB_STATUS_CHANGE", "data": {"status": "created", "id": "save-raw-fmt"}} +{"timestamp": {"seconds": TIMESTAMP, "microseconds": TIMESTAMP}, "event": "JOB_STATUS_CHANGE", "data": {"status": "running", "id": "save-raw-fmt"}} +{"timestamp": {"seconds": TIMESTAMP, "microseconds": TIMESTAMP}, "event": "JOB_STATUS_CHANGE", "data": {"status": "aborting", "id": "save-raw-fmt"}} +{"timestamp": {"seconds": TIMESTAMP, "microseconds": TIMESTAMP}, "event": "JOB_STATUS_CHANGE", "data": {"status": "concluded", "id": "save-raw-fmt"}} +{"execute": "query-jobs"} +{"return": [{"current-progress": 1, "status": "concluded", "total-progress": 1, 
"type": "snapshot-save", "id": "save-raw-fmt", "error": "Device 'diskfmt2' is writable but does not support snapshots"}]} +{"execute": "job-dismiss", "arguments": {"id": "save-raw-fmt"}} +{"return": {}} +{"timestamp": {"seconds": TIMESTAMP, "microseconds": TIMESTAMP}, "event": "JOB_STATUS_CHANGE", "data": {"status": "null", "id": "save-raw-fmt"}} +{"execute": "snapshot-load", "arguments": {"job-id": "load-raw-fmt", "tag": "snap0", "vmstate": "diskfmt0", "devices": ["diskfmt0", "diskfmt1", "diskfmt2"]}} +{"return": {}} +{"timestamp": {"seconds": TIMESTAMP, "microseconds": TIMESTAMP}, "event": "JOB_STATUS_CHANGE", "data": {"status": "created", "id": "load-raw-fmt"}} +{"timestamp": {"seconds": TIMESTAMP, "microseconds": TIMESTAMP}, "event": "JOB_STATUS_CHANGE", "data": {"status": "running", "id": "load-raw-fmt"}} +{"timestamp": {"seconds": TIMESTAMP, "microseconds": TIMESTAMP}, "event": "STOP"} +{"timestamp": {"seconds": TIMESTAMP, "microseconds": TIMESTAMP}, "event": "JOB_STATUS_CHANGE", "data": {"status": "aborting", "id": "load-raw-fmt"}} +{"timestamp": {"seconds": TIMESTAMP, "microseconds": TIMESTAMP}, "event": "JOB_STATUS_CHANGE", "data": {"status": "concluded", "id": "load-raw-fmt"}} +{"execute": "query-jobs"} +{"return": [{"current-progress": 1, "status": "concluded", "total-progress": 1, "type": "snapshot-load", "id": "load-raw-fmt", "error": "Device 'diskfmt2' is writable but does not support snapshots"}]} +{"execute": "job-dismiss", "arguments": {"id": "load-raw-fmt"}} +{"return": {}} +{"timestamp": {"seconds": TIMESTAMP, "microseconds": TIMESTAMP}, "event": "JOB_STATUS_CHANGE", "data": {"status": "null", "id": "load-raw-fmt"}} +{"execute": "snapshot-delete", "arguments": {"job-id": "delete-raw-fmt", "tag": "snap0", "devices": ["diskfmt0", "diskfmt1", "diskfmt2"]}} +{"return": {}} +{"timestamp": {"seconds": TIMESTAMP, "microseconds": TIMESTAMP}, "event": "JOB_STATUS_CHANGE", "data": {"status": "created", "id": "delete-raw-fmt"}} +{"timestamp": {"seconds": TIMESTAMP, "microseconds": TIMESTAMP}, "event": "JOB_STATUS_CHANGE", "data": {"status": "running", "id": "delete-raw-fmt"}} +{"timestamp": {"seconds": TIMESTAMP, "microseconds": TIMESTAMP}, "event": "JOB_STATUS_CHANGE", "data": {"status": "aborting", "id": "delete-raw-fmt"}} +{"timestamp": {"seconds": TIMESTAMP, "microseconds": TIMESTAMP}, "event": "JOB_STATUS_CHANGE", "data": {"status": "concluded", "id": "delete-raw-fmt"}} +{"execute": "query-jobs"} +{"return": [{"current-progress": 1, "status": "concluded", "total-progress": 1, "type": "snapshot-delete", "id": "delete-raw-fmt", "error": "Device 'diskfmt2' is writable but does not support snapshots"}]} +{"execute": "job-dismiss", "arguments": {"id": "delete-raw-fmt"}} +{"return": {}} +{"timestamp": {"seconds": TIMESTAMP, "microseconds": TIMESTAMP}, "event": "JOB_STATUS_CHANGE", "data": {"status": "null", "id": "delete-raw-fmt"}} +{"execute": "quit"} +{"return": {}} +{"timestamp": {"seconds": TIMESTAMP, "microseconds": TIMESTAMP}, "event": "SHUTDOWN", "data": {"guest": false, "reason": "host-qmp-quit"}} + +===== Snapshot with raw image excluded ===== + +{"execute": "qmp_capabilities"} +{"return": {}} +{"execute": "snapshot-save", "arguments": {"job-id": "save-skip-raw", "tag": "snap0", "vmstate": "diskfmt0", "devices": ["diskfmt0", "diskfmt1"]}} +{"return": {}} +{"timestamp": {"seconds": TIMESTAMP, "microseconds": TIMESTAMP}, "event": "JOB_STATUS_CHANGE", "data": {"status": "created", "id": "save-skip-raw"}} +{"timestamp": {"seconds": TIMESTAMP, "microseconds": TIMESTAMP}, 
"event": "JOB_STATUS_CHANGE", "data": {"status": "running", "id": "save-skip-raw"}} +{"timestamp": {"seconds": TIMESTAMP, "microseconds": TIMESTAMP}, "event": "STOP"} +{"timestamp": {"seconds": TIMESTAMP, "microseconds": TIMESTAMP}, "event": "RESUME"} +{"timestamp": {"seconds": TIMESTAMP, "microseconds": TIMESTAMP}, "event": "JOB_STATUS_CHANGE", "data": {"status": "waiting", "id": "save-skip-raw"}} +{"timestamp": {"seconds": TIMESTAMP, "microseconds": TIMESTAMP}, "event": "JOB_STATUS_CHANGE", "data": {"status": "pending", "id": "save-skip-raw"}} +{"timestamp": {"seconds": TIMESTAMP, "microseconds": TIMESTAMP}, "event": "JOB_STATUS_CHANGE", "data": {"status": "concluded", "id": "save-skip-raw"}} +{"execute": "query-jobs"} +{"return": [{"current-progress": 1, "status": "concluded", "total-progress": 1, "type": "snapshot-save", "id": "save-skip-raw"}]} +{"execute": "job-dismiss", "arguments": {"id": "save-skip-raw"}} +{"return": {}} +{"timestamp": {"seconds": TIMESTAMP, "microseconds": TIMESTAMP}, "event": "JOB_STATUS_CHANGE", "data": {"status": "null", "id": "save-skip-raw"}} +{"execute": "snapshot-load", "arguments": {"job-id": "load-skip-raw", "tag": "snap0", "vmstate": "diskfmt0", "devices": ["diskfmt0", "diskfmt1"]}} +{"return": {}} +{"timestamp": {"seconds": TIMESTAMP, "microseconds": TIMESTAMP}, "event": "JOB_STATUS_CHANGE", "data": {"status": "created", "id": "load-skip-raw"}} +{"timestamp": {"seconds": TIMESTAMP, "microseconds": TIMESTAMP}, "event": "JOB_STATUS_CHANGE", "data": {"status": "running", "id": "load-skip-raw"}} +{"timestamp": {"seconds": TIMESTAMP, "microseconds": TIMESTAMP}, "event": "STOP"} +{"timestamp": {"seconds": TIMESTAMP, "microseconds": TIMESTAMP}, "event": "RESUME"} +{"timestamp": {"seconds": TIMESTAMP, "microseconds": TIMESTAMP}, "event": "JOB_STATUS_CHANGE", "data": {"status": "waiting", "id": "load-skip-raw"}} +{"timestamp": {"seconds": TIMESTAMP, "microseconds": TIMESTAMP}, "event": "JOB_STATUS_CHANGE", "data": {"status": "pending", "id": "load-skip-raw"}} +{"timestamp": {"seconds": TIMESTAMP, "microseconds": TIMESTAMP}, "event": "JOB_STATUS_CHANGE", "data": {"status": "concluded", "id": "load-skip-raw"}} +{"execute": "query-jobs"} +{"return": [{"current-progress": 1, "status": "concluded", "total-progress": 1, "type": "snapshot-load", "id": "load-skip-raw"}]} +{"execute": "job-dismiss", "arguments": {"id": "load-skip-raw"}} +{"return": {}} +{"timestamp": {"seconds": TIMESTAMP, "microseconds": TIMESTAMP}, "event": "JOB_STATUS_CHANGE", "data": {"status": "null", "id": "load-skip-raw"}} +{"execute": "snapshot-delete", "arguments": {"job-id": "delete-skip-raw", "tag": "snap0", "devices": ["diskfmt0", "diskfmt1"]}} +{"return": {}} +{"timestamp": {"seconds": TIMESTAMP, "microseconds": TIMESTAMP}, "event": "JOB_STATUS_CHANGE", "data": {"status": "created", "id": "delete-skip-raw"}} +{"timestamp": {"seconds": TIMESTAMP, "microseconds": TIMESTAMP}, "event": "JOB_STATUS_CHANGE", "data": {"status": "running", "id": "delete-skip-raw"}} +{"timestamp": {"seconds": TIMESTAMP, "microseconds": TIMESTAMP}, "event": "JOB_STATUS_CHANGE", "data": {"status": "waiting", "id": "delete-skip-raw"}} +{"timestamp": {"seconds": TIMESTAMP, "microseconds": TIMESTAMP}, "event": "JOB_STATUS_CHANGE", "data": {"status": "pending", "id": "delete-skip-raw"}} +{"timestamp": {"seconds": TIMESTAMP, "microseconds": TIMESTAMP}, "event": "JOB_STATUS_CHANGE", "data": {"status": "concluded", "id": "delete-skip-raw"}} +{"execute": "query-jobs"} +{"return": [{"current-progress": 1, "status": 
"concluded", "total-progress": 1, "type": "snapshot-delete", "id": "delete-skip-raw"}]} +{"execute": "job-dismiss", "arguments": {"id": "delete-skip-raw"}} +{"return": {}} +{"timestamp": {"seconds": TIMESTAMP, "microseconds": TIMESTAMP}, "event": "JOB_STATUS_CHANGE", "data": {"status": "null", "id": "delete-skip-raw"}} +{"execute": "quit"} +{"return": {}} +{"timestamp": {"seconds": TIMESTAMP, "microseconds": TIMESTAMP}, "event": "SHUTDOWN", "data": {"guest": false, "reason": "host-qmp-quit"}} + +===== Snapshot bad error reporting to stderr ===== + +{"execute": "qmp_capabilities"} +{"return": {}} +{"execute": "snapshot-save", "arguments": {"job-id": "save-err-stderr", "tag": "snap0", "vmstate": "diskfmt0", "devices": ["diskfmt0"]}} +{"return": {}} +{"timestamp": {"seconds": TIMESTAMP, "microseconds": TIMESTAMP}, "event": "JOB_STATUS_CHANGE", "data": {"status": "created", "id": "save-err-stderr"}} +{"timestamp": {"seconds": TIMESTAMP, "microseconds": TIMESTAMP}, "event": "JOB_STATUS_CHANGE", "data": {"status": "running", "id": "save-err-stderr"}} +{"timestamp": {"seconds": TIMESTAMP, "microseconds": TIMESTAMP}, "event": "STOP"} +{"timestamp": {"seconds": TIMESTAMP, "microseconds": TIMESTAMP}, "event": "RESUME"} +{"timestamp": {"seconds": TIMESTAMP, "microseconds": TIMESTAMP}, "event": "JOB_STATUS_CHANGE", "data": {"status": "waiting", "id": "save-err-stderr"}} +{"timestamp": {"seconds": TIMESTAMP, "microseconds": TIMESTAMP}, "event": "JOB_STATUS_CHANGE", "data": {"status": "pending", "id": "save-err-stderr"}} +{"timestamp": {"seconds": TIMESTAMP, "microseconds": TIMESTAMP}, "event": "JOB_STATUS_CHANGE", "data": {"status": "concluded", "id": "save-err-stderr"}} +{"execute": "query-jobs"} +{"return": [{"current-progress": 1, "status": "concluded", "total-progress": 1, "type": "snapshot-save", "id": "save-err-stderr"}]} +{"execute": "job-dismiss", "arguments": {"id": "save-err-stderr"}} +{"return": {}} +{"timestamp": {"seconds": TIMESTAMP, "microseconds": TIMESTAMP}, "event": "JOB_STATUS_CHANGE", "data": {"status": "null", "id": "save-err-stderr"}} +{"execute": "quit"} +{"return": {}} +{"timestamp": {"seconds": TIMESTAMP, "microseconds": TIMESTAMP}, "event": "SHUTDOWN", "data": {"guest": false, "reason": "host-qmp-quit"}} +{"execute": "qmp_capabilities"} +{"return": {}} +{"execute": "snapshot-load", "arguments": {"job-id": "load-err-stderr", "tag": "snap0", "vmstate": "diskfmt0", "devices": ["diskfmt0"]}} +qemu-system-x86_64: Unknown savevm section or instance '0000:00:02.0/virtio-rng' 0. 
Make sure that your current VM setup matches your saved VM setup, including any hotplugged devices +{"return": {}} +{"timestamp": {"seconds": TIMESTAMP, "microseconds": TIMESTAMP}, "event": "JOB_STATUS_CHANGE", "data": {"status": "created", "id": "load-err-stderr"}} +{"timestamp": {"seconds": TIMESTAMP, "microseconds": TIMESTAMP}, "event": "JOB_STATUS_CHANGE", "data": {"status": "running", "id": "load-err-stderr"}} +{"timestamp": {"seconds": TIMESTAMP, "microseconds": TIMESTAMP}, "event": "STOP"} +{"timestamp": {"seconds": TIMESTAMP, "microseconds": TIMESTAMP}, "event": "JOB_STATUS_CHANGE", "data": {"status": "aborting", "id": "load-err-stderr"}} +{"timestamp": {"seconds": TIMESTAMP, "microseconds": TIMESTAMP}, "event": "JOB_STATUS_CHANGE", "data": {"status": "concluded", "id": "load-err-stderr"}} +{"execute": "query-jobs"} +{"return": [{"current-progress": 1, "status": "concluded", "total-progress": 1, "type": "snapshot-load", "id": "load-err-stderr", "error": "Error -22 while loading VM state"}]} +{"execute": "job-dismiss", "arguments": {"id": "load-err-stderr"}} +{"return": {}} +{"timestamp": {"seconds": TIMESTAMP, "microseconds": TIMESTAMP}, "event": "JOB_STATUS_CHANGE", "data": {"status": "null", "id": "load-err-stderr"}} +{"execute": "snapshot-delete", "arguments": {"job-id": "delete-err-stderr", "tag": "snap0", "devices": ["diskfmt0"]}} +{"return": {}} +{"timestamp": {"seconds": TIMESTAMP, "microseconds": TIMESTAMP}, "event": "JOB_STATUS_CHANGE", "data": {"status": "created", "id": "delete-err-stderr"}} +{"timestamp": {"seconds": TIMESTAMP, "microseconds": TIMESTAMP}, "event": "JOB_STATUS_CHANGE", "data": {"status": "running", "id": "delete-err-stderr"}} +{"timestamp": {"seconds": TIMESTAMP, "microseconds": TIMESTAMP}, "event": "JOB_STATUS_CHANGE", "data": {"status": "waiting", "id": "delete-err-stderr"}} +{"timestamp": {"seconds": TIMESTAMP, "microseconds": TIMESTAMP}, "event": "JOB_STATUS_CHANGE", "data": {"status": "pending", "id": "delete-err-stderr"}} +{"timestamp": {"seconds": TIMESTAMP, "microseconds": TIMESTAMP}, "event": "JOB_STATUS_CHANGE", "data": {"status": "concluded", "id": "delete-err-stderr"}} +{"execute": "query-jobs"} +{"return": [{"current-progress": 1, "status": "concluded", "total-progress": 1, "type": "snapshot-delete", "id": "delete-err-stderr"}]} +{"execute": "job-dismiss", "arguments": {"id": "delete-err-stderr"}} +{"return": {}} +{"timestamp": {"seconds": TIMESTAMP, "microseconds": TIMESTAMP}, "event": "JOB_STATUS_CHANGE", "data": {"status": "null", "id": "delete-err-stderr"}} +{"execute": "quit"} +{"return": {}} +{"timestamp": {"seconds": TIMESTAMP, "microseconds": TIMESTAMP}, "event": "SHUTDOWN", "data": {"guest": false, "reason": "host-qmp-quit"}} +*** done diff --git a/tests/qemu-iotests/group b/tests/qemu-iotests/group index 5cad015231..30a04b38c8 100644 --- a/tests/qemu-iotests/group +++ b/tests/qemu-iotests/group @@ -291,6 +291,7 @@ 277 rw quick 279 rw backing quick 280 rw migration quick +310 rw quick 281 rw quick 282 rw img quick 283 auto quick
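As an aside (not part of the patch itself), here is a minimal sketch of how an external QMP client could drive the same snapshot-save job by hand. It assumes QEMU was started with something like '-qmp unix:/tmp/qmp.sock,server,nowait' and that nodes named diskfmt0/diskfmt1 exist; the socket path and the job id "save0" are purely illustrative, while the commands and events used (qmp_capabilities, snapshot-save, JOB_STATUS_CHANGE, query-jobs, job-dismiss) are exactly the ones recorded in the reference output above.

import json
import socket

# Illustrative only: the socket path and job id are assumptions, not
# taken from the patch. Start QEMU with: -qmp unix:/tmp/qmp.sock,server,nowait
SOCK_PATH = "/tmp/qmp.sock"
JOB_ID = "save0"

sock = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM)
sock.connect(SOCK_PATH)
qmp = sock.makefile("rw", buffering=1)   # QEMU emits one JSON object per line

def recv():
    return json.loads(qmp.readline())

def cmd(name, args=None):
    msg = {"execute": name}
    if args:
        msg["arguments"] = args
    qmp.write(json.dumps(msg) + "\n")
    while True:                          # skip async events until the reply
        reply = recv()
        if "return" in reply or "error" in reply:
            return reply

recv()                                   # greeting: {"QMP": {...}}
cmd("qmp_capabilities")                  # leave capabilities negotiation mode

# Snapshot only the qcow2 nodes; a writable raw node has to be left out,
# mirroring the "skip-raw" case in the test.
cmd("snapshot-save", {"job-id": JOB_ID, "tag": "snap0",
                      "vmstate": "diskfmt0",
                      "devices": ["diskfmt0", "diskfmt1"]})

# Wait for the job to conclude (simplified: events discarded while waiting
# for command replies above are not tracked).
while True:
    msg = recv()
    if (msg.get("event") == "JOB_STATUS_CHANGE" and
            msg["data"]["id"] == JOB_ID and
            msg["data"]["status"] == "concluded"):
        break

# A concluded job keeps its result until dismissed; check it for errors.
for job in cmd("query-jobs")["return"]:
    if job["id"] == JOB_ID and "error" in job:
        raise RuntimeError(job["error"])

cmd("job-dismiss", {"id": JOB_ID})
sock.close()

snapshot-load and snapshot-delete follow the same pattern; only the command name changes (and snapshot-delete takes no vmstate argument), as the reference output above shows.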