From patchwork Mon Jul 27 15:08:39 2020
X-Patchwork-Submitter: Daniel P. Berrangé
X-Patchwork-Id: 277411
From: Daniel P. Berrangé
To: qemu-devel@nongnu.org
Subject: [PATCH v2 2/6] block: push error reporting into bdrv_all_*_snapshot functions
Date: Mon, 27 Jul 2020 16:08:39 +0100
Message-Id: <20200727150843.3419256-3-berrange@redhat.com>
In-Reply-To: <20200727150843.3419256-1-berrange@redhat.com>
References: <20200727150843.3419256-1-berrange@redhat.com>

The bdrv_all_*_snapshot functions return a BlockDriverState pointer
for the invalid backend, which the callers then use to report an
error message. In some cases multiple callers are reporting the same
error message, but with slightly different text. In the future there
will be more error scenarios for some of these methods, which will
benefit from fine grained error message reporting. So it is helpful
to push error reporting down a level.

Signed-off-by: Daniel P.
Berrangé --- block/monitor/block-hmp-cmds.c | 7 ++-- block/snapshot.c | 77 +++++++++++++++++----------------- include/block/snapshot.h | 14 +++---- migration/savevm.c | 37 +++++----------- monitor/hmp-cmds.c | 7 +--- tests/qemu-iotests/267.out | 10 ++--- 6 files changed, 65 insertions(+), 87 deletions(-) diff --git a/block/monitor/block-hmp-cmds.c b/block/monitor/block-hmp-cmds.c index 4c8c375172..9df11494d6 100644 --- a/block/monitor/block-hmp-cmds.c +++ b/block/monitor/block-hmp-cmds.c @@ -898,10 +898,11 @@ void hmp_info_snapshots(Monitor *mon, const QDict *qdict) ImageEntry *image_entry, *next_ie; SnapshotEntry *snapshot_entry; + Error *err = NULL; - bs = bdrv_all_find_vmstate_bs(); + bs = bdrv_all_find_vmstate_bs(&err); if (!bs) { - monitor_printf(mon, "No available block device supports snapshots\n"); + error_report_err(err); return; } aio_context = bdrv_get_aio_context(bs); @@ -951,7 +952,7 @@ void hmp_info_snapshots(Monitor *mon, const QDict *qdict) total = 0; for (i = 0; i < nb_sns; i++) { SnapshotEntry *next_sn; - if (bdrv_all_find_snapshot(sn_tab[i].name, &bs1) == 0) { + if (bdrv_all_find_snapshot(sn_tab[i].name, NULL) == 0) { global_snapshots[total] = i; total++; QTAILQ_FOREACH(image_entry, &image_list, next) { diff --git a/block/snapshot.c b/block/snapshot.c index bd9fb01817..6839060622 100644 --- a/block/snapshot.c +++ b/block/snapshot.c @@ -400,14 +400,14 @@ static bool bdrv_all_snapshots_includes_bs(BlockDriverState *bs) * These functions will properly handle dataplane (take aio_context_acquire * when appropriate for appropriate block drivers) */ -bool bdrv_all_can_snapshot(BlockDriverState **first_bad_bs) +bool bdrv_all_can_snapshot(Error **errp) { - bool ok = true; BlockDriverState *bs; BdrvNextIterator it; for (bs = bdrv_first(&it); bs; bs = bdrv_next(&it)) { AioContext *ctx = bdrv_get_aio_context(bs); + bool ok; aio_context_acquire(ctx); if (bdrv_all_snapshots_includes_bs(bs)) { @@ -415,26 +415,25 @@ bool bdrv_all_can_snapshot(BlockDriverState **first_bad_bs) } aio_context_release(ctx); if (!ok) { + error_setg(errp, "Device '%s' is writable but does not support " + "snapshots", bdrv_get_device_or_node_name(bs)); bdrv_next_cleanup(&it); - goto fail; + return false; } } -fail: - *first_bad_bs = bs; - return ok; + return true; } -int bdrv_all_delete_snapshot(const char *name, BlockDriverState **first_bad_bs, - Error **errp) +int bdrv_all_delete_snapshot(const char *name, Error **errp) { - int ret = 0; BlockDriverState *bs; BdrvNextIterator it; QEMUSnapshotInfo sn1, *snapshot = &sn1; for (bs = bdrv_first(&it); bs; bs = bdrv_next(&it)) { AioContext *ctx = bdrv_get_aio_context(bs); + int ret; aio_context_acquire(ctx); if (bdrv_all_snapshots_includes_bs(bs) && @@ -445,26 +444,25 @@ int bdrv_all_delete_snapshot(const char *name, BlockDriverState **first_bad_bs, } aio_context_release(ctx); if (ret < 0) { + error_prepend(errp, "Could not delete snapshot '%s' on '%s': ", + name, bdrv_get_device_or_node_name(bs)); bdrv_next_cleanup(&it); - goto fail; + return -1; } } -fail: - *first_bad_bs = bs; - return ret; + return 0; } -int bdrv_all_goto_snapshot(const char *name, BlockDriverState **first_bad_bs, - Error **errp) +int bdrv_all_goto_snapshot(const char *name, Error **errp) { - int ret = 0; BlockDriverState *bs; BdrvNextIterator it; for (bs = bdrv_first(&it); bs; bs = bdrv_next(&it)) { AioContext *ctx = bdrv_get_aio_context(bs); + int ret; aio_context_acquire(ctx); if (bdrv_all_snapshots_includes_bs(bs)) { @@ -472,75 +470,75 @@ int bdrv_all_goto_snapshot(const char *name, 
BlockDriverState **first_bad_bs, } aio_context_release(ctx); if (ret < 0) { + error_prepend(errp, "Could not load snapshot '%s' on '%s': ", + name, bdrv_get_device_or_node_name(bs)); bdrv_next_cleanup(&it); - goto fail; + return -1; } } -fail: - *first_bad_bs = bs; - return ret; + return 0; } -int bdrv_all_find_snapshot(const char *name, BlockDriverState **first_bad_bs) +int bdrv_all_find_snapshot(const char *name, Error **errp) { QEMUSnapshotInfo sn; - int err = 0; BlockDriverState *bs; BdrvNextIterator it; for (bs = bdrv_first(&it); bs; bs = bdrv_next(&it)) { AioContext *ctx = bdrv_get_aio_context(bs); + int ret; aio_context_acquire(ctx); if (bdrv_all_snapshots_includes_bs(bs)) { - err = bdrv_snapshot_find(bs, &sn, name); + ret = bdrv_snapshot_find(bs, &sn, name); } aio_context_release(ctx); - if (err < 0) { + if (ret < 0) { + error_setg(errp, "Could not find snapshot '%s' on '%s'", + name, bdrv_get_device_or_node_name(bs)); bdrv_next_cleanup(&it); - goto fail; + return -1; } } -fail: - *first_bad_bs = bs; - return err; + return 0; } int bdrv_all_create_snapshot(QEMUSnapshotInfo *sn, BlockDriverState *vm_state_bs, uint64_t vm_state_size, - BlockDriverState **first_bad_bs) + Error **errp) { - int err = 0; BlockDriverState *bs; BdrvNextIterator it; for (bs = bdrv_first(&it); bs; bs = bdrv_next(&it)) { AioContext *ctx = bdrv_get_aio_context(bs); + int ret; aio_context_acquire(ctx); if (bs == vm_state_bs) { sn->vm_state_size = vm_state_size; - err = bdrv_snapshot_create(bs, sn); + ret = bdrv_snapshot_create(bs, sn); } else if (bdrv_all_snapshots_includes_bs(bs)) { sn->vm_state_size = 0; - err = bdrv_snapshot_create(bs, sn); + ret = bdrv_snapshot_create(bs, sn); } aio_context_release(ctx); - if (err < 0) { + if (ret < 0) { + error_setg(errp, "Could not create snapshot '%s' on '%s'", + sn->name, bdrv_get_device_or_node_name(bs)); bdrv_next_cleanup(&it); - goto fail; + return -1; } } -fail: - *first_bad_bs = bs; - return err; + return 0; } -BlockDriverState *bdrv_all_find_vmstate_bs(void) +BlockDriverState *bdrv_all_find_vmstate_bs(Error **errp) { BlockDriverState *bs; BdrvNextIterator it; @@ -558,5 +556,8 @@ BlockDriverState *bdrv_all_find_vmstate_bs(void) break; } } + if (!bs) { + error_setg(errp, "No block device supports snapshots"); + } return bs; } diff --git a/include/block/snapshot.h b/include/block/snapshot.h index 2bfcd57578..ba1528eee0 100644 --- a/include/block/snapshot.h +++ b/include/block/snapshot.h @@ -76,17 +76,15 @@ int bdrv_snapshot_load_tmp_by_id_or_name(BlockDriverState *bs, * These functions will properly handle dataplane (take aio_context_acquire * when appropriate for appropriate block drivers */ -bool bdrv_all_can_snapshot(BlockDriverState **first_bad_bs); -int bdrv_all_delete_snapshot(const char *name, BlockDriverState **first_bsd_bs, - Error **errp); -int bdrv_all_goto_snapshot(const char *name, BlockDriverState **first_bad_bs, - Error **errp); -int bdrv_all_find_snapshot(const char *name, BlockDriverState **first_bad_bs); +bool bdrv_all_can_snapshot(Error **errp); +int bdrv_all_delete_snapshot(const char *name, Error **errp); +int bdrv_all_goto_snapshot(const char *name, Error **errp); +int bdrv_all_find_snapshot(const char *name, Error **errp); int bdrv_all_create_snapshot(QEMUSnapshotInfo *sn, BlockDriverState *vm_state_bs, uint64_t vm_state_size, - BlockDriverState **first_bad_bs); + Error **errp); -BlockDriverState *bdrv_all_find_vmstate_bs(void); +BlockDriverState *bdrv_all_find_vmstate_bs(Error **errp); #endif diff --git a/migration/savevm.c 
b/migration/savevm.c index cffee6cab7..19259ef7c0 100644 --- a/migration/savevm.c +++ b/migration/savevm.c @@ -2633,7 +2633,7 @@ int qemu_load_device_state(QEMUFile *f) int save_snapshot(const char *name, Error **errp) { - BlockDriverState *bs, *bs1; + BlockDriverState *bs; QEMUSnapshotInfo sn1, *sn = &sn1, old_sn1, *old_sn = &old_sn1; int ret = -1, ret2; QEMUFile *f; @@ -2653,25 +2653,19 @@ int save_snapshot(const char *name, Error **errp) return ret; } - if (!bdrv_all_can_snapshot(&bs)) { - error_setg(errp, "Device '%s' is writable but does not support " - "snapshots", bdrv_get_device_or_node_name(bs)); + if (!bdrv_all_can_snapshot(errp)) { return ret; } /* Delete old snapshots of the same name */ if (name) { - ret = bdrv_all_delete_snapshot(name, &bs1, errp); - if (ret < 0) { - error_prepend(errp, "Error while deleting snapshot on device " - "'%s': ", bdrv_get_device_or_node_name(bs1)); + if (bdrv_all_delete_snapshot(name, errp) < 0) { return ret; } } - bs = bdrv_all_find_vmstate_bs(); + bs = bdrv_all_find_vmstate_bs(errp); if (bs == NULL) { - error_setg(errp, "No block device can accept snapshots"); return ret; } aio_context = bdrv_get_aio_context(bs); @@ -2736,10 +2730,8 @@ int save_snapshot(const char *name, Error **errp) aio_context_release(aio_context); aio_context = NULL; - ret = bdrv_all_create_snapshot(sn, bs, vm_state_size, &bs); + ret = bdrv_all_create_snapshot(sn, bs, vm_state_size, errp); if (ret < 0) { - error_setg(errp, "Error while creating snapshot on '%s'", - bdrv_get_device_or_node_name(bs)); goto the_end; } @@ -2841,7 +2833,7 @@ void qmp_xen_load_devices_state(const char *filename, Error **errp) int load_snapshot(const char *name, Error **errp) { - BlockDriverState *bs, *bs_vm_state; + BlockDriverState *bs_vm_state; QEMUSnapshotInfo sn; QEMUFile *f; int ret; @@ -2854,23 +2846,16 @@ int load_snapshot(const char *name, Error **errp) return -EINVAL; } - if (!bdrv_all_can_snapshot(&bs)) { - error_setg(errp, - "Device '%s' is writable but does not support snapshots", - bdrv_get_device_or_node_name(bs)); + if (!bdrv_all_can_snapshot(errp)) { return -ENOTSUP; } - ret = bdrv_all_find_snapshot(name, &bs); + ret = bdrv_all_find_snapshot(name, errp); if (ret < 0) { - error_setg(errp, - "Device '%s' does not have the requested snapshot '%s'", - bdrv_get_device_or_node_name(bs), name); return ret; } - bs_vm_state = bdrv_all_find_vmstate_bs(); + bs_vm_state = bdrv_all_find_vmstate_bs(errp); if (!bs_vm_state) { - error_setg(errp, "No block device supports snapshots"); return -ENOTSUP; } aio_context = bdrv_get_aio_context(bs_vm_state); @@ -2890,10 +2875,8 @@ int load_snapshot(const char *name, Error **errp) /* Flush all IO requests so they don't interfere with the new state. 
*/
     bdrv_drain_all_begin();
 
-    ret = bdrv_all_goto_snapshot(name, &bs, errp);
+    ret = bdrv_all_goto_snapshot(name, errp);
     if (ret < 0) {
-        error_prepend(errp, "Could not load snapshot '%s' on '%s': ",
-                      name, bdrv_get_device_or_node_name(bs));
         goto err_drain;
     }
 
diff --git a/monitor/hmp-cmds.c b/monitor/hmp-cmds.c
index ae4b6a4246..52f7d322e1 100644
--- a/monitor/hmp-cmds.c
+++ b/monitor/hmp-cmds.c
@@ -1111,15 +1111,10 @@ void hmp_savevm(Monitor *mon, const QDict *qdict)
 
 void hmp_delvm(Monitor *mon, const QDict *qdict)
 {
-    BlockDriverState *bs;
     Error *err = NULL;
     const char *name = qdict_get_str(qdict, "name");
 
-    if (bdrv_all_delete_snapshot(name, &bs, &err) < 0) {
-        error_prepend(&err,
-                      "deleting snapshot on device '%s': ",
-                      bdrv_get_device_name(bs));
-    }
+    bdrv_all_delete_snapshot(name, &err);
     hmp_handle_error(mon, err);
 }
diff --git a/tests/qemu-iotests/267.out b/tests/qemu-iotests/267.out
index 215902b3ad..c65cce893a 100644
--- a/tests/qemu-iotests/267.out
+++ b/tests/qemu-iotests/267.out
@@ -6,9 +6,9 @@ Formatting 'TEST_DIR/t.IMGFMT', fmt=IMGFMT size=134217728
 Testing:
 QEMU X.Y.Z monitor - type 'help' for more information
 (qemu) savevm snap0
-Error: No block device can accept snapshots
+Error: No block device supports snapshots
 (qemu) info snapshots
-No available block device supports snapshots
+No block device supports snapshots
 (qemu) loadvm snap0
 Error: No block device supports snapshots
 (qemu) quit
@@ -22,7 +22,7 @@ QEMU X.Y.Z monitor - type 'help' for more information
 (qemu) savevm snap0
 Error: Device 'none0' is writable but does not support snapshots
 (qemu) info snapshots
-No available block device supports snapshots
+No block device supports snapshots
 (qemu) loadvm snap0
 Error: Device 'none0' is writable but does not support snapshots
 (qemu) quit
@@ -58,7 +58,7 @@ QEMU X.Y.Z monitor - type 'help' for more information
 (qemu) savevm snap0
 Error: Device 'virtio0' is writable but does not support snapshots
 (qemu) info snapshots
-No available block device supports snapshots
+No block device supports snapshots
 (qemu) loadvm snap0
 Error: Device 'virtio0' is writable but does not support snapshots
 (qemu) quit
@@ -83,7 +83,7 @@ QEMU X.Y.Z monitor - type 'help' for more information
 (qemu) savevm snap0
 Error: Device 'file' is writable but does not support snapshots
 (qemu) info snapshots
-No available block device supports snapshots
+No block device supports snapshots
 (qemu) loadvm snap0
 Error: Device 'file' is writable but does not support snapshots
 (qemu) quit
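As an illustration of the errp convention this patch converges on: the
bdrv_all_* helpers now compose the error message themselves, and callers
only decide how to surface it. Below is a minimal sketch of a hypothetical
caller written against the post-patch prototypes shown in the diff above;
the wrapper name example_delete_vm_snapshot is invented for illustration
and the snippet is only meaningful inside the QEMU tree.

#include "qemu/osdep.h"
#include "qapi/error.h"
#include "block/snapshot.h"

/* Delete a snapshot tag on all relevant block devices, printing the
 * fine-grained message that the helper itself has filled in. */
static bool example_delete_vm_snapshot(const char *tag)
{
    Error *local_err = NULL;

    if (bdrv_all_delete_snapshot(tag, &local_err) < 0) {
        /* The message already names the offending device/node */
        error_report_err(local_err);
        return false;
    }
    return true;
}

Compare this with the pre-patch callers in savevm.c above, which had to
build the message themselves from the returned first_bad_bs pointer.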
From patchwork Mon Jul 27 15:08:41 2020
X-Patchwork-Submitter: Daniel P. Berrangé
X-Patchwork-Id: 277409
From: Daniel P. Berrangé
To: qemu-devel@nongnu.org
Subject: [PATCH v2 4/6] block: add ability to specify list of blockdevs during snapshot
Date: Mon, 27 Jul 2020 16:08:41 +0100
Message-Id: <20200727150843.3419256-5-berrange@redhat.com>
In-Reply-To: <20200727150843.3419256-1-berrange@redhat.com>
References: <20200727150843.3419256-1-berrange@redhat.com>

When running snapshot operations, there are various rules for which
blockdevs are included/excluded. While this provides reasonable default
behaviour, there are scenarios that are not well handled by the default
logic. Some of the conditions do not have a single correct answer.

Thus there needs to be a way for the management application to provide
an explicit list of blockdevs to perform snapshots across. This can be
achieved by passing a list of node names that should be used.

Signed-off-by: Daniel P. Berrangé
---
 block/monitor/block-hmp-cmds.c |  4 +--
 block/snapshot.c               | 48 ++++++++++++++++++++++------------
 include/block/snapshot.h       | 13 ++++-----
 migration/savevm.c             | 16 ++++++------
 monitor/hmp-cmds.c             |  2 +-
 5 files changed, 49 insertions(+), 34 deletions(-)

diff --git a/block/monitor/block-hmp-cmds.c b/block/monitor/block-hmp-cmds.c
index 9df11494d6..db76c43cc2 100644
--- a/block/monitor/block-hmp-cmds.c
+++ b/block/monitor/block-hmp-cmds.c
@@ -900,7 +900,7 @@ void hmp_info_snapshots(Monitor *mon, const QDict *qdict)
     SnapshotEntry *snapshot_entry;
     Error *err = NULL;
 
-    bs = bdrv_all_find_vmstate_bs(&err);
+    bs = bdrv_all_find_vmstate_bs(NULL, &err);
     if (!bs) {
         error_report_err(err);
         return;
@@ -952,7 +952,7 @@ void hmp_info_snapshots(Monitor *mon, const QDict *qdict)
     total = 0;
     for (i = 0; i < nb_sns; i++) {
         SnapshotEntry *next_sn;
-        if (bdrv_all_find_snapshot(sn_tab[i].name, NULL) == 0) {
+        if (bdrv_all_find_snapshot(sn_tab[i].name, NULL, NULL) == 0) {
             global_snapshots[total] = i;
             total++;
             QTAILQ_FOREACH(image_entry, &image_list, next) {
diff --git a/block/snapshot.c b/block/snapshot.c
index 6839060622..f2600a8c7f 100644
--- a/block/snapshot.c
+++ b/block/snapshot.c
@@ -385,22 +385,34 @@ int bdrv_snapshot_load_tmp_by_id_or_name(BlockDriverState *bs,
     return ret;
 }
 
-static bool bdrv_all_snapshots_includes_bs(BlockDriverState *bs)
+static bool bdrv_all_snapshots_includes_bs(BlockDriverState *bs,
+                                           strList *devices)
 {
-    if (!bdrv_is_inserted(bs) || bdrv_is_read_only(bs)) {
+    if (devices) {
+        const char *node_name = bdrv_get_node_name(bs);
+        while (devices) {
+            if (g_str_equal(node_name, devices->value)) {
+                return true;
+            }
+            devices = devices->next;
+        }
         return false;
-    }
+    } else {
+        if (!bdrv_is_inserted(bs) || bdrv_is_read_only(bs)) {
+            return false;
+        }
 
-    /* Include all nodes that are either in use by a BlockBackend, or that
-     * aren't attached to any node, but owned by the monitor. */
-    return bdrv_has_blk(bs) || QLIST_EMPTY(&bs->parents);
+        /* Include all nodes that are either in use by a BlockBackend, or that
+         * aren't attached to any node, but owned by the monitor. */
+        return bdrv_has_blk(bs) || QLIST_EMPTY(&bs->parents);
+    }
 }
 
 /* Group operations. All block drivers are involved.
* These functions will properly handle dataplane (take aio_context_acquire * when appropriate for appropriate block drivers) */ -bool bdrv_all_can_snapshot(Error **errp) +bool bdrv_all_can_snapshot(strList *devices, Error **errp) { BlockDriverState *bs; BdrvNextIterator it; @@ -410,7 +422,7 @@ bool bdrv_all_can_snapshot(Error **errp) bool ok; aio_context_acquire(ctx); - if (bdrv_all_snapshots_includes_bs(bs)) { + if (bdrv_all_snapshots_includes_bs(bs, devices)) { ok = bdrv_can_snapshot(bs); } aio_context_release(ctx); @@ -425,7 +437,7 @@ bool bdrv_all_can_snapshot(Error **errp) return true; } -int bdrv_all_delete_snapshot(const char *name, Error **errp) +int bdrv_all_delete_snapshot(const char *name, strList *devices, Error **errp) { BlockDriverState *bs; BdrvNextIterator it; @@ -436,7 +448,7 @@ int bdrv_all_delete_snapshot(const char *name, Error **errp) int ret; aio_context_acquire(ctx); - if (bdrv_all_snapshots_includes_bs(bs) && + if (bdrv_all_snapshots_includes_bs(bs, devices) && bdrv_snapshot_find(bs, snapshot, name) >= 0) { ret = bdrv_snapshot_delete(bs, snapshot->id_str, @@ -455,7 +467,7 @@ int bdrv_all_delete_snapshot(const char *name, Error **errp) } -int bdrv_all_goto_snapshot(const char *name, Error **errp) +int bdrv_all_goto_snapshot(const char *name, strList *devices, Error **errp) { BlockDriverState *bs; BdrvNextIterator it; @@ -465,7 +477,7 @@ int bdrv_all_goto_snapshot(const char *name, Error **errp) int ret; aio_context_acquire(ctx); - if (bdrv_all_snapshots_includes_bs(bs)) { + if (bdrv_all_snapshots_includes_bs(bs, devices)) { ret = bdrv_snapshot_goto(bs, name, errp); } aio_context_release(ctx); @@ -480,7 +492,7 @@ int bdrv_all_goto_snapshot(const char *name, Error **errp) return 0; } -int bdrv_all_find_snapshot(const char *name, Error **errp) +int bdrv_all_find_snapshot(const char *name, strList *devices, Error **errp) { QEMUSnapshotInfo sn; BlockDriverState *bs; @@ -491,7 +503,7 @@ int bdrv_all_find_snapshot(const char *name, Error **errp) int ret; aio_context_acquire(ctx); - if (bdrv_all_snapshots_includes_bs(bs)) { + if (bdrv_all_snapshots_includes_bs(bs, devices)) { ret = bdrv_snapshot_find(bs, &sn, name); } aio_context_release(ctx); @@ -509,6 +521,7 @@ int bdrv_all_find_snapshot(const char *name, Error **errp) int bdrv_all_create_snapshot(QEMUSnapshotInfo *sn, BlockDriverState *vm_state_bs, uint64_t vm_state_size, + strList *devices, Error **errp) { BlockDriverState *bs; @@ -522,7 +535,7 @@ int bdrv_all_create_snapshot(QEMUSnapshotInfo *sn, if (bs == vm_state_bs) { sn->vm_state_size = vm_state_size; ret = bdrv_snapshot_create(bs, sn); - } else if (bdrv_all_snapshots_includes_bs(bs)) { + } else if (bdrv_all_snapshots_includes_bs(bs, devices)) { sn->vm_state_size = 0; ret = bdrv_snapshot_create(bs, sn); } @@ -538,7 +551,7 @@ int bdrv_all_create_snapshot(QEMUSnapshotInfo *sn, return 0; } -BlockDriverState *bdrv_all_find_vmstate_bs(Error **errp) +BlockDriverState *bdrv_all_find_vmstate_bs(strList *devices, Error **errp) { BlockDriverState *bs; BdrvNextIterator it; @@ -548,7 +561,8 @@ BlockDriverState *bdrv_all_find_vmstate_bs(Error **errp) bool found; aio_context_acquire(ctx); - found = bdrv_all_snapshots_includes_bs(bs) && bdrv_can_snapshot(bs); + found = bdrv_all_snapshots_includes_bs(bs, devices) && + bdrv_can_snapshot(bs); aio_context_release(ctx); if (found) { diff --git a/include/block/snapshot.h b/include/block/snapshot.h index ba1528eee0..1c5b0705a9 100644 --- a/include/block/snapshot.h +++ b/include/block/snapshot.h @@ -25,7 +25,7 @@ #ifndef SNAPSHOT_H 
#define SNAPSHOT_H - +#include "qapi/qapi-builtin-types.h" #define SNAPSHOT_OPT_BASE "snapshot." #define SNAPSHOT_OPT_ID "snapshot.id" @@ -76,15 +76,16 @@ int bdrv_snapshot_load_tmp_by_id_or_name(BlockDriverState *bs, * These functions will properly handle dataplane (take aio_context_acquire * when appropriate for appropriate block drivers */ -bool bdrv_all_can_snapshot(Error **errp); -int bdrv_all_delete_snapshot(const char *name, Error **errp); -int bdrv_all_goto_snapshot(const char *name, Error **errp); -int bdrv_all_find_snapshot(const char *name, Error **errp); +bool bdrv_all_can_snapshot(strList *devices, Error **errp); +int bdrv_all_delete_snapshot(const char *name, strList *devices, Error **errp); +int bdrv_all_goto_snapshot(const char *name, strList *devices, Error **errp); +int bdrv_all_find_snapshot(const char *name, strList *devices, Error **errp); int bdrv_all_create_snapshot(QEMUSnapshotInfo *sn, BlockDriverState *vm_state_bs, uint64_t vm_state_size, + strList *devices, Error **errp); -BlockDriverState *bdrv_all_find_vmstate_bs(Error **errp); +BlockDriverState *bdrv_all_find_vmstate_bs(strList *devices, Error **errp); #endif diff --git a/migration/savevm.c b/migration/savevm.c index 6c4d80fc5a..cdc1f2f2d8 100644 --- a/migration/savevm.c +++ b/migration/savevm.c @@ -2653,18 +2653,18 @@ int save_snapshot(const char *name, Error **errp) return ret; } - if (!bdrv_all_can_snapshot(errp)) { + if (!bdrv_all_can_snapshot(NULL, errp)) { return ret; } /* Delete old snapshots of the same name */ if (name) { - if (bdrv_all_delete_snapshot(name, errp) < 0) { + if (bdrv_all_delete_snapshot(name, NULL, errp) < 0) { return ret; } } - bs = bdrv_all_find_vmstate_bs(errp); + bs = bdrv_all_find_vmstate_bs(NULL, errp); if (bs == NULL) { return ret; } @@ -2730,7 +2730,7 @@ int save_snapshot(const char *name, Error **errp) aio_context_release(aio_context); aio_context = NULL; - ret = bdrv_all_create_snapshot(sn, bs, vm_state_size, errp); + ret = bdrv_all_create_snapshot(sn, bs, vm_state_size, NULL, errp); if (ret < 0) { goto the_end; } @@ -2846,15 +2846,15 @@ int load_snapshot(const char *name, Error **errp) return -1; } - if (!bdrv_all_can_snapshot(errp)) { + if (!bdrv_all_can_snapshot(NULL, errp)) { return -1; } - ret = bdrv_all_find_snapshot(name, errp); + ret = bdrv_all_find_snapshot(name, NULL, errp); if (ret < 0) { return -1; } - bs_vm_state = bdrv_all_find_vmstate_bs(errp); + bs_vm_state = bdrv_all_find_vmstate_bs(NULL, errp); if (!bs_vm_state) { return -1; } @@ -2875,7 +2875,7 @@ int load_snapshot(const char *name, Error **errp) /* Flush all IO requests so they don't interfere with the new state. 
*/
     bdrv_drain_all_begin();
 
-    ret = bdrv_all_goto_snapshot(name, errp);
+    ret = bdrv_all_goto_snapshot(name, NULL, errp);
     if (ret < 0) {
         goto err_drain;
     }
 
diff --git a/monitor/hmp-cmds.c b/monitor/hmp-cmds.c
index 52f7d322e1..90e717f0c4 100644
--- a/monitor/hmp-cmds.c
+++ b/monitor/hmp-cmds.c
@@ -1114,7 +1114,7 @@ void hmp_delvm(Monitor *mon, const QDict *qdict)
     Error *err = NULL;
     const char *name = qdict_get_str(qdict, "name");
 
-    bdrv_all_delete_snapshot(name, &err);
+    bdrv_all_delete_snapshot(name, NULL, &err);
     hmp_handle_error(mon, err);
 }
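Since the devices argument threads through every helper, a caller can
restrict a group operation simply by handing over a QAPI strList of node
names, while a NULL list keeps the default inclusion rules. A small sketch
against the new prototypes follows; the wrapper function is hypothetical,
and the node names "disk0"/"disk1" are taken from the QMP examples later
in the series.

#include "qemu/osdep.h"
#include "qapi/error.h"
#include "qapi/qapi-builtin-types.h"
#include "block/snapshot.h"

/* Check snapshot support for an explicit set of nodes only. */
static bool example_can_snapshot_nodes(Error **errp)
{
    strList disk1 = { .value = (char *)"disk1", .next = NULL };
    strList disk0 = { .value = (char *)"disk0", .next = &disk1 };

    /* Passing NULL instead of &disk0 would fall back to the default rules */
    return bdrv_all_can_snapshot(&disk0, errp);
}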
From patchwork Mon Jul 27 15:08:42 2020
X-Patchwork-Submitter: Daniel P. Berrangé
X-Patchwork-Id: 277410
From: Daniel P. Berrangé
To: qemu-devel@nongnu.org
Subject: [PATCH v2 5/6] block: allow specifying name of block device for vmstate storage
Date: Mon, 27 Jul 2020 16:08:42 +0100
Message-Id: <20200727150843.3419256-6-berrange@redhat.com>
In-Reply-To: <20200727150843.3419256-1-berrange@redhat.com>
References: <20200727150843.3419256-1-berrange@redhat.com>

Currently the vmstate will be stored in the first block device that
supports snapshots. Historically this would have usually been the root
device, but with UEFI it might be the variable store. There needs to
be a way to override the choice of block device to store the state in.

Signed-off-by: Daniel P.
Berrangé --- block/monitor/block-hmp-cmds.c | 2 +- block/snapshot.c | 64 ++++++++++++++++++++++++++-------- include/block/snapshot.h | 4 ++- migration/savevm.c | 4 +-- 4 files changed, 56 insertions(+), 18 deletions(-) diff --git a/block/monitor/block-hmp-cmds.c b/block/monitor/block-hmp-cmds.c index db76c43cc2..81d1b52262 100644 --- a/block/monitor/block-hmp-cmds.c +++ b/block/monitor/block-hmp-cmds.c @@ -900,7 +900,7 @@ void hmp_info_snapshots(Monitor *mon, const QDict *qdict) SnapshotEntry *snapshot_entry; Error *err = NULL; - bs = bdrv_all_find_vmstate_bs(NULL, &err); + bs = bdrv_all_find_vmstate_bs(NULL, NULL, &err); if (!bs) { error_report_err(err); return; diff --git a/block/snapshot.c b/block/snapshot.c index f2600a8c7f..b1ad70e278 100644 --- a/block/snapshot.c +++ b/block/snapshot.c @@ -551,27 +551,63 @@ int bdrv_all_create_snapshot(QEMUSnapshotInfo *sn, return 0; } -BlockDriverState *bdrv_all_find_vmstate_bs(strList *devices, Error **errp) +BlockDriverState *bdrv_all_find_vmstate_bs(const char *vmstate_bs, + strList *devices, + Error **errp) { BlockDriverState *bs; BdrvNextIterator it; - for (bs = bdrv_first(&it); bs; bs = bdrv_next(&it)) { - AioContext *ctx = bdrv_get_aio_context(bs); - bool found; + if (vmstate_bs) { + bool usable = false; + for (bs = bdrv_first(&it); bs; bs = bdrv_next(&it)) { + AioContext *ctx = bdrv_get_aio_context(bs); + bool match; - aio_context_acquire(ctx); - found = bdrv_all_snapshots_includes_bs(bs, devices) && - bdrv_can_snapshot(bs); - aio_context_release(ctx); + aio_context_acquire(ctx); + if (g_str_equal(vmstate_bs, bdrv_get_node_name(bs))) { + match = true; + usable = bdrv_can_snapshot(bs); + } + aio_context_release(ctx); + if (match) { + bdrv_next_cleanup(&it); + break; + } + } + if (!bs) { + error_setg(errp, + "block device '%s' does not exist", + vmstate_bs); + return NULL; + } - if (found) { - bdrv_next_cleanup(&it); - break; + if (!usable) { + error_setg(errp, + "block device '%s' does not support snapshots", + vmstate_bs); + return NULL; + } + } else { + for (bs = bdrv_first(&it); bs; bs = bdrv_next(&it)) { + AioContext *ctx = bdrv_get_aio_context(bs); + bool found; + + aio_context_acquire(ctx); + found = bdrv_all_snapshots_includes_bs(bs, devices) && + bdrv_can_snapshot(bs); + aio_context_release(ctx); + + if (found) { + bdrv_next_cleanup(&it); + break; + } + } + + if (!bs) { + error_setg(errp, "No block device supports snapshots"); + return NULL; } - } - if (!bs) { - error_setg(errp, "No block device supports snapshots"); } return bs; } diff --git a/include/block/snapshot.h b/include/block/snapshot.h index 1c5b0705a9..05550e5da1 100644 --- a/include/block/snapshot.h +++ b/include/block/snapshot.h @@ -86,6 +86,8 @@ int bdrv_all_create_snapshot(QEMUSnapshotInfo *sn, strList *devices, Error **errp); -BlockDriverState *bdrv_all_find_vmstate_bs(strList *devices, Error **errp); +BlockDriverState *bdrv_all_find_vmstate_bs(const char *vmstate_bs, + strList *devices, + Error **errp); #endif diff --git a/migration/savevm.c b/migration/savevm.c index cdc1f2f2d8..1707fa30db 100644 --- a/migration/savevm.c +++ b/migration/savevm.c @@ -2664,7 +2664,7 @@ int save_snapshot(const char *name, Error **errp) } } - bs = bdrv_all_find_vmstate_bs(NULL, errp); + bs = bdrv_all_find_vmstate_bs(NULL, NULL, errp); if (bs == NULL) { return ret; } @@ -2854,7 +2854,7 @@ int load_snapshot(const char *name, Error **errp) return -1; } - bs_vm_state = bdrv_all_find_vmstate_bs(NULL, errp); + bs_vm_state = bdrv_all_find_vmstate_bs(NULL, NULL, errp); if (!bs_vm_state) { return 
-1; } From patchwork Mon Jul 27 15:08:43 2020 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 8bit X-Patchwork-Submitter: =?utf-8?q?Daniel_P=2E_Berrang=C3=A9?= X-Patchwork-Id: 277408 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org X-Spam-Level: X-Spam-Status: No, score=-9.8 required=3.0 tests=BAYES_00,DKIM_INVALID, DKIM_SIGNED, HEADER_FROM_DIFFERENT_DOMAINS, INCLUDES_PATCH, MAILING_LIST_MULTI, SIGNED_OFF_BY, SPF_HELO_NONE, SPF_PASS autolearn=ham autolearn_force=no version=3.4.0 Received: from mail.kernel.org (mail.kernel.org [198.145.29.99]) by smtp.lore.kernel.org (Postfix) with ESMTP id 2E71BC433DF for ; Mon, 27 Jul 2020 15:15:45 +0000 (UTC) Received: from lists.gnu.org (lists.gnu.org [209.51.188.17]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by mail.kernel.org (Postfix) with ESMTPS id DC0AB20672 for ; Mon, 27 Jul 2020 15:15:44 +0000 (UTC) Authentication-Results: mail.kernel.org; dkim=fail reason="signature verification failed" (1024-bit key) header.d=redhat.com header.i=@redhat.com header.b="MFJGlN9+" DMARC-Filter: OpenDMARC Filter v1.3.2 mail.kernel.org DC0AB20672 Authentication-Results: mail.kernel.org; dmarc=fail (p=none dis=none) header.from=redhat.com Authentication-Results: mail.kernel.org; spf=pass smtp.mailfrom=qemu-devel-bounces+qemu-devel=archiver.kernel.org@nongnu.org Received: from localhost ([::1]:58046 helo=lists1p.gnu.org) by lists.gnu.org with esmtp (Exim 4.90_1) (envelope-from ) id 1k04qu-0007Pz-5P for qemu-devel@archiver.kernel.org; Mon, 27 Jul 2020 11:15:44 -0400 Received: from eggs.gnu.org ([2001:470:142:3::10]:38208) by lists.gnu.org with esmtps (TLS1.2:ECDHE_RSA_AES_256_GCM_SHA384:256) (Exim 4.90_1) (envelope-from ) id 1k04lH-00083M-Dy for qemu-devel@nongnu.org; Mon, 27 Jul 2020 11:09:55 -0400 Received: from us-smtp-delivery-1.mimecast.com ([205.139.110.120]:58632 helo=us-smtp-1.mimecast.com) by eggs.gnu.org with esmtps (TLS1.2:ECDHE_RSA_AES_256_CBC_SHA1:256) (Exim 4.90_1) (envelope-from ) id 1k04lE-0007nk-5p for qemu-devel@nongnu.org; Mon, 27 Jul 2020 11:09:54 -0400 DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=redhat.com; s=mimecast20190719; t=1595862590; h=from:from:reply-to:subject:subject:date:date:message-id:message-id: to:to:cc:cc:mime-version:mime-version:content-type:content-type: content-transfer-encoding:content-transfer-encoding: in-reply-to:in-reply-to:references:references; bh=/Zy7Mn3kcExFnf9k5vuQODWSC9AL71T7bebtFYFHT/I=; b=MFJGlN9+WKLghLtM5fOWyfT6Jhh0I5Gpl0AUn2HU3FNVjtUAS2Jb7Il67wTKxS4xvZrZY8 Pf0OKYva2RBdD1lhvN+S/ERPgFY7eZ1At0EAGlw92y4aOipjNkaYuCk8fczuGcZfWH5tQg yTCVHN7EdPfEoU3tBcKe8Hu/62T3S28= Received: from mimecast-mx01.redhat.com (mimecast-mx01.redhat.com [209.132.183.4]) (Using TLS) by relay.mimecast.com with ESMTP id us-mta-324-y5Edv29yNuC2FB5YDPMGyw-1; Mon, 27 Jul 2020 11:09:39 -0400 X-MC-Unique: y5Edv29yNuC2FB5YDPMGyw-1 Received: from smtp.corp.redhat.com (int-mx02.intmail.prod.int.phx2.redhat.com [10.5.11.12]) (using TLSv1.2 with cipher AECDH-AES256-SHA (256/256 bits)) (No client certificate requested) by mimecast-mx01.redhat.com (Postfix) with ESMTPS id C910D79EC2; Mon, 27 Jul 2020 15:09:17 +0000 (UTC) Received: from localhost.localdomain.com (unknown [10.36.110.5]) by smtp.corp.redhat.com (Postfix) with ESMTP id C2C8560BF4; Mon, 27 Jul 2020 15:09:14 +0000 (UTC) From: =?utf-8?q?Daniel_P=2E_Berrang=C3=A9?= To: qemu-devel@nongnu.org Subject: [PATCH v2 6/6] 
From patchwork Mon Jul 27 15:08:43 2020
X-Patchwork-Submitter: Daniel P. Berrangé
X-Patchwork-Id: 277408
From: Daniel P. Berrangé
To: qemu-devel@nongnu.org
Subject: [PATCH v2 6/6] migration: introduce snapshot-{save,load,delete} QMP commands
Date: Mon, 27 Jul 2020 16:08:43 +0100
Message-Id: <20200727150843.3419256-7-berrange@redhat.com>
In-Reply-To: <20200727150843.3419256-1-berrange@redhat.com>
References: <20200727150843.3419256-1-berrange@redhat.com>

savevm, loadvm and delvm are some of the few HMP commands that have
never been converted to use QMP. The primary reason for this lack of
conversion is that they block execution of the thread for as long as
they run.

Despite this downside, however, libvirt and applications using libvirt
have used these commands for as long as QMP has existed, via the
"human-monitor-command" passthrough command. In other words, while it
is clearly desirable to be able to fix the blocking problem, this is
not an immediate obstacle to real world usage.

Meanwhile there is a need for other features which involve adding new
parameters to the commands. This is possible with HMP passthrough, but
it provides no reliable way for apps to introspect features, so using
QAPI modelling is highly desirable.

This patch thus introduces new snapshot-{load,save,delete} commands to
QMP that are intended to replace the old HMP counterparts. The new
commands are given different names, because they will be using the new
QEMU job framework and thus will have diverging behaviour from the HMP
originals. It would thus be misleading to keep the same name.

While this design uses the generic job framework, the current
implementation is still blocking; the intention is that the blocking
problem will be fixed later. Nonetheless, applications using these new
commands should assume that they are asynchronous and thus wait for
the job status change event to indicate completion.

Signed-off-by: Daniel P.
Berrangé --- include/migration/snapshot.h | 10 +- migration/savevm.c | 172 +++++++++++++++++++++++++++++++++-- monitor/hmp-cmds.c | 4 +- qapi/job.json | 9 +- qapi/migration.json | 112 +++++++++++++++++++++++ replay/replay-snapshot.c | 4 +- softmmu/vl.c | 2 +- tests/qemu-iotests/310 | 125 +++++++++++++++++++++++++ tests/qemu-iotests/310.out | 0 tests/qemu-iotests/group | 1 + 10 files changed, 421 insertions(+), 18 deletions(-) create mode 100755 tests/qemu-iotests/310 create mode 100644 tests/qemu-iotests/310.out diff --git a/include/migration/snapshot.h b/include/migration/snapshot.h index c85b6ec75b..f2ed9d1e43 100644 --- a/include/migration/snapshot.h +++ b/include/migration/snapshot.h @@ -15,7 +15,13 @@ #ifndef QEMU_MIGRATION_SNAPSHOT_H #define QEMU_MIGRATION_SNAPSHOT_H -int save_snapshot(const char *name, Error **errp); -int load_snapshot(const char *name, Error **errp); +#include "qapi/qapi-builtin-types.h" + +int save_snapshot(const char *name, + const char *vmstate, strList *devices, + Error **errp); +int load_snapshot(const char *name, + const char *vmstate, strList *devices, + Error **errp); #endif diff --git a/migration/savevm.c b/migration/savevm.c index 1707fa30db..13c5a54aae 100644 --- a/migration/savevm.c +++ b/migration/savevm.c @@ -43,6 +43,8 @@ #include "qapi/error.h" #include "qapi/qapi-commands-migration.h" #include "qapi/qapi-commands-misc.h" +#include "qapi/clone-visitor.h" +#include "qapi/qapi-builtin-visit.h" #include "qapi/qmp/qerror.h" #include "qemu/error-report.h" #include "sysemu/cpus.h" @@ -2631,7 +2633,8 @@ int qemu_load_device_state(QEMUFile *f) return 0; } -int save_snapshot(const char *name, Error **errp) +int save_snapshot(const char *name, const char *vmstate, + strList *devices, Error **errp) { BlockDriverState *bs; QEMUSnapshotInfo sn1, *sn = &sn1, old_sn1, *old_sn = &old_sn1; @@ -2653,18 +2656,18 @@ int save_snapshot(const char *name, Error **errp) return ret; } - if (!bdrv_all_can_snapshot(NULL, errp)) { + if (!bdrv_all_can_snapshot(devices, errp)) { return ret; } /* Delete old snapshots of the same name */ if (name) { - if (bdrv_all_delete_snapshot(name, NULL, errp) < 0) { + if (bdrv_all_delete_snapshot(name, devices, errp) < 0) { return ret; } } - bs = bdrv_all_find_vmstate_bs(NULL, NULL, errp); + bs = bdrv_all_find_vmstate_bs(vmstate, devices, errp); if (bs == NULL) { return ret; } @@ -2730,7 +2733,7 @@ int save_snapshot(const char *name, Error **errp) aio_context_release(aio_context); aio_context = NULL; - ret = bdrv_all_create_snapshot(sn, bs, vm_state_size, NULL, errp); + ret = bdrv_all_create_snapshot(sn, bs, vm_state_size, devices, errp); if (ret < 0) { goto the_end; } @@ -2831,7 +2834,8 @@ void qmp_xen_load_devices_state(const char *filename, Error **errp) migration_incoming_state_destroy(); } -int load_snapshot(const char *name, Error **errp) +int load_snapshot(const char *name, const char *vmstate, + strList *devices, Error **errp) { BlockDriverState *bs_vm_state; QEMUSnapshotInfo sn; @@ -2846,15 +2850,15 @@ int load_snapshot(const char *name, Error **errp) return -1; } - if (!bdrv_all_can_snapshot(NULL, errp)) { + if (!bdrv_all_can_snapshot(devices, errp)) { return -1; } - ret = bdrv_all_find_snapshot(name, NULL, errp); + ret = bdrv_all_find_snapshot(name, devices, errp); if (ret < 0) { return -1; } - bs_vm_state = bdrv_all_find_vmstate_bs(NULL, NULL, errp); + bs_vm_state = bdrv_all_find_vmstate_bs(vmstate, devices, errp); if (!bs_vm_state) { return -1; } @@ -2875,7 +2879,7 @@ int load_snapshot(const char *name, Error **errp) /* Flush all 
IO requests so they don't interfere with the new state. */ bdrv_drain_all_begin(); - ret = bdrv_all_goto_snapshot(name, NULL, errp); + ret = bdrv_all_goto_snapshot(name, devices, errp); if (ret < 0) { goto err_drain; } @@ -2936,3 +2940,151 @@ bool vmstate_check_only_migratable(const VMStateDescription *vmsd) return !(vmsd && vmsd->unmigratable); } + +typedef struct SnapshotJob { + Job common; + char *tag; + char *vmstate; + strList *devices; +} SnapshotJob; + +static void qmp_snapshot_job_free(SnapshotJob *s) +{ + g_free(s->tag); + g_free(s->vmstate); + qapi_free_strList(s->devices); +} + +static int coroutine_fn snapshot_load_job_run(Job *job, Error **errp) +{ + SnapshotJob *s = container_of(job, SnapshotJob, common); + int ret; + int saved_vm_running; + + job_progress_set_remaining(&s->common, 1); + + saved_vm_running = runstate_is_running(); + vm_stop(RUN_STATE_RESTORE_VM); + + ret = load_snapshot(s->tag, s->vmstate, s->devices, errp); + if (ret == 0 && saved_vm_running) { + vm_start(); + } + + job_progress_update(&s->common, 1); + + qmp_snapshot_job_free(s); + + return ret; +} + +static int coroutine_fn snapshot_save_job_run(Job *job, Error **errp) +{ + SnapshotJob *s = container_of(job, SnapshotJob, common); + int ret; + + job_progress_set_remaining(&s->common, 1); + ret = save_snapshot(s->tag, s->vmstate, s->devices, errp); + job_progress_update(&s->common, 1); + + qmp_snapshot_job_free(s); + + return ret; +} + +static int coroutine_fn snapshot_delete_job_run(Job *job, Error **errp) +{ + SnapshotJob *s = container_of(job, SnapshotJob, common); + int ret; + + job_progress_set_remaining(&s->common, 1); + ret = bdrv_all_delete_snapshot(s->tag, s->devices, errp); + job_progress_update(&s->common, 1); + + qmp_snapshot_job_free(s); + + return ret; +} + +static const JobDriver snapshot_load_job_driver = { + .instance_size = sizeof(SnapshotJob), + .job_type = JOB_TYPE_SNAPSHOT_LOAD, + .run = snapshot_load_job_run, +}; + +static const JobDriver snapshot_save_job_driver = { + .instance_size = sizeof(SnapshotJob), + .job_type = JOB_TYPE_SNAPSHOT_SAVE, + .run = snapshot_save_job_run, +}; + +static const JobDriver snapshot_delete_job_driver = { + .instance_size = sizeof(SnapshotJob), + .job_type = JOB_TYPE_SNAPSHOT_DELETE, + .run = snapshot_delete_job_run, +}; + + +void qmp_snapshot_save(const char *job_id, + const char *tag, + bool has_vmstate, const char *vmstate, + bool has_devices, strList *devices, + Error **errp) +{ + SnapshotJob *s; + + s = job_create(job_id, &snapshot_save_job_driver, NULL, + qemu_get_aio_context(), JOB_MANUAL_DISMISS, + NULL, NULL, errp); + if (!s) { + return; + } + + s->tag = g_strdup(tag); + s->vmstate = has_vmstate ? g_strdup(vmstate) : NULL; + s->devices = has_devices ? QAPI_CLONE(strList, devices) : NULL; + + job_start(&s->common); +} + +void qmp_snapshot_load(const char *job_id, + const char *tag, + bool has_vmstate, const char *vmstate, + bool has_devices, strList *devices, + Error **errp) +{ + SnapshotJob *s; + + s = job_create(job_id, &snapshot_load_job_driver, NULL, + qemu_get_aio_context(), JOB_MANUAL_DISMISS, + NULL, NULL, errp); + if (!s) { + return; + } + + s->tag = g_strdup(tag); + s->vmstate = has_vmstate ? g_strdup(vmstate) : NULL; + s->devices = has_devices ? 
QAPI_CLONE(strList, devices) : NULL; + + job_start(&s->common); +} + +void qmp_snapshot_delete(const char *job_id, + const char *tag, + bool has_devices, strList *devices, + Error **errp) +{ + SnapshotJob *s; + + s = job_create(job_id, &snapshot_delete_job_driver, NULL, + qemu_get_aio_context(), JOB_MANUAL_DISMISS, + NULL, NULL, errp); + if (!s) { + return; + } + + s->tag = g_strdup(tag); + s->devices = has_devices ? QAPI_CLONE(strList, devices) : NULL; + + job_start(&s->common); +} diff --git a/monitor/hmp-cmds.c b/monitor/hmp-cmds.c index 90e717f0c4..2b51273720 100644 --- a/monitor/hmp-cmds.c +++ b/monitor/hmp-cmds.c @@ -1095,7 +1095,7 @@ void hmp_loadvm(Monitor *mon, const QDict *qdict) vm_stop(RUN_STATE_RESTORE_VM); - if (load_snapshot(name, &err) == 0 && saved_vm_running) { + if (load_snapshot(name, NULL, NULL, &err) == 0 && saved_vm_running) { vm_start(); } hmp_handle_error(mon, err); @@ -1105,7 +1105,7 @@ void hmp_savevm(Monitor *mon, const QDict *qdict) { Error *err = NULL; - save_snapshot(qdict_get_try_str(qdict, "name"), &err); + save_snapshot(qdict_get_try_str(qdict, "name"), NULL, NULL, &err); hmp_handle_error(mon, err); } diff --git a/qapi/job.json b/qapi/job.json index c48a0c3e34..6aa03a7aff 100644 --- a/qapi/job.json +++ b/qapi/job.json @@ -21,10 +21,17 @@ # # @amend: image options amend job type, see "x-blockdev-amend" (since 5.1) # +# @snapshot-load: snapshot load job type, see "loadvm" (since 5.2) +# +# @snapshot-save: snapshot save job type, see "savevm" (since 5.2) +# +# @snapshot-delete: snapshot delete job type, see "delvm" (since 5.2) +# # Since: 1.7 ## { 'enum': 'JobType', - 'data': ['commit', 'stream', 'mirror', 'backup', 'create', 'amend'] } + 'data': ['commit', 'stream', 'mirror', 'backup', 'create', 'amend', + 'snapshot-load', 'snapshot-save', 'snapshot-delete'] } ## # @JobStatus: diff --git a/qapi/migration.json b/qapi/migration.json index d5000558c6..0ca556c158 100644 --- a/qapi/migration.json +++ b/qapi/migration.json @@ -1621,3 +1621,115 @@ ## { 'event': 'UNPLUG_PRIMARY', 'data': { 'device-id': 'str' } } + +## +# @snapshot-save: +# +# Save a VM snapshot +# +# @job-id: identifier for the newly created job +# @tag: name of the snapshot to create. If it already +# exists it will be replaced. +# @devices: list of block device node names to save a snapshot to +# @vmstate: block device node name to save vmstate to +# +# Applications should not assume that the snapshot save is complete +# when this command returns. Completion is indicated by the job +# status. Clients can wait for the JOB_STATUS_CHANGE event. +# +# Note that the VM CPUs will be paused during the time it takes to +# save the snapshot +# +# Returns: nothing +# +# Example: +# +# -> { "execute": "snapshot-save", +# "data": { +# "job-id": "snapsave0", +# "tag": "my-snap", +# "vmstate": "disk0", +# "devices": ["disk0", "disk1"] +# } +# } +# <- { "return": { } } +# +# Since: 5.2 +## +{ 'command': 'snapshot-save', + 'data': { 'job-id': 'str', + 'tag': 'str', + '*vmstate': 'str', + '*devices': ['str'] } } + +## +# @snapshot-load: +# +# Load a VM snapshot +# +# @job-id: identifier for the newly created job +# @tag: name of the snapshot to load. +# @devices: list of block device node names to load a snapshot from +# @vmstate: block device node name to load vmstate from +# +# Applications should not assume that the snapshot load is complete +# when this command returns. Completion is indicated by the job +# status. Clients can wait for the JOB_STATUS_CHANGE event. 
+# +# Returns: nothing +# +# Example: +# +# -> { "execute": "snapshot-load", +# "data": { +# "job-id": "snapload0", +# "tag": "my-snap", +# "vmstate": "disk0", +# "devices": ["disk0", "disk1"] +# } +# } +# <- { "return": { } } +# +# Since: 5.2 +## +{ 'command': 'snapshot-load', + 'data': { 'job-id': 'str', + 'tag': 'str', + '*vmstate': 'str', + '*devices': ['str'] } } + +## +# @snapshot-delete: +# +# Delete a VM snapshot +# +# @job-id: identifier for the newly created job +# @tag: name of the snapshot to delete. +# @devices: list of block device node names to delete a snapshot from +# +# Applications should not assume that the snapshot load is complete +# when this command returns. Completion is indicated by the job +# status. Clients can wait for the JOB_STATUS_CHANGE event. +# +# Note that the VM CPUs will be paused during the time it takes to +# delete the snapshot +# +# Returns: nothing +# +# Example: +# +# -> { "execute": "snapshot-delete", +# "data": { +# "job-id": "snapdelete0", +# "tag": "my-snap", +# "devices": ["disk0", "disk1"] +# } +# } +# <- { "return": { } } +# +# Since: 5.2 +## +{ 'command': 'snapshot-delete', + 'data': { 'job-id': 'str', + 'tag': 'str', + '*devices': ['str'] } } diff --git a/replay/replay-snapshot.c b/replay/replay-snapshot.c index e26fa4c892..f0f45a4f24 100644 --- a/replay/replay-snapshot.c +++ b/replay/replay-snapshot.c @@ -77,13 +77,13 @@ void replay_vmstate_init(void) if (replay_snapshot) { if (replay_mode == REPLAY_MODE_RECORD) { - if (save_snapshot(replay_snapshot, &err) != 0) { + if (save_snapshot(replay_snapshot, NULL, NULL, &err) != 0) { error_report_err(err); error_report("Could not create snapshot for icount record"); exit(1); } } else if (replay_mode == REPLAY_MODE_PLAY) { - if (load_snapshot(replay_snapshot, &err) != 0) { + if (load_snapshot(replay_snapshot, NULL, NULL, &err) != 0) { error_report_err(err); error_report("Could not load snapshot for icount replay"); exit(1); diff --git a/softmmu/vl.c b/softmmu/vl.c index 3416241557..9d2d38360a 100644 --- a/softmmu/vl.c +++ b/softmmu/vl.c @@ -4456,7 +4456,7 @@ void qemu_init(int argc, char **argv, char **envp) register_global_state(); if (loadvm) { Error *local_err = NULL; - if (load_snapshot(loadvm, &local_err) < 0) { + if (load_snapshot(loadvm, NULL, NULL, &local_err) < 0) { error_report_err(local_err); autostart = 0; exit(1); diff --git a/tests/qemu-iotests/310 b/tests/qemu-iotests/310 new file mode 100755 index 0000000000..b84b3a6dd6 --- /dev/null +++ b/tests/qemu-iotests/310 @@ -0,0 +1,125 @@ +#!/usr/bin/env bash +# +# Test which nodes are involved in internal snapshots +# +# Copyright (C) 2019 Red Hat, Inc. +# +# This program is free software; you can redistribute it and/or modify +# it under the terms of the GNU General Public License as published by +# the Free Software Foundation; either version 2 of the License, or +# (at your option) any later version. +# +# This program is distributed in the hope that it will be useful, +# but WITHOUT ANY WARRANTY; without even the implied warranty of +# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the +# GNU General Public License for more details. +# +# You should have received a copy of the GNU General Public License +# along with this program. If not, see . +# + +# creator +owner=berrange@redhat.com + +seq=`basename $0` +echo "QA output created by $seq" + +status=1 # failure is the default! 
+ +_cleanup() +{ + _cleanup_test_img + rm -f "$SOCK_DIR/nbd" +} +trap "_cleanup; exit \$status" 0 1 2 3 15 + +# get standard environment, filters and checks +. ./common.rc +. ./common.filter + +_supported_fmt qcow2 +_supported_proto file +_supported_os Linux +_require_drivers copy-on-read + +# Internal snapshots are (currently) impossible with refcount_bits=1, +# and generally impossible with external data files +_unsupported_imgopts 'refcount_bits=1[^0-9]' data_file + +_require_devices virtio-blk + +do_run_qemu() +{ + echo Testing: "$@" + ( + if ! test -t 0; then + while read cmd; do + echo $cmd + done + fi + echo quit + ) | $QEMU -nographic -qmp stdio -nodefaults "$@" + echo +} + +run_qemu() +{ + do_run_qemu "$@" 2>&1 | _filter_testdir | _filter_qemu | _filter_hmp | + _filter_generated_node_ids | _filter_imgfmt | _filter_vmstate_size +} + +size=128M + +run_test() +{ + if [ -n "$BACKING_FILE" ]; then + _make_test_img -b "$BACKING_FILE" -F $IMGFMT $size + else + _make_test_img $size + fi + run_qemu "$@" <