[v5,10/10] migration: introduce snapshot-{save, load, delete} QMP commands

Message ID 20201002162747.3123597-11-berrange@redhat.com
State New
Series migration: bring improved savevm/loadvm/delvm to QMP

Commit Message

Daniel P. Berrangé Oct. 2, 2020, 4:27 p.m. UTC
savevm, loadvm and delvm are some of the few HMP commands that have never
been converted to use QMP. The reasons for the lack of conversion are
that they blocked execution of the event thread, and the semantics
around choice of disks were ill-defined.

Despite these shortcomings, however, libvirt and applications using libvirt
have used these commands for as long as QMP has existed, via the
"human-monitor-command" passthrough command. IOW, while it is clearly
desirable to be able to fix the problems, they are not a blocker to
all real world usage.
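
For reference, such a passthrough invocation looks roughly like the
following (the tag name is an arbitrary example); the return value is
just the raw text the HMP command printed, so errors can only be
detected by scraping that string:

  -> { "execute": "human-monitor-command",
       "arguments": { "command-line": "savevm my-snap" } }
  <- { "return": "" }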

Meanwhile there is a need for other features which involve adding new
parameters to the commands. This is possible with HMP passthrough, but
it provides no reliable way for apps to introspect features, so using
QAPI modelling is highly desirable.
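
As an illustration of the difference, once the commands are modelled in
QAPI a client can discover them from the schema, e.g. with an exchange
along these lines (heavily abbreviated sketch):

  -> { "execute": "query-qmp-schema" }
  <- { "return": [ ...,
         { "name": "snapshot-save", "meta-type": "command", ... },
         ... ] }

No equivalent discovery exists for text passed through
"human-monitor-command".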

This patch thus introduces new snapshot-{load,save,delete} commands to
QMP that are intended to replace the old HMP counterparts. The new
commands are given different names, because they will be using the new
QEMU job framework and thus will have diverging behaviour from the HMP
originals. It would thus be misleading to keep the same name.

While this design uses the generic job framework, the current
implementation is still blocking. The intention is that the blocking
problem will be fixed later. Nonetheless, applications using these new
commands should assume that they are asynchronous and thus wait for
the job status change event to indicate completion.

In addition to using the job framework, the new commands require the
caller to be explicit about all the block device nodes used in the
snapshot operations, with no built-in default heuristics in use.
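
As an illustrative sketch, the expected usage pattern is thus (job id,
tag and node names are arbitrary examples):

  -> { "execute": "snapshot-save",
       "arguments": { "job-id": "snapsave0",
                      "tag": "my-snap",
                      "vmstate": "disk0",
                      "devices": ["disk0", "disk1"] } }
  <- { "return": {} }
  <- { "event": "JOB_STATUS_CHANGE",
       "data": { "id": "snapsave0", "status": "created" }, ... }
      ... further JOB_STATUS_CHANGE events ...
  <- { "event": "JOB_STATUS_CHANGE",
       "data": { "id": "snapsave0", "status": "concluded" }, ... }
  -> { "execute": "query-jobs" }
  <- { "return": [ { "id": "snapsave0", "type": "snapshot-save",
                     "status": "concluded", ... } ] }
  -> { "execute": "job-dismiss", "arguments": { "id": "snapsave0" } }
  <- { "return": {} }

Any failure is reported via the "error" field in the query-jobs reply
rather than as a QMP command error.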

Signed-off-by: Daniel P. Berrangé <berrange@redhat.com>
---
 migration/savevm.c         | 184 +++++++++++++++++
 qapi/job.json              |   9 +-
 qapi/migration.json        | 120 +++++++++++
 tests/qemu-iotests/310     | 385 +++++++++++++++++++++++++++++++++++
 tests/qemu-iotests/310.out | 407 +++++++++++++++++++++++++++++++++++++
 tests/qemu-iotests/group   |   1 +
 6 files changed, 1105 insertions(+), 1 deletion(-)
 create mode 100755 tests/qemu-iotests/310
 create mode 100644 tests/qemu-iotests/310.out

Comments

Eric Blake Oct. 2, 2020, 7:46 p.m. UTC | #1
On 10/2/20 11:27 AM, Daniel P. Berrangé wrote:
> savevm, loadvm and delvm are some of the few HMP commands that have never
> been converted to use QMP. The reasons for the lack of conversion are
> that they blocked execution of the event thread, and the semantics
> around choice of disks were ill-defined.
> 
> Despite this downside, however, libvirt and applications using libvirt
> have used these commands for as long as QMP has existed, via the
> "human-monitor-command" passthrough command. IOW, while it is clearly
> desirable to be able to fix the problems, they are not a blocker to
> all real world usage.
> 
> Meanwhile there is a need for other features which involve adding new
> parameters to the commands. This is possible with HMP passthrough, but
> it provides no reliable way for apps to introspect features, so using
> QAPI modelling is highly desirable.
> 
> This patch thus introduces new snapshot-{load,save,delete} commands to
> QMP that are intended to replace the old HMP counterparts. The new
> commands are given different names, because they will be using the new
> QEMU job framework and thus will have diverging behaviour from the HMP
> originals. It would thus be misleading to keep the same name.
> 
> While this design uses the generic job framework, the current impl is
> still blocking. The intention that the blocking problem is fixed later.
> None the less applications using these new commands should assume that
> they are asynchronous and thus wait for the job status change event to
> indicate completion.
> 
> In addition to using the job framework, the new commands require the
> caller to be explicit about all the block device nodes used in the
> snapshot operations, with no built-in default heuristics in use.
> 
> Signed-off-by: Daniel P. Berrangé <berrange@redhat.com>
> ---

> +++ b/qapi/job.json
> @@ -22,10 +22,17 @@
>  #
>  # @amend: image options amend job type, see "x-blockdev-amend" (since 5.1)
>  #
> +# @snapshot-load: snapshot load job type, see "snapshot-load" (since 5.2)
> +#
> +# @snapshot-save: snapshot save job type, see "snapshot-save" (since 5.2)
> +#
> +# @snapshot-delete: snapshot delete job type, see "snapshot-delete" (since 5.2)
> +#
>  # Since: 1.7
>  ##
>  { 'enum': 'JobType',
> -  'data': ['commit', 'stream', 'mirror', 'backup', 'create', 'amend'] }
> +  'data': ['commit', 'stream', 'mirror', 'backup', 'create', 'amend',
> +           'snapshot-load', 'snapshot-save', 'snapshot-delete'] }
>  
>  ##
>  # @JobStatus:
> diff --git a/qapi/migration.json b/qapi/migration.json
> index 7f5e6fd681..d2bd551ad9 100644
> --- a/qapi/migration.json
> +++ b/qapi/migration.json
> @@ -1787,3 +1787,123 @@
>  # Since: 5.2
>  ##
>  { 'command': 'query-dirty-rate', 'returns': 'DirtyRateInfo' }
> +
> +##
> +# @snapshot-save:
> +#
> +# Save a VM snapshot
> +#
> +# @job-id: identifier for the newly created job
> +# @tag: name of the snapshot to create
> +# @devices: list of block device node names to save a snapshot to
> +# @vmstate: block device node name to save vmstate to

Here, you document vmstate last,...

> +#
> +# Applications should not assume that the snapshot save is complete
> +# when this command returns. The job commands / events must be used
> +# to determine completion and to fetch details of any errors that arise.
> +#
> +# Note that the VM CPUs will be paused during the time it takes to
> +# save the snapshot

"will be", or "may be"?  As you stated above, we may be able to lift the
synchronous limitations down the road, while still maintaining the
present interface of using this command to start the job and waiting on
the job id until it is finished, at which point the CPUs might not need
to be paused as much.

> +#
> +# It is strongly recommended that @devices contain all writable
> +# block device nodes if a consistent snapshot is required.
> +#
> +# If @tag already exists, an error will be reported
> +#
> +# Returns: nothing
> +#
> +# Example:
> +#
> +# -> { "execute": "snapshot-save",
> +#      "data": {
> +#         "job-id": "snapsave0",
> +#         "tag": "my-snap",
> +#         "vmstate": "disk0",
> +#         "devices": ["disk0", "disk1"]

...here vmstate occurs before devices.  I don't know if our doc
generator cares about inconsistent ordering.

> +#      }
> +#    }
> +# <- { "return": { } }
> +#
> +# Since: 5.2
> +##
> +{ 'command': 'snapshot-save',
> +  'data': { 'job-id': 'str',
> +            'tag': 'str',
> +            'vmstate': 'str',
> +            'devices': ['str'] } }
> +
> +##
> +# @snapshot-load:
> +#
> +# Load a VM snapshot
> +#
> +# @job-id: identifier for the newly created job
> +# @tag: name of the snapshot to load.
> +# @devices: list of block device node names to load a snapshot from
> +# @vmstate: block device node name to load vmstate from
> +#
> +# Applications should not assume that the snapshot save is complete
> +# when this command returns. The job commands / events must be used
> +# to determine completion and to fetch details of any errors that arise.

s/save/load/

> +#
> +# Note that the VM CPUs will be paused during the time it takes to
> +# save the snapshot

s/save/load/

But while pausing CPUs during save is annoying, pausing CPUs during
restore makes sense (after all, executing on stale data that will still
be updated during the restore is just wasted execution).


> +#
> +# It is strongly recommended that @devices contain all writable
> +# block device nodes that can have changed since the original
> +# @snapshot-save command execution.
> +#
> +# Returns: nothing
> +#
> +# Example:
> +#
> +# -> { "execute": "snapshot-load",
> +#      "data": {
> +#         "job-id": "snapload0",
> +#         "tag": "my-snap",
> +#         "vmstate": "disk0",
> +#         "devices": ["disk0", "disk1"]
> +#      }
> +#    }
> +# <- { "return": { } }
> +#
> +# Since: 5.2
> +##
> +{ 'command': 'snapshot-load',
> +  'data': { 'job-id': 'str',
> +            'tag': 'str',
> +            'vmstate': 'str',
> +            'devices': ['str'] } }
> +
> +##
> +# @snapshot-delete:
> +#
> +# Delete a VM snapshot
> +#
> +# @job-id: identifier for the newly created job
> +# @tag: name of the snapshot to delete.
> +# @devices: list of block device node names to delete a snapshot from
> +#
> +# Applications should not assume that the snapshot save is complete
> +# when this command returns. The job commands / events must be used
> +# to determine completion and to fetch details of any errors that arise.

Do we have a query- command handy to easily learn which snapshot names
are even available to attempt deletion on?  If not, that's worth a
separate patch.

> +#
> +# Returns: nothing
> +#
> +# Example:
> +#
> +# -> { "execute": "snapshot-delete",
> +#      "data": {
> +#         "job-id": "snapdelete0",
> +#         "tag": "my-snap",
> +#         "devices": ["disk0", "disk1"]
> +#      }
> +#    }
> +# <- { "return": { } }
> +#
> +# Since: 5.2
> +##

> +++ b/tests/qemu-iotests/group
> @@ -291,6 +291,7 @@
>  277 rw quick
>  279 rw backing quick
>  280 rw migration quick
> +310 rw quick
>  281 rw quick
>  282 rw img quick
>  283 auto quick

What's wrong with sorted order? I get the renumbering to appease a merge
conflict, but it also requires rearrangement ;)

-- 
Eric Blake, Principal Software Engineer
Red Hat, Inc.           +1-919-301-3226
Virtualization:  qemu.org | libvirt.org
Markus Armbruster Oct. 5, 2020, 7:26 a.m. UTC | #2
Eric Blake <eblake@redhat.com> writes:

> On 10/2/20 11:27 AM, Daniel P. Berrangé wrote:
>> savevm, loadvm and delvm are some of the few HMP commands that have never
>> been converted to use QMP. The reasons for the lack of conversion are
>> that they blocked execution of the event thread, and the semantics
>> around choice of disks were ill-defined.
>> 
>> Despite this downside, however, libvirt and applications using libvirt
>> have used these commands for as long as QMP has existed, via the
>> "human-monitor-command" passthrough command. IOW, while it is clearly
>> desirable to be able to fix the problems, they are not a blocker to
>> all real world usage.
>> 
>> Meanwhile there is a need for other features which involve adding new
>> parameters to the commands. This is possible with HMP passthrough, but
>> it provides no reliable way for apps to introspect features, so using
>> QAPI modelling is highly desirable.
>> 
>> This patch thus introduces new snapshot-{load,save,delete} commands to
>> QMP that are intended to replace the old HMP counterparts. The new
>> commands are given different names, because they will be using the new
>> QEMU job framework and thus will have diverging behaviour from the HMP
>> originals. It would thus be misleading to keep the same name.
>> 
>> While this design uses the generic job framework, the current impl is
>> still blocking. The intention that the blocking problem is fixed later.
>> None the less applications using these new commands should assume that
>> they are asynchronous and thus wait for the job status change event to
>> indicate completion.
>> 
>> In addition to using the job framework, the new commands require the
>> caller to be explicit about all the block device nodes used in the
>> snapshot operations, with no built-in default heuristics in use.
>> 
>> Signed-off-by: Daniel P. Berrangé <berrange@redhat.com>
>> ---
>
>> +++ b/qapi/job.json
>> @@ -22,10 +22,17 @@
>>  #
>>  # @amend: image options amend job type, see "x-blockdev-amend" (since 5.1)
>>  #
>> +# @snapshot-load: snapshot load job type, see "snapshot-load" (since 5.2)
>> +#
>> +# @snapshot-save: snapshot save job type, see "snapshot-save" (since 5.2)
>> +#
>> +# @snapshot-delete: snapshot delete job type, see "snapshot-delete" (since 5.2)
>> +#
>>  # Since: 1.7
>>  ##
>>  { 'enum': 'JobType',
>> -  'data': ['commit', 'stream', 'mirror', 'backup', 'create', 'amend'] }
>> +  'data': ['commit', 'stream', 'mirror', 'backup', 'create', 'amend',
>> +           'snapshot-load', 'snapshot-save', 'snapshot-delete'] }
>>  
>>  ##
>>  # @JobStatus:
>> diff --git a/qapi/migration.json b/qapi/migration.json
>> index 7f5e6fd681..d2bd551ad9 100644
>> --- a/qapi/migration.json
>> +++ b/qapi/migration.json
>> @@ -1787,3 +1787,123 @@
>>  # Since: 5.2
>>  ##
>>  { 'command': 'query-dirty-rate', 'returns': 'DirtyRateInfo' }
>> +
>> +##
>> +# @snapshot-save:
>> +#
>> +# Save a VM snapshot
>> +#
>> +# @job-id: identifier for the newly created job
>> +# @tag: name of the snapshot to create
>> +# @devices: list of block device node names to save a snapshot to
>> +# @vmstate: block device node name to save vmstate to
>
> Here, you document vmstate last,...
>
>> +#
>> +# Applications should not assume that the snapshot save is complete
>> +# when this command returns. The job commands / events must be used
>> +# to determine completion and to fetch details of any errors that arise.
>> +#
>> +# Note that the VM CPUs will be paused during the time it takes to
>> +# save the snapshot

End the sentence with a period, please.

> "will be", or "may be"?  As you stated above, we may be able to lift the

> synchronous limitations down the road, while still maintaining the

> present interface of using this command to start the job and waiting on

> the job id until it is finished, at which point the CPUs might not need

> to be paused as much.


Could also add a sentence like "This may change in the future".

>> +#
>> +# It is strongly recommended that @devices contain all writable
>> +# block device nodes if a consistent snapshot is required.
>> +#
>> +# If @tag already exists, an error will be reported
>> +#
>> +# Returns: nothing
>> +#
>> +# Example:
>> +#
>> +# -> { "execute": "snapshot-save",
>> +#      "data": {
>> +#         "job-id": "snapsave0",
>> +#         "tag": "my-snap",
>> +#         "vmstate": "disk0",
>> +#         "devices": ["disk0", "disk1"]
>
> ...here vmstate occurs before devices.  I don't know if our doc
> generator cares about inconsistent ordering.

It does not.  It's questionable style, though.  Easy enough to tidy up.

>> +#      }
>> +#    }
>> +# <- { "return": { } }
>> +#
>> +# Since: 5.2
>> +##
>> +{ 'command': 'snapshot-save',
>> +  'data': { 'job-id': 'str',
>> +            'tag': 'str',
>> +            'vmstate': 'str',
>> +            'devices': ['str'] } }
>> +
>> +##
>> +# @snapshot-load:
>> +#
>> +# Load a VM snapshot
>> +#
>> +# @job-id: identifier for the newly created job
>> +# @tag: name of the snapshot to load.
>> +# @devices: list of block device node names to load a snapshot from
>> +# @vmstate: block device node name to load vmstate from
>> +#
>> +# Applications should not assume that the snapshot save is complete
>> +# when this command returns. The job commands / events must be used
>> +# to determine completion and to fetch details of any errors that arise.
>
> s/save/load/
>
>> +#
>> +# Note that the VM CPUs will be paused during the time it takes to
>> +# save the snapshot

End the sentence with a period, please.

> s/save/load/
>
> But while pausing CPUs during save is annoying, pausing CPUs during
> restore makes sense (after all, executing on stale data that will still
> be updated during the restore is just wasted execution).
>
>
>> +#
>> +# It is strongly recommended that @devices contain all writable
>> +# block device nodes that can have changed since the original
>> +# @snapshot-save command execution.
>> +#
>> +# Returns: nothing
>> +#
>> +# Example:
>> +#
>> +# -> { "execute": "snapshot-load",
>> +#      "data": {
>> +#         "job-id": "snapload0",
>> +#         "tag": "my-snap",
>> +#         "vmstate": "disk0",
>> +#         "devices": ["disk0", "disk1"]
>> +#      }
>> +#    }
>> +# <- { "return": { } }
>> +#
>> +# Since: 5.2
>> +##
>> +{ 'command': 'snapshot-load',
>> +  'data': { 'job-id': 'str',
>> +            'tag': 'str',
>> +            'vmstate': 'str',
>> +            'devices': ['str'] } }
>> +
>> +##
>> +# @snapshot-delete:
>> +#
>> +# Delete a VM snapshot
>> +#
>> +# @job-id: identifier for the newly created job
>> +# @tag: name of the snapshot to delete.
>> +# @devices: list of block device node names to delete a snapshot from
>> +#
>> +# Applications should not assume that the snapshot save is complete
>> +# when this command returns. The job commands / events must be used
>> +# to determine completion and to fetch details of any errors that arise.
>
> Do we have a query- command handy to easily learn which snapshot names
> are even available to attempt deletion on?  If not, that's worth a
> separate patch.

Oh, I missed that one.  It's the QMP equivalent to "info snapshots", and
it is required to finish the job.  Since we're at v5, I'd be okay with a
follow-up patch, as long as it is done for 5.2.

>> +#
>> +# Returns: nothing
>> +#
>> +# Example:
>> +#
>> +# -> { "execute": "snapshot-delete",
>> +#      "data": {
>> +#         "job-id": "snapdelete0",
>> +#         "tag": "my-snap",
>> +#         "devices": ["disk0", "disk1"]
>> +#      }
>> +#    }
>> +# <- { "return": { } }
>> +#
>> +# Since: 5.2
>> +##

With the doc comment nits addressed, and with new command
query-snapshots either included or promised for later in 5.2:
Acked-by: Markus Armbruster <armbru@redhat.com>


>
>> +++ b/tests/qemu-iotests/group
>> @@ -291,6 +291,7 @@
>>  277 rw quick
>>  279 rw backing quick
>>  280 rw migration quick
>> +310 rw quick
>>  281 rw quick
>>  282 rw img quick
>>  283 auto quick
>
> What's wrong with sorted order? I get the renumbering to appease a merge
> conflict, but it also requires rearrangement ;)
Daniel P. Berrangé Oct. 5, 2020, 11:36 a.m. UTC | #3
On Mon, Oct 05, 2020 at 09:26:54AM +0200, Markus Armbruster wrote:
> Eric Blake <eblake@redhat.com> writes:
> 
> > On 10/2/20 11:27 AM, Daniel P. Berrangé wrote:
> >
> > Do we have a query- command handy to easily learn which snapshot names
> > are even available to attempt deletion on?  If not, that's worth a
> > separate patch.
> 
> Oh, I missed that one.  It's the QMP equivalent to "info snapshots", and
> it is required to finish the job.  Since we're at v5, I'd be okay with a
> follow-up patch, as long as it is done for 5.2.

"query-named-block-nodes" returns BlockDeviceInfo struct, which
contains ImageInfo which contains an array of SnapshotInfo. So
we don't need any new query command.
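
For example, abbreviated and with illustrative field values:

  -> { "execute": "query-named-block-nodes" }
  <- { "return": [
         { "node-name": "diskfmt0", "drv": "qcow2", ...,
           "image": { "filename": "disk0.qcow2", ...,
             "snapshots": [
               { "id": "1", "name": "snap0", "vm-state-size": 123456,
                 "date-sec": 1601652467, "date-nsec": 0,
                 "vm-clock-sec": 12, "vm-clock-nsec": 0 } ] } },
         ... ] }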


Regards,
Daniel
-- 
|: https://berrange.com      -o-    https://www.flickr.com/photos/dberrange :|
|: https://libvirt.org         -o-            https://fstop138.berrange.com :|
|: https://entangle-photo.org    -o-    https://www.instagram.com/dberrange :|
Daniel P. Berrangé Oct. 5, 2020, 11:37 a.m. UTC | #4
On Fri, Oct 02, 2020 at 02:46:07PM -0500, Eric Blake wrote:
> On 10/2/20 11:27 AM, Daniel P. Berrangé wrote:
> > savevm, loadvm and delvm are some of the few HMP commands that have never
> > been converted to use QMP. The reasons for the lack of conversion are
> > that they blocked execution of the event thread, and the semantics
> > around choice of disks were ill-defined.
> > 
> > Despite this downside, however, libvirt and applications using libvirt
> > have used these commands for as long as QMP has existed, via the
> > "human-monitor-command" passthrough command. IOW, while it is clearly
> > desirable to be able to fix the problems, they are not a blocker to
> > all real world usage.
> > 
> > Meanwhile there is a need for other features which involve adding new
> > parameters to the commands. This is possible with HMP passthrough, but
> > it provides no reliable way for apps to introspect features, so using
> > QAPI modelling is highly desirable.
> > 
> > This patch thus introduces new snapshot-{load,save,delete} commands to
> > QMP that are intended to replace the old HMP counterparts. The new
> > commands are given different names, because they will be using the new
> > QEMU job framework and thus will have diverging behaviour from the HMP
> > originals. It would thus be misleading to keep the same name.
> > 
> > While this design uses the generic job framework, the current impl is
> > still blocking. The intention that the blocking problem is fixed later.
> > None the less applications using these new commands should assume that
> > they are asynchronous and thus wait for the job status change event to
> > indicate completion.
> > 
> > In addition to using the job framework, the new commands require the
> > caller to be explicit about all the block device nodes used in the
> > snapshot operations, with no built-in default heuristics in use.
> > 
> > Signed-off-by: Daniel P. Berrangé <berrange@redhat.com>
> > ---
> 
> > +++ b/tests/qemu-iotests/group
> > @@ -291,6 +291,7 @@
> >  277 rw quick
> >  279 rw backing quick
> >  280 rw migration quick
> > +310 rw quick
> >  281 rw quick
> >  282 rw img quick
> >  283 auto quick
> 
> What's wrong with sorted order? I get the renumbering to appease a merge
> conflict, but it also requires rearrangement ;)


This file is a conflict magnet when rebasing to git master if you just
append to the end of it. So I picked a number much bigger than the
current max, and stuffed the new entry in the middle to avoid rebase
conflicts.



Regards,
Daniel
-- 
|: https://berrange.com      -o-    https://www.flickr.com/photos/dberrange :|
|: https://libvirt.org         -o-            https://fstop138.berrange.com :|
|: https://entangle-photo.org    -o-    https://www.instagram.com/dberrange :|
Dr. David Alan Gilbert Oct. 6, 2020, 5:36 p.m. UTC | #5
* Eric Blake (eblake@redhat.com) wrote:
> On 10/2/20 11:27 AM, Daniel P. Berrangé wrote:
> > savevm, loadvm and delvm are some of the few HMP commands that have never
> > been converted to use QMP. The reasons for the lack of conversion are
> > that they blocked execution of the event thread, and the semantics
> > around choice of disks were ill-defined.
> > 
> > Despite this downside, however, libvirt and applications using libvirt
> > have used these commands for as long as QMP has existed, via the
> > "human-monitor-command" passthrough command. IOW, while it is clearly
> > desirable to be able to fix the problems, they are not a blocker to
> > all real world usage.
> > 
> > Meanwhile there is a need for other features which involve adding new
> > parameters to the commands. This is possible with HMP passthrough, but
> > it provides no reliable way for apps to introspect features, so using
> > QAPI modelling is highly desirable.
> > 
> > This patch thus introduces new snapshot-{load,save,delete} commands to
> > QMP that are intended to replace the old HMP counterparts. The new
> > commands are given different names, because they will be using the new
> > QEMU job framework and thus will have diverging behaviour from the HMP
> > originals. It would thus be misleading to keep the same name.
> > 
> > While this design uses the generic job framework, the current impl is
> > still blocking. The intention that the blocking problem is fixed later.
> > None the less applications using these new commands should assume that
> > they are asynchronous and thus wait for the job status change event to
> > indicate completion.
> > 
> > In addition to using the job framework, the new commands require the
> > caller to be explicit about all the block device nodes used in the
> > snapshot operations, with no built-in default heuristics in use.
> > 
> > Signed-off-by: Daniel P. Berrangé <berrange@redhat.com>
> > ---
> 
> > +++ b/qapi/job.json
> > @@ -22,10 +22,17 @@
> >  #
> >  # @amend: image options amend job type, see "x-blockdev-amend" (since 5.1)
> >  #
> > +# @snapshot-load: snapshot load job type, see "snapshot-load" (since 5.2)
> > +#
> > +# @snapshot-save: snapshot save job type, see "snapshot-save" (since 5.2)
> > +#
> > +# @snapshot-delete: snapshot delete job type, see "snapshot-delete" (since 5.2)
> > +#
> >  # Since: 1.7
> >  ##
> >  { 'enum': 'JobType',
> > -  'data': ['commit', 'stream', 'mirror', 'backup', 'create', 'amend'] }
> > +  'data': ['commit', 'stream', 'mirror', 'backup', 'create', 'amend',
> > +           'snapshot-load', 'snapshot-save', 'snapshot-delete'] }
> >  
> >  ##
> >  # @JobStatus:
> > diff --git a/qapi/migration.json b/qapi/migration.json
> > index 7f5e6fd681..d2bd551ad9 100644
> > --- a/qapi/migration.json
> > +++ b/qapi/migration.json
> > @@ -1787,3 +1787,123 @@
> >  # Since: 5.2
> >  ##
> >  { 'command': 'query-dirty-rate', 'returns': 'DirtyRateInfo' }
> > +
> > +##
> > +# @snapshot-save:
> > +#
> > +# Save a VM snapshot
> > +#
> > +# @job-id: identifier for the newly created job
> > +# @tag: name of the snapshot to create
> > +# @devices: list of block device node names to save a snapshot to
> > +# @vmstate: block device node name to save vmstate to
> 
> Here, you document vmstate last,...
> 
> > +#
> > +# Applications should not assume that the snapshot save is complete
> > +# when this command returns. The job commands / events must be used
> > +# to determine completion and to fetch details of any errors that arise.
> > +#
> > +# Note that the VM CPUs will be paused during the time it takes to
> > +# save the snapshot
> 
> "will be", or "may be"?  As you stated above, we may be able to lift the
> synchronous limitations down the road, while still maintaining the
> present interface of using this command to start the job and waiting on
> the job id until it is finished, at which point the CPUs might not need
> to be paused as much.
> 
> > +#
> > +# It is strongly recommended that @devices contain all writable
> > +# block device nodes if a consistent snapshot is required.
> > +#
> > +# If @tag already exists, an error will be reported
> > +#
> > +# Returns: nothing
> > +#
> > +# Example:
> > +#
> > +# -> { "execute": "snapshot-save",
> > +#      "data": {
> > +#         "job-id": "snapsave0",
> > +#         "tag": "my-snap",
> > +#         "vmstate": "disk0",
> > +#         "devices": ["disk0", "disk1"]
> 
> ...here vmstate occurs before devices.  I don't know if our doc
> generator cares about inconsistent ordering.
> 
> > +#      }
> > +#    }
> > +# <- { "return": { } }
> > +#
> > +# Since: 5.2
> > +##
> > +{ 'command': 'snapshot-save',
> > +  'data': { 'job-id': 'str',
> > +            'tag': 'str',
> > +            'vmstate': 'str',
> > +            'devices': ['str'] } }
> > +
> > +##
> > +# @snapshot-load:
> > +#
> > +# Load a VM snapshot
> > +#
> > +# @job-id: identifier for the newly created job
> > +# @tag: name of the snapshot to load.
> > +# @devices: list of block device node names to load a snapshot from
> > +# @vmstate: block device node name to load vmstate from
> > +#
> > +# Applications should not assume that the snapshot save is complete
> > +# when this command returns. The job commands / events must be used
> > +# to determine completion and to fetch details of any errors that arise.
> 
> s/save/load/
> 
> > +#
> > +# Note that the VM CPUs will be paused during the time it takes to
> > +# save the snapshot
> 
> s/save/load/
> 
> But while pausing CPUs during save is annoying, pausing CPUs during
> restore makes sense (after all, executing on stale data that will still
> be updated during the restore is just wasted execution).

Note that there are other snapshotting schemes that can do this more
dynamically and page/load the state on demand - a rapid resume from
snapshot like that is quite attractive.

Dave

> 
> > +#
> > +# It is strongly recommended that @devices contain all writable
> > +# block device nodes that can have changed since the original
> > +# @snapshot-save command execution.
> > +#
> > +# Returns: nothing
> > +#
> > +# Example:
> > +#
> > +# -> { "execute": "snapshot-load",
> > +#      "data": {
> > +#         "job-id": "snapload0",
> > +#         "tag": "my-snap",
> > +#         "vmstate": "disk0",
> > +#         "devices": ["disk0", "disk1"]
> > +#      }
> > +#    }
> > +# <- { "return": { } }
> > +#
> > +# Since: 5.2
> > +##
> > +{ 'command': 'snapshot-load',
> > +  'data': { 'job-id': 'str',
> > +            'tag': 'str',
> > +            'vmstate': 'str',
> > +            'devices': ['str'] } }
> > +
> > +##
> > +# @snapshot-delete:
> > +#
> > +# Delete a VM snapshot
> > +#
> > +# @job-id: identifier for the newly created job
> > +# @tag: name of the snapshot to delete.
> > +# @devices: list of block device node names to delete a snapshot from
> > +#
> > +# Applications should not assume that the snapshot save is complete
> > +# when this command returns. The job commands / events must be used
> > +# to determine completion and to fetch details of any errors that arise.
> 
> Do we have a query- command handy to easily learn which snapshot names
> are even available to attempt deletion on?  If not, that's worth a
> separate patch.
> 
> > +#
> > +# Returns: nothing
> > +#
> > +# Example:
> > +#
> > +# -> { "execute": "snapshot-delete",
> > +#      "data": {
> > +#         "job-id": "snapdelete0",
> > +#         "tag": "my-snap",
> > +#         "devices": ["disk0", "disk1"]
> > +#      }
> > +#    }
> > +# <- { "return": { } }
> > +#
> > +# Since: 5.2
> > +##
> 
> > +++ b/tests/qemu-iotests/group
> > @@ -291,6 +291,7 @@
> >  277 rw quick
> >  279 rw backing quick
> >  280 rw migration quick
> > +310 rw quick
> >  281 rw quick
> >  282 rw img quick
> >  283 auto quick
> 
> What's wrong with sorted order? I get the renumbering to appease a merge
> conflict, but it also requires rearrangement ;)
> 
> -- 
> Eric Blake, Principal Software Engineer
> Red Hat, Inc.           +1-919-301-3226
> Virtualization:  qemu.org | libvirt.org
> 

-- 
Dr. David Alan Gilbert / dgilbert@redhat.com / Manchester, UK

Patch

diff --git a/migration/savevm.c b/migration/savevm.c
index f6f7aa85fd..7a4232de19 100644
--- a/migration/savevm.c
+++ b/migration/savevm.c
@@ -3002,3 +3002,187 @@  bool vmstate_check_only_migratable(const VMStateDescription *vmsd)
 
     return !(vmsd && vmsd->unmigratable);
 }
+
+typedef struct SnapshotJob {
+    Job common;
+    char *tag;
+    char *vmstate;
+    strList *devices;
+    Coroutine *co;
+    Error **errp;
+    int ret;
+} SnapshotJob;
+
+static void qmp_snapshot_job_free(SnapshotJob *s)
+{
+    g_free(s->tag);
+    g_free(s->vmstate);
+    qapi_free_strList(s->devices);
+}
+
+
+static void snapshot_load_job_bh(void *opaque)
+{
+    Job *job = opaque;
+    SnapshotJob *s = container_of(job, SnapshotJob, common);
+    int saved_vm_running;
+
+    job_progress_set_remaining(&s->common, 1);
+
+    saved_vm_running = runstate_is_running();
+    vm_stop(RUN_STATE_RESTORE_VM);
+
+    s->ret = load_snapshot(s->tag, s->vmstate, true, s->devices, s->errp);
+    if (s->ret == 0 && saved_vm_running) {
+        vm_start();
+    }
+
+    job_progress_update(&s->common, 1);
+
+    qmp_snapshot_job_free(s);
+    aio_co_wake(s->co);
+}
+
+static void snapshot_save_job_bh(void *opaque)
+{
+    Job *job = opaque;
+    SnapshotJob *s = container_of(job, SnapshotJob, common);
+
+    job_progress_set_remaining(&s->common, 1);
+    s->ret = save_snapshot(s->tag, false, s->vmstate,
+                           true, s->devices, s->errp);
+    job_progress_update(&s->common, 1);
+
+    qmp_snapshot_job_free(s);
+    aio_co_wake(s->co);
+}
+
+static void snapshot_delete_job_bh(void *opaque)
+{
+    Job *job = opaque;
+    SnapshotJob *s = container_of(job, SnapshotJob, common);
+
+    job_progress_set_remaining(&s->common, 1);
+    s->ret = delete_snapshot(s->tag, true, s->devices, s->errp);
+    job_progress_update(&s->common, 1);
+
+    qmp_snapshot_job_free(s);
+    aio_co_wake(s->co);
+}
+
+static int coroutine_fn snapshot_save_job_run(Job *job, Error **errp)
+{
+    SnapshotJob *s = container_of(job, SnapshotJob, common);
+    s->errp = errp;
+    s->co = qemu_coroutine_self();
+    aio_bh_schedule_oneshot(qemu_get_aio_context(),
+                            snapshot_save_job_bh, job);
+    qemu_coroutine_yield();
+    return s->ret;
+}
+
+static int coroutine_fn snapshot_load_job_run(Job *job, Error **errp)
+{
+    SnapshotJob *s = container_of(job, SnapshotJob, common);
+    s->errp = errp;
+    s->co = qemu_coroutine_self();
+    aio_bh_schedule_oneshot(qemu_get_aio_context(),
+                            snapshot_load_job_bh, job);
+    qemu_coroutine_yield();
+    return s->ret;
+}
+
+static int coroutine_fn snapshot_delete_job_run(Job *job, Error **errp)
+{
+    SnapshotJob *s = container_of(job, SnapshotJob, common);
+    s->errp = errp;
+    s->co = qemu_coroutine_self();
+    aio_bh_schedule_oneshot(qemu_get_aio_context(),
+                            snapshot_delete_job_bh, job);
+    qemu_coroutine_yield();
+    return s->ret;
+}
+
+
+static const JobDriver snapshot_load_job_driver = {
+    .instance_size = sizeof(SnapshotJob),
+    .job_type      = JOB_TYPE_SNAPSHOT_LOAD,
+    .run           = snapshot_load_job_run,
+};
+
+static const JobDriver snapshot_save_job_driver = {
+    .instance_size = sizeof(SnapshotJob),
+    .job_type      = JOB_TYPE_SNAPSHOT_SAVE,
+    .run           = snapshot_save_job_run,
+};
+
+static const JobDriver snapshot_delete_job_driver = {
+    .instance_size = sizeof(SnapshotJob),
+    .job_type      = JOB_TYPE_SNAPSHOT_DELETE,
+    .run           = snapshot_delete_job_run,
+};
+
+
+void qmp_snapshot_save(const char *job_id,
+                       const char *tag,
+                       const char *vmstate,
+                       strList *devices,
+                       Error **errp)
+{
+    SnapshotJob *s;
+
+    s = job_create(job_id, &snapshot_save_job_driver, NULL,
+                   qemu_get_aio_context(), JOB_MANUAL_DISMISS,
+                   NULL, NULL, errp);
+    if (!s) {
+        return;
+    }
+
+    s->tag = g_strdup(tag);
+    s->vmstate = g_strdup(vmstate);
+    s->devices = QAPI_CLONE(strList, devices);
+
+    job_start(&s->common);
+}
+
+void qmp_snapshot_load(const char *job_id,
+                       const char *tag,
+                       const char *vmstate,
+                       strList *devices,
+                       Error **errp)
+{
+    SnapshotJob *s;
+
+    s = job_create(job_id, &snapshot_load_job_driver, NULL,
+                   qemu_get_aio_context(), JOB_MANUAL_DISMISS,
+                   NULL, NULL, errp);
+    if (!s) {
+        return;
+    }
+
+    s->tag = g_strdup(tag);
+    s->vmstate = g_strdup(vmstate);
+    s->devices = QAPI_CLONE(strList, devices);
+
+    job_start(&s->common);
+}
+
+void qmp_snapshot_delete(const char *job_id,
+                         const char *tag,
+                         strList *devices,
+                         Error **errp)
+{
+    SnapshotJob *s;
+
+    s = job_create(job_id, &snapshot_delete_job_driver, NULL,
+                   qemu_get_aio_context(), JOB_MANUAL_DISMISS,
+                   NULL, NULL, errp);
+    if (!s) {
+        return;
+    }
+
+    s->tag = g_strdup(tag);
+    s->devices = QAPI_CLONE(strList, devices);
+
+    job_start(&s->common);
+}
diff --git a/qapi/job.json b/qapi/job.json
index 280c2f76f1..b2cbb4fead 100644
--- a/qapi/job.json
+++ b/qapi/job.json
@@ -22,10 +22,17 @@ 
 #
 # @amend: image options amend job type, see "x-blockdev-amend" (since 5.1)
 #
+# @snapshot-load: snapshot load job type, see "snapshot-load" (since 5.2)
+#
+# @snapshot-save: snapshot save job type, see "snapshot-save" (since 5.2)
+#
+# @snapshot-delete: snapshot delete job type, see "snapshot-delete" (since 5.2)
+#
 # Since: 1.7
 ##
 { 'enum': 'JobType',
-  'data': ['commit', 'stream', 'mirror', 'backup', 'create', 'amend'] }
+  'data': ['commit', 'stream', 'mirror', 'backup', 'create', 'amend',
+           'snapshot-load', 'snapshot-save', 'snapshot-delete'] }
 
 ##
 # @JobStatus:
diff --git a/qapi/migration.json b/qapi/migration.json
index 7f5e6fd681..d2bd551ad9 100644
--- a/qapi/migration.json
+++ b/qapi/migration.json
@@ -1787,3 +1787,123 @@ 
 # Since: 5.2
 ##
 { 'command': 'query-dirty-rate', 'returns': 'DirtyRateInfo' }
+
+##
+# @snapshot-save:
+#
+# Save a VM snapshot
+#
+# @job-id: identifier for the newly created job
+# @tag: name of the snapshot to create
+# @vmstate: block device node name to save vmstate to
+# @devices: list of block device node names to save a snapshot to
+#
+# Applications should not assume that the snapshot save is complete
+# when this command returns. The job commands / events must be used
+# to determine completion and to fetch details of any errors that arise.
+#
+# Note that the VM CPUs will be paused during the time it takes to
+# save the snapshot.
+#
+# It is strongly recommended that @devices contain all writable
+# block device nodes if a consistent snapshot is required.
+#
+# If @tag already exists, an error will be reported.
+#
+# Returns: nothing
+#
+# Example:
+#
+# -> { "execute": "snapshot-save",
+#      "data": {
+#         "job-id": "snapsave0",
+#         "tag": "my-snap",
+#         "vmstate": "disk0",
+#         "devices": ["disk0", "disk1"]
+#      }
+#    }
+# <- { "return": { } }
+#
+# Since: 5.2
+##
+{ 'command': 'snapshot-save',
+  'data': { 'job-id': 'str',
+            'tag': 'str',
+            'vmstate': 'str',
+            'devices': ['str'] } }
+
+##
+# @snapshot-load:
+#
+# Load a VM snapshot
+#
+# @job-id: identifier for the newly created job
+# @tag: name of the snapshot to load.
+# @vmstate: block device node name to load vmstate from
+# @devices: list of block device node names to load a snapshot from
+#
+# Applications should not assume that the snapshot load is complete
+# when this command returns. The job commands / events must be used
+# to determine completion and to fetch details of any errors that arise.
+#
+# Note that the VM CPUs will be paused during the time it takes to
+# load the snapshot.
+#
+# It is strongly recommended that @devices contain all writable
+# block device nodes that can have changed since the original
+# @snapshot-save command execution.
+#
+# Returns: nothing
+#
+# Example:
+#
+# -> { "execute": "snapshot-load",
+#      "data": {
+#         "job-id": "snapload0",
+#         "tag": "my-snap",
+#         "vmstate": "disk0",
+#         "devices": ["disk0", "disk1"]
+#      }
+#    }
+# <- { "return": { } }
+#
+# Since: 5.2
+##
+{ 'command': 'snapshot-load',
+  'data': { 'job-id': 'str',
+            'tag': 'str',
+            'vmstate': 'str',
+            'devices': ['str'] } }
+
+##
+# @snapshot-delete:
+#
+# Delete a VM snapshot
+#
+# @job-id: identifier for the newly created job
+# @tag: name of the snapshot to delete.
+# @devices: list of block device node names to delete a snapshot from
+#
+# Applications should not assume that the snapshot delete is complete
+# when this command returns. The job commands / events must be used
+# to determine completion and to fetch details of any errors that arise.
+#
+# Returns: nothing
+#
+# Example:
+#
+# -> { "execute": "snapshot-delete",
+#      "data": {
+#         "job-id": "snapdelete0",
+#         "tag": "my-snap",
+#         "devices": ["disk0", "disk1"]
+#      }
+#    }
+# <- { "return": { } }
+#
+# Since: 5.2
+##
+{ 'command': 'snapshot-delete',
+  'data': { 'job-id': 'str',
+            'tag': 'str',
+            'devices': ['str'] } }
diff --git a/tests/qemu-iotests/310 b/tests/qemu-iotests/310
new file mode 100755
index 0000000000..41cec9ea8d
--- /dev/null
+++ b/tests/qemu-iotests/310
@@ -0,0 +1,385 @@ 
+#!/usr/bin/env bash
+#
+# Test which nodes are involved in internal snapshots
+#
+# Copyright (C) 2020 Red Hat, Inc.
+#
+# This program is free software; you can redistribute it and/or modify
+# it under the terms of the GNU General Public License as published by
+# the Free Software Foundation; either version 2 of the License, or
+# (at your option) any later version.
+#
+# This program is distributed in the hope that it will be useful,
+# but WITHOUT ANY WARRANTY; without even the implied warranty of
+# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
+# GNU General Public License for more details.
+#
+# You should have received a copy of the GNU General Public License
+# along with this program.  If not, see <http://www.gnu.org/licenses/>.
+#
+
+# creator
+owner=berrange@redhat.com
+
+seq=`basename $0`
+echo "QA output created by $seq"
+
+status=1	# failure is the default!
+
+_cleanup()
+{
+    _cleanup_qemu
+    _cleanup_test_img
+    TEST_IMG="$TEST_IMG.alt1" _cleanup_test_img
+    TEST_IMG="$TEST_IMG.alt2" _cleanup_test_img
+    rm -f "$SOCK_DIR/nbd"
+}
+trap "_cleanup; exit \$status" 0 1 2 3 15
+
+# get standard environment, filters and checks
+. ./common.rc
+. ./common.filter
+. ./common.qemu
+
+_supported_fmt qcow2
+_supported_proto file
+_supported_os Linux
+_require_drivers copy-on-read
+
+# Internal snapshots are (currently) impossible with refcount_bits=1,
+# and generally impossible with external data files
+_unsupported_imgopts 'refcount_bits=1[^0-9]' data_file
+
+_require_devices virtio-blk
+
+
+size=128M
+
+if [ -n "$BACKING_FILE" ]; then
+    _make_test_img -b "$BACKING_FILE" -F $IMGFMT $size
+else
+    _make_test_img $size
+fi
+TEST_IMG="$TEST_IMG.alt1" _make_test_img $size
+IMGOPTS= IMGFMT=raw TEST_IMG="$TEST_IMG.alt2" _make_test_img $size
+
+export capture_events="JOB_STATUS_CHANGE STOP RESUME"
+
+wait_job()
+{
+    local job=$1
+    shift
+
+    # All jobs start with two events...
+    #
+    # created
+    _wait_event $QEMU_HANDLE "JOB_STATUS_CHANGE"
+    # running
+    _wait_event $QEMU_HANDLE "JOB_STATUS_CHANGE"
+
+    # Next events vary depending on job type and
+    # whether it succeeds or not.
+    for evname in $@
+    do
+	_wait_event $QEMU_HANDLE $evname
+    done
+
+    # All jobs finish off with two more events...
+    # concluded
+    _wait_event $QEMU_HANDLE "JOB_STATUS_CHANGE"
+    _send_qemu_cmd $QEMU_HANDLE "{\"execute\": \"query-jobs\"}" "return"
+    _send_qemu_cmd $QEMU_HANDLE "{\"execute\": \"job-dismiss\", \"arguments\": {\"id\": \"$job\"}}" "return"
+    # null
+    _wait_event $QEMU_HANDLE "JOB_STATUS_CHANGE"
+}
+
+run_save()
+{
+    local job=$1
+    local vmstate=$2
+    local devices=$3
+    local fail=$4
+
+    _send_qemu_cmd $QEMU_HANDLE "{\"execute\": \"snapshot-save\",
+                                  \"arguments\": {
+                                     \"job-id\": \"$job\",
+                                     \"tag\": \"snap0\",
+                                     \"vmstate\": \"$vmstate\",
+                                     \"devices\": $devices}}" "return"
+
+    if [ $fail = 0 ]; then
+	# job status: waiting, pending
+	wait_job $job "STOP" "RESUME" "JOB_STATUS_CHANGE" "JOB_STATUS_CHANGE"
+    else
+	# job status: aborting
+	wait_job $job "JOB_STATUS_CHANGE"
+    fi
+}
+
+run_load()
+{
+    local job=$1
+    local vmstate=$2
+    local devices=$3
+    local fail=$4
+
+    _send_qemu_cmd $QEMU_HANDLE "{\"execute\": \"snapshot-load\",
+                                  \"arguments\": {
+                                     \"job-id\": \"$job\",
+                                     \"tag\": \"snap0\",
+                                     \"vmstate\": \"$vmstate\",
+                                     \"devices\": $devices}}" "return"
+    if [ $fail = 0 ]; then
+	# job status: waiting, pending
+	wait_job $job "STOP" "RESUME" "JOB_STATUS_CHANGE" "JOB_STATUS_CHANGE"
+    else
+	# job status: aborting
+	wait_job $job "STOP" "JOB_STATUS_CHANGE"
+    fi
+}
+
+run_delete()
+{
+    local job=$1
+    local devices=$2
+    local fail=$3
+
+    _send_qemu_cmd $QEMU_HANDLE "{\"execute\": \"snapshot-delete\",
+                                  \"arguments\": {
+                                     \"job-id\": \"$job\",
+                                     \"tag\": \"snap0\",
+                                     \"devices\": $devices}}" "return"
+    if [ $fail = 0 ]; then
+	# job status: waiting, pending
+	wait_job $job "JOB_STATUS_CHANGE" "JOB_STATUS_CHANGE"
+    else
+	# job status: aborting
+	wait_job $job "JOB_STATUS_CHANGE"
+    fi
+}
+
+start_qemu()
+{
+    keep_stderr=y
+    _launch_qemu -nodefaults -nographic "$@"
+
+    _send_qemu_cmd $QEMU_HANDLE '{"execute": "qmp_capabilities"}' 'return'
+}
+
+stop_qemu()
+{
+    _send_qemu_cmd $QEMU_HANDLE '{"execute": "quit"}' 'return'
+
+    wait=1 _cleanup_qemu
+}
+
+
+echo
+echo "=====  Snapshot single qcow2 image ====="
+echo
+
+start_qemu \
+    -blockdev "{'driver':'file','filename':'$TEST_IMG','node-name':'disk0'}" \
+    -blockdev "{'driver':'qcow2','file':'disk0','node-name':'diskfmt0'}"
+run_save "save-simple" "diskfmt0" "[\"diskfmt0\"]" 0
+run_load "load-simple" "diskfmt0" "[\"diskfmt0\"]" 0
+run_delete "delete-simple" "[\"diskfmt0\"]" 0
+stop_qemu
+
+
+echo
+echo "=====  Snapshot no image ====="
+echo
+
+# When snapshotting we need to pass at least one writable disk
+# otherwise there's no work to do
+
+start_qemu \
+    -blockdev "{'driver':'file','filename':'$TEST_IMG','node-name':'disk0'}" \
+    -blockdev "{'driver':'qcow2','file':'disk0','node-name':'diskfmt0'}"
+run_save "save-no-image" "diskfmt0" "[]" 1
+stop_qemu
+
+
+echo
+echo "=====  Snapshot missing image ====="
+echo
+
+# The block node names we pass need to actually exist
+
+start_qemu \
+    -blockdev "{'driver':'file','filename':'$TEST_IMG','node-name':'disk0'}" \
+    -blockdev "{'driver':'qcow2','file':'disk0','node-name':'diskfmt0'}"
+run_save "save-missing-image" "diskfmt1729" "[\"diskfmt1729\"]" 1
+stop_qemu
+
+echo
+echo "=====  Snapshot vmstate not in devices list ====="
+echo
+
+# The node name referred to for vmstate must be one of the nodes
+# being included in the snapshot, otherwise the vmstate that is
+# captured is liable to be overwritten making subsequent load
+# impossible
+
+start_qemu \
+    -blockdev "{'driver':'file','filename':'$TEST_IMG','node-name':'disk0'}" \
+    -blockdev "{'driver':'qcow2','file':'disk0','node-name':'diskfmt0'}" \
+    -blockdev "{'driver':'file','filename':'$TEST_IMG.alt1','node-name':'disk1'}" \
+    -blockdev "{'driver':'qcow2','file':'disk1','node-name':'diskfmt1'}"
+run_save "save-excluded-vmstate" "diskfmt0" "[\"diskfmt1\"]" 1
+stop_qemu
+
+
+echo
+echo "=====  Snapshot protocol instead of format ====="
+echo
+
+# The snapshot has to be done against the qcow2 format layer
+# not the underlying file protocol layer
+
+start_qemu \
+    -blockdev "{'driver':'file','filename':'$TEST_IMG','node-name':'disk0'}" \
+    -blockdev "{'driver':'qcow2','file':'disk0','node-name':'diskfmt0'}"
+run_save "save-proto-not-fmt" "disk0" "[\"disk0\"]" 1
+stop_qemu
+
+
+echo
+echo "=====  Snapshot dual qcow2 image ====="
+echo
+
+# We can snapshot multiple qcow2 disks at the same time
+
+start_qemu \
+    -blockdev "{'driver':'file','filename':'$TEST_IMG','node-name':'disk0'}" \
+    -blockdev "{'driver':'qcow2','file':'disk0','node-name':'diskfmt0'}" \
+    -blockdev "{'driver':'file','filename':'$TEST_IMG.alt1','node-name':'disk1'}" \
+    -blockdev "{'driver':'qcow2','file':'disk1','node-name':'diskfmt1'}"
+run_save "save-dual-image" "diskfmt0" "[\"diskfmt0\", \"diskfmt1\"]" 0
+run_load "load-dual-image" "diskfmt0" "[\"diskfmt0\", \"diskfmt1\"]" 0
+run_delete "delete-dual-image" "[\"diskfmt0\", \"diskfmt1\"]" 0
+stop_qemu
+
+
+echo
+echo "=====  Snapshot error with raw image ====="
+echo
+
+# If we're snapshotting multiple disks, all must be capable
+# of supporting snapshots. A raw disk in the list must cause
+# an error.
+
+start_qemu \
+    -blockdev "{'driver':'file','filename':'$TEST_IMG','node-name':'disk0'}" \
+    -blockdev "{'driver':'qcow2','file':'disk0','node-name':'diskfmt0'}" \
+    -blockdev "{'driver':'file','filename':'$TEST_IMG.alt1','node-name':'disk1'}" \
+    -blockdev "{'driver':'qcow2','file':'disk1','node-name':'diskfmt1'}" \
+    -blockdev "{'driver':'file','filename':'$TEST_IMG.alt2','node-name':'disk2'}" \
+    -blockdev "{'driver':'raw','file':'disk2','node-name':'diskfmt2'}"
+run_save "save-raw-fmt" "diskfmt0" "[\"diskfmt0\", \"diskfmt1\", \"diskfmt2\"]" 1
+stop_qemu
+
+
+echo
+echo "=====  Snapshot with raw image excluded ====="
+echo
+
+# If we're snapshotting multiple disks, all must be capable
+# of supporting snapshots. A writable raw disk can be excluded
+# from the snapshot, though it means its data won't be restored
+# by later snapshot load operation.
+
+start_qemu \
+    -blockdev "{'driver':'file','filename':'$TEST_IMG','node-name':'disk0'}" \
+    -blockdev "{'driver':'qcow2','file':'disk0','node-name':'diskfmt0'}" \
+    -blockdev "{'driver':'file','filename':'$TEST_IMG.alt1','node-name':'disk1'}" \
+    -blockdev "{'driver':'qcow2','file':'disk1','node-name':'diskfmt1'}" \
+    -blockdev "{'driver':'file','filename':'$TEST_IMG.alt2','node-name':'disk2'}" \
+    -blockdev "{'driver':'raw','file':'disk2','node-name':'diskfmt2'}"
+run_save "save-skip-raw" "diskfmt0" "[\"diskfmt0\", \"diskfmt1\"]" 0
+run_load "load-skip-raw" "diskfmt0" "[\"diskfmt0\", \"diskfmt1\"]" 0
+run_delete "delete-skip-raw" "[\"diskfmt0\", \"diskfmt1\"]" 0
+stop_qemu
+
+echo
+echo "=====  Snapshot bad error reporting to stderr ====="
+echo
+
+# This demonstrates that we're not capturing vmstate loading failures
+# into QMP errors, they're ending up in stderr instead. vmstate needs
+# to report errors via Error object but that is a major piece of work
+# for the future. This test case's expected output log will need
+# adjusting when that is done.
+
+start_qemu \
+    -device virtio-rng \
+    -blockdev "{'driver':'file','filename':'$TEST_IMG','node-name':'disk0'}" \
+    -blockdev "{'driver':'qcow2','file':'disk0','node-name':'diskfmt0'}"
+
+run_save "save-err-stderr" "diskfmt0" "[\"diskfmt0\"]" 0
+stop_qemu
+
+# leave off virtio-rng to provoke vmstate failure
+start_qemu \
+    -blockdev "{'driver':'file','filename':'$TEST_IMG','node-name':'disk0'}" \
+    -blockdev "{'driver':'qcow2','file':'disk0','node-name':'diskfmt0'}"
+
+run_load "load-err-stderr" "diskfmt0" "[\"diskfmt0\"]" 1
+run_delete "delete-err-stderr" "[\"diskfmt0\"]" 0
+
+stop_qemu
+
+
+echo
+echo "=====  Snapshot reuse same tag ====="
+echo
+
+# Validates that we get an error when reusing a snapshot tag that
+# already exists
+
+start_qemu \
+    -device virtio-rng \
+    -blockdev "{'driver':'file','filename':'$TEST_IMG','node-name':'disk0'}" \
+    -blockdev "{'driver':'qcow2','file':'disk0','node-name':'diskfmt0'}"
+
+run_save "save-err-stderr-initial" "diskfmt0" "[\"diskfmt0\"]" 0
+run_save "save-err-stderr-repeat1" "diskfmt0" "[\"diskfmt0\"]" 1
+run_delete "delete-err-stderr" "[\"diskfmt0\"]" 0
+run_save "save-err-stderr-repeat2" "diskfmt0" "[\"diskfmt0\"]" 0
+run_delete "delete-err-stderr-repeat2" "[\"diskfmt0\"]" 0
+
+stop_qemu
+
+echo
+echo "=====  Snapshot load does not exist ====="
+echo
+
+# Validates that we get an error when loading a snapshot that does
+# not exist
+
+start_qemu \
+    -blockdev "{'driver':'file','filename':'$TEST_IMG','node-name':'disk0'}" \
+    -blockdev "{'driver':'qcow2','file':'disk0','node-name':'diskfmt0'}"
+run_load "load-missing-snapshot" "diskfmt0" "[\"diskfmt0\"]" 1
+stop_qemu
+
+
+echo
+echo "=====  Snapshot delete does not exist ====="
+echo
+
+# Validates that we don't get an error when deleting a snapshot that
+# does not exist
+
+start_qemu \
+    -blockdev "{'driver':'file','filename':'$TEST_IMG','node-name':'disk0'}" \
+    -blockdev "{'driver':'qcow2','file':'disk0','node-name':'diskfmt0'}"
+run_delete "delete-missing-snapshot" "[\"diskfmt0\"]" 0
+stop_qemu
+
+
+# success, all done
+echo "*** done"
+rm -f $seq.full
+status=0
diff --git a/tests/qemu-iotests/310.out b/tests/qemu-iotests/310.out
new file mode 100644
index 0000000000..f0e21e9e98
--- /dev/null
+++ b/tests/qemu-iotests/310.out
@@ -0,0 +1,407 @@ 
+QA output created by 310
+Formatting 'TEST_DIR/t.IMGFMT', fmt=IMGFMT size=134217728
+Formatting 'TEST_DIR/t.IMGFMT.alt1', fmt=IMGFMT size=134217728
+Formatting 'TEST_DIR/t.qcow2.alt2', fmt=IMGFMT size=134217728
+
+=====  Snapshot single qcow2 image =====
+
+{"execute": "qmp_capabilities"}
+{"return": {}}
+{"execute": "snapshot-save", "arguments": {"job-id": "save-simple", "tag": "snap0", "vmstate": "diskfmt0", "devices": ["diskfmt0"]}}
+{"return": {}}
+{"timestamp": {"seconds":  TIMESTAMP, "microseconds":  TIMESTAMP}, "event": "JOB_STATUS_CHANGE", "data": {"status": "created", "id": "save-simple"}}
+{"timestamp": {"seconds":  TIMESTAMP, "microseconds":  TIMESTAMP}, "event": "JOB_STATUS_CHANGE", "data": {"status": "running", "id": "save-simple"}}
+{"timestamp": {"seconds":  TIMESTAMP, "microseconds":  TIMESTAMP}, "event": "STOP"}
+{"timestamp": {"seconds":  TIMESTAMP, "microseconds":  TIMESTAMP}, "event": "RESUME"}
+{"timestamp": {"seconds":  TIMESTAMP, "microseconds":  TIMESTAMP}, "event": "JOB_STATUS_CHANGE", "data": {"status": "waiting", "id": "save-simple"}}
+{"timestamp": {"seconds":  TIMESTAMP, "microseconds":  TIMESTAMP}, "event": "JOB_STATUS_CHANGE", "data": {"status": "pending", "id": "save-simple"}}
+{"timestamp": {"seconds":  TIMESTAMP, "microseconds":  TIMESTAMP}, "event": "JOB_STATUS_CHANGE", "data": {"status": "concluded", "id": "save-simple"}}
+{"execute": "query-jobs"}
+{"return": [{"current-progress": 1, "status": "concluded", "total-progress": 1, "type": "snapshot-save", "id": "save-simple"}]}
+{"execute": "job-dismiss", "arguments": {"id": "save-simple"}}
+{"return": {}}
+{"timestamp": {"seconds":  TIMESTAMP, "microseconds":  TIMESTAMP}, "event": "JOB_STATUS_CHANGE", "data": {"status": "null", "id": "save-simple"}}
+{"execute": "snapshot-load", "arguments": {"job-id": "load-simple", "tag": "snap0", "vmstate": "diskfmt0", "devices": ["diskfmt0"]}}
+{"return": {}}
+{"timestamp": {"seconds":  TIMESTAMP, "microseconds":  TIMESTAMP}, "event": "JOB_STATUS_CHANGE", "data": {"status": "created", "id": "load-simple"}}
+{"timestamp": {"seconds":  TIMESTAMP, "microseconds":  TIMESTAMP}, "event": "JOB_STATUS_CHANGE", "data": {"status": "running", "id": "load-simple"}}
+{"timestamp": {"seconds":  TIMESTAMP, "microseconds":  TIMESTAMP}, "event": "STOP"}
+{"timestamp": {"seconds":  TIMESTAMP, "microseconds":  TIMESTAMP}, "event": "RESUME"}
+{"timestamp": {"seconds":  TIMESTAMP, "microseconds":  TIMESTAMP}, "event": "JOB_STATUS_CHANGE", "data": {"status": "waiting", "id": "load-simple"}}
+{"timestamp": {"seconds":  TIMESTAMP, "microseconds":  TIMESTAMP}, "event": "JOB_STATUS_CHANGE", "data": {"status": "pending", "id": "load-simple"}}
+{"timestamp": {"seconds":  TIMESTAMP, "microseconds":  TIMESTAMP}, "event": "JOB_STATUS_CHANGE", "data": {"status": "concluded", "id": "load-simple"}}
+{"execute": "query-jobs"}
+{"return": [{"current-progress": 1, "status": "concluded", "total-progress": 1, "type": "snapshot-load", "id": "load-simple"}]}
+{"execute": "job-dismiss", "arguments": {"id": "load-simple"}}
+{"return": {}}
+{"timestamp": {"seconds":  TIMESTAMP, "microseconds":  TIMESTAMP}, "event": "JOB_STATUS_CHANGE", "data": {"status": "null", "id": "load-simple"}}
+{"execute": "snapshot-delete", "arguments": {"job-id": "delete-simple", "tag": "snap0", "devices": ["diskfmt0"]}}
+{"return": {}}
+{"timestamp": {"seconds":  TIMESTAMP, "microseconds":  TIMESTAMP}, "event": "JOB_STATUS_CHANGE", "data": {"status": "created", "id": "delete-simple"}}
+{"timestamp": {"seconds":  TIMESTAMP, "microseconds":  TIMESTAMP}, "event": "JOB_STATUS_CHANGE", "data": {"status": "running", "id": "delete-simple"}}
+{"timestamp": {"seconds":  TIMESTAMP, "microseconds":  TIMESTAMP}, "event": "JOB_STATUS_CHANGE", "data": {"status": "waiting", "id": "delete-simple"}}
+{"timestamp": {"seconds":  TIMESTAMP, "microseconds":  TIMESTAMP}, "event": "JOB_STATUS_CHANGE", "data": {"status": "pending", "id": "delete-simple"}}
+{"timestamp": {"seconds":  TIMESTAMP, "microseconds":  TIMESTAMP}, "event": "JOB_STATUS_CHANGE", "data": {"status": "concluded", "id": "delete-simple"}}
+{"execute": "query-jobs"}
+{"return": [{"current-progress": 1, "status": "concluded", "total-progress": 1, "type": "snapshot-delete", "id": "delete-simple"}]}
+{"execute": "job-dismiss", "arguments": {"id": "delete-simple"}}
+{"return": {}}
+{"timestamp": {"seconds":  TIMESTAMP, "microseconds":  TIMESTAMP}, "event": "JOB_STATUS_CHANGE", "data": {"status": "null", "id": "delete-simple"}}
+{"execute": "quit"}
+{"return": {}}
+{"timestamp": {"seconds":  TIMESTAMP, "microseconds":  TIMESTAMP}, "event": "SHUTDOWN", "data": {"guest": false, "reason": "host-qmp-quit"}}
+
+=====  Snapshot no image =====
+
+{"execute": "qmp_capabilities"}
+{"return": {}}
+{"execute": "snapshot-save", "arguments": {"job-id": "save-no-image", "tag": "snap0", "vmstate": "diskfmt0", "devices": []}}
+{"return": {}}
+{"timestamp": {"seconds":  TIMESTAMP, "microseconds":  TIMESTAMP}, "event": "JOB_STATUS_CHANGE", "data": {"status": "created", "id": "save-no-image"}}
+{"timestamp": {"seconds":  TIMESTAMP, "microseconds":  TIMESTAMP}, "event": "JOB_STATUS_CHANGE", "data": {"status": "running", "id": "save-no-image"}}
+{"timestamp": {"seconds":  TIMESTAMP, "microseconds":  TIMESTAMP}, "event": "JOB_STATUS_CHANGE", "data": {"status": "aborting", "id": "save-no-image"}}
+{"timestamp": {"seconds":  TIMESTAMP, "microseconds":  TIMESTAMP}, "event": "JOB_STATUS_CHANGE", "data": {"status": "concluded", "id": "save-no-image"}}
+{"execute": "query-jobs"}
+{"return": [{"current-progress": 1, "status": "concluded", "total-progress": 1, "type": "snapshot-save", "id": "save-no-image", "error": "At least one device is required for snapshot"}]}
+{"execute": "job-dismiss", "arguments": {"id": "save-no-image"}}
+{"return": {}}
+{"timestamp": {"seconds":  TIMESTAMP, "microseconds":  TIMESTAMP}, "event": "JOB_STATUS_CHANGE", "data": {"status": "null", "id": "save-no-image"}}
+{"execute": "quit"}
+{"return": {}}
+{"timestamp": {"seconds":  TIMESTAMP, "microseconds":  TIMESTAMP}, "event": "SHUTDOWN", "data": {"guest": false, "reason": "host-qmp-quit"}}
+
+=====  Snapshot missing image =====
+
+{"execute": "qmp_capabilities"}
+{"return": {}}
+{"execute": "snapshot-save", "arguments": {"job-id": "save-missing-image", "tag": "snap0", "vmstate": "diskfmt1729", "devices": ["diskfmt1729"]}}
+{"return": {}}
+{"timestamp": {"seconds":  TIMESTAMP, "microseconds":  TIMESTAMP}, "event": "JOB_STATUS_CHANGE", "data": {"status": "created", "id": "save-missing-image"}}
+{"timestamp": {"seconds":  TIMESTAMP, "microseconds":  TIMESTAMP}, "event": "JOB_STATUS_CHANGE", "data": {"status": "running", "id": "save-missing-image"}}
+{"timestamp": {"seconds":  TIMESTAMP, "microseconds":  TIMESTAMP}, "event": "JOB_STATUS_CHANGE", "data": {"status": "aborting", "id": "save-missing-image"}}
+{"timestamp": {"seconds":  TIMESTAMP, "microseconds":  TIMESTAMP}, "event": "JOB_STATUS_CHANGE", "data": {"status": "concluded", "id": "save-missing-image"}}
+{"execute": "query-jobs"}
+{"return": [{"current-progress": 1, "status": "concluded", "total-progress": 1, "type": "snapshot-save", "id": "save-missing-image", "error": "No block device node 'diskfmt1729'"}]}
+{"execute": "job-dismiss", "arguments": {"id": "save-missing-image"}}
+{"return": {}}
+{"timestamp": {"seconds":  TIMESTAMP, "microseconds":  TIMESTAMP}, "event": "JOB_STATUS_CHANGE", "data": {"status": "null", "id": "save-missing-image"}}
+{"execute": "quit"}
+{"return": {}}
+{"timestamp": {"seconds":  TIMESTAMP, "microseconds":  TIMESTAMP}, "event": "SHUTDOWN", "data": {"guest": false, "reason": "host-qmp-quit"}}
+
+=====  Snapshot vmstate not in devices list =====
+
+{"execute": "qmp_capabilities"}
+{"return": {}}
+{"execute": "snapshot-save", "arguments": { "job-id": "save-excluded-vmstate", "tag": "snap0", "vmstate": "diskfmt0", "devices": ["diskfmt1"]}}
+{"return": {}}
+{"timestamp": {"seconds":  TIMESTAMP, "microseconds":  TIMESTAMP}, "event": "JOB_STATUS_CHANGE", "data": {"status": "created", "id": "save-excluded-vmstate"}}
+{"timestamp": {"seconds":  TIMESTAMP, "microseconds":  TIMESTAMP}, "event": "JOB_STATUS_CHANGE", "data": {"status": "running", "id": "save-excluded-vmstate"}}
+{"timestamp": {"seconds":  TIMESTAMP, "microseconds":  TIMESTAMP}, "event": "JOB_STATUS_CHANGE", "data": {"status": "aborting", "id": "save-excluded-vmstate"}}
+{"timestamp": {"seconds":  TIMESTAMP, "microseconds":  TIMESTAMP}, "event": "JOB_STATUS_CHANGE", "data": {"status": "concluded", "id": "save-excluded-vmstate"}}
+{"execute": "query-jobs"}
+{"return": [{"current-progress": 1, "status": "concluded", "total-progress": 1, "type": "snapshot-save", "id": "save-excluded-vmstate", "error": "vmstate block device 'diskfmt0' does not exist"}]}
+{"execute": "job-dismiss", "arguments": {"id": "save-excluded-vmstate"}}
+{"return": {}}
+{"timestamp": {"seconds":  TIMESTAMP, "microseconds":  TIMESTAMP}, "event": "JOB_STATUS_CHANGE", "data": {"status": "null", "id": "save-excluded-vmstate"}}
+{"execute": "quit"}
+{"return": {}}
+{"timestamp": {"seconds":  TIMESTAMP, "microseconds":  TIMESTAMP}, "event": "SHUTDOWN", "data": {"guest": false, "reason": "host-qmp-quit"}}
+
+=====  Snapshot protocol instead of format =====
+
+{"execute": "qmp_capabilities"}
+{"return": {}}
+{"execute": "snapshot-save", "arguments": {"job-id": "save-proto-not-fmt", "tag": "snap0", "vmstate": "disk0", "devices": ["disk0"]}}
+{"return": {}}
+{"timestamp": {"seconds":  TIMESTAMP, "microseconds":  TIMESTAMP}, "event": "JOB_STATUS_CHANGE", "data": {"status": "created", "id": "save-proto-not-fmt"}}
+{"timestamp": {"seconds":  TIMESTAMP, "microseconds":  TIMESTAMP}, "event": "JOB_STATUS_CHANGE", "data": {"status": "running", "id": "save-proto-not-fmt"}}
+{"timestamp": {"seconds":  TIMESTAMP, "microseconds":  TIMESTAMP}, "event": "JOB_STATUS_CHANGE", "data": {"status": "aborting", "id": "save-proto-not-fmt"}}
+{"timestamp": {"seconds":  TIMESTAMP, "microseconds":  TIMESTAMP}, "event": "JOB_STATUS_CHANGE", "data": {"status": "concluded", "id": "save-proto-not-fmt"}}
+{"execute": "query-jobs"}
+{"return": [{"current-progress": 1, "status": "concluded", "total-progress": 1, "type": "snapshot-save", "id": "save-proto-not-fmt", "error": "Device 'disk0' is writable but does not support snapshots"}]}
+{"execute": "job-dismiss", "arguments": {"id": "save-proto-not-fmt"}}
+{"return": {}}
+{"timestamp": {"seconds":  TIMESTAMP, "microseconds":  TIMESTAMP}, "event": "JOB_STATUS_CHANGE", "data": {"status": "null", "id": "save-proto-not-fmt"}}
+{"execute": "quit"}
+{"return": {}}
+{"timestamp": {"seconds":  TIMESTAMP, "microseconds":  TIMESTAMP}, "event": "SHUTDOWN", "data": {"guest": false, "reason": "host-qmp-quit"}}
+
+=====  Snapshot dual qcow2 image =====
+
+{"execute": "qmp_capabilities"}
+{"return": {}}
+{"execute": "snapshot-save", "arguments": {"job-id": "save-dual-image", "tag": "snap0", "vmstate": "diskfmt0", "devices": ["diskfmt0", "diskfmt1"]}}
+{"return": {}}
+{"timestamp": {"seconds":  TIMESTAMP, "microseconds":  TIMESTAMP}, "event": "JOB_STATUS_CHANGE", "data": {"status": "created", "id": "save-dual-image"}}
+{"timestamp": {"seconds":  TIMESTAMP, "microseconds":  TIMESTAMP}, "event": "JOB_STATUS_CHANGE", "data": {"status": "running", "id": "save-dual-image"}}
+{"timestamp": {"seconds":  TIMESTAMP, "microseconds":  TIMESTAMP}, "event": "STOP"}
+{"timestamp": {"seconds":  TIMESTAMP, "microseconds":  TIMESTAMP}, "event": "RESUME"}
+{"timestamp": {"seconds":  TIMESTAMP, "microseconds":  TIMESTAMP}, "event": "JOB_STATUS_CHANGE", "data": {"status": "waiting", "id": "save-dual-image"}}
+{"timestamp": {"seconds":  TIMESTAMP, "microseconds":  TIMESTAMP}, "event": "JOB_STATUS_CHANGE", "data": {"status": "pending", "id": "save-dual-image"}}
+{"timestamp": {"seconds":  TIMESTAMP, "microseconds":  TIMESTAMP}, "event": "JOB_STATUS_CHANGE", "data": {"status": "concluded", "id": "save-dual-image"}}
+{"execute": "query-jobs"}
+{"return": [{"current-progress": 1, "status": "concluded", "total-progress": 1, "type": "snapshot-save", "id": "save-dual-image"}]}
+{"execute": "job-dismiss", "arguments": {"id": "save-dual-image"}}
+{"return": {}}
+{"timestamp": {"seconds":  TIMESTAMP, "microseconds":  TIMESTAMP}, "event": "JOB_STATUS_CHANGE", "data": {"status": "null", "id": "save-dual-image"}}
+{"execute": "snapshot-load", "arguments": {"job-id": "load-dual-image", "tag": "snap0", "vmstate": "diskfmt0", "devices": ["diskfmt0", "diskfmt1"]}}
+{"return": {}}
+{"timestamp": {"seconds":  TIMESTAMP, "microseconds":  TIMESTAMP}, "event": "JOB_STATUS_CHANGE", "data": {"status": "created", "id": "load-dual-image"}}
+{"timestamp": {"seconds":  TIMESTAMP, "microseconds":  TIMESTAMP}, "event": "JOB_STATUS_CHANGE", "data": {"status": "running", "id": "load-dual-image"}}
+{"timestamp": {"seconds":  TIMESTAMP, "microseconds":  TIMESTAMP}, "event": "STOP"}
+{"timestamp": {"seconds":  TIMESTAMP, "microseconds":  TIMESTAMP}, "event": "RESUME"}
+{"timestamp": {"seconds":  TIMESTAMP, "microseconds":  TIMESTAMP}, "event": "JOB_STATUS_CHANGE", "data": {"status": "waiting", "id": "load-dual-image"}}
+{"timestamp": {"seconds":  TIMESTAMP, "microseconds":  TIMESTAMP}, "event": "JOB_STATUS_CHANGE", "data": {"status": "pending", "id": "load-dual-image"}}
+{"timestamp": {"seconds":  TIMESTAMP, "microseconds":  TIMESTAMP}, "event": "JOB_STATUS_CHANGE", "data": {"status": "concluded", "id": "load-dual-image"}}
+{"execute": "query-jobs"}
+{"return": [{"current-progress": 1, "status": "concluded", "total-progress": 1, "type": "snapshot-load", "id": "load-dual-image"}]}
+{"execute": "job-dismiss", "arguments": {"id": "load-dual-image"}}
+{"return": {}}
+{"timestamp": {"seconds":  TIMESTAMP, "microseconds":  TIMESTAMP}, "event": "JOB_STATUS_CHANGE", "data": {"status": "null", "id": "load-dual-image"}}
+{"execute": "snapshot-delete", "arguments": {"job-id": "delete-dual-image", "tag": "snap0", "devices": ["diskfmt0", "diskfmt1"]}}
+{"return": {}}
+{"timestamp": {"seconds":  TIMESTAMP, "microseconds":  TIMESTAMP}, "event": "JOB_STATUS_CHANGE", "data": {"status": "created", "id": "delete-dual-image"}}
+{"timestamp": {"seconds":  TIMESTAMP, "microseconds":  TIMESTAMP}, "event": "JOB_STATUS_CHANGE", "data": {"status": "running", "id": "delete-dual-image"}}
+{"timestamp": {"seconds":  TIMESTAMP, "microseconds":  TIMESTAMP}, "event": "JOB_STATUS_CHANGE", "data": {"status": "waiting", "id": "delete-dual-image"}}
+{"timestamp": {"seconds":  TIMESTAMP, "microseconds":  TIMESTAMP}, "event": "JOB_STATUS_CHANGE", "data": {"status": "pending", "id": "delete-dual-image"}}
+{"timestamp": {"seconds":  TIMESTAMP, "microseconds":  TIMESTAMP}, "event": "JOB_STATUS_CHANGE", "data": {"status": "concluded", "id": "delete-dual-image"}}
+{"execute": "query-jobs"}
+{"return": [{"current-progress": 1, "status": "concluded", "total-progress": 1, "type": "snapshot-delete", "id": "delete-dual-image"}]}
+{"execute": "job-dismiss", "arguments": {"id": "delete-dual-image"}}
+{"return": {}}
+{"timestamp": {"seconds":  TIMESTAMP, "microseconds":  TIMESTAMP}, "event": "JOB_STATUS_CHANGE", "data": {"status": "null", "id": "delete-dual-image"}}
+{"execute": "quit"}
+{"return": {}}
+{"timestamp": {"seconds":  TIMESTAMP, "microseconds":  TIMESTAMP}, "event": "SHUTDOWN", "data": {"guest": false, "reason": "host-qmp-quit"}}
+
+=====  Snapshot error with raw image =====
+
+{"execute": "qmp_capabilities"}
+{"return": {}}
+{"execute": "snapshot-save", "arguments": {"job-id": "save-raw-fmt", "tag": "snap0", "vmstate": "diskfmt0", "devices": ["diskfmt0", "diskfmt1", "diskfmt2"]}}
+{"return": {}}
+{"timestamp": {"seconds":  TIMESTAMP, "microseconds":  TIMESTAMP}, "event": "JOB_STATUS_CHANGE", "data": {"status": "created", "id": "save-raw-fmt"}}
+{"timestamp": {"seconds":  TIMESTAMP, "microseconds":  TIMESTAMP}, "event": "JOB_STATUS_CHANGE", "data": {"status": "running", "id": "save-raw-fmt"}}
+{"timestamp": {"seconds":  TIMESTAMP, "microseconds":  TIMESTAMP}, "event": "JOB_STATUS_CHANGE", "data": {"status": "aborting", "id": "save-raw-fmt"}}
+{"timestamp": {"seconds":  TIMESTAMP, "microseconds":  TIMESTAMP}, "event": "JOB_STATUS_CHANGE", "data": {"status": "concluded", "id": "save-raw-fmt"}}
+{"execute": "query-jobs"}
+{"return": [{"current-progress": 1, "status": "concluded", "total-progress": 1, "type": "snapshot-save", "id": "save-raw-fmt", "error": "Device 'diskfmt2' is writable but does not support snapshots"}]}
+{"execute": "job-dismiss", "arguments": {"id": "save-raw-fmt"}}
+{"return": {}}
+{"timestamp": {"seconds":  TIMESTAMP, "microseconds":  TIMESTAMP}, "event": "JOB_STATUS_CHANGE", "data": {"status": "null", "id": "save-raw-fmt"}}
+{"execute": "quit"}
+{"return": {}}
+{"timestamp": {"seconds":  TIMESTAMP, "microseconds":  TIMESTAMP}, "event": "SHUTDOWN", "data": {"guest": false, "reason": "host-qmp-quit"}}
+
+=====  Snapshot with raw image excluded =====
+
+{"execute": "qmp_capabilities"}
+{"return": {}}
+{"execute": "snapshot-save", "arguments": {"job-id": "save-skip-raw", "tag": "snap0", "vmstate": "diskfmt0", "devices": ["diskfmt0", "diskfmt1"]}}
+{"return": {}}
+{"timestamp": {"seconds":  TIMESTAMP, "microseconds":  TIMESTAMP}, "event": "JOB_STATUS_CHANGE", "data": {"status": "created", "id": "save-skip-raw"}}
+{"timestamp": {"seconds":  TIMESTAMP, "microseconds":  TIMESTAMP}, "event": "JOB_STATUS_CHANGE", "data": {"status": "running", "id": "save-skip-raw"}}
+{"timestamp": {"seconds":  TIMESTAMP, "microseconds":  TIMESTAMP}, "event": "STOP"}
+{"timestamp": {"seconds":  TIMESTAMP, "microseconds":  TIMESTAMP}, "event": "RESUME"}
+{"timestamp": {"seconds":  TIMESTAMP, "microseconds":  TIMESTAMP}, "event": "JOB_STATUS_CHANGE", "data": {"status": "waiting", "id": "save-skip-raw"}}
+{"timestamp": {"seconds":  TIMESTAMP, "microseconds":  TIMESTAMP}, "event": "JOB_STATUS_CHANGE", "data": {"status": "pending", "id": "save-skip-raw"}}
+{"timestamp": {"seconds":  TIMESTAMP, "microseconds":  TIMESTAMP}, "event": "JOB_STATUS_CHANGE", "data": {"status": "concluded", "id": "save-skip-raw"}}
+{"execute": "query-jobs"}
+{"return": [{"current-progress": 1, "status": "concluded", "total-progress": 1, "type": "snapshot-save", "id": "save-skip-raw"}]}
+{"execute": "job-dismiss", "arguments": {"id": "save-skip-raw"}}
+{"return": {}}
+{"timestamp": {"seconds":  TIMESTAMP, "microseconds":  TIMESTAMP}, "event": "JOB_STATUS_CHANGE", "data": {"status": "null", "id": "save-skip-raw"}}
+{"execute": "snapshot-load", "arguments": {"job-id": "load-skip-raw", "tag": "snap0", "vmstate": "diskfmt0", "devices": ["diskfmt0", "diskfmt1"]}}
+{"return": {}}
+{"timestamp": {"seconds":  TIMESTAMP, "microseconds":  TIMESTAMP}, "event": "JOB_STATUS_CHANGE", "data": {"status": "created", "id": "load-skip-raw"}}
+{"timestamp": {"seconds":  TIMESTAMP, "microseconds":  TIMESTAMP}, "event": "JOB_STATUS_CHANGE", "data": {"status": "running", "id": "load-skip-raw"}}
+{"timestamp": {"seconds":  TIMESTAMP, "microseconds":  TIMESTAMP}, "event": "STOP"}
+{"timestamp": {"seconds":  TIMESTAMP, "microseconds":  TIMESTAMP}, "event": "RESUME"}
+{"timestamp": {"seconds":  TIMESTAMP, "microseconds":  TIMESTAMP}, "event": "JOB_STATUS_CHANGE", "data": {"status": "waiting", "id": "load-skip-raw"}}
+{"timestamp": {"seconds":  TIMESTAMP, "microseconds":  TIMESTAMP}, "event": "JOB_STATUS_CHANGE", "data": {"status": "pending", "id": "load-skip-raw"}}
+{"timestamp": {"seconds":  TIMESTAMP, "microseconds":  TIMESTAMP}, "event": "JOB_STATUS_CHANGE", "data": {"status": "concluded", "id": "load-skip-raw"}}
+{"execute": "query-jobs"}
+{"return": [{"current-progress": 1, "status": "concluded", "total-progress": 1, "type": "snapshot-load", "id": "load-skip-raw"}]}
+{"execute": "job-dismiss", "arguments": {"id": "load-skip-raw"}}
+{"return": {}}
+{"timestamp": {"seconds":  TIMESTAMP, "microseconds":  TIMESTAMP}, "event": "JOB_STATUS_CHANGE", "data": {"status": "null", "id": "load-skip-raw"}}
+{"execute": "snapshot-delete", "arguments": {"job-id": "delete-skip-raw", "tag": "snap0", "devices": ["diskfmt0", "diskfmt1"]}}
+{"return": {}}
+{"timestamp": {"seconds":  TIMESTAMP, "microseconds":  TIMESTAMP}, "event": "JOB_STATUS_CHANGE", "data": {"status": "created", "id": "delete-skip-raw"}}
+{"timestamp": {"seconds":  TIMESTAMP, "microseconds":  TIMESTAMP}, "event": "JOB_STATUS_CHANGE", "data": {"status": "running", "id": "delete-skip-raw"}}
+{"timestamp": {"seconds":  TIMESTAMP, "microseconds":  TIMESTAMP}, "event": "JOB_STATUS_CHANGE", "data": {"status": "waiting", "id": "delete-skip-raw"}}
+{"timestamp": {"seconds":  TIMESTAMP, "microseconds":  TIMESTAMP}, "event": "JOB_STATUS_CHANGE", "data": {"status": "pending", "id": "delete-skip-raw"}}
+{"timestamp": {"seconds":  TIMESTAMP, "microseconds":  TIMESTAMP}, "event": "JOB_STATUS_CHANGE", "data": {"status": "concluded", "id": "delete-skip-raw"}}
+{"execute": "query-jobs"}
+{"return": [{"current-progress": 1, "status": "concluded", "total-progress": 1, "type": "snapshot-delete", "id": "delete-skip-raw"}]}
+{"execute": "job-dismiss", "arguments": {"id": "delete-skip-raw"}}
+{"return": {}}
+{"timestamp": {"seconds":  TIMESTAMP, "microseconds":  TIMESTAMP}, "event": "JOB_STATUS_CHANGE", "data": {"status": "null", "id": "delete-skip-raw"}}
+{"execute": "quit"}
+{"return": {}}
+{"timestamp": {"seconds":  TIMESTAMP, "microseconds":  TIMESTAMP}, "event": "SHUTDOWN", "data": {"guest": false, "reason": "host-qmp-quit"}}
+
+=====  Snapshot bad error reporting to stderr =====
+
+{"execute": "qmp_capabilities"}
+{"return": {}}
+{"execute": "snapshot-save", "arguments": {"job-id": "save-err-stderr", "tag": "snap0", "vmstate": "diskfmt0", "devices": ["diskfmt0"]}}
+{"return": {}}
+{"timestamp": {"seconds":  TIMESTAMP, "microseconds":  TIMESTAMP}, "event": "JOB_STATUS_CHANGE", "data": {"status": "created", "id": "save-err-stderr"}}
+{"timestamp": {"seconds":  TIMESTAMP, "microseconds":  TIMESTAMP}, "event": "JOB_STATUS_CHANGE", "data": {"status": "running", "id": "save-err-stderr"}}
+{"timestamp": {"seconds":  TIMESTAMP, "microseconds":  TIMESTAMP}, "event": "STOP"}
+{"timestamp": {"seconds":  TIMESTAMP, "microseconds":  TIMESTAMP}, "event": "RESUME"}
+{"timestamp": {"seconds":  TIMESTAMP, "microseconds":  TIMESTAMP}, "event": "JOB_STATUS_CHANGE", "data": {"status": "waiting", "id": "save-err-stderr"}}
+{"timestamp": {"seconds":  TIMESTAMP, "microseconds":  TIMESTAMP}, "event": "JOB_STATUS_CHANGE", "data": {"status": "pending", "id": "save-err-stderr"}}
+{"timestamp": {"seconds":  TIMESTAMP, "microseconds":  TIMESTAMP}, "event": "JOB_STATUS_CHANGE", "data": {"status": "concluded", "id": "save-err-stderr"}}
+{"execute": "query-jobs"}
+{"return": [{"current-progress": 1, "status": "concluded", "total-progress": 1, "type": "snapshot-save", "id": "save-err-stderr"}]}
+{"execute": "job-dismiss", "arguments": {"id": "save-err-stderr"}}
+{"return": {}}
+{"timestamp": {"seconds":  TIMESTAMP, "microseconds":  TIMESTAMP}, "event": "JOB_STATUS_CHANGE", "data": {"status": "null", "id": "save-err-stderr"}}
+{"execute": "quit"}
+{"return": {}}
+{"timestamp": {"seconds":  TIMESTAMP, "microseconds":  TIMESTAMP}, "event": "SHUTDOWN", "data": {"guest": false, "reason": "host-qmp-quit"}}
+{"execute": "qmp_capabilities"}
+{"return": {}}
+{"execute": "snapshot-load", "arguments": {"job-id": "load-err-stderr", "tag": "snap0", "vmstate": "diskfmt0", "devices": ["diskfmt0"]}}
+qemu-system-x86_64: Unknown savevm section or instance '0000:00:02.0/virtio-rng' 0. Make sure that your current VM setup matches your saved VM setup, including any hotplugged devices
+{"return": {}}
+{"timestamp": {"seconds":  TIMESTAMP, "microseconds":  TIMESTAMP}, "event": "JOB_STATUS_CHANGE", "data": {"status": "created", "id": "load-err-stderr"}}
+{"timestamp": {"seconds":  TIMESTAMP, "microseconds":  TIMESTAMP}, "event": "JOB_STATUS_CHANGE", "data": {"status": "running", "id": "load-err-stderr"}}
+{"timestamp": {"seconds":  TIMESTAMP, "microseconds":  TIMESTAMP}, "event": "STOP"}
+{"timestamp": {"seconds":  TIMESTAMP, "microseconds":  TIMESTAMP}, "event": "JOB_STATUS_CHANGE", "data": {"status": "aborting", "id": "load-err-stderr"}}
+{"timestamp": {"seconds":  TIMESTAMP, "microseconds":  TIMESTAMP}, "event": "JOB_STATUS_CHANGE", "data": {"status": "concluded", "id": "load-err-stderr"}}
+{"execute": "query-jobs"}
+{"return": [{"current-progress": 1, "status": "concluded", "total-progress": 1, "type": "snapshot-load", "id": "load-err-stderr", "error": "Error -22 while loading VM state"}]}
+{"execute": "job-dismiss", "arguments": {"id": "load-err-stderr"}}
+{"return": {}}
+{"timestamp": {"seconds":  TIMESTAMP, "microseconds":  TIMESTAMP}, "event": "JOB_STATUS_CHANGE", "data": {"status": "null", "id": "load-err-stderr"}}
+{"execute": "snapshot-delete", "arguments": {"job-id": "delete-err-stderr", "tag": "snap0", "devices": ["diskfmt0"]}}
+{"return": {}}
+{"timestamp": {"seconds":  TIMESTAMP, "microseconds":  TIMESTAMP}, "event": "JOB_STATUS_CHANGE", "data": {"status": "created", "id": "delete-err-stderr"}}
+{"timestamp": {"seconds":  TIMESTAMP, "microseconds":  TIMESTAMP}, "event": "JOB_STATUS_CHANGE", "data": {"status": "running", "id": "delete-err-stderr"}}
+{"timestamp": {"seconds":  TIMESTAMP, "microseconds":  TIMESTAMP}, "event": "JOB_STATUS_CHANGE", "data": {"status": "waiting", "id": "delete-err-stderr"}}
+{"timestamp": {"seconds":  TIMESTAMP, "microseconds":  TIMESTAMP}, "event": "JOB_STATUS_CHANGE", "data": {"status": "pending", "id": "delete-err-stderr"}}
+{"timestamp": {"seconds":  TIMESTAMP, "microseconds":  TIMESTAMP}, "event": "JOB_STATUS_CHANGE", "data": {"status": "concluded", "id": "delete-err-stderr"}}
+{"execute": "query-jobs"}
+{"return": [{"current-progress": 1, "status": "concluded", "total-progress": 1, "type": "snapshot-delete", "id": "delete-err-stderr"}]}
+{"execute": "job-dismiss", "arguments": {"id": "delete-err-stderr"}}
+{"return": {}}
+{"timestamp": {"seconds":  TIMESTAMP, "microseconds":  TIMESTAMP}, "event": "JOB_STATUS_CHANGE", "data": {"status": "null", "id": "delete-err-stderr"}}
+{"execute": "quit"}
+{"return": {}}
+{"timestamp": {"seconds":  TIMESTAMP, "microseconds":  TIMESTAMP}, "event": "SHUTDOWN", "data": {"guest": false, "reason": "host-qmp-quit"}}
+
+=====  Snapshot reuse same tag =====
+
+{"execute": "qmp_capabilities"}
+{"return": {}}
+{"execute": "snapshot-save", "arguments": { "job-id": "save-err-stderr-initial", "tag": "snap0", "vmstate": "diskfmt0", "devices": ["diskfmt0"]}}
+{"return": {}}
+{"timestamp": {"seconds":  TIMESTAMP, "microseconds":  TIMESTAMP}, "event": "JOB_STATUS_CHANGE", "data": {"status": "created", "id": "save-err-stderr-initial"}}
+{"timestamp": {"seconds":  TIMESTAMP, "microseconds":  TIMESTAMP}, "event": "JOB_STATUS_CHANGE", "data": {"status": "running", "id": "save-err-stderr-initial"}}
+{"timestamp": {"seconds":  TIMESTAMP, "microseconds":  TIMESTAMP}, "event": "STOP"}
+{"timestamp": {"seconds":  TIMESTAMP, "microseconds":  TIMESTAMP}, "event": "RESUME"}
+{"timestamp": {"seconds":  TIMESTAMP, "microseconds":  TIMESTAMP}, "event": "JOB_STATUS_CHANGE", "data": {"status": "waiting", "id": "save-err-stderr-initial"}}
+{"timestamp": {"seconds":  TIMESTAMP, "microseconds":  TIMESTAMP}, "event": "JOB_STATUS_CHANGE", "data": {"status": "pending", "id": "save-err-stderr-initial"}}
+{"timestamp": {"seconds":  TIMESTAMP, "microseconds":  TIMESTAMP}, "event": "JOB_STATUS_CHANGE", "data": {"status": "concluded", "id": "save-err-stderr-initial"}}
+{"execute": "query-jobs"}
+{"return": [{"current-progress": 1, "status": "concluded", "total-progress": 1, "type": "snapshot-save", "id": "save-err-stderr-initial"}]}
+{"execute": "job-dismiss", "arguments": {"id": "save-err-stderr-initial"}}
+{"return": {}}
+{"timestamp": {"seconds":  TIMESTAMP, "microseconds":  TIMESTAMP}, "event": "JOB_STATUS_CHANGE", "data": {"status": "null", "id": "save-err-stderr-initial"}}
+{"execute": "snapshot-save", "arguments": { "job-id": "save-err-stderr-repeat1", "tag": "snap0", "vmstate": "diskfmt0", "devices": ["diskfmt0"]}}
+{"return": {}}
+{"timestamp": {"seconds":  TIMESTAMP, "microseconds":  TIMESTAMP}, "event": "JOB_STATUS_CHANGE", "data": {"status": "created", "id": "save-err-stderr-repeat1"}}
+{"timestamp": {"seconds":  TIMESTAMP, "microseconds":  TIMESTAMP}, "event": "JOB_STATUS_CHANGE", "data": {"status": "running", "id": "save-err-stderr-repeat1"}}
+{"timestamp": {"seconds":  TIMESTAMP, "microseconds":  TIMESTAMP}, "event": "JOB_STATUS_CHANGE", "data": {"status": "aborting", "id": "save-err-stderr-repeat1"}}
+{"timestamp": {"seconds":  TIMESTAMP, "microseconds":  TIMESTAMP}, "event": "JOB_STATUS_CHANGE", "data": {"status": "concluded", "id": "save-err-stderr-repeat1"}}
+{"execute": "query-jobs"}
+{"return": [{"current-progress": 1, "status": "concluded", "total-progress": 1, "type": "snapshot-save", "id": "save-err-stderr-repeat1", "error": "Snapshot 'snap0' already exists in one or more devices"}]}
+{"execute": "job-dismiss", "arguments": {"id": "save-err-stderr-repeat1"}}
+{"return": {}}
+{"timestamp": {"seconds":  TIMESTAMP, "microseconds":  TIMESTAMP}, "event": "JOB_STATUS_CHANGE", "data": {"status": "null", "id": "save-err-stderr-repeat1"}}
+{"execute": "snapshot-delete", "arguments": { "job-id": "delete-err-stderr", "tag": "snap0", "devices": ["diskfmt0"]}}
+{"return": {}}
+{"timestamp": {"seconds":  TIMESTAMP, "microseconds":  TIMESTAMP}, "event": "JOB_STATUS_CHANGE", "data": {"status": "created", "id": "delete-err-stderr"}}
+{"timestamp": {"seconds":  TIMESTAMP, "microseconds":  TIMESTAMP}, "event": "JOB_STATUS_CHANGE", "data": {"status": "running", "id": "delete-err-stderr"}}
+{"timestamp": {"seconds":  TIMESTAMP, "microseconds":  TIMESTAMP}, "event": "JOB_STATUS_CHANGE", "data": {"status": "waiting", "id": "delete-err-stderr"}}
+{"timestamp": {"seconds":  TIMESTAMP, "microseconds":  TIMESTAMP}, "event": "JOB_STATUS_CHANGE", "data": {"status": "pending", "id": "delete-err-stderr"}}
+{"timestamp": {"seconds":  TIMESTAMP, "microseconds":  TIMESTAMP}, "event": "JOB_STATUS_CHANGE", "data": {"status": "concluded", "id": "delete-err-stderr"}}
+{"execute": "query-jobs"}
+{"return": [{"current-progress": 1, "status": "concluded", "total-progress": 1, "type": "snapshot-delete", "id": "delete-err-stderr"}]}
+{"execute": "job-dismiss", "arguments": {"id": "delete-err-stderr"}}
+{"return": {}}
+{"timestamp": {"seconds":  TIMESTAMP, "microseconds":  TIMESTAMP}, "event": "JOB_STATUS_CHANGE", "data": {"status": "null", "id": "delete-err-stderr"}}
+{"execute": "snapshot-save", "arguments": { "job-id": "save-err-stderr-repeat2", "tag": "snap0", "vmstate": "diskfmt0", "devices": ["diskfmt0"]}}
+{"return": {}}
+{"timestamp": {"seconds":  TIMESTAMP, "microseconds":  TIMESTAMP}, "event": "JOB_STATUS_CHANGE", "data": {"status": "created", "id": "save-err-stderr-repeat2"}}
+{"timestamp": {"seconds":  TIMESTAMP, "microseconds":  TIMESTAMP}, "event": "JOB_STATUS_CHANGE", "data": {"status": "running", "id": "save-err-stderr-repeat2"}}
+{"timestamp": {"seconds":  TIMESTAMP, "microseconds":  TIMESTAMP}, "event": "STOP"}
+{"timestamp": {"seconds":  TIMESTAMP, "microseconds":  TIMESTAMP}, "event": "RESUME"}
+{"timestamp": {"seconds":  TIMESTAMP, "microseconds":  TIMESTAMP}, "event": "JOB_STATUS_CHANGE", "data": {"status": "waiting", "id": "save-err-stderr-repeat2"}}
+{"timestamp": {"seconds":  TIMESTAMP, "microseconds":  TIMESTAMP}, "event": "JOB_STATUS_CHANGE", "data": {"status": "pending", "id": "save-err-stderr-repeat2"}}
+{"timestamp": {"seconds":  TIMESTAMP, "microseconds":  TIMESTAMP}, "event": "JOB_STATUS_CHANGE", "data": {"status": "concluded", "id": "save-err-stderr-repeat2"}}
+{"execute": "query-jobs"}
+{"return": [{"current-progress": 1, "status": "concluded", "total-progress": 1, "type": "snapshot-save", "id": "save-err-stderr-repeat2"}]}
+{"execute": "job-dismiss", "arguments": {"id": "save-err-stderr-repeat2"}}
+{"return": {}}
+{"timestamp": {"seconds":  TIMESTAMP, "microseconds":  TIMESTAMP}, "event": "JOB_STATUS_CHANGE", "data": {"status": "null", "id": "save-err-stderr-repeat2"}}
+{"execute": "snapshot-delete", "arguments": { "job-id": "delete-err-stderr-repeat2", "tag": "snap0", "devices": ["diskfmt0"]}}
+{"return": {}}
+{"timestamp": {"seconds":  TIMESTAMP, "microseconds":  TIMESTAMP}, "event": "JOB_STATUS_CHANGE", "data": {"status": "created", "id": "delete-err-stderr-repeat2"}}
+{"timestamp": {"seconds":  TIMESTAMP, "microseconds":  TIMESTAMP}, "event": "JOB_STATUS_CHANGE", "data": {"status": "running", "id": "delete-err-stderr-repeat2"}}
+{"timestamp": {"seconds":  TIMESTAMP, "microseconds":  TIMESTAMP}, "event": "JOB_STATUS_CHANGE", "data": {"status": "waiting", "id": "delete-err-stderr-repeat2"}}
+{"timestamp": {"seconds":  TIMESTAMP, "microseconds":  TIMESTAMP}, "event": "JOB_STATUS_CHANGE", "data": {"status": "pending", "id": "delete-err-stderr-repeat2"}}
+{"timestamp": {"seconds":  TIMESTAMP, "microseconds":  TIMESTAMP}, "event": "JOB_STATUS_CHANGE", "data": {"status": "concluded", "id": "delete-err-stderr-repeat2"}}
+{"execute": "query-jobs"}
+{"return": [{"current-progress": 1, "status": "concluded", "total-progress": 1, "type": "snapshot-delete", "id": "delete-err-stderr-repeat2"}]}
+{"execute": "job-dismiss", "arguments": {"id": "delete-err-stderr-repeat2"}}
+{"return": {}}
+{"timestamp": {"seconds":  TIMESTAMP, "microseconds":  TIMESTAMP}, "event": "JOB_STATUS_CHANGE", "data": {"status": "null", "id": "delete-err-stderr-repeat2"}}
+{"execute": "quit"}
+{"return": {}}
+{"timestamp": {"seconds":  TIMESTAMP, "microseconds":  TIMESTAMP}, "event": "SHUTDOWN", "data": {"guest": false, "reason": "host-qmp-quit"}}
+
+=====  Snapshot load does not exist =====
+
+{"execute": "qmp_capabilities"}
+{"return": {}}
+{"execute": "snapshot-load", "arguments": { "job-id": "load-missing-snapshot", "tag": "snap0", "vmstate": "diskfmt0", "devices": ["diskfmt0"]}}
+{"return": {}}
+{"timestamp": {"seconds":  TIMESTAMP, "microseconds":  TIMESTAMP}, "event": "JOB_STATUS_CHANGE", "data": {"status": "created", "id": "load-missing-snapshot"}}
+{"timestamp": {"seconds":  TIMESTAMP, "microseconds":  TIMESTAMP}, "event": "JOB_STATUS_CHANGE", "data": {"status": "running", "id": "load-missing-snapshot"}}
+{"timestamp": {"seconds":  TIMESTAMP, "microseconds":  TIMESTAMP}, "event": "STOP"}
+{"timestamp": {"seconds":  TIMESTAMP, "microseconds":  TIMESTAMP}, "event": "JOB_STATUS_CHANGE", "data": {"status": "aborting", "id": "load-missing-snapshot"}}
+{"timestamp": {"seconds":  TIMESTAMP, "microseconds":  TIMESTAMP}, "event": "JOB_STATUS_CHANGE", "data": {"status": "concluded", "id": "load-missing-snapshot"}}
+{"execute": "query-jobs"}
+{"return": [{"current-progress": 1, "status": "concluded", "total-progress": 1, "type": "snapshot-load", "id": "load-missing-snapshot", "error": "Snapshot 'snap0' does not exist in one or more devices"}]}
+{"execute": "job-dismiss", "arguments": {"id": "load-missing-snapshot"}}
+{"return": {}}
+{"timestamp": {"seconds":  TIMESTAMP, "microseconds":  TIMESTAMP}, "event": "JOB_STATUS_CHANGE", "data": {"status": "null", "id": "load-missing-snapshot"}}
+{"execute": "quit"}
+{"return": {}}
+{"timestamp": {"seconds":  TIMESTAMP, "microseconds":  TIMESTAMP}, "event": "SHUTDOWN", "data": {"guest": false, "reason": "host-qmp-quit"}}
+
+=====  Snapshot delete does not exist =====
+
+{"execute": "qmp_capabilities"}
+{"return": {}}
+{"execute": "snapshot-delete", "arguments": { "job-id": "delete-missing-snapshot", "tag": "snap0", "devices": ["diskfmt0"]}}
+{"return": {}}
+{"timestamp": {"seconds":  TIMESTAMP, "microseconds":  TIMESTAMP}, "event": "JOB_STATUS_CHANGE", "data": {"status": "created", "id": "delete-missing-snapshot"}}
+{"timestamp": {"seconds":  TIMESTAMP, "microseconds":  TIMESTAMP}, "event": "JOB_STATUS_CHANGE", "data": {"status": "running", "id": "delete-missing-snapshot"}}
+{"timestamp": {"seconds":  TIMESTAMP, "microseconds":  TIMESTAMP}, "event": "JOB_STATUS_CHANGE", "data": {"status": "waiting", "id": "delete-missing-snapshot"}}
+{"timestamp": {"seconds":  TIMESTAMP, "microseconds":  TIMESTAMP}, "event": "JOB_STATUS_CHANGE", "data": {"status": "pending", "id": "delete-missing-snapshot"}}
+{"timestamp": {"seconds":  TIMESTAMP, "microseconds":  TIMESTAMP}, "event": "JOB_STATUS_CHANGE", "data": {"status": "concluded", "id": "delete-missing-snapshot"}}
+{"execute": "query-jobs"}
+{"return": [{"current-progress": 1, "status": "concluded", "total-progress": 1, "type": "snapshot-delete", "id": "delete-missing-snapshot"}]}
+{"execute": "job-dismiss", "arguments": {"id": "delete-missing-snapshot"}}
+{"return": {}}
+{"timestamp": {"seconds":  TIMESTAMP, "microseconds":  TIMESTAMP}, "event": "JOB_STATUS_CHANGE", "data": {"status": "null", "id": "delete-missing-snapshot"}}
+{"execute": "quit"}
+{"return": {}}
+{"timestamp": {"seconds":  TIMESTAMP, "microseconds":  TIMESTAMP}, "event": "SHUTDOWN", "data": {"guest": false, "reason": "host-qmp-quit"}}
+*** done
diff --git a/tests/qemu-iotests/group b/tests/qemu-iotests/group
index ff59cfd2d4..5f827260da 100644
--- a/tests/qemu-iotests/group
+++ b/tests/qemu-iotests/group
@@ -291,6 +291,7 @@ 
 277 rw quick
 279 rw backing quick
 280 rw migration quick
+310 rw quick
 281 rw quick
 282 rw img quick
 283 auto quick
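
For application authors, the transcripts above all reduce to one client-side
pattern: issue the command with a job-id, treat it as asynchronous, wait for
the JOB_STATUS_CHANGE event to reach "concluded", read any "error" member out
of query-jobs, then job-dismiss. A minimal Python sketch of that pattern over
a raw QMP socket follows; the socket path and helper names are illustrative
assumptions, not part of any QEMU-provided API:

    import json
    import socket

    def qmp_open(path):
        # Connect to QEMU's QMP UNIX socket and negotiate capabilities.
        sock = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM)
        sock.connect(path)
        stream = sock.makefile("rw", encoding="utf-8", newline="\n")
        json.loads(stream.readline())          # discard the QMP greeting
        command(stream, "qmp_capabilities")
        return stream

    def command(stream, name, args=None):
        # Send one command; skip interleaved events until the reply arrives.
        msg = {"execute": name}
        if args is not None:
            msg["arguments"] = args
        stream.write(json.dumps(msg) + "\n")
        stream.flush()
        while True:
            reply = json.loads(stream.readline())
            if "return" in reply:
                return reply["return"]
            if "error" in reply:
                raise RuntimeError(reply["error"]["desc"])
            # Anything else is an event (STOP, RESUME, JOB_STATUS_CHANGE...);
            # a production client should queue these instead of dropping them.

    def wait_concluded(stream, job_id):
        # Wait for the JOB_STATUS_CHANGE event that marks the job finished.
        while True:
            msg = json.loads(stream.readline())
            if (msg.get("event") == "JOB_STATUS_CHANGE"
                    and msg["data"]["id"] == job_id
                    and msg["data"]["status"] == "concluded"):
                return

    def snapshot_job(stream, name, job_id, args):
        # The full lifecycle shown in the transcripts above:
        # execute -> concluded -> query-jobs for "error" -> job-dismiss.
        command(stream, name, dict(args, **{"job-id": job_id}))
        wait_concluded(stream, job_id)
        jobs = command(stream, "query-jobs")
        job = next(j for j in jobs if j["id"] == job_id)
        command(stream, "job-dismiss", {"id": job_id})
        if "error" in job:
            raise RuntimeError(job["error"])

    stream = qmp_open("/tmp/qmp.sock")         # hypothetical socket path
    snapshot_job(stream, "snapshot-save", "save0",
                 {"tag": "snap0", "vmstate": "diskfmt0",
                  "devices": ["diskfmt0"]})

Note the query-jobs step: as the failure cases above show (for example
"Snapshot 'snap0' already exists in one or more devices"), failures surface
in the concluded job's "error" member, not in the initial {"return": {}}, so
a client that skips that lookup will silently miss them.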