Message ID | 1363628865-29390-3-git-send-email-peter.maydell@linaro.org |
---|---|
State | Superseded |
On 03/18/2013 09:47 PM, Peter Maydell wrote:
>> Update the GIC save/restore to use vmstate rather than hand-rolled
>> save/load functions.
>>
>> Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
>> ---
>>  hw/arm_gic_common.c | 109 ++++++++++++++++++++-------------------------------
>>  1 file changed, 42 insertions(+), 67 deletions(-)
>>
>> diff --git a/hw/arm_gic_common.c b/hw/arm_gic_common.c
>> index f95bec3..0785a11 100644
>> --- a/hw/arm_gic_common.c
>> +++ b/hw/arm_gic_common.c
>> @@ -20,90 +20,66 @@
>> [...]
>>                               GIC_INTERNAL),
>> +        VMSTATE_BUFFER_UNSAFE(last_active, GICState, 0,
>> +                              GIC_MAXIRQ * NCPU * sizeof(uint16_t)),

I'm not sure about this one, do we have any guarantees that it will always
be tightly packed? What will happen when we try to migrate a VM between
BE and LE hosts?

I successfully tested it on my PC, but I think generally this macro is OK
to use with byte buffers, but not with an array of arbitrary type. That
said, I couldn't come up with any alternatives yet.
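To make the concern concrete: VMSTATE_BUFFER_UNSAFE effectively serializes the field as one raw byte buffer, so nothing byte-swaps the individual uint16_t elements on the wire. The stand-alone sketch below (an editor illustration, not QEMU code) contrasts that with the per-element big-endian serialization that the typed VMSTATE_*_ARRAY macros perform:

```c
#include <stdint.h>
#include <stdio.h>
#include <string.h>

/* Raw-buffer serialization: a plain copy of the in-memory bytes, so the
 * wire format depends on the host's byte order. */
static void put_raw(uint8_t *wire, const uint16_t *vals, size_t n)
{
    memcpy(wire, vals, n * sizeof(uint16_t));
}

/* Element-wise big-endian serialization (what the typed vmstate array
 * macros effectively do): the wire format is host-independent. */
static void put_be16(uint8_t *wire, const uint16_t *vals, size_t n)
{
    for (size_t i = 0; i < n; i++) {
        wire[2 * i]     = vals[i] >> 8;   /* high byte first */
        wire[2 * i + 1] = vals[i] & 0xff;
    }
}

int main(void)
{
    uint16_t last_active[2] = { 0x1234, 0xbeef };
    uint8_t raw[4], be[4];

    put_raw(raw, last_active, 2);
    put_be16(be, last_active, 2);

    /* On a little-endian host raw[] is 34 12 ef be, on a big-endian host
     * it is 12 34 be ef; be[] is 12 34 be ef on either host. */
    for (int i = 0; i < 4; i++) {
        printf("%02x %02x\n", raw[i], be[i]);
    }
    return 0;
}
```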
On 18 March 2013 19:48, Igor Mitsyanko <i.mitsyanko@gmail.com> wrote:
>> On 03/18/2013 09:47 PM, Peter Maydell wrote:
>>>
>>> +        VMSTATE_BUFFER_UNSAFE(last_active, GICState, 0,
>>> +                              GIC_MAXIRQ * NCPU * sizeof(uint16_t)),

> I'm not sure about this one, do we have any guarantees that it will always
> be tightly packed? What will happen when we try to migrate a VM between
> BE and LE hosts?

Ugh. I think the packing is ok but I hadn't thought about the
endianness issue.

Gerd and I were talking on IRC about 2D arrays. I think we came to
the conclusion that you could provide a new set of vmstate macros
for 2D arrays which basically work just like the existing 1D array
ones except that the typecheck is different.

(vmstate.h is getting hugely repetitive to the point that I'm
really tempted to say we should just autogenerate it. That way
you could define a fairly small set of things (arrays, base types,
safe vs unsafe, etc) and have a script generate the cross product,
rather than the current setup where there is a lot of hand-written
repetition and a tendency to gaps in the coverage where nobody's
using them yet.)

-- PMM
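For illustration, here is one way the 2D-array macro family Peter describes could look, modeled on the existing 1D array macros in vmstate.h. This is a sketch only: the names VMSTATE_2DARRAY, vmstate_offset_2darray, type_check_2darray and the typed wrappers are hypothetical here, not macros that the patch or the tree at this point actually provides.

```c
/* Sketch of hypothetical additions to vmstate.h, by analogy with the
 * existing 1D VMSTATE_ARRAY/type_check_array helpers.  The only real
 * difference from the 1D variants is the compile-time typecheck, which
 * must accept a _type[_n1][_n2] field; the element count is _n1 * _n2. */

/* Fails to compile unless the field really is _type[_n1][_n2]. */
#define type_check_2darray(t1, t2, n1, n2) ((t1(*)[n1][n2])0 - (t2 *)0)

#define vmstate_offset_2darray(_state, _field, _type, _n1, _n2)           \
    (offsetof(_state, _field) +                                           \
     type_check_2darray(_type, typeof_field(_state, _field), _n1, _n2))

#define VMSTATE_2DARRAY(_field, _state, _n1, _n2, _version, _info, _type) { \
    .name       = (stringify(_field)),                                      \
    .version_id = (_version),                                               \
    .num        = (_n1) * (_n2),                                            \
    .info       = &(_info),                                                 \
    .size       = sizeof(_type),                                            \
    .flags      = VMS_ARRAY,                                                \
    .offset     = vmstate_offset_2darray(_state, _field, _type, _n1, _n2),  \
}

/* Typed wrappers, one per base type, exactly like the 1D ones. */
#define VMSTATE_UINT8_2DARRAY(_f, _s, _n1, _n2) \
    VMSTATE_2DARRAY(_f, _s, _n1, _n2, 0, vmstate_info_uint8, uint8_t)

#define VMSTATE_UINT16_2DARRAY(_f, _s, _n1, _n2) \
    VMSTATE_2DARRAY(_f, _s, _n1, _n2, 0, vmstate_info_uint16, uint16_t)
```

Because each element then goes through the uint8/uint16 .info handlers, every value is written and read individually in big-endian wire order, so neither struct padding nor host endianness can change the stream format.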
On Mar 19, 2013 12:21 AM, "Peter Maydell" <peter.maydell@linaro.org> wrote:
>
> On 18 March 2013 19:48, Igor Mitsyanko <i.mitsyanko@gmail.com> wrote:
> >> On 03/18/2013 09:47 PM, Peter Maydell wrote:
> >>>
> >>> +        VMSTATE_BUFFER_UNSAFE(last_active, GICState, 0,
> >>> +                              GIC_MAXIRQ * NCPU * sizeof(uint16_t)),
>
> > I'm not sure about this one, do we have any guarantees that it will always
> > be tightly packed? What will happen when we try to migrate a VM between
> > BE and LE hosts?
>
> Ugh. I think the packing is ok but I hadn't thought about the
> endianness issue.
>
> Gerd and I were talking on IRC about 2D arrays. I think we came to
> the conclusion that you could provide a new set of vmstate macros
> for 2D arrays which basically work just like the existing 1D array
> ones except that the typecheck is different.
>
> (vmstate.h is getting hugely repetitive to the point that I'm
> really tempted to say we should just autogenerate it. That way
> you could define a fairly small set of things (arrays, base types,
> safe vs unsafe, etc) and have a script generate the cross product,
> rather than the current setup where there is a lot of hand-written
> repetition and a tendency to gaps in the coverage where nobody's
> using them yet.)
>
> -- PMM

I can recall a qemu-devel discussion about long-term QOM goals a while
ago. Somebody suggested that in the future we will define device state
structures using some special macro which will be parsed during
compilation, serializing each member for both QOM introspection and
vmstate migration.
  Hi,

>> (vmstate.h is getting hugely repetitive to the point that I'm
>> really tempted to say we should just autogenerate it. That way
>> you could define a fairly small set of things (arrays, base types,
>> safe vs unsafe, etc) and have a script generate the cross product,
>> rather than the current setup where there is a lot of hand-written
>> repetition and a tendency to gaps in the coverage where nobody's
>> using them yet.)

> I can recall a qemu-devel discussion about long-term QOM goals a while
> ago. Somebody suggested that in the future we will define device state
> structures using some special macro which will be parsed during
> compilation, serializing each member for both QOM introspection and
> vmstate migration.

That is where I see the future too. Michael Roth [cc'ed] has this on his
agenda. We have code generation infrastructure for qapi and it surely
makes sense to reuse that for vmstate.

cheers,
  Gerd
On 18.03.2013 21:43, Igor Mitsyanko wrote:
> On Mar 19, 2013 12:21 AM, "Peter Maydell" <peter.maydell@linaro.org> wrote:
>>
>> On 18 March 2013 19:48, Igor Mitsyanko <i.mitsyanko@gmail.com> wrote:
>> >> On 03/18/2013 09:47 PM, Peter Maydell wrote:
>> >>>
>> >>> +        VMSTATE_BUFFER_UNSAFE(last_active, GICState, 0,
>> >>> +                              GIC_MAXIRQ * NCPU * sizeof(uint16_t)),
>>
>> > I'm not sure about this one, do we have any guarantees that it will
>> > always be tightly packed? What will happen when we try to migrate a VM
>> > between BE and LE hosts?
>>
>> Ugh. I think the packing is ok but I hadn't thought about the
>> endianness issue.
>>
>> Gerd and I were talking on IRC about 2D arrays. I think we came to
>> the conclusion that you could provide a new set of vmstate macros
>> for 2D arrays which basically work just like the existing 1D array
>> ones except that the typecheck is different.
>>
>> (vmstate.h is getting hugely repetitive to the point that I'm
>> really tempted to say we should just autogenerate it. That way
>> you could define a fairly small set of things (arrays, base types,
>> safe vs unsafe, etc) and have a script generate the cross product,
>> rather than the current setup where there is a lot of hand-written
>> repetition and a tendency to gaps in the coverage where nobody's
>> using them yet.)
>>
>> -- PMM
>
> I can recall a qemu-devel discussion about long-term QOM goals a while
> ago. Somebody suggested that in the future we will define device state
> structures using some special macro which will be parsed during
> compilation, serializing each member for both QOM introspection and
> vmstate migration.

QIDL - CC'ing Michael.

Andreas
```diff
diff --git a/hw/arm_gic_common.c b/hw/arm_gic_common.c
index f95bec3..0785a11 100644
--- a/hw/arm_gic_common.c
+++ b/hw/arm_gic_common.c
@@ -20,90 +20,66 @@
 #include "hw/arm_gic_internal.h"
 
-static void gic_save(QEMUFile *f, void *opaque)
+static void gic_pre_save(void *opaque)
 {
     GICState *s = (GICState *)opaque;
     ARMGICCommonClass *c = ARM_GIC_COMMON_GET_CLASS(s);
-    int i;
-    int j;
 
     if (c->pre_save) {
         c->pre_save(s);
     }
-
-    qemu_put_be32(f, s->enabled);
-    for (i = 0; i < s->num_cpu; i++) {
-        qemu_put_be32(f, s->cpu_enabled[i]);
-        for (j = 0; j < GIC_INTERNAL; j++) {
-            qemu_put_be32(f, s->priority1[j][i]);
-        }
-        for (j = 0; j < s->num_irq; j++) {
-            qemu_put_be32(f, s->last_active[j][i]);
-        }
-        qemu_put_be32(f, s->priority_mask[i]);
-        qemu_put_be32(f, s->running_irq[i]);
-        qemu_put_be32(f, s->running_priority[i]);
-        qemu_put_be32(f, s->current_pending[i]);
-    }
-    for (i = 0; i < s->num_irq - GIC_INTERNAL; i++) {
-        qemu_put_be32(f, s->priority2[i]);
-    }
-    for (i = 0; i < s->num_irq; i++) {
-        qemu_put_be32(f, s->irq_target[i]);
-        qemu_put_byte(f, s->irq_state[i].enabled);
-        qemu_put_byte(f, s->irq_state[i].pending);
-        qemu_put_byte(f, s->irq_state[i].active);
-        qemu_put_byte(f, s->irq_state[i].level);
-        qemu_put_byte(f, s->irq_state[i].model);
-        qemu_put_byte(f, s->irq_state[i].trigger);
-    }
 }
 
-static int gic_load(QEMUFile *f, void *opaque, int version_id)
+static int gic_post_load(void *opaque, int version_id)
 {
     GICState *s = (GICState *)opaque;
     ARMGICCommonClass *c = ARM_GIC_COMMON_GET_CLASS(s);
-    int i;
-    int j;
-
-    if (version_id != 3) {
-        return -EINVAL;
-    }
-
-    s->enabled = qemu_get_be32(f);
-    for (i = 0; i < s->num_cpu; i++) {
-        s->cpu_enabled[i] = qemu_get_be32(f);
-        for (j = 0; j < GIC_INTERNAL; j++) {
-            s->priority1[j][i] = qemu_get_be32(f);
-        }
-        for (j = 0; j < s->num_irq; j++) {
-            s->last_active[j][i] = qemu_get_be32(f);
-        }
-        s->priority_mask[i] = qemu_get_be32(f);
-        s->running_irq[i] = qemu_get_be32(f);
-        s->running_priority[i] = qemu_get_be32(f);
-        s->current_pending[i] = qemu_get_be32(f);
-    }
-    for (i = 0; i < s->num_irq - GIC_INTERNAL; i++) {
-        s->priority2[i] = qemu_get_be32(f);
-    }
-    for (i = 0; i < s->num_irq; i++) {
-        s->irq_target[i] = qemu_get_be32(f);
-        s->irq_state[i].enabled = qemu_get_byte(f);
-        s->irq_state[i].pending = qemu_get_byte(f);
-        s->irq_state[i].active = qemu_get_byte(f);
-        s->irq_state[i].level = qemu_get_byte(f);
-        s->irq_state[i].model = qemu_get_byte(f);
-        s->irq_state[i].trigger = qemu_get_byte(f);
-    }
 
     if (c->post_load) {
         c->post_load(s);
     }
-
     return 0;
 }
 
+static const VMStateDescription vmstate_gic_irq_state = {
+    .name = "arm_gic_irq_state",
+    .version_id = 1,
+    .minimum_version_id = 1,
+    .fields = (VMStateField[]) {
+        VMSTATE_UINT8(enabled, gic_irq_state),
+        VMSTATE_UINT8(pending, gic_irq_state),
+        VMSTATE_UINT8(active, gic_irq_state),
+        VMSTATE_UINT8(level, gic_irq_state),
+        VMSTATE_BOOL(model, gic_irq_state),
+        VMSTATE_BOOL(trigger, gic_irq_state),
+        VMSTATE_END_OF_LIST()
+    }
+};
+
+static const VMStateDescription vmstate_gic = {
+    .name = "arm_gic",
+    .version_id = 4,
+    .minimum_version_id = 4,
+    .pre_save = gic_pre_save,
+    .post_load = gic_post_load,
+    .fields = (VMStateField[]) {
+        VMSTATE_BOOL(enabled, GICState),
+        VMSTATE_BOOL_ARRAY(cpu_enabled, GICState, NCPU),
+        VMSTATE_STRUCT_ARRAY(irq_state, GICState, GIC_MAXIRQ, 1,
+                             vmstate_gic_irq_state, gic_irq_state),
+        VMSTATE_UINT8_ARRAY(irq_target, GICState, GIC_MAXIRQ),
+        VMSTATE_BUFFER_UNSAFE(priority1, GICState, 0, GIC_INTERNAL * NCPU),
+        VMSTATE_UINT8_ARRAY(priority2, GICState, GIC_MAXIRQ - GIC_INTERNAL),
+        VMSTATE_BUFFER_UNSAFE(last_active, GICState, 0,
+                              GIC_MAXIRQ * NCPU * sizeof(uint16_t)),
+        VMSTATE_UINT16_ARRAY(priority_mask, GICState, NCPU),
+        VMSTATE_UINT16_ARRAY(running_irq, GICState, NCPU),
+        VMSTATE_UINT16_ARRAY(running_priority, GICState, NCPU),
+        VMSTATE_UINT16_ARRAY(current_pending, GICState, NCPU),
+        VMSTATE_END_OF_LIST()
+    }
+};
+
 static void arm_gic_common_realize(DeviceState *dev, Error **errp)
 {
     GICState *s = ARM_GIC_COMMON(dev);
@@ -131,8 +107,6 @@ static void arm_gic_common_realize(DeviceState *dev, Error **errp)
                    num_irq);
         return;
     }
-
-    register_savevm(NULL, "arm_gic", -1, 3, gic_save, gic_load, s);
 }
 
 static void arm_gic_common_reset(DeviceState *dev)
@@ -182,6 +156,7 @@ static void arm_gic_common_class_init(ObjectClass *klass, void *data)
     dc->reset = arm_gic_common_reset;
     dc->realize = arm_gic_common_realize;
     dc->props = arm_gic_common_properties;
+    dc->vmsd = &vmstate_gic;
     dc->no_user = 1;
 }
```
Update the GIC save/restore to use vmstate rather than hand-rolled
save/load functions.

Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
---
 hw/arm_gic_common.c | 109 ++++++++++++++++++++-------------------------------
 1 file changed, 42 insertions(+), 67 deletions(-)
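For completeness, if wrappers along the lines sketched earlier were available, the two VMSTATE_BUFFER_UNSAFE entries in vmstate_gic could plausibly become typed, endian-safe fields. The fragment below is hypothetical: it assumes the sketched VMSTATE_UINT8_2DARRAY/VMSTATE_UINT16_2DARRAY macros and infers the field shapes (uint8_t priority1[GIC_INTERNAL][NCPU], uint16_t last_active[GIC_MAXIRQ][NCPU]) from the buffer sizes the patch uses.

```c
/* Hypothetical replacement for the two VMSTATE_BUFFER_UNSAFE entries in
 * vmstate_gic, assuming the sketched 2D-array macros exist.  Each element
 * is then serialized individually, so struct packing and host endianness
 * no longer affect the migration stream. */
        VMSTATE_UINT8_2DARRAY(priority1, GICState, GIC_INTERNAL, NCPU),
        VMSTATE_UINT16_2DARRAY(last_active, GICState, GIC_MAXIRQ, NCPU),
```

Note that this encodes the fields differently from the raw-buffer form, so it would only be stream-compatible if done before the new version_id ships, or together with a further version bump.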