Message ID: 1465318123-3090-9-git-send-email-julien.grall@arm.com
State: New
Hi Stefano,

On 22/06/16 12:44, Stefano Stabellini wrote:
> On Tue, 7 Jun 2016, Julien Grall wrote:
>> It is not possible to know which IRQs will be used by DOM0 when ACPI is
>> in use. The approach implemented by this patch routes all unused IRQs
>> to DOM0 before it has booted.
>>
>> The number of IRQs routed is based on the maximum number of SPIs
>> supported by the hardware (up to ~1000). However, some of them might
>> not be wired, so we would allocate resources for nothing.
>>
>> For each IRQ routed, Xen allocates memory for an irqaction (40 bytes)
>> and an irq_guest (16 bytes). So in the worst-case scenario ~54KB of
>> memory will be allocated. Given that ACPI will mostly be used by
>> servers, I think it is a small drawback.
>
> Yeah, it's a pity. The patch is certainly an improvement and it would
> fix ACPI, which is currently broken, so I think it should go in pretty
> much as is. (See only one small comment below.)
>
> But I wonder if we could do something better by the next release. Have
> you considered using something like a tasklet to call
> route_irq_to_guest? The tasklet would be scheduled only after
> vgic_enable_irqs is called.
>
> Something like:
>
> - guest writes to GICD_ISENABLER
> - run vgic_enable_irqs
> - enable tasklet
> - tasklet run
> - call route_irq_to_guest
>
> ?

I remember we talked about such a solution on IRC. I have considered it
and found multiple downsides. What do we do if the call to
route_irq_to_guest fails? It is easy to make this call fail on ARM64
because there is no difference between xenheap and domheap, and there is
no way to report the failure to the guest except by crashing it.

Furthermore, we would need to take into account that the IRQ might be
disabled by another CPU before the tasklet is called. Finally, we would
also need to plumb GICD_CTLR.RWP on GICv3, because the effects of the
write will be deferred. This leads to the problem: what if the tasklet
takes more than 1s to be scheduled/executed?
I agree that the memory usage is "greater" (54KB on servers with tens of
gigabytes of memory), but I think the complexity of the code with a
tasklet outweighs the memory usage of this patch.

>> map_irq_to_domain is slightly reworked to remove the dependency on
>> device-tree, so the function can also be used for ACPI and will avoid
>> code duplication.
>>
>> Signed-off-by: Julien Grall <julien.grall@arm.com>
>> ---
>>  xen/arch/arm/domain_build.c | 19 ++++++++-----------
>>  1 file changed, 8 insertions(+), 11 deletions(-)
>>
>> diff --git a/xen/arch/arm/domain_build.c b/xen/arch/arm/domain_build.c
>> index 00dc07a..76d503d 100644
>> --- a/xen/arch/arm/domain_build.c
>> +++ b/xen/arch/arm/domain_build.c

[...]

>> @@ -1411,13 +1411,10 @@ static int acpi_permit_spi_access(struct domain *d)
>
> I would rename acpi_permit_spi_access to acpi_route_spis_to_dom0 or
> something like that, to reflect the change in behavior of the function.

Good point. I will do it in the next version.

Regards,
diff --git a/xen/arch/arm/domain_build.c b/xen/arch/arm/domain_build.c
index 00dc07a..76d503d 100644
--- a/xen/arch/arm/domain_build.c
+++ b/xen/arch/arm/domain_build.c
@@ -953,11 +953,10 @@ static int make_timer_node(const struct domain *d, void *fdt,
     return res;
 }
 
-static int map_irq_to_domain(const struct dt_device_node *dev,
-                             struct domain *d, unsigned int irq)
+static int map_irq_to_domain(struct domain *d, unsigned int irq,
+                             bool_t need_mapping, const char *devname)
 {
-    bool_t need_mapping = !dt_device_for_passthrough(dev);
     int res;
 
     res = irq_permit_access(d, irq);
@@ -977,7 +976,7 @@ static int map_irq_to_domain(const struct dt_device_node *dev,
          */
         vgic_reserve_virq(d, irq);
 
-        res = route_irq_to_guest(d, irq, irq, dt_node_name(dev));
+        res = route_irq_to_guest(d, irq, irq, devname);
         if ( res < 0 )
         {
             printk(XENLOG_ERR "Unable to map IRQ%"PRId32" to dom%d\n",
@@ -997,6 +996,7 @@ static int map_dt_irq_to_domain(const struct dt_device_node *dev,
     struct domain *d = data;
     unsigned int irq = dt_irq->irq;
     int res;
+    bool_t need_mapping = !dt_device_for_passthrough(dev);
 
     if ( irq < NR_LOCAL_IRQS )
     {
@@ -1015,7 +1015,7 @@ static int map_dt_irq_to_domain(const struct dt_device_node *dev,
         return res;
     }
 
-    res = map_irq_to_domain(dev, d, irq);
+    res = map_irq_to_domain(d, irq, need_mapping, dt_node_name(dev));
 
     return 0;
 }
@@ -1152,7 +1152,7 @@ static int handle_device(struct domain *d, struct dt_device_node *dev)
             return res;
         }
 
-        res = map_irq_to_domain(dev, d, res);
+        res = map_irq_to_domain(d, res, need_mapping, dt_node_name(dev));
         if ( res )
             return res;
     }
@@ -1411,13 +1411,10 @@ static int acpi_permit_spi_access(struct domain *d)
         if ( desc->action != NULL)
             continue;
 
-        res = irq_permit_access(d, i);
+        /* XXX: Shall we use a proper devname? */
+        res = map_irq_to_domain(d, i, true, "ACPI");
         if ( res )
-        {
-            printk(XENLOG_ERR "Unable to permit to dom%u access to IRQ %u\n",
-                   d->domain_id, i);
             return res;
-        }
     }
 
     return 0;
It is not possible to know which IRQs will be used by DOM0 when ACPI is
in use. The approach implemented by this patch routes all unused IRQs to
DOM0 before it has booted.

The number of IRQs routed is based on the maximum number of SPIs
supported by the hardware (up to ~1000). However, some of them might not
be wired, so we would allocate resources for nothing.

For each IRQ routed, Xen allocates memory for an irqaction (40 bytes)
and an irq_guest (16 bytes). So in the worst-case scenario ~54KB of
memory will be allocated. Given that ACPI will mostly be used by
servers, I think it is a small drawback.

map_irq_to_domain is slightly reworked to remove the dependency on
device-tree, so the function can also be used for ACPI and will avoid
code duplication.

Signed-off-by: Julien Grall <julien.grall@arm.com>
---
 xen/arch/arm/domain_build.c | 19 ++++++++-----------
 1 file changed, 8 insertions(+), 11 deletions(-)