Message ID | alpine.DEB.2.02.1402041615230.4373@kaball.uk.xensource.com |
---|---|
State | New |
On 02/04/2014 04:20 PM, Stefano Stabellini wrote:
> gic_route_irq_to_guest routes all IRQs to
> cpumask_of(smp_processor_id()), but actually it is always called on cpu0.
> To avoid confusion and possible issues in case someone modified the code
> and reassigned a particular irq to a cpu other than cpu0, hardcode
> cpumask_of(0).
>
> Signed-off-by: Stefano Stabellini <stefano.stabellini@eu.citrix.com>

Acked-by: Julien Grall <julien.grall@linaro.org>

> diff --git a/xen/arch/arm/gic.c b/xen/arch/arm/gic.c
> index e6257a7..8854800 100644
> --- a/xen/arch/arm/gic.c
> +++ b/xen/arch/arm/gic.c
> @@ -776,8 +776,8 @@ int gic_route_irq_to_guest(struct domain *d, const struct dt_irq *irq,
>
>     level = dt_irq_is_level_triggered(irq);
>
> -    gic_set_irq_properties(irq->irq, level, cpumask_of(smp_processor_id()),
> -                           0xa0);
> +    /* TODO: handle routing irqs to cpus != cpu0 */
> +    gic_set_irq_properties(irq->irq, level, cpumask_of(0), 0xa0);
>
>     retval = __setup_irq(desc, irq->irq, action);
>     if (retval) {
Hi all,

Ping? It would be nice to have this patch for Xen 4.4, as the IPI priority
patch won't be pushed before the release.

The patch is a minor change and won't impact normal use. When dom0 is
built, Xen always does it on CPU 0.

Regards,

On 02/04/2014 04:20 PM, Stefano Stabellini wrote:
> gic_route_irq_to_guest routes all IRQs to
> cpumask_of(smp_processor_id()), but actually it is always called on cpu0.
> To avoid confusion and possible issues in case someone modified the code
> and reassigned a particular irq to a cpu other than cpu0, hardcode
> cpumask_of(0).
>
> Signed-off-by: Stefano Stabellini <stefano.stabellini@eu.citrix.com>
>
> diff --git a/xen/arch/arm/gic.c b/xen/arch/arm/gic.c
> index e6257a7..8854800 100644
> --- a/xen/arch/arm/gic.c
> +++ b/xen/arch/arm/gic.c
> @@ -776,8 +776,8 @@ int gic_route_irq_to_guest(struct domain *d, const struct dt_irq *irq,
>
>     level = dt_irq_is_level_triggered(irq);
>
> -    gic_set_irq_properties(irq->irq, level, cpumask_of(smp_processor_id()),
> -                           0xa0);
> +    /* TODO: handle routing irqs to cpus != cpu0 */
> +    gic_set_irq_properties(irq->irq, level, cpumask_of(0), 0xa0);
>
>     retval = __setup_irq(desc, irq->irq, action);
>     if (retval) {
On Wed, 2014-02-19 at 13:43 +0000, Julien Grall wrote:
> Hi all,
>
> Ping?

No one made a case for a release exception so I put it in my 4.5 pile.

> It would be nice to have this patch for Xen 4.4 as the IPI priority
> patch won't be pushed before the release.
>
> The patch is a minor change and won't impact normal use. When dom0 is
> built, Xen always does it on CPU 0.

Right, so whoever is doing otherwise already has a big pile of patches, I
presume?

It's rather late to be making such changes IMHO, but I'll defer to
George.

> Regards,
>
> On 02/04/2014 04:20 PM, Stefano Stabellini wrote:
> > gic_route_irq_to_guest routes all IRQs to
> > cpumask_of(smp_processor_id()), but actually it is always called on cpu0.
> > To avoid confusion and possible issues in case someone modified the code
> > and reassigned a particular irq to a cpu other than cpu0, hardcode
> > cpumask_of(0).
> >
> > Signed-off-by: Stefano Stabellini <stefano.stabellini@eu.citrix.com>
> >
> > diff --git a/xen/arch/arm/gic.c b/xen/arch/arm/gic.c
> > index e6257a7..8854800 100644
> > --- a/xen/arch/arm/gic.c
> > +++ b/xen/arch/arm/gic.c
> > @@ -776,8 +776,8 @@ int gic_route_irq_to_guest(struct domain *d, const struct dt_irq *irq,
> >
> >     level = dt_irq_is_level_triggered(irq);
> >
> > -    gic_set_irq_properties(irq->irq, level, cpumask_of(smp_processor_id()),
> > -                           0xa0);
> > +    /* TODO: handle routing irqs to cpus != cpu0 */
> > +    gic_set_irq_properties(irq->irq, level, cpumask_of(0), 0xa0);
> >
> >     retval = __setup_irq(desc, irq->irq, action);
> >     if (retval) {
On Wed, 19 Feb 2014, George Dunlap wrote:
> On 02/19/2014 01:53 PM, Ian Campbell wrote:
> > On Wed, 2014-02-19 at 13:43 +0000, Julien Grall wrote:
> > > Hi all,
> > >
> > > Ping?
> > No one made a case for a release exception so I put it in my 4.5 pile.
> >
> > > It would be nice to have this patch for Xen 4.4 as the IPI priority
> > > patch won't be pushed before the release.
> > >
> > > The patch is a minor change and won't impact normal use. When dom0 is
> > > built, Xen always does it on CPU 0.
> > Right, so whoever is doing otherwise already has a big pile of patches I
> > presume?
> >
> > It's rather late to be making such changes IMHO, but I'll defer to
> > George.
>
> I can't figure out from the description what's the advantage of having it in
> 4.4.

People that use the default configuration won't see any differences, but
people that manually modify Xen to start a second domain and assign a
device to it would.

To give you a concrete example, it fixes a deadlock reported by
Oleksandr Tyshchenko:

http://marc.info/?l=xen-devel&m=139099606402232
On 02/21/2014 11:12 AM, George Dunlap wrote:
> On Thu, Feb 20, 2014 at 2:52 PM, Stefano Stabellini
> <stefano.stabellini@eu.citrix.com> wrote:
>> On Wed, 19 Feb 2014, George Dunlap wrote:
>>> On 02/19/2014 01:53 PM, Ian Campbell wrote:
>>>> On Wed, 2014-02-19 at 13:43 +0000, Julien Grall wrote:
>>>>> Hi all,
>>>>>
>>>>> Ping?
>>>> No one made a case for a release exception so I put it in my 4.5 pile.
>>>>
>>>>> It would be nice to have this patch for Xen 4.4 as the IPI priority
>>>>> patch won't be pushed before the release.
>>>>>
>>>>> The patch is a minor change and won't impact normal use. When dom0 is
>>>>> built, Xen always does it on CPU 0.
>>>> Right, so whoever is doing otherwise already has a big pile of patches I
>>>> presume?
>>>>
>>>> It's rather late to be making such changes IMHO, but I'll defer to
>>>> George.
>>>
>>> I can't figure out from the description what's the advantage of having it in
>>> 4.4.
>>
>> People that use the default configuration won't see any differences, but
>> people that manually modify Xen to start a second domain and assign a
>> device to it would.
>> To give you a concrete example, it fixes a deadlock reported by
>> Oleksandr Tyshchenko:
>>
>> http://marc.info/?l=xen-devel&m=139099606402232
>
> Right -- I think if I had been cc'd when the patch was submitted, I
> would have said yes for sure; but at this point I think we just want
> to get 4.4.0 out without any more delays if possible.

You were already CCed from the beginning :).
diff --git a/xen/arch/arm/gic.c b/xen/arch/arm/gic.c
index e6257a7..8854800 100644
--- a/xen/arch/arm/gic.c
+++ b/xen/arch/arm/gic.c
@@ -776,8 +776,8 @@ int gic_route_irq_to_guest(struct domain *d, const struct dt_irq *irq,
 
     level = dt_irq_is_level_triggered(irq);
 
-    gic_set_irq_properties(irq->irq, level, cpumask_of(smp_processor_id()),
-                           0xa0);
+    /* TODO: handle routing irqs to cpus != cpu0 */
+    gic_set_irq_properties(irq->irq, level, cpumask_of(0), 0xa0);
 
     retval = __setup_irq(desc, irq->irq, action);
     if (retval) {
gic_route_irq_to_guest routes all IRQs to
cpumask_of(smp_processor_id()), but actually it is always called on cpu0.
To avoid confusion and possible issues in case someone modified the code
and reassigned a particular irq to a cpu other than cpu0, hardcode
cpumask_of(0).

Signed-off-by: Stefano Stabellini <stefano.stabellini@eu.citrix.com>