
[Xen-devel,2/2] xen/arm: observe itarget setting in vgic_enable_irqs and vgic_disable_irqs

Message ID 1401041192-20424-2-git-send-email-stefano.stabellini@eu.citrix.com
State New

Commit Message

Stefano Stabellini May 25, 2014, 6:06 p.m. UTC
vgic_enable_irqs should enable irq delivery to the vcpu specified by
GICD_ITARGETSR, rather than the vcpu that wrote to GICD_ISENABLER.
Similarly vgic_disable_irqs should use the target vcpu specified by
itarget to disable irqs.

Signed-off-by: Stefano Stabellini <stefano.stabellini@eu.citrix.com>
---
 xen/arch/arm/vgic.c |   42 ++++++++++++++++++++++++++++++++++--------
 1 file changed, 34 insertions(+), 8 deletions(-)

Comments

Julien Grall May 27, 2014, 4:54 p.m. UTC | #1
Hi Stefano,

On 05/25/2014 07:06 PM, Stefano Stabellini wrote:
>      while ( (i = find_next_bit(&mask, 32, i)) < 32 ) {
>          irq = i + (32 * n);
> -        p = irq_to_pending(v, irq);
> +        rank = vgic_irq_rank(v, 1, irq/32);
> +        vgic_lock_rank(v, rank);
> +        if ( irq >= 32 )
> +        {
> +            target = rank->itargets[(irq%32)/4] >> (8*(irq % 4));
> +            target &= 0xff;
> +            v_target = v->domain->vcpu[target];

Without looking at the target stuff (see my comment on patch #1), I don't
think we need a specific case for SPIs here.

It would avoid diverging based on the IRQ type.

Regards,
Stefano Stabellini May 27, 2014, 5:02 p.m. UTC | #2
On Tue, 27 May 2014, Julien Grall wrote:
> Hi Stefano,
> 
> On 05/25/2014 07:06 PM, Stefano Stabellini wrote:
> >      while ( (i = find_next_bit(&mask, 32, i)) < 32 ) {
> >          irq = i + (32 * n);
> > -        p = irq_to_pending(v, irq);
> > +        rank = vgic_irq_rank(v, 1, irq/32);
> > +        vgic_lock_rank(v, rank);
> > +        if ( irq >= 32 )
> > +        {
> > +            target = rank->itargets[(irq%32)/4] >> (8*(irq % 4));
> > +            target &= 0xff;
> > +            v_target = v->domain->vcpu[target];
> 
> Without looking at the target stuff (see my comment on patch #1), I don't
> think we need a specific case for SPIs here.
> 
> It would avoid diverging based on the IRQ type.

Sooner or later we'll implement SPI delivery to vcpu != 0. When we do,
we'll actually need this patch, which is correct even without SPI
delivery to vcpu != 0.
Julien Grall May 27, 2014, 5:09 p.m. UTC | #3
On 05/27/2014 06:02 PM, Stefano Stabellini wrote:
> On Tue, 27 May 2014, Julien Grall wrote:
>> On 05/25/2014 07:06 PM, Stefano Stabellini wrote:
>>>      while ( (i = find_next_bit(&mask, 32, i)) < 32 ) {
>>>          irq = i + (32 * n);
>>> -        p = irq_to_pending(v, irq);
>>> +        rank = vgic_irq_rank(v, 1, irq/32);
>>> +        vgic_lock_rank(v, rank);
>>> +        if ( irq >= 32 )
>>> +        {
>>> +            target = rank->itargets[(irq%32)/4] >> (8*(irq % 4));
>>> +            target &= 0xff;
>>> +            v_target = v->domain->vcpu[target];
>>
>> Without looking at the target stuff (see my comment on patch #1), I don't
>> think we need a specific case for SPIs here.
>>
>> It would avoid diverging based on the IRQ type.
> 
> Sooner or later we'll implement SPI delivery to vcpu != 0. When we do,
> we'll actually need this patch, which is correct even without SPI
> delivery to vcpu != 0.
> 

Hrrm... right, I forgot about this case. In that case, the rank lock
doesn't need to be taken for non-SPIs.

Regards,
Julien Grall May 27, 2014, 5:10 p.m. UTC | #4
On 05/27/2014 06:02 PM, Stefano Stabellini wrote:
> On Tue, 27 May 2014, Julien Grall wrote:
>> Hi Stefano,
>>
>> On 05/25/2014 07:06 PM, Stefano Stabellini wrote:
>>>      while ( (i = find_next_bit(&mask, 32, i)) < 32 ) {
>>>          irq = i + (32 * n);
>>> -        p = irq_to_pending(v, irq);
>>> +        rank = vgic_irq_rank(v, 1, irq/32);
>>> +        vgic_lock_rank(v, rank);
>>> +        if ( irq >= 32 )
>>> +        {
>>> +            target = rank->itargets[(irq%32)/4] >> (8*(irq % 4));
>>> +            target &= 0xff;
>>> +            v_target = v->domain->vcpu[target];
>>
>> Without looking at the target stuff (see my comment on patch #1), I don't
>> think we need a specific case for SPIs here.
>>
>> It would avoid diverging based on the IRQ type.
> 
> Sooner or later we'll implement SPI delivery to vcpu != 0. When we do,
> we'll actually need this patch, which is correct even without SPI
> delivery to vcpu != 0.
> 

To be clear, I didn't say this patch was not useful. I'm just saying we
can merge the code and use the same path for non-SPIs.

Regards,
Ian Campbell May 28, 2014, 1:38 p.m. UTC | #5
On Sun, 2014-05-25 at 19:06 +0100, Stefano Stabellini wrote:
> vgic_enable_irqs should enable irq delivery to the vcpu specified by
> GICD_ITARGETSR, rather than the vcpu that wrote to GICD_ISENABLER.
> Similarly vgic_disable_irqs should use the target vcpu specified by
> itarget to disable irqs.
> 
> Signed-off-by: Stefano Stabellini <stefano.stabellini@eu.citrix.com>
> ---
>  xen/arch/arm/vgic.c |   42 ++++++++++++++++++++++++++++++++++--------
>  1 file changed, 34 insertions(+), 8 deletions(-)
> 
> diff --git a/xen/arch/arm/vgic.c b/xen/arch/arm/vgic.c
> index e4f38a0..0f0ba1d 100644
> --- a/xen/arch/arm/vgic.c
> +++ b/xen/arch/arm/vgic.c
> @@ -376,12 +376,25 @@ static void vgic_disable_irqs(struct vcpu *v, uint32_t r, int n)
>      unsigned int irq;
>      unsigned long flags;
>      int i = 0;
> +    int target;
> +    struct vcpu *v_target;
> +    struct vgic_irq_rank *rank;
>  
>      while ( (i = find_next_bit(&mask, 32, i)) < 32 ) {
>          irq = i + (32 * n);
> -        p = irq_to_pending(v, irq);
> +        rank = vgic_irq_rank(v, 1, irq/32);
> +        vgic_lock_rank(v, rank);
> +        if ( irq >= 32 )
> +        {
> +            target = rank->itargets[(irq%32)/4] >> (8*(irq % 4));
> +            target &= 0xff;

This is byte_read(), isn't it?

> +            v_target = v->domain->vcpu[target];

There needs to be some sort of range check here I think. Else you are
setting a trap for whoever implements itargets properly.

> +        } else
> +            v_target = v;
> +        vgic_unlock_rank(v, rank);
> +        p = irq_to_pending(v_target, irq);
>          clear_bit(GIC_IRQ_GUEST_ENABLED, &p->status);
> -        gic_remove_from_queues(v, irq);
> +        gic_remove_from_queues(v_target, irq);
>          if ( p->desc != NULL )
>          {
>              spin_lock_irqsave(&p->desc->lock, flags);
> @@ -399,21 +412,34 @@ static void vgic_enable_irqs(struct vcpu *v, uint32_t r, int n)
>      unsigned int irq;
>      unsigned long flags;
>      int i = 0;
> +    int target;
> +    struct vcpu *v_target;
> +    struct vgic_irq_rank *rank;
>  
>      while ( (i = find_next_bit(&mask, 32, i)) < 32 ) {
>          irq = i + (32 * n);
> -        p = irq_to_pending(v, irq);
> +        rank = vgic_irq_rank(v, 1, irq/32);
> +        vgic_lock_rank(v, rank);
> +        if ( irq >= 32 )
> +        {
> +            target = rank->itargets[(irq%32)/4] >> (8*(irq % 4));
> +            target &= 0xff;
> +            v_target = v->domain->vcpu[target];
> +        } else
> +            v_target = v;

This is the same code as above -- there should be a helper
(get_target_vcpu or some such).

> +        vgic_unlock_rank(v, rank);
> +        p = irq_to_pending(v_target, irq);
>          set_bit(GIC_IRQ_GUEST_ENABLED, &p->status);
> -        if ( irq == v->domain->arch.evtchn_irq &&
> +        if ( irq == v_target->domain->arch.evtchn_irq &&
>               vcpu_info(current, evtchn_upcall_pending) &&
>               list_empty(&p->inflight) )
> -            vgic_vcpu_inject_irq(v, irq);
> +            vgic_vcpu_inject_irq(v_target, irq);
>          else {
>              unsigned long flags;
> -            spin_lock_irqsave(&v->arch.vgic.lock, flags);
> +            spin_lock_irqsave(&v_target->arch.vgic.lock, flags);
>              if ( !list_empty(&p->inflight) && !test_bit(GIC_IRQ_GUEST_VISIBLE, &p->status) )
> -                gic_raise_guest_irq(v, irq, p->priority);
> -            spin_unlock_irqrestore(&v->arch.vgic.lock, flags);
> +                gic_raise_guest_irq(v_target, irq, p->priority);
> +            spin_unlock_irqrestore(&v_target->arch.vgic.lock, flags);
>          }
>          if ( p->desc != NULL )
>          {
Stefano Stabellini June 3, 2014, 2:03 p.m. UTC | #6
On Wed, 28 May 2014, Ian Campbell wrote:
> On Sun, 2014-05-25 at 19:06 +0100, Stefano Stabellini wrote:
> > vgic_enable_irqs should enable irq delivery to the vcpu specified by
> > GICD_ITARGETSR, rather than the vcpu that wrote to GICD_ISENABLER.
> > Similarly vgic_disable_irqs should use the target vcpu specified by
> > itarget to disable irqs.
> > 
> > Signed-off-by: Stefano Stabellini <stefano.stabellini@eu.citrix.com>
> > ---
> >  xen/arch/arm/vgic.c |   42 ++++++++++++++++++++++++++++++++++--------
> >  1 file changed, 34 insertions(+), 8 deletions(-)
> > 
> > diff --git a/xen/arch/arm/vgic.c b/xen/arch/arm/vgic.c
> > index e4f38a0..0f0ba1d 100644
> > --- a/xen/arch/arm/vgic.c
> > +++ b/xen/arch/arm/vgic.c
> > @@ -376,12 +376,25 @@ static void vgic_disable_irqs(struct vcpu *v, uint32_t r, int n)
> >      unsigned int irq;
> >      unsigned long flags;
> >      int i = 0;
> > +    int target;
> > +    struct vcpu *v_target;
> > +    struct vgic_irq_rank *rank;
> >  
> >      while ( (i = find_next_bit(&mask, 32, i)) < 32 ) {
> >          irq = i + (32 * n);
> > -        p = irq_to_pending(v, irq);
> > +        rank = vgic_irq_rank(v, 1, irq/32);
> > +        vgic_lock_rank(v, rank);
> > +        if ( irq >= 32 )
> > +        {
> > +            target = rank->itargets[(irq%32)/4] >> (8*(irq % 4));
> > +            target &= 0xff;
> 
> This is byte_read(), isn't it?

yes, I'll use it


> > +            v_target = v->domain->vcpu[target];
> 
> There needs to be some sort of range check here I think. Else you are
> setting a trap for whoever implements itargets properly.

The check should be at the point where we write itargets, not here.
The previous patch already introduces a check that itargets is always
zero.
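Stefano's point is that sanitizing at GICD_ITARGETSR write time lets every reader trust rank->itargets without repeating range checks. As a purely illustrative sketch (the helper name and the valid-mask parameter are assumptions, and the hardware register actually defines each byte as a CPU-target bitmask), a write handler could mask each target byte against the set of vcpus the domain actually has:

```c
#include <assert.h>
#include <stdint.h>

/* Hypothetical write-side sanitizer: each of the 4 bytes in a
 * GICD_ITARGETSR word is a CPU-target bitmask, so keep only the bits
 * corresponding to vcpus that exist (valid_cpu_mask).  With this in
 * place, readers of rank->itargets never see an invalid target. */
static uint32_t sanitize_itargetsr(uint32_t val, uint8_t valid_cpu_mask)
{
    uint32_t out = 0;

    for ( int byte = 0; byte < 4; byte++ )
    {
        uint8_t t = ((val >> (8 * byte)) & 0xff) & valid_cpu_mask;
        out |= (uint32_t)t << (8 * byte);
    }
    return out;
}
```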


> > +        } else
> > +            v_target = v;
> > +        vgic_unlock_rank(v, rank);
> > +        p = irq_to_pending(v_target, irq);
> >          clear_bit(GIC_IRQ_GUEST_ENABLED, &p->status);
> > -        gic_remove_from_queues(v, irq);
> > +        gic_remove_from_queues(v_target, irq);
> >          if ( p->desc != NULL )
> >          {
> >              spin_lock_irqsave(&p->desc->lock, flags);
> > @@ -399,21 +412,34 @@ static void vgic_enable_irqs(struct vcpu *v, uint32_t r, int n)
> >      unsigned int irq;
> >      unsigned long flags;
> >      int i = 0;
> > +    int target;
> > +    struct vcpu *v_target;
> > +    struct vgic_irq_rank *rank;
> >  
> >      while ( (i = find_next_bit(&mask, 32, i)) < 32 ) {
> >          irq = i + (32 * n);
> > -        p = irq_to_pending(v, irq);
> > +        rank = vgic_irq_rank(v, 1, irq/32);
> > +        vgic_lock_rank(v, rank);
> > +        if ( irq >= 32 )
> > +        {
> > +            target = rank->itargets[(irq%32)/4] >> (8*(irq % 4));
> > +            target &= 0xff;
> > +            v_target = v->domain->vcpu[target];
> > +        } else
> > +            v_target = v;
> 
> This is the same code as above -- there should be a helper
> (get_target_vcpu or some such).

Good idea


> > +        vgic_unlock_rank(v, rank);
> > +        p = irq_to_pending(v_target, irq);
> >          set_bit(GIC_IRQ_GUEST_ENABLED, &p->status);
> > -        if ( irq == v->domain->arch.evtchn_irq &&
> > +        if ( irq == v_target->domain->arch.evtchn_irq &&
> >               vcpu_info(current, evtchn_upcall_pending) &&
> >               list_empty(&p->inflight) )
> > -            vgic_vcpu_inject_irq(v, irq);
> > +            vgic_vcpu_inject_irq(v_target, irq);
> >          else {
> >              unsigned long flags;
> > -            spin_lock_irqsave(&v->arch.vgic.lock, flags);
> > +            spin_lock_irqsave(&v_target->arch.vgic.lock, flags);
> >              if ( !list_empty(&p->inflight) && !test_bit(GIC_IRQ_GUEST_VISIBLE, &p->status) )
> > -                gic_raise_guest_irq(v, irq, p->priority);
> > -            spin_unlock_irqrestore(&v->arch.vgic.lock, flags);
> > +                gic_raise_guest_irq(v_target, irq, p->priority);
> > +            spin_unlock_irqrestore(&v_target->arch.vgic.lock, flags);
> >          }
> >          if ( p->desc != NULL )
> >          {
> 
>
Stefano Stabellini June 3, 2014, 2:24 p.m. UTC | #7
On Tue, 27 May 2014, Julien Grall wrote:
> On 05/27/2014 06:02 PM, Stefano Stabellini wrote:
> > On Tue, 27 May 2014, Julien Grall wrote:
> >> Hi Stefano,
> >>
> >> On 05/25/2014 07:06 PM, Stefano Stabellini wrote:
> >>>      while ( (i = find_next_bit(&mask, 32, i)) < 32 ) {
> >>>          irq = i + (32 * n);
> >>> -        p = irq_to_pending(v, irq);
> >>> +        rank = vgic_irq_rank(v, 1, irq/32);
> >>> +        vgic_lock_rank(v, rank);
> >>> +        if ( irq >= 32 )
> >>> +        {
> >>> +            target = rank->itargets[(irq%32)/4] >> (8*(irq % 4));
> >>> +            target &= 0xff;
> >>> +            v_target = v->domain->vcpu[target];
> >>
> >> Without looking at the target stuff (see my comment on patch #1), I don't
> >> think we need a specific case for SPIs here.
> >>
> >> It would avoid diverging based on the IRQ type.
> > 
> > Sooner or later we'll implement SPI delivery to vcpu != 0. When we do,
> > we'll actually need this patch, which is correct even without SPI
> > delivery to vcpu != 0.
> > 
> 
> To be clear, I didn't say this patch was not useful. I'm just saying we
> can merge the code and use the same path for non-SPIs.

Ah OK

Patch

diff --git a/xen/arch/arm/vgic.c b/xen/arch/arm/vgic.c
index e4f38a0..0f0ba1d 100644
--- a/xen/arch/arm/vgic.c
+++ b/xen/arch/arm/vgic.c
@@ -376,12 +376,25 @@  static void vgic_disable_irqs(struct vcpu *v, uint32_t r, int n)
     unsigned int irq;
     unsigned long flags;
     int i = 0;
+    int target;
+    struct vcpu *v_target;
+    struct vgic_irq_rank *rank;
 
     while ( (i = find_next_bit(&mask, 32, i)) < 32 ) {
         irq = i + (32 * n);
-        p = irq_to_pending(v, irq);
+        rank = vgic_irq_rank(v, 1, irq/32);
+        vgic_lock_rank(v, rank);
+        if ( irq >= 32 )
+        {
+            target = rank->itargets[(irq%32)/4] >> (8*(irq % 4));
+            target &= 0xff;
+            v_target = v->domain->vcpu[target];
+        } else
+            v_target = v;
+        vgic_unlock_rank(v, rank);
+        p = irq_to_pending(v_target, irq);
         clear_bit(GIC_IRQ_GUEST_ENABLED, &p->status);
-        gic_remove_from_queues(v, irq);
+        gic_remove_from_queues(v_target, irq);
         if ( p->desc != NULL )
         {
             spin_lock_irqsave(&p->desc->lock, flags);
@@ -399,21 +412,34 @@  static void vgic_enable_irqs(struct vcpu *v, uint32_t r, int n)
     unsigned int irq;
     unsigned long flags;
     int i = 0;
+    int target;
+    struct vcpu *v_target;
+    struct vgic_irq_rank *rank;
 
     while ( (i = find_next_bit(&mask, 32, i)) < 32 ) {
         irq = i + (32 * n);
-        p = irq_to_pending(v, irq);
+        rank = vgic_irq_rank(v, 1, irq/32);
+        vgic_lock_rank(v, rank);
+        if ( irq >= 32 )
+        {
+            target = rank->itargets[(irq%32)/4] >> (8*(irq % 4));
+            target &= 0xff;
+            v_target = v->domain->vcpu[target];
+        } else
+            v_target = v;
+        vgic_unlock_rank(v, rank);
+        p = irq_to_pending(v_target, irq);
         set_bit(GIC_IRQ_GUEST_ENABLED, &p->status);
-        if ( irq == v->domain->arch.evtchn_irq &&
+        if ( irq == v_target->domain->arch.evtchn_irq &&
              vcpu_info(current, evtchn_upcall_pending) &&
              list_empty(&p->inflight) )
-            vgic_vcpu_inject_irq(v, irq);
+            vgic_vcpu_inject_irq(v_target, irq);
         else {
             unsigned long flags;
-            spin_lock_irqsave(&v->arch.vgic.lock, flags);
+            spin_lock_irqsave(&v_target->arch.vgic.lock, flags);
             if ( !list_empty(&p->inflight) && !test_bit(GIC_IRQ_GUEST_VISIBLE, &p->status) )
-                gic_raise_guest_irq(v, irq, p->priority);
-            spin_unlock_irqrestore(&v->arch.vgic.lock, flags);
+                gic_raise_guest_irq(v_target, irq, p->priority);
+            spin_unlock_irqrestore(&v_target->arch.vgic.lock, flags);
         }
         if ( p->desc != NULL )
         {