
[Xen-devel,v4,3/4] xen/arm: support irq delivery to vcpu > 0

Message ID 1402076908-26740-3-git-send-email-stefano.stabellini@eu.citrix.com
State New

Commit Message

Stefano Stabellini June 6, 2014, 5:48 p.m. UTC
Use vgic_get_target_vcpu to retrieve the target vcpu from do_IRQ.
Remove in-code comments about missing implementation of SGI delivery to
vcpus other than 0.

Signed-off-by: Stefano Stabellini <stefano.stabellini@eu.citrix.com>

---

Changes in v4:
- the mask in gic_route_irq_to_guest is a physical cpu mask, treat it as
such;
- export vgic_get_target_vcpu in a previous patch.
---
 xen/arch/arm/gic.c |    1 -
 xen/arch/arm/irq.c |    3 +--
 2 files changed, 1 insertion(+), 3 deletions(-)

Comments

Ian Campbell June 10, 2014, 12:16 p.m. UTC | #1
On Fri, 2014-06-06 at 18:48 +0100, Stefano Stabellini wrote:
> Use vgic_get_target_vcpu to retrieve the target vcpu from do_IRQ.
> Remove in-code comments about missing implementation of SGI delivery to
> vcpus other than 0.

You meant SPI I think?

What about PPIs?

> Signed-off-by: Stefano Stabellini <stefano.stabellini@eu.citrix.com>
> 
> ---
> 
> Changes in v4:
> - the mask in gic_route_irq_to_guest is a physical cpu mask, treat it as
> such;
> - export vgic_get_target_vcpu in a previous patch.
> ---
>  xen/arch/arm/gic.c |    1 -
>  xen/arch/arm/irq.c |    3 +--
>  2 files changed, 1 insertion(+), 3 deletions(-)
> 
> diff --git a/xen/arch/arm/gic.c b/xen/arch/arm/gic.c
> index 92391b4..6f24b14 100644
> --- a/xen/arch/arm/gic.c
> +++ b/xen/arch/arm/gic.c
> @@ -287,7 +287,6 @@ void gic_route_irq_to_guest(struct domain *d, struct irq_desc *desc,
>      gic_set_irq_properties(desc->irq, level, cpumask_of(smp_processor_id()),
>                             GIC_PRI_IRQ);
>  
> -    /* TODO: do not assume delivery to vcpu0 */
>      p = irq_to_pending(d->vcpu[0], desc->irq);
>      p->desc = desc;
>  }
> diff --git a/xen/arch/arm/irq.c b/xen/arch/arm/irq.c
> index a33c797..0fad647 100644
> --- a/xen/arch/arm/irq.c
> +++ b/xen/arch/arm/irq.c
> @@ -175,8 +175,7 @@ void do_IRQ(struct cpu_user_regs *regs, unsigned int irq, int is_fiq)
>          desc->status |= IRQ_INPROGRESS;
>          desc->arch.eoi_cpu = smp_processor_id();
>  
> -        /* XXX: inject irq into all guest vcpus */
> -        vgic_vcpu_inject_irq(d->vcpu[0], irq);
> +        vgic_vcpu_inject_irq(vgic_get_target_vcpu(d->vcpu[0], irq), irq);

Would it make sense to push vgic_get_target_vcpu down into
vgic_vcpu_inject_irq rather than have all callers need to do it?

I'm also wondering if vgic_get_target_vcpu shouldn't take d and not v.

Does this do the right thing for PPIs? vgic_get_target_vcpu will just
look up vcpu0's target, not the actual expected target, won't it?
(something else must deal with this, or it'd be broken already I
suppose)

Ian.
Julien Grall June 10, 2014, 12:56 p.m. UTC | #2
On 06/10/2014 01:16 PM, Ian Campbell wrote:
> Does this do the right thing for PPIs? vgic_get_target_vcpu will just
> look up vcpu0's target, not the actual expected target, won't it?
> (something else must deal with this, or it'd be broken already I
> suppose)

Physical PPIs can't be routed to the guest. We don't have any support
for such a thing, and adding it would be a nightmare (consider a guest
with more vCPUs than pCPUs...).

Regards,
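
For background on this point: in the GIC architecture, IRQs 0-15 are SGIs
and 16-31 are PPIs, both banked per physical CPU, so only SPIs (IRQ 32 and
up) have a single meaningful routing target. A hypothetical guard expressing
that constraint might look like the sketch below; it is an illustration, not
code from this series:

    /* Hypothetical helper, not from this series: SGIs (0-15) and PPIs
     * (16-31) are banked per physical CPU, so only SPIs (>= 32) are
     * candidates for routing to a guest. */
    static inline bool is_guest_routable(unsigned int irq)
    {
        return irq >= 32;
    }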
Stefano Stabellini June 11, 2014, 2:22 p.m. UTC | #3
On Tue, 10 Jun 2014, Ian Campbell wrote:
> > diff --git a/xen/arch/arm/irq.c b/xen/arch/arm/irq.c
> > index a33c797..0fad647 100644
> > --- a/xen/arch/arm/irq.c
> > +++ b/xen/arch/arm/irq.c
> > @@ -175,8 +175,7 @@ void do_IRQ(struct cpu_user_regs *regs, unsigned int irq, int is_fiq)
> >          desc->status |= IRQ_INPROGRESS;
> >          desc->arch.eoi_cpu = smp_processor_id();
> >  
> > -        /* XXX: inject irq into all guest vcpus */
> > -        vgic_vcpu_inject_irq(d->vcpu[0], irq);
> > +        vgic_vcpu_inject_irq(vgic_get_target_vcpu(d->vcpu[0], irq), irq);
> 
> Would it make sense to push vgic_get_target_vcpu down into
> vgic_vcpu_inject_irq rather than have all callers need to do it?
> 
> I'm also wondering if vgic_get_target_vcpu shouldn't take d and not v.

That could be a good idea.
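
For illustration, the refactoring suggested above might look roughly like
the sketch below. The wrapper name vgic_vcpu_inject_spi and its exact
signature are assumptions, not code from this series:

    /* Hypothetical wrapper: callers pass the domain, and the helper
     * resolves the target vcpu itself instead of every caller
     * open-coding the vgic_get_target_vcpu() lookup. */
    void vgic_vcpu_inject_spi(struct domain *d, unsigned int irq)
    {
        /* vcpu0 always exists; it is only used as a handle into the
         * domain's vgic state for the target lookup. */
        struct vcpu *v = vgic_get_target_vcpu(d->vcpu[0], irq);

        vgic_vcpu_inject_irq(v, irq);
    }

do_IRQ would then reduce to a single call: vgic_vcpu_inject_spi(d, irq).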

> > Does this do the right thing for PPIs? vgic_get_target_vcpu will just
> > look up vcpu0's target, not the actual expected target, won't it?
> (something else must deal with this, or it'd be broken already I
> suppose)

As Julien wrote, we don't support routing PPIs to guests.

Patch

diff --git a/xen/arch/arm/gic.c b/xen/arch/arm/gic.c
index 92391b4..6f24b14 100644
--- a/xen/arch/arm/gic.c
+++ b/xen/arch/arm/gic.c
@@ -287,7 +287,6 @@ void gic_route_irq_to_guest(struct domain *d, struct irq_desc *desc,
     gic_set_irq_properties(desc->irq, level, cpumask_of(smp_processor_id()),
                            GIC_PRI_IRQ);
 
-    /* TODO: do not assume delivery to vcpu0 */
     p = irq_to_pending(d->vcpu[0], desc->irq);
     p->desc = desc;
 }
diff --git a/xen/arch/arm/irq.c b/xen/arch/arm/irq.c
index a33c797..0fad647 100644
--- a/xen/arch/arm/irq.c
+++ b/xen/arch/arm/irq.c
@@ -175,8 +175,7 @@ void do_IRQ(struct cpu_user_regs *regs, unsigned int irq, int is_fiq)
         desc->status |= IRQ_INPROGRESS;
         desc->arch.eoi_cpu = smp_processor_id();
 
-        /* XXX: inject irq into all guest vcpus */
-        vgic_vcpu_inject_irq(d->vcpu[0], irq);
+        vgic_vcpu_inject_irq(vgic_get_target_vcpu(d->vcpu[0], irq), irq);
         goto out_no_end;
     }
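
For reference, the vgic_get_target_vcpu helper this patch relies on is
exported by an earlier patch in this series. A paraphrased sketch of its
shape, with approximated field and helper names rather than the exact code:

    /* Paraphrased sketch: read the emulated ITARGETSR byte for this irq
     * and return the first vcpu named in it. Each 32-irq rank packs the
     * 8-bit per-irq target masks four to a register. */
    struct vcpu *vgic_get_target_vcpu(struct vcpu *v, unsigned int irq)
    {
        struct vgic_irq_rank *rank = vgic_irq_rank(v, 1, irq / 32);
        unsigned long target = byte_read(rank->itargets[(irq % 32) / 4],
                                         0, irq % 4);

        /* Deliver to the first vcpu in the target mask. */
        return v->domain->vcpu[find_first_bit(&target, 8)];
    }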