
[Xen-devel,v3,1/2] xen/arm: vgic: Keep track of vIRQ used by a domain

Message ID 1422546694-22797-2-git-send-email-julien.grall@linaro.org
State Superseded, archived

Commit Message

Julien Grall Jan. 29, 2015, 3:51 p.m. UTC
While it's easy to know which hardware IRQ is assigned to a domain, there
is no way to know which vIRQ is allocated by Xen for a specific domain.

Introduce a bitmap to keep track of every vIRQ used by a domain. This
will be used later to find a free vIRQ for interrupt device assignment and
emulated interrupts.

Signed-off-by: Julien Grall <julien.grall@linaro.org>

---
    Changes in v3:
        - Typos in commit message
        - Remove pointless reserved in vgic_reserve_virq
        - Move the retry after looking for first/end. I keep the goto
        rather than a loop because it's more clear that we retry because
        we were unable to set the bit
        - Introduce vgic_allocate_{spi,ppi} helpers
        - Use NR_GIC_SGI

    Changes in v2:
        - Remove lock
        - Update the prototype of vgic_free_irq to actually show we are
        taking a vIRQ in parameter and not an IRQ
        - vgic_free_virq was completely buggy. Rewrite it
        - The usage of find_next_zero_bit was buggy. I don't know how it
        was working before.
        - Make find_next_zero_bit common for SPIs and PPIs
        - Use if (...) BUG() rather than BUG_ON()
        - Fix comments and some printk messages
        - Update the commit message
        - Add missing vgic_reserve_virq for the event channel PPI
---
 xen/arch/arm/domain.c                |  3 ++
 xen/arch/arm/domain_build.c          |  6 ++++
 xen/arch/arm/platforms/xgene-storm.c |  4 +++
 xen/arch/arm/vgic.c                  | 54 ++++++++++++++++++++++++++++++++++++
 xen/arch/arm/vtimer.c                | 25 +++++++++++++++++
 xen/include/asm-arm/domain.h         |  1 +
 xen/include/asm-arm/vgic.h           | 23 +++++++++++++++
 7 files changed, 116 insertions(+)
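
For illustration only (not part of the patch): a sketch of how a later
consumer, such as the interrupt device assignment mentioned in the commit
message, might use the new helpers. assign_device_irq() and
route_hw_irq_to_virq() are hypothetical names; only vgic_allocate_spi() and
vgic_free_virq() come from this series.

    /*
     * Illustrative sketch, not part of this patch. assign_device_irq() and
     * route_hw_irq_to_virq() are made-up names; only vgic_allocate_spi()
     * and vgic_free_virq() are introduced by this series.
     */
    static int assign_device_irq(struct domain *d, unsigned int hw_irq)
    {
        int virq, rc;

        /* Ask the vGIC for any currently unused SPI in this domain. */
        virq = vgic_allocate_spi(d);
        if ( virq < 0 )
            return -ENOSPC;

        /* Route the hardware IRQ to the freshly allocated vIRQ. */
        rc = route_hw_irq_to_virq(d, hw_irq, virq);
        if ( rc )
            /* Hand the vIRQ back so it can be reused later. */
            vgic_free_virq(d, virq);

        return rc;
    }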

Comments

Julien Grall Feb. 18, 2015, 5:36 p.m. UTC | #1
Hi Ian,

On 02/02/2015 15:09, Ian Campbell wrote:
> On Thu, 2015-01-29 at 15:51 +0000, Julien Grall wrote:
>>          - Move the retry after looking for first/end. I keep the goto
>>          rather than a loop because it's more clear that we retry because
>>          we were unable to set the bit
>
> Then I think a "do {} while (!successfully allocated)" is what is
> wanted, maybe with a comment.

I thought about it and I find the do {} while more difficult to read than
the goto in this specific case.

I find it more explicit with the goto that we are unlikely to retry.
Adding a comment in the code doesn't really help.

the do {} while version would look like:

    do {
         virq = find_next_zero_bit(d->arch.vgic.allocated_irqs, end, first);
         if ( virq >= end )
             return -1;

         ret = test_and_set_bit(virq, d->arch.vgic.allocated_irqs);
         /* There is no spinlock to protect allocated_irqs, therefore
          * test_and_set_bit may fail, although this is unlikely. If so, retry.
          */
      } while ( unlikely(ret) );

Regards,
Julien Grall Feb. 19, 2015, 5:19 p.m. UTC | #2
On 19/02/15 17:15, Ian Campbell wrote:
> On Wed, 2015-02-18 at 17:36 +0000, Julien Grall wrote:
>> Hi Ian,
>>
>> On 02/02/2015 15:09, Ian Campbell wrote:
>>> On Thu, 2015-01-29 at 15:51 +0000, Julien Grall wrote:
>>>>          - Move the retry after looking for first/end. I keep the goto
>>>>          rather than a loop because it's more clear that we retry because
>>>>          we were unable to set the bit
>>>
>>> Then I think a "do {} while (!successfully allocated)" is what is
>>> wanted, maybe with a comment.
>>
>> I thought about it and I find the do {} while more difficult to read than
>> the goto in this specific case.
>>
>> I find it more explicit with the goto that we are unlikely to retry.
>> Adding a comment in the code doesn't really help.
>>
>> the do {} while version would look like:
>>
>>     do {
>>          virq = find_next_zero_bit(d->arch.vgic.allocated_irqs, end, first);
>>          if ( virq >= end )
>>              return -1;
>>
>>          ret = test_and_set_bit(virq, d->arch.vgic.allocated_irqs);
>>          /* There is no spinlock to protect allocated_irqs, therefore
>>           * test_and_set_bit may fail, although this is unlikely. If so, retry.
>>           */
>>      } while ( unlikely(ret) );
> 
> I think
>     } while ( unlikely(test_and_set_bit(virq, d->arch.vgic.allocated_irqs)) )
> would be clear enough, with the comment you have being moved before the
> whole loop.
> 
> I'm not sure the unlikely is really useful in the context of a do{}while
> either, I doubt it will cause any different heuristic to be used which
> isn't already triggered by the use of do-while rather than while-do.
> Especially since this one isn't performance critical I'd just leave it
> out.
> 
> i.e.
> 
>     /*
>      * There is no spinlock to protect allocated_irqs, therefore
>      * test_and_set_bit may fail. If so retry it.
>      */
>     do {
>           virq = find_next_zero_bit(d->arch.vgic.allocated_irqs, end, first);
>           if ( virq >= end )
>               return -1;
>     } while ( test_and_set_bit(virq, d->arch.vgic.allocated_irqs) );
> 

Ok, I will resend the patch series with this change.

Regards,
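
For reference, with the change agreed above, vgic_allocate_virq() in the next
revision would look roughly as follows. This is a sketch assembled from the
patch below and Ian's suggested loop, not the final submitted code; the rest
of the function is unchanged from the hunk in vgic.c.

    int vgic_allocate_virq(struct domain *d, bool_t spi)
    {
        int first, end;
        unsigned int virq;

        if ( !spi )
        {
            /* We only allocate PPIs. SGIs are all reserved */
            first = 16;
            end = 32;
        }
        else
        {
            first = 32;
            end = vgic_num_irqs(d);
        }

        /*
         * There is no spinlock to protect allocated_irqs, therefore
         * test_and_set_bit may fail. If so retry it.
         */
        do
        {
            virq = find_next_zero_bit(d->arch.vgic.allocated_irqs, end, first);
            if ( virq >= end )
                return -1;
        } while ( test_and_set_bit(virq, d->arch.vgic.allocated_irqs) );

        return virq;
    }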

Patch

diff --git a/xen/arch/arm/domain.c b/xen/arch/arm/domain.c
index cfc7ab4..5cb4258 100644
--- a/xen/arch/arm/domain.c
+++ b/xen/arch/arm/domain.c
@@ -548,6 +548,9 @@  int arch_domain_create(struct domain *d, unsigned int domcr_flags)
     else
         d->arch.evtchn_irq = platform_dom0_evtchn_ppi();
 
+    if ( !vgic_reserve_virq(d, d->arch.evtchn_irq) )
+        BUG();
+
     /*
      * Virtual UART is only used by linux early printk and decompress code.
      * Only use it for the hardware domain because the linux kernel may not
diff --git a/xen/arch/arm/domain_build.c b/xen/arch/arm/domain_build.c
index c2dcb49..c1df1fc 100644
--- a/xen/arch/arm/domain_build.c
+++ b/xen/arch/arm/domain_build.c
@@ -970,6 +970,12 @@  static int map_device(struct domain *d, struct dt_device_node *dev)
         irq = res;
 
         DPRINT("irq %u = %u\n", i, irq);
+        /*
+         * Checking the return of vgic_reserve_virq is not
+         * necessary. It should not fail except when we try to map
+         * the IRQ twice. This can legitimately happen if the IRQ is shared
+         */
+        vgic_reserve_virq(d, irq);
         res = route_irq_to_guest(d, irq, dt_node_name(dev));
         if ( res )
         {
diff --git a/xen/arch/arm/platforms/xgene-storm.c b/xen/arch/arm/platforms/xgene-storm.c
index 0b3492d..ef3ba63 100644
--- a/xen/arch/arm/platforms/xgene-storm.c
+++ b/xen/arch/arm/platforms/xgene-storm.c
@@ -71,6 +71,10 @@  static int map_one_spi(struct domain *d, const char *what,
 
     printk("Additional IRQ %u (%s)\n", irq, what);
 
+    if ( !vgic_reserve_virq(d, irq) )
+        printk("Failed to reserve vIRQ %u on dom%d\n",
+               irq, d->domain_id);
+
     ret = route_irq_to_guest(d, irq, what);
     if ( ret )
         printk("Failed to route %s to dom%d\n", what, d->domain_id);
diff --git a/xen/arch/arm/vgic.c b/xen/arch/arm/vgic.c
index b272d86..3b9b8dd 100644
--- a/xen/arch/arm/vgic.c
+++ b/xen/arch/arm/vgic.c
@@ -110,6 +110,15 @@  int domain_vgic_init(struct domain *d)
 
     d->arch.vgic.handler->domain_init(d);
 
+    d->arch.vgic.allocated_irqs =
+        xzalloc_array(unsigned long, BITS_TO_LONGS(vgic_num_irqs(d)));
+    if ( !d->arch.vgic.allocated_irqs )
+        return -ENOMEM;
+
+    /* vIRQ0-15 (SGIs) are reserved */
+    for ( i = 0; i < NR_GIC_SGI; i++ )
+        set_bit(i, d->arch.vgic.allocated_irqs);
+
     return 0;
 }
 
@@ -122,6 +131,7 @@  void domain_vgic_free(struct domain *d)
 {
     xfree(d->arch.vgic.shared_irqs);
     xfree(d->arch.vgic.pending_irqs);
+    xfree(d->arch.vgic.allocated_irqs);
 }
 
 int vcpu_vgic_init(struct vcpu *v)
@@ -452,6 +462,50 @@  int vgic_emulate(struct cpu_user_regs *regs, union hsr hsr)
     return v->domain->arch.vgic.handler->emulate_sysreg(regs, hsr);
 }
 
+bool_t vgic_reserve_virq(struct domain *d, unsigned int virq)
+{
+    if ( virq >= vgic_num_irqs(d) )
+        return 0;
+
+    return !test_and_set_bit(virq, d->arch.vgic.allocated_irqs);
+}
+
+int vgic_allocate_virq(struct domain *d, bool_t spi)
+{
+    int ret;
+    int first, end;
+    unsigned int virq;
+
+    if ( !spi )
+    {
+        /* We only allocate PPIs. SGIs are all reserved */
+        first = 16;
+        end = 32;
+    }
+    else
+    {
+        first = 32;
+        end = vgic_num_irqs(d);
+    }
+
+retry:
+    virq = find_next_zero_bit(d->arch.vgic.allocated_irqs, end, first);
+    if ( virq >= end )
+        return -1;
+
+    ret = test_and_set_bit(virq, d->arch.vgic.allocated_irqs);
+
+    if ( unlikely(ret) )
+        goto retry;
+
+    return virq;
+}
+
+void vgic_free_virq(struct domain *d, unsigned int virq)
+{
+    clear_bit(virq, d->arch.vgic.allocated_irqs);
+}
+
 /*
  * Local variables:
  * mode: C
diff --git a/xen/arch/arm/vtimer.c b/xen/arch/arm/vtimer.c
index 848e2c6..4427ae5 100644
--- a/xen/arch/arm/vtimer.c
+++ b/xen/arch/arm/vtimer.c
@@ -56,6 +56,31 @@  int domain_vtimer_init(struct domain *d)
 {
     d->arch.phys_timer_base.offset = NOW();
     d->arch.virt_timer_base.offset = READ_SYSREG64(CNTPCT_EL0);
+
+    /* At this stage vgic_reserve_virq can't fail */
+    if ( is_hardware_domain(d) )
+    {
+        if ( !vgic_reserve_virq(d, timer_get_irq(TIMER_PHYS_SECURE_PPI)) )
+            BUG();
+
+        if ( !vgic_reserve_virq(d, timer_get_irq(TIMER_PHYS_NONSECURE_PPI)) )
+            BUG();
+
+        if ( !vgic_reserve_virq(d, timer_get_irq(TIMER_VIRT_PPI)) )
+            BUG();
+    }
+    else
+    {
+        if ( !vgic_reserve_virq(d, GUEST_TIMER_PHYS_S_PPI) )
+            BUG();
+
+        if ( !vgic_reserve_virq(d, GUEST_TIMER_PHYS_NS_PPI) )
+            BUG();
+
+        if ( !vgic_reserve_virq(d, GUEST_TIMER_VIRT_PPI) )
+            BUG();
+    }
+
     return 0;
 }
 
diff --git a/xen/include/asm-arm/domain.h b/xen/include/asm-arm/domain.h
index 9018c6a..f50deb4 100644
--- a/xen/include/asm-arm/domain.h
+++ b/xen/include/asm-arm/domain.h
@@ -90,6 +90,7 @@  struct arch_domain
         spinlock_t lock;
         int ctlr;
         int nr_spis; /* Number of SPIs */
+        unsigned long *allocated_irqs; /* bitmap of IRQs allocated */
         struct vgic_irq_rank *shared_irqs;
         /*
          * SPIs are domain global, SGIs and PPIs are per-VCPU and stored in
diff --git a/xen/include/asm-arm/vgic.h b/xen/include/asm-arm/vgic.h
index 74d5a4e..0c99481 100644
--- a/xen/include/asm-arm/vgic.h
+++ b/xen/include/asm-arm/vgic.h
@@ -199,6 +199,29 @@  extern int vgic_to_sgi(struct vcpu *v, register_t sgir,
                        enum gic_sgi_mode irqmode, int virq,
                        unsigned long vcpu_mask);
 extern void vgic_migrate_irq(struct vcpu *old, struct vcpu *new, unsigned int irq);
+
+/* Reserve a specific guest vIRQ */
+extern bool_t vgic_reserve_virq(struct domain *d, unsigned int virq);
+
+/*
+ * Allocate a guest VIRQ
+ *  - spi == 0 => allocate a PPI. It will be the same on every vCPU
+ *  - spi == 1 => allocate an SPI
+ */
+extern int vgic_allocate_virq(struct domain *d, bool_t spi);
+
+static inline int vgic_allocate_ppi(struct domain *d)
+{
+    return vgic_allocate_virq(d, 0 /* ppi */);
+}
+
+static inline int vgic_allocate_spi(struct domain *d)
+{
+    return vgic_allocate_virq(d, 1 /* spi */);
+}
+
+extern void vgic_free_virq(struct domain *d, unsigned int virq);
+
 #endif /* __ASM_ARM_VGIC_H__ */
 
 /*