
[Xen-devel,v4,08/10] xen/arm: second irq injection while the first irq is still inflight

Message ID alpine.DEB.2.02.1403241613190.8273@kaball.uk.xensource.com
State New

Commit Message

Stefano Stabellini March 24, 2014, 4:41 p.m. UTC
On Mon, 24 Mar 2014, Ian Campbell wrote:
> > > > > > We also need to force the first injection of evtchn_irq (call
> > > > > > gic_vcpu_inject_irq) from vgic_enable_irqs because evtchn_upcall_pending
> > > > > > is already set by common code on vcpu creation.
> > > > > 
> > > > > This is because the common code expects that the guest is moving from
> > > > > sharedinfo based vcpu info using VCPUOP_register_vcpu_info on x86, but
> > > > > on ARM we start off that way anyway.
> > > > > 
> > > > > I suppose it's a minor wrinkle, but I wonder if we can get rid of it...
> > > > 
> > > > Do you mean getting rid of evtchn_upcall_pending being set at vcpu
> > > > creation? Wouldn't that cause possible undesirable side effects to
> > > > guests that came to rely on it somehow? It would be an ABI change.
> > > 
> > > I mean precisely for the boot cpu when it is brought online, there isn't
> > > much sense in immediately taking an interrupt when that cpu enables
> > > them.
> > > 
> > > The reason for setting it right now is only for the case of a cpu moving
> > > its vcpu info, to ensure it can't miss anything.
> > 
> > What about secondary vcpus? Should we keep the spurious injection for
> > them?
> 
> Not sure. No?
> 
> > In any case I agree with you that the current behaviour is not nice,
> > however I am worried about changing a guest visible interface like this
> > one, that would affect x86 guests too.
> 
> Oh, I certainly wouldn't change this for x86! Or maybe I would change it
> but only for cpus which are not online at the time when the init happens
> (which is effectively the difference between the x86 and arm cases)
 
Today on ARM and x86 PV-on-HVM, VCPUOP_register_vcpu_info is called by
each vcpu independently as it comes online, so secondary cpus would
already be online at the time of the hypercall.

On x86 PV, VCPUOP_register_vcpu_info is called in a loop by vcpu0 for
all the possible vcpus before SMP bringup.

Either way I don't think we can easily change this behaviour without
affecting x86 guests.
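
To make the two call patterns concrete, here is a minimal guest-side sketch
of how VCPUOP_register_vcpu_info is typically issued. This is only an
illustration, not code from any particular kernel: per_cpu_vcpu_info,
gfn_of(), offset_in_frame() and NR_CPUS are placeholders for whatever the
guest actually uses.

#include <xen/vcpu.h>  /* VCPUOP_register_vcpu_info, struct vcpu_register_vcpu_info */

/* Placeholder per-vcpu storage for the new vcpu_info areas. */
static struct vcpu_info per_cpu_vcpu_info[NR_CPUS];

/*
 * Register a per-vcpu vcpu_info area with Xen.  On ARM and x86 PV-on-HVM
 * each vcpu calls this for itself as it comes online; on x86 PV, vcpu0
 * would instead call it in a loop for every possible vcpu before SMP
 * bringup.
 */
static int register_vcpu_info(unsigned int cpu)
{
    struct vcpu_register_vcpu_info info = {
        .mfn    = gfn_of(&per_cpu_vcpu_info[cpu]),          /* frame containing the area */
        .offset = offset_in_frame(&per_cpu_vcpu_info[cpu]), /* offset within that frame */
    };

    /*
     * Once this succeeds, Xen delivers event-channel state through the
     * registered area instead of the shared_info vcpu_info slot, and
     * map_vcpu_info() marks everything pending so nothing is lost.
     */
    return HYPERVISOR_vcpu_op(VCPUOP_register_vcpu_info, cpu, &info);
}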

Alternatively of course this (ugly) change in Xen would work:

Comments

Ian Campbell March 25, 2014, 10:09 a.m. UTC | #1
On Mon, 2014-03-24 at 16:41 +0000, Stefano Stabellini wrote:
> On Mon, 24 Mar 2014, Ian Campbell wrote:
> > > > > > > We also need to force the first injection of evtchn_irq (call
> > > > > > > gic_vcpu_inject_irq) from vgic_enable_irqs because evtchn_upcall_pending
> > > > > > > is already set by common code on vcpu creation.
> > > > > > 
> > > > > > This is because the common code expects that the guest is moving from
> > > > > > sharedinfo based vcpu info using VCPUOP_register_vcpu_info on x86, but
> > > > > > on ARM we start off that way anyway.
> > > > > > 
> > > > > > I suppose it's a minor wrinkle, but I wonder if we can get rid of it...
> > > > > 
> > > > > Do you mean getting rid of evtchn_upcall_pending being set at vcpu
> > > > > creation? Wouldn't that cause possible undesirable side effects to
> > > > > guests that came to rely on it somehow? It would be an ABI change.
> > > > 
> > > > I mean precisely for the boot cpu when it is brought online, there isn't
> > > > much sense in immediately taking an interrupt when that cpu enables
> > > > them.
> > > > 
> > > > The reason for setting it right now is only for the case of a cpu moving
> > > > its vcpu info, to ensure it can't miss anything.
> > > 
> > > What about secondary vcpus? Should we keep the spurious injection for
> > > them?
> > 
> > Not sure. No?
> > 
> > > In any case I agree with you that the current behaviour is not nice,
> > > however I am worried about changing a guest visible interface like this
> > > one, that would affect x86 guests too.
> > 
> > Oh, I certainly wouldn't change this for x86! Or maybe I would change it
> > but only for cpus which are not online at the time when the init happens
> > (which is effectively the difference between the x86 and arm cases)
>  
> Today on ARM and x86 PV-on-HVM, VCPUOP_register_vcpu_info is called by
> each vcpu independently as it comes online, so secondary cpus would
> already be online at the time of the hypercall.
> 
> On x86 PV, VCPUOP_register_vcpu_info is called in a loop by vcpu0 for
> all the possible vcpus before SMP bringup.
> 
> Either way I don't think we can easily change this behaviour without
> affecting x86 guests.

OK, it was worth considering but I think you are right that it isn't
worth it. Hopefully any sane OS is going to be prepared to handle an
interrupt that is already pending when it enables interrupts!

> Alternatively of course this (ugly) change in Xen would work:

Yeah, let's not ;-)

> diff --git a/xen/common/domain.c b/xen/common/domain.c
> index ad8a1b6..bd81f43 100644
> --- a/xen/common/domain.c
> +++ b/xen/common/domain.c
> @@ -893,7 +893,9 @@ int map_vcpu_info(struct vcpu *v, unsigned long gfn, unsigned offset)
>      void *mapping;
>      vcpu_info_t *new_info;
>      struct page_info *page;
> +#ifdef CONFIG_X86
>      int i;
> +#endif
>  
>      if ( offset > (PAGE_SIZE - sizeof(vcpu_info_t)) )
>          return -EINVAL;
> @@ -942,6 +944,7 @@ int map_vcpu_info(struct vcpu *v, unsigned long gfn, unsigned offset)
>      /* Set new vcpu_info pointer /before/ setting pending flags. */
>      smp_wmb();
>  
> +#ifdef CONFIG_X86
>      /*
>       * Mark everything as being pending just to make sure nothing gets
>       * lost.  The domain will get a spurious event, but it can cope.
> @@ -949,7 +952,7 @@ int map_vcpu_info(struct vcpu *v, unsigned long gfn, unsigned offset)
>      vcpu_info(v, evtchn_upcall_pending) = 1;
>      for ( i = 0; i < BITS_PER_EVTCHN_WORD(d); i++ )
>          set_bit(i, &vcpu_info(v, evtchn_pending_sel));
> -
> +#endif
>      return 0;
>  }

Patch

diff --git a/xen/common/domain.c b/xen/common/domain.c
index ad8a1b6..bd81f43 100644
--- a/xen/common/domain.c
+++ b/xen/common/domain.c
@@ -893,7 +893,9 @@  int map_vcpu_info(struct vcpu *v, unsigned long gfn, unsigned offset)
     void *mapping;
     vcpu_info_t *new_info;
     struct page_info *page;
+#ifdef CONFIG_X86
     int i;
+#endif
 
     if ( offset > (PAGE_SIZE - sizeof(vcpu_info_t)) )
         return -EINVAL;
@@ -942,6 +944,7 @@  int map_vcpu_info(struct vcpu *v, unsigned long gfn, unsigned offset)
     /* Set new vcpu_info pointer /before/ setting pending flags. */
     smp_wmb();
 
+#ifdef CONFIG_X86
     /*
      * Mark everything as being pending just to make sure nothing gets
      * lost.  The domain will get a spurious event, but it can cope.
@@ -949,7 +952,7 @@  int map_vcpu_info(struct vcpu *v, unsigned long gfn, unsigned offset)
     vcpu_info(v, evtchn_upcall_pending) = 1;
     for ( i = 0; i < BITS_PER_EVTCHN_WORD(d); i++ )
         set_bit(i, &vcpu_info(v, evtchn_pending_sel));
-
+#endif
     return 0;
 }
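
For contrast with the #ifdef approach above, the alternative discussed at
the top of the thread keeps evtchn_upcall_pending set at vcpu creation and
has the ARM vgic force the first injection of the event-channel interrupt
when the guest enables it. The following is only a rough sketch of that
idea, not the actual patch in this series; irq_to_pending(),
GIC_IRQ_GUEST_ENABLED and d->arch.evtchn_irq are assumed helper/field names
and may not match the tree as it was at the time.

/*
 * Sketch: called from vgic_enable_irqs() for each interrupt the guest
 * enables.  If the interrupt being enabled is the event-channel PPI and
 * an upcall is already pending (as it is right after vcpu creation),
 * inject it immediately so the initial pending state is not lost.
 */
static void vgic_enable_one_irq(struct vcpu *v, unsigned int irq)
{
    struct pending_irq *p = irq_to_pending(v, irq);

    set_bit(GIC_IRQ_GUEST_ENABLED, &p->status);

    if ( irq == v->domain->arch.evtchn_irq &&
         vcpu_info(v, evtchn_upcall_pending) )
        gic_vcpu_inject_irq(v, irq);    /* force the first injection */
}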