[Xen-devel,v3,06/39] ARM: evtchn: Handle level triggered IRQs correctly

Message ID 20180321163235.12529-7-andre.przywara@linaro.org
State New
Series New VGIC(-v2) implementation

Commit Message

Andre Przywara March 21, 2018, 4:32 p.m. UTC
The event channel IRQ has level triggered semantics, however the current
VGIC treats everything as edge triggered.
To correctly process those IRQs, we have to lower the (virtual) IRQ line
at some point in time, depending on whether ther interrupt condition
still prevails.
Check the per-VCPU evtchn_upcall_pending variable to make the interrupt
line match its status, and call this function upon every hypervisor
entry.

Signed-off-by: Andre Przywara <andre.przywara@linaro.org>
Reviewed-by: Julien Grall <julien.grall@arm.com>
---
 xen/arch/arm/domain.c       | 7 +++++++
 xen/arch/arm/traps.c        | 1 +
 xen/include/asm-arm/event.h | 1 +
 3 files changed, 9 insertions(+)

Comments

Julien Grall March 22, 2018, 2:08 a.m. UTC | #1
On 03/21/2018 04:32 PM, Andre Przywara wrote:
> The event channel IRQ has level triggered semantics, however the current
> VGIC treats everything as edge triggered.
> To correctly process those IRQs, we have to lower the (virtual) IRQ line
> at some point in time, depending on whether ther interrupt condition

NIT: s/ther/the/
Stefano Stabellini March 26, 2018, 8:08 p.m. UTC | #2
On Wed, 21 Mar 2018, Andre Przywara wrote:
> The event channel IRQ has level triggered semantics, however the current
> VGIC treats everything as edge triggered.
> To correctly process those IRQs, we have to lower the (virtual) IRQ line
> at some point in time, depending on whether ther interrupt condition
> still prevails.
> Check the per-VCPU evtchn_upcall_pending variable to make the interrupt
> line match its status, and call this function upon every hypervisor
> entry.
> 
> Signed-off-by: Andre Przywara <andre.przywara@linaro.org>
> Reviewed-by: Julien Grall <julien.grall@arm.com>

Acked-by: Stefano Stabellini <sstabellini@kernel.org>


> ---
>  xen/arch/arm/domain.c       | 7 +++++++
>  xen/arch/arm/traps.c        | 1 +
>  xen/include/asm-arm/event.h | 1 +
>  3 files changed, 9 insertions(+)
> 
> diff --git a/xen/arch/arm/domain.c b/xen/arch/arm/domain.c
> index ff97f2bc76..9688e62f78 100644
> --- a/xen/arch/arm/domain.c
> +++ b/xen/arch/arm/domain.c
> @@ -953,6 +953,13 @@ void vcpu_mark_events_pending(struct vcpu *v)
>      vgic_inject_irq(v->domain, v, v->domain->arch.evtchn_irq, true);
>  }
>  
> +void vcpu_update_evtchn_irq(struct vcpu *v)
> +{
> +    bool pending = vcpu_info(v, evtchn_upcall_pending);
> +
> +    vgic_inject_irq(v->domain, v, v->domain->arch.evtchn_irq, pending);
> +}
> +
>  /* The ARM spec declares that even if local irqs are masked in
>   * the CPSR register, an irq should wake up a cpu from WFI anyway.
>   * For this reason we need to check for irqs that need delivery,
> diff --git a/xen/arch/arm/traps.c b/xen/arch/arm/traps.c
> index 2638446693..5c18e918b0 100644
> --- a/xen/arch/arm/traps.c
> +++ b/xen/arch/arm/traps.c
> @@ -2033,6 +2033,7 @@ static void enter_hypervisor_head(struct cpu_user_regs *regs)
>           * trap and how it can be optimised.
>           */
>          vtimer_update_irqs(current);
> +        vcpu_update_evtchn_irq(current);
>  #endif
>  
>          vgic_sync_from_lrs(current);
> diff --git a/xen/include/asm-arm/event.h b/xen/include/asm-arm/event.h
> index c7a415ef57..2f51864043 100644
> --- a/xen/include/asm-arm/event.h
> +++ b/xen/include/asm-arm/event.h
> @@ -6,6 +6,7 @@
>  
>  void vcpu_kick(struct vcpu *v);
>  void vcpu_mark_events_pending(struct vcpu *v);
> +void vcpu_update_evtchn_irq(struct vcpu *v);
>  void vcpu_block_unless_event_pending(struct vcpu *v);
>  
>  static inline int vcpu_event_delivery_is_enabled(struct vcpu *v)
> -- 
> 2.14.1
>
Stefano Stabellini March 28, 2018, 12:01 a.m. UTC | #3
On Wed, 21 Mar 2018, Andre Przywara wrote:
> The event channel IRQ has level triggered semantics, however the current
> VGIC treats everything as edge triggered.
> To correctly process those IRQs, we have to lower the (virtual) IRQ line
> at some point in time, depending on whether ther interrupt condition
> still prevails.
> Check the per-VCPU evtchn_upcall_pending variable to make the interrupt
> line match its status, and call this function upon every hypervisor
> entry.
> 
> Signed-off-by: Andre Przywara <andre.przywara@linaro.org>
> Reviewed-by: Julien Grall <julien.grall@arm.com>
> ---
>  xen/arch/arm/domain.c       | 7 +++++++
>  xen/arch/arm/traps.c        | 1 +
>  xen/include/asm-arm/event.h | 1 +
>  3 files changed, 9 insertions(+)
> 
> diff --git a/xen/arch/arm/domain.c b/xen/arch/arm/domain.c
> index ff97f2bc76..9688e62f78 100644
> --- a/xen/arch/arm/domain.c
> +++ b/xen/arch/arm/domain.c
> @@ -953,6 +953,13 @@ void vcpu_mark_events_pending(struct vcpu *v)
>      vgic_inject_irq(v->domain, v, v->domain->arch.evtchn_irq, true);
>  }
>  
> +void vcpu_update_evtchn_irq(struct vcpu *v)
> +{
> +    bool pending = vcpu_info(v, evtchn_upcall_pending);
> +
> +    vgic_inject_irq(v->domain, v, v->domain->arch.evtchn_irq, pending);
> +}
> +
>  /* The ARM spec declares that even if local irqs are masked in
>   * the CPSR register, an irq should wake up a cpu from WFI anyway.
>   * For this reason we need to check for irqs that need delivery,
> diff --git a/xen/arch/arm/traps.c b/xen/arch/arm/traps.c
> index 2638446693..5c18e918b0 100644
> --- a/xen/arch/arm/traps.c
> +++ b/xen/arch/arm/traps.c
> @@ -2033,6 +2033,7 @@ static void enter_hypervisor_head(struct cpu_user_regs *regs)
>           * trap and how it can be optimised.
>           */
>          vtimer_update_irqs(current);
> +        vcpu_update_evtchn_irq(current);
>  #endif

I am replying to this patch, even though I have already committed it, to
point out a problem with the way we currently handle the evtchn_irq in
this series.

The short version is that I think we should configure the PPI
corresponding to the evtchn_irq as EDGE instead of LEVEL.

The long explanation follows, please correct me if I am wrong.

1) vcpuA/cpuA is running, it has already handled the event, cleared
evtchn_upcall_pending and EOIed the event_irq but hasn't trapped into
Xen yet. It is still in guest mode.

2) Xen on cpuB calls vcpu_mark_events_pending(vcpuA), then calls
vgic_inject_irq. However, because irq->line_level is high, it is not
injected.

3) vcpuA has to wait until trapping into Xen, calling
vcpu_update_evtchn_irq, and going back to guest mode before receiving
the event. This is theoretically a very long time.


Instead what should happen is:

1) vcpuA/cpuA is running, it has already handled the event, cleared
evtchn_upcall_pending and EOIed the event_irq but hasn't trapped into
Xen yet. It is still in guest mode.

2) Xen on cpuB calls vcpu_mark_events_pending(vcpuA), then calls
vgic_inject_irq, which calls vgic_queue_irq_unlock that
vcpu_kick(vcpuA), forcing it to take the event immediately.

Am I right? Wouldn't it be safer to continue configuring the evtchn_irq
as edge even in the new vgic?
Andre Przywara March 28, 2018, 3:39 p.m. UTC | #4
Hi,

On 28/03/18 01:01, Stefano Stabellini wrote:
> On Wed, 21 Mar 2018, Andre Przywara wrote:
>> The event channel IRQ has level triggered semantics, however the current
>> VGIC treats everything as edge triggered.
>> To correctly process those IRQs, we have to lower the (virtual) IRQ line
>> at some point in time, depending on whether ther interrupt condition
>> still prevails.
>> Check the per-VCPU evtchn_upcall_pending variable to make the interrupt
>> line match its status, and call this function upon every hypervisor
>> entry.
>>
>> Signed-off-by: Andre Przywara <andre.przywara@linaro.org>
>> Reviewed-by: Julien Grall <julien.grall@arm.com>
>> ---
>>  xen/arch/arm/domain.c       | 7 +++++++
>>  xen/arch/arm/traps.c        | 1 +
>>  xen/include/asm-arm/event.h | 1 +
>>  3 files changed, 9 insertions(+)
>>
>> diff --git a/xen/arch/arm/domain.c b/xen/arch/arm/domain.c
>> index ff97f2bc76..9688e62f78 100644
>> --- a/xen/arch/arm/domain.c
>> +++ b/xen/arch/arm/domain.c
>> @@ -953,6 +953,13 @@ void vcpu_mark_events_pending(struct vcpu *v)
>>      vgic_inject_irq(v->domain, v, v->domain->arch.evtchn_irq, true);
>>  }
>>  
>> +void vcpu_update_evtchn_irq(struct vcpu *v)
>> +{
>> +    bool pending = vcpu_info(v, evtchn_upcall_pending);
>> +
>> +    vgic_inject_irq(v->domain, v, v->domain->arch.evtchn_irq, pending);
>> +}
>> +
>>  /* The ARM spec declares that even if local irqs are masked in
>>   * the CPSR register, an irq should wake up a cpu from WFI anyway.
>>   * For this reason we need to check for irqs that need delivery,
>> diff --git a/xen/arch/arm/traps.c b/xen/arch/arm/traps.c
>> index 2638446693..5c18e918b0 100644
>> --- a/xen/arch/arm/traps.c
>> +++ b/xen/arch/arm/traps.c
>> @@ -2033,6 +2033,7 @@ static void enter_hypervisor_head(struct cpu_user_regs *regs)
>>           * trap and how it can be optimised.
>>           */
>>          vtimer_update_irqs(current);
>> +        vcpu_update_evtchn_irq(current);
>>  #endif
> 
> I am replying to this patch, even though I have already committed it, to
> point out a problem with the way we currently handle the evtchn_irq in
> this series.
> 
> The short version is that I think we should configure the PPI
> corresponding to the evtchn_irq as EDGE instead of LEVEL.

Well, that's really a separate problem, then. We can't just configure
the PPI at will, it has to match the device semantic.
When writing this patch, I checked how the evtchn "device" is
implemented, and it screams "level IRQ" to me:
- We have a flag (evtchn_upcall_pending), which stores the current
interrupt state.
- This flag gets set by the producer when the interrupt condition is true.
- It gets cleared by the *consumer* once it has handled the request.

So if the event channel mechanism should be edge (which would be fair
enough), we need to change the code to implement this: the interrupt
condition should be cleared once we *injected* the IRQ - and not only
when the consumer has signalled completion.
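
To illustrate the two models (a rough sketch only; the helper names
below are made up for this mail and are not actual Xen functions):

    /* Level model: the virtual line tracks the flag, both up and down. */
    void evtchn_sync_level(struct vcpu *v)
    {
        bool pending = vcpu_info(v, evtchn_upcall_pending);

        /* Raise or lower the line to match the interrupt condition. */
        vgic_inject_irq(v->domain, v, v->domain->arch.evtchn_irq, pending);
    }

    /* Edge model: fire once per event, then forget about it; the flag
     * is consumed at injection time, not by the guest handler. */
    void evtchn_fire_edge(struct vcpu *v)
    {
        if ( test_and_clear_bit(0, (unsigned long *)
                                &vcpu_info(v, evtchn_upcall_pending)) )
            vgic_inject_irq(v->domain, v, v->domain->arch.evtchn_irq, true);
    }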

Another thing to consider: by the spec the *configurability* of PPIs is
implementation defined. The KVM implementation chose to fix all of them
to "level", which we need for the arch timer. So setting the evtchn PPI
to edge would be ignored. We could deviate from that, but I need to
check what the side effects are.

> The long explanation follows, please correct me if I am wrong.
> 
> 1) vcpuA/cpuA is running, it has already handled the event, cleared
> evtchn_upcall_pending and EOIed the event_irq but hasn't trapped into
> Xen yet. It is still in guest mode.
> 
> 2) Xen on cpuB calls vcpu_mark_events_pending(vcpuA), then calls
> vgic_inject_irq. However, because irq->line_level is high, it is not
> injected.

So this is a case where we fail to sync in time on the actual emulated
line level. KVM recently gained some nice code to solve this: We can
register per-IRQ functions that return the line level. For hardware
mapped IRQs this queries the distributor, but for the arch timer for
instance it just uses a shortcut to read CNTV_CTL_EL0.
The evtchn IRQ could just check evtchn_upcall_pending.
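
Roughly like this (a sketch modelled on KVM's get_input_level hook; the
registration function below is an assumption, nothing like it exists in
Xen yet):

    /* Per-IRQ callback returning the current emulated line level. */
    static bool evtchn_get_line_level(struct vcpu *v, unsigned int virq)
    {
        /* For the event channel the line level is just the upcall flag. */
        return vcpu_info(v, evtchn_upcall_pending);
    }

    /* At domain creation (hypothetical registration interface): */
    vgic_register_line_level_cb(d, d->arch.evtchn_irq, evtchn_get_line_level);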

I can take a look at a follow up patch to implement this.

Cheers,
Andre.


> 3) vcpuA has to wait until trapping into Xen, calling
> vcpu_update_evtchn_irq, and going back to guest mode before receiving
> the event. This is theoretically a very long time.
> 
> 
> Instead what should happen is:
> 
> 1) vcpuA/cpuA is running, it has already handled the event, cleared
> evtchn_upcall_pending and EOIed the event_irq but hasn't trapped into
> Xen yet. It is still in guest mode.
> 
> 2) Xen on cpuB calls vcpu_mark_events_pending(vcpuA), then calls
> vgic_inject_irq, which calls vgic_queue_irq_unlock that
> vcpu_kick(vcpuA), forcing it to take the event immediately.
> 
> Am I right? Wouldn't it be safer to continue configuring the evtchn_irq
> as edge even in the new vgic?
>
Stefano Stabellini March 28, 2018, 5:46 p.m. UTC | #5
On Wed, 28 Mar 2018, Andre Przywara wrote:
> On 28/03/18 01:01, Stefano Stabellini wrote:
> > On Wed, 21 Mar 2018, Andre Przywara wrote:
> >> The event channel IRQ has level triggered semantics, however the current
> >> VGIC treats everything as edge triggered.
> >> To correctly process those IRQs, we have to lower the (virtual) IRQ line
> >> at some point in time, depending on whether ther interrupt condition
> >> still prevails.
> >> Check the per-VCPU evtchn_upcall_pending variable to make the interrupt
> >> line match its status, and call this function upon every hypervisor
> >> entry.
> >>
> >> Signed-off-by: Andre Przywara <andre.przywara@linaro.org>
> >> Reviewed-by: Julien Grall <julien.grall@arm.com>
> >> ---
> >>  xen/arch/arm/domain.c       | 7 +++++++
> >>  xen/arch/arm/traps.c        | 1 +
> >>  xen/include/asm-arm/event.h | 1 +
> >>  3 files changed, 9 insertions(+)
> >>
> >> diff --git a/xen/arch/arm/domain.c b/xen/arch/arm/domain.c
> >> index ff97f2bc76..9688e62f78 100644
> >> --- a/xen/arch/arm/domain.c
> >> +++ b/xen/arch/arm/domain.c
> >> @@ -953,6 +953,13 @@ void vcpu_mark_events_pending(struct vcpu *v)
> >>      vgic_inject_irq(v->domain, v, v->domain->arch.evtchn_irq, true);
> >>  }
> >>  
> >> +void vcpu_update_evtchn_irq(struct vcpu *v)
> >> +{
> >> +    bool pending = vcpu_info(v, evtchn_upcall_pending);
> >> +
> >> +    vgic_inject_irq(v->domain, v, v->domain->arch.evtchn_irq, pending);
> >> +}
> >> +
> >>  /* The ARM spec declares that even if local irqs are masked in
> >>   * the CPSR register, an irq should wake up a cpu from WFI anyway.
> >>   * For this reason we need to check for irqs that need delivery,
> >> diff --git a/xen/arch/arm/traps.c b/xen/arch/arm/traps.c
> >> index 2638446693..5c18e918b0 100644
> >> --- a/xen/arch/arm/traps.c
> >> +++ b/xen/arch/arm/traps.c
> >> @@ -2033,6 +2033,7 @@ static void enter_hypervisor_head(struct cpu_user_regs *regs)
> >>           * trap and how it can be optimised.
> >>           */
> >>          vtimer_update_irqs(current);
> >> +        vcpu_update_evtchn_irq(current);
> >>  #endif
> > 
> > I am replying to this patch, even though I have already committed it, to
> > point out a problem with the way we currently handle the evtchn_irq in
> > this series.
> > 
> > The short version is that I think we should configure the PPI
> > corresponding to the evtchn_irq as EDGE instead of LEVEL.
> 
> Well, that's really a separate problem, then. We can't just configure
> the PPI at will, it has to match the device semantic.
> When writing this patch, I checked how the evtchn "device" is
> implemented, and it screams "level IRQ" to me:
> - We have a flag (evtchn_upcall_pending), which stores the current
> interrupt state.
> - This flag gets set by the producer when the interrupt condition is true.
> - It gets cleared by the *consumer* once it has handled the request.
> 
> So if the event channel mechanism should be edge (which would be fair
> enough), we need to change the code to implement this: the interrupt
> condition should be cleared once we *injected* the IRQ - and not only
> when the consumer has signalled completion.
> 
> Another thing to consider: by the spec the *configurability* of PPIs is
> implementation defined. The KVM implementation chose to fix all of them
> to "level", which we need for the arch timer. So setting the evtchn PPI
> to edge would be ignored. We could deviate from that, but I need to
> check what the side effects are.
> 
> > The long explanation follows, please correct me if I am wrong.
> > 
> > 1) vcpuA/cpuA is running, it has already handled the event, cleared
> > evtchn_upcall_pending and EOIed the event_irq but hasn't trapped into
> > Xen yet. It is still in guest mode.
> > 
> > 2) Xen on cpuB calls vcpu_mark_events_pending(vcpuA), then calls
> > vgic_inject_irq. However, because irq->line_level is high, it is not
> > injected.
> 
> So this is a case where we fail to sync in time on the actual emulated
> line level. KVM recently gained some nice code to solve this: We can
> register per-IRQ functions that return the line level. For hardware
> mapped IRQs this queries the distributor, but for the arch timer for
> instance it just uses a shortcut to read CNTV_CTL_EL0.
> The evtchn IRQ could just check evtchn_upcall_pending.
> 
> I can take a look at a follow up patch to implement this.

I agree that the evtchn_upcall_pending mechanism is very similar to a
level interrupt, however the mechanism to bring the notification to the
guest is edge, even on x86. Even with the new vgic implementation it
falls very naturally in the edge pattern of behaviors. This is one of
those cases where I would be happy to deviate from the KVM
implementation, because it makes sense and would be easier to maintain
going forward. We can even get rid of vcpu_update_evtchn_irq.

I am OK with a not-ideal short term fix by the end of this week, as long
as we come up with a proper fix later, by the release date. But in this
instance, wouldn't it be enough to change the PPI type to EDGE, remove
vcpu_update_evtchn_irq, and be done with it?



> > 3) vcpuA has to wait until trapping into Xen, calling
> > vcpu_update_evtchn_irq, and going back to guest mode before receiving
> > the event. This is theoretically a very long time.
> > 
> > 
> > Instead what should happen is:
> > 
> > 1) vcpuA/cpuA is running, it has already handled the event, cleared
> > evtchn_upcall_pending and EOIed the event_irq but hasn't trapped into
> > Xen yet. It is still in guest mode.
> > 
> > 2) Xen on cpuB calls vcpu_mark_events_pending(vcpuA), then calls
> > vgic_inject_irq, which calls vgic_queue_irq_unlock that
> > vcpu_kick(vcpuA), forcing it to take the event immediately.
> > 
> > Am I right? Wouldn't it be safer to continue configuring the evtchn_irq
> > as edge even in the new vgic?
Julien Grall March 29, 2018, 12:34 a.m. UTC | #6
(sorry for the formatting)

On Thu, 29 Mar 2018, 01:48 Stefano Stabellini, <sstabellini@kernel.org>
wrote:

> On Wed, 28 Mar 2018, Andre Przywara wrote:
> > On 28/03/18 01:01, Stefano Stabellini wrote:
> > > On Wed, 21 Mar 2018, Andre Przywara wrote:
> > >> The event channel IRQ has level triggered semantics, however the current
> > >> VGIC treats everything as edge triggered.
> > >> To correctly process those IRQs, we have to lower the (virtual) IRQ line
> > >> at some point in time, depending on whether ther interrupt condition
> > >> still prevails.
> > >> Check the per-VCPU evtchn_upcall_pending variable to make the interrupt
> > >> line match its status, and call this function upon every hypervisor
> > >> entry.
> > >>
> > >> Signed-off-by: Andre Przywara <andre.przywara@linaro.org>
> > >> Reviewed-by: Julien Grall <julien.grall@arm.com>
> > >> ---
> > >>  xen/arch/arm/domain.c       | 7 +++++++
> > >>  xen/arch/arm/traps.c        | 1 +
> > >>  xen/include/asm-arm/event.h | 1 +
> > >>  3 files changed, 9 insertions(+)
> > >>
> > >> diff --git a/xen/arch/arm/domain.c b/xen/arch/arm/domain.c
> > >> index ff97f2bc76..9688e62f78 100644
> > >> --- a/xen/arch/arm/domain.c
> > >> +++ b/xen/arch/arm/domain.c
> > >> @@ -953,6 +953,13 @@ void vcpu_mark_events_pending(struct vcpu *v)
> > >>      vgic_inject_irq(v->domain, v, v->domain->arch.evtchn_irq, true);
> > >>  }
> > >>
> > >> +void vcpu_update_evtchn_irq(struct vcpu *v)
> > >> +{
> > >> +    bool pending = vcpu_info(v, evtchn_upcall_pending);
> > >> +
> > >> +    vgic_inject_irq(v->domain, v, v->domain->arch.evtchn_irq, pending);
> > >> +}
> > >> +
> > >>  /* The ARM spec declares that even if local irqs are masked in
> > >>   * the CPSR register, an irq should wake up a cpu from WFI anyway.
> > >>   * For this reason we need to check for irqs that need delivery,
> > >> diff --git a/xen/arch/arm/traps.c b/xen/arch/arm/traps.c
> > >> index 2638446693..5c18e918b0 100644
> > >> --- a/xen/arch/arm/traps.c
> > >> +++ b/xen/arch/arm/traps.c
> > >> @@ -2033,6 +2033,7 @@ static void enter_hypervisor_head(struct cpu_user_regs *regs)
> > >>           * trap and how it can be optimised.
> > >>           */
> > >>          vtimer_update_irqs(current);
> > >> +        vcpu_update_evtchn_irq(current);
> > >>  #endif
> > >
> > > I am replying to this patch, even though I have already committed it, to
> > > point out a problem with the way we currently handle the evtchn_irq in
> > > this series.
> > >
> > > The short version is that I think we should configure the PPI
> > > corresponding to the evtchn_irq as EDGE instead of LEVEL.
> >
> > Well, that's really a separate problem, then. We can't just configure
> > the PPI at will, it has to match the device semantic.
> > When writing this patch, I checked how the evtchn "device" is
> > implemented, and it screams "level IRQ" to me:
> > - We have a flag (evtchn_upcall_pending), which stores the current
> > interrupt state.
> > - This flag gets set by the producer when the interrupt condition is true.
> > - It gets cleared by the *consumer* once it has handled the request.
> >
> > So if the event channel mechanism should be edge (which would be fair
> > enough), we need to change the code to implement this: the interrupt
> > condition should be cleared once we *injected* the IRQ - and not only
> > when the consumer has signalled completion.
> >
> > Another thing to consider: by the spec the *configurability* of PPIs is
> > implementation defined. The KVM implementation chose to fix all of them
> > to "level", which we need for the arch timer. So setting the evtchn PPI
> > to edge would be ignored. We could deviate from that, but I need to
> > check what the side effects are.
> >
> > > The long explanation follows, please correct me if I am wrong.
> > >
> > > 1) vcpuA/cpuA is running, it has already handled the event, cleared
> > > evtchn_upcall_pending and EOIed the event_irq but hasn't trapped into
> > > Xen yet. It is still in guest mode.
> > >
> > > 2) Xen on cpuB calls vcpu_mark_events_pending(vcpuA), then calls
> > > vgic_inject_irq. However, because irq->line_level is high, it is not
> > > injected.
> >
> > So this is a case where we fail to sync in time on the actual emulated
> > line level. KVM recently gained some nice code to solve this: We can
> > register per-IRQ functions that return the line level. For hardware
> > mapped IRQs this queries the distributor, but for the arch timer for
> > instance it just uses a shortcut to read CNTV_CTL_EL0.
> > The evtchn IRQ could just check evtchn_upcall_pending.
> >
> > I can take a look at a follow up patch to implement this.
>
> I agree that the evtchn_upcall_pending mechanism is very similar to a
> level interrupt, however the mechanism to bring the notification to the
> guest is edge, even on x86. Even with the new vgic implementation it
> falls very naturally in the edge pattern of behaviors. This is one of
> those cases where I would be happy to deviate from the KVM
> implementation, because it makes sense and would be easier to maintain
> going forward. We can even get rid of vcpu_update_evtchn_irq.
>
> I am OK with a not-ideal short term fix by the end of this week, as long
> as we come up with a proper fix later, by the release date. But in this
> instance, wouldn't it be enough to change the PPI type to EDGE, remove
> vcpu_update_evtchn_irq, and be done with it?

We expose the event channel as level in the DT and HVM_PARAM_CALLBACK_IRQ.
So they at least need to change.

But I am a bit concerned about changing those values between the 2 versions
of the vGIC.

What was the rationale to expose them as level in the old vGIC?

> > > 3) vcpuA has to wait until trapping into Xen, calling
> > > vcpu_update_evtchn_irq, and going back to guest mode before receiving
> > > the event. This is theoretically a very long time.
> > >
> > >
> > > Instead what should happen is:
> > >
> > > 1) vcpuA/cpuA is running, it has already handled the event, cleared
> > > evtchn_upcall_pending and EOIed the event_irq but hasn't trapped into
> > > Xen yet. It is still in guest mode.
> > >
> > > 2) Xen on cpuB calls vcpu_mark_events_pending(vcpuA), then calls
> > > vgic_inject_irq, which calls vgic_queue_irq_unlock that
> > > vcpu_kick(vcpuA), forcing it to take the event immediately.
> > >
> > > Am I right? Wouldn't it be safer to continue configuring the evtchn_irq
> > > as edge even in the new vgic?
Andre Przywara March 29, 2018, 1:44 p.m. UTC | #7
Hi,

On 28/03/18 18:46, Stefano Stabellini wrote:
> On Wed, 28 Mar 2018, Andre Przywara wrote:
>> On 28/03/18 01:01, Stefano Stabellini wrote:
>>> On Wed, 21 Mar 2018, Andre Przywara wrote:
>>>> The event channel IRQ has level triggered semantics, however the current
>>>> VGIC treats everything as edge triggered.
>>>> To correctly process those IRQs, we have to lower the (virtual) IRQ line
>>>> at some point in time, depending on whether ther interrupt condition
>>>> still prevails.
>>>> Check the per-VCPU evtchn_upcall_pending variable to make the interrupt
>>>> line match its status, and call this function upon every hypervisor
>>>> entry.
>>>>
>>>> Signed-off-by: Andre Przywara <andre.przywara@linaro.org>
>>>> Reviewed-by: Julien Grall <julien.grall@arm.com>
>>>> ---
>>>>  xen/arch/arm/domain.c       | 7 +++++++
>>>>  xen/arch/arm/traps.c        | 1 +
>>>>  xen/include/asm-arm/event.h | 1 +
>>>>  3 files changed, 9 insertions(+)
>>>>
>>>> diff --git a/xen/arch/arm/domain.c b/xen/arch/arm/domain.c
>>>> index ff97f2bc76..9688e62f78 100644
>>>> --- a/xen/arch/arm/domain.c
>>>> +++ b/xen/arch/arm/domain.c
>>>> @@ -953,6 +953,13 @@ void vcpu_mark_events_pending(struct vcpu *v)
>>>>      vgic_inject_irq(v->domain, v, v->domain->arch.evtchn_irq, true);
>>>>  }
>>>>  
>>>> +void vcpu_update_evtchn_irq(struct vcpu *v)
>>>> +{
>>>> +    bool pending = vcpu_info(v, evtchn_upcall_pending);
>>>> +
>>>> +    vgic_inject_irq(v->domain, v, v->domain->arch.evtchn_irq, pending);
>>>> +}
>>>> +
>>>>  /* The ARM spec declares that even if local irqs are masked in
>>>>   * the CPSR register, an irq should wake up a cpu from WFI anyway.
>>>>   * For this reason we need to check for irqs that need delivery,
>>>> diff --git a/xen/arch/arm/traps.c b/xen/arch/arm/traps.c
>>>> index 2638446693..5c18e918b0 100644
>>>> --- a/xen/arch/arm/traps.c
>>>> +++ b/xen/arch/arm/traps.c
>>>> @@ -2033,6 +2033,7 @@ static void enter_hypervisor_head(struct cpu_user_regs *regs)
>>>>           * trap and how it can be optimised.
>>>>           */
>>>>          vtimer_update_irqs(current);
>>>> +        vcpu_update_evtchn_irq(current);
>>>>  #endif
>>>
>>> I am replying to this patch, even though I have already committed it, to
>>> point out a problem with the way we currently handle the evtchn_irq in
>>> this series.
>>>
>>> The short version is that I think we should configure the PPI
>>> corresponding to the evtchn_irq as EDGE instead of LEVEL.
>>
>> Well, that's really a separate problem, then. We can't just configure
>> the PPI at will, it has to match the device semantic.
>> When writing this patch, I checked how the evtchn "device" is
>> implemented, and it screams "level IRQ" to me:
>> - We have a flag (evtchn_upcall_pending), which stores the current
>> interrupt state.
>> - This flag gets set by the producer when the interrupt condition is true.
>> - It gets cleared by the *consumer* once it has handled the request.
>>
>> So if the event channel mechanism should be edge (which would be fair
>> enough), we need to change the code to implement this: the interrupt
>> condition should be cleared once we *injected* the IRQ - and not only
>> when the consumer has signalled completion.
>>
>> Another thing to consider: by the spec the *configurability* of PPIs is
>> implementation defined. The KVM implementation chose to fix all of them
>> to "level", which we need for the arch timer. So setting the evtchn PPI
>> to edge would be ignored. We could deviate from that, but I need to
>> check what the side effects are.
>>
>>> The long explanation follows, please correct me if I am wrong.
>>>
>>> 1) vcpuA/cpuA is running, it has already handled the event, cleared
>>> evtchn_upcall_pending and EOIed the event_irq but hasn't trapped into
>>> Xen yet. It is still in guest mode.
>>>
>>> 2) Xen on cpuB calls vcpu_mark_events_pending(vcpuA), then calls
>>> vgic_inject_irq. However, because irq->line_level is high, it is not
>>> injected.

I was just wondering if we could do something similar to the SBSA UART
change, where we amend vcpu_mark_events_pending() to update the line
level in the VGIC via vgic_inject_irq().

>> So this is a case where we fail to sync in time on the actual emulated
>> line level.

So to explain this:
The virtual line level is not in sync all of the time, as we don't get
signalled when the flag is cleared by the domain. We need to sync it
whenever needed, which, for one, is when the domain returns to the
hypervisor. Another such point is vcpu_mark_events_pending() here,
because there we need to know the state, and fortunately we have just
queried it. So I will include a change which updates this in the new VGIC.
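
Something of this shape (my guess at the change, not the final patch):

    void vcpu_mark_events_pending(struct vcpu *v)
    {
        /* Set the flag; the old value no longer short-circuits the VGIC. */
        set_bit(0, (unsigned long *)&vcpu_info(v, evtchn_upcall_pending));

        /* Always forward the known line state, so that irq->line_level
         * gets resynced even if the flag was already set. */
        vgic_inject_irq(v->domain, v, v->domain->arch.evtchn_irq, true);
    }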

>> KVM recently gained some nice code to solve this: We can
>> register per-IRQ functions that return the line level. For hardware
>> mapped IRQs this queries the distributor, but for the arch timer for
>> instance it just uses a shortcut to read CNTV_CTL_EL0.
>> The evtchn IRQ could just check evtchn_upcall_pending.
>>
>> I can take a look at a follow up patch to implement this.
> 
> I agree that the evtchn_upcall_pending mechanism is very similar to a
> level interrupt, however the mechanism to bring the notification to the
> guest is edge, even on x86.

How so?
In xen/arch/arm/domain.c I find:
> void vcpu_mark_events_pending(struct vcpu *v)
> {
>     int already_pending = test_and_set_bit(
>         0, (unsigned long *)&vcpu_info(v, evtchn_upcall_pending));
>
>     if ( already_pending )
>         return;

That smells very much like level to me.

>
>     vgic_inject_irq(v->domain, v, v->domain->arch.evtchn_irq, true);
> }


> Even with the new vgic implementation it
> falls very naturally in the edge pattern of behaviors. This is one of
> those cases where I would be happy to deviate from the KVM
> implementation, because it makes sense and would be easier to maintain
> going forward. We can even get rid of vcpu_update_evtchn_irq.

So I tried this:
- remove vcpu_update_evtchn_irq()
- change type to edge in the hypervisor node of the DT
- allow changing the edge/level configuration of PPIs
- (optionally) remove the special vcpu_mark_events_pending() handling
(the one quoted above)

This boots Dom0, and I see the event IRQ being edge now.
But it hangs at various places when booting DomUs, both with and without
the last point applied.
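
For reference, the trigger type for the DT change above lives in the
hypervisor node's interrupts property; the edge variant would look
roughly like this (fragment only; flag encoding per the standard GIC
binding, where 0xf08 is level-low and 0xf01 would be rising-edge):

    hypervisor {
        compatible = "xen,xen";
        /* PPI 31 (event channel); was <1 15 0xf08> for level low */
        interrupts = <1 15 0xf01>;
    };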

So from all I can see the event channel IRQ is really a level IRQ:
- We have a flag holding the *state*: edge IRQs are actually *events*.
- We set that flag in the hypervisor, but clear it in the domains.
- We don't inject new IRQs if that flag is still set.

A true edge IRQ mechanism would hand that event over to the interrupt
controller and then forget about it.

So I would like to *not* change this away from level this at this point:
- I don't have a working implementation of using edge IRQs.
- The existing level mechanism has shown no problems in weeks of testing.
- It's somewhat straining the architecture to allow configurable PPIs.

> I am OK with a not-ideal short term fix by the end of this week, as long
> as we come up with a proper fix later, by the release date. But in this
> instance, wouldn't it be enough to change the PPI type to EDGE, remove
> vcpu_update_evtchn_irq, and be done with it?

Apparently not, sorry.

Cheers,
Andre.

>>> 3) vcpuA has to wait until trapping into Xen, calling
>>> vcpu_update_evtchn_irq, and going back to guest mode before receiving
>>> the event. This is theoretically a very long time.
>>>
>>>
>>> Instead what should happen is:
>>>
>>> 1) vcpuA/cpuA is running, it has already handled the event, cleared
>>> evtchn_upcall_pending and EOIed the event_irq but hasn't trapped into
>>> Xen yet. It is still in guest mode.
>>>
>>> 2) Xen on cpuB calls vcpu_mark_events_pending(vcpuA), then calls
>>> vgic_inject_irq, which calls vgic_queue_irq_unlock that
>>> vcpu_kick(vcpuA), forcing it to take the event immediately.
>>>
>>> Am I right? Wouldn't it be safer to continue configuring the evtchn_irq
>>> as edge even in the new vgic?
>
Julien Grall April 3, 2018, 1:34 p.m. UTC | #8
Hi Stefano,

On 28/03/18 01:01, Stefano Stabellini wrote:
> On Wed, 21 Mar 2018, Andre Przywara wrote:
>> The event channel IRQ has level triggered semantics, however the current
>> VGIC treats everything as edge triggered.
>> To correctly process those IRQs, we have to lower the (virtual) IRQ line
>> at some point in time, depending on whether ther interrupt condition
>> still prevails.
>> Check the per-VCPU evtchn_upcall_pending variable to make the interrupt
>> line match its status, and call this function upon every hypervisor
>> entry.
>>
>> Signed-off-by: Andre Przywara <andre.przywara@linaro.org>
>> Reviewed-by: Julien Grall <julien.grall@arm.com>
>> ---
>>   xen/arch/arm/domain.c       | 7 +++++++
>>   xen/arch/arm/traps.c        | 1 +
>>   xen/include/asm-arm/event.h | 1 +
>>   3 files changed, 9 insertions(+)
>>
>> diff --git a/xen/arch/arm/domain.c b/xen/arch/arm/domain.c
>> index ff97f2bc76..9688e62f78 100644
>> --- a/xen/arch/arm/domain.c
>> +++ b/xen/arch/arm/domain.c
>> @@ -953,6 +953,13 @@ void vcpu_mark_events_pending(struct vcpu *v)
>>       vgic_inject_irq(v->domain, v, v->domain->arch.evtchn_irq, true);
>>   }
>>   
>> +void vcpu_update_evtchn_irq(struct vcpu *v)
>> +{
>> +    bool pending = vcpu_info(v, evtchn_upcall_pending);
>> +
>> +    vgic_inject_irq(v->domain, v, v->domain->arch.evtchn_irq, pending);
>> +}
>> +
>>   /* The ARM spec declares that even if local irqs are masked in
>>    * the CPSR register, an irq should wake up a cpu from WFI anyway.
>>    * For this reason we need to check for irqs that need delivery,
>> diff --git a/xen/arch/arm/traps.c b/xen/arch/arm/traps.c
>> index 2638446693..5c18e918b0 100644
>> --- a/xen/arch/arm/traps.c
>> +++ b/xen/arch/arm/traps.c
>> @@ -2033,6 +2033,7 @@ static void enter_hypervisor_head(struct cpu_user_regs *regs)
>>            * trap and how it can be optimised.
>>            */
>>           vtimer_update_irqs(current);
>> +        vcpu_update_evtchn_irq(current);
>>   #endif
> 
> I am replying to this patch, even though I have already committed it, to
> point out a problem with the way we currently handle the evtchn_irq in
> this series.
> 
> The short version is that I think we should configure the PPI
> corresponding to the evtchn_irq as EDGE instead of LEVEL.
> 
> The long explanation follows, please correct me if I am wrong.
> 
> 1) vcpuA/cpuA is running, it has already handled the event, cleared
> evtchn_upcall_pending and EOIed the event_irq but hasn't trapped into
> Xen yet. It is still in guest mode.
> 
> 2) Xen on cpuB calls vcpu_mark_events_pending(vcpuA), then calls
> vgic_inject_irq. However, because irq->line_level is high, it is not
> injected.
> 
> 3) vcpuA has to wait until trapping into Xen, calling
> vcpu_update_evtchn_irq, and going back to guest mode before receiving
> the event. This is theoretically a very long time.

Well, looking at the vGIC code, the EOI bit in the LR is set for level
interrupts. So as soon as vCPU A EOIs it, we would enter Xen and then
call vcpu_update_evtchn_irq.
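
In code terms this is roughly (simplified; the macro name follows
KVM's/Linux's headers, GICH_LR_EOI being bit 19 of a GICv2 LR when the
HW bit is clear - Xen's spelling may differ):

    /* When stuffing a level IRQ into a list register, request a
     * maintenance interrupt on EOI, so the hypervisor regains control. */
    if ( irq->config == VGIC_CONFIG_LEVEL )
        lr_val |= GICH_LR_EOI;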

So I don't see how it could take a very long time.

Cheers,

> 
> 
> Instead what should happen is:
> 
> 1) vcpuA/cpuA is running, it has already handled the event, cleared
> evtchn_upcall_pending and EOIed the event_irq but hasn't trapped into
> Xen yet. It is still in guest mode.
> 
> 2) Xen on cpuB calls vcpu_mark_events_pending(vcpuA), then calls
> vgic_inject_irq, which calls vgic_queue_irq_unlock that
> vcpu_kick(vcpuA), forcing it to take the event immediately.
> 
> Am I right? Wouldn't it be safer to continue configuring the evtchn_irq
> as edge even in the new vgic?
>

Patch

diff --git a/xen/arch/arm/domain.c b/xen/arch/arm/domain.c
index ff97f2bc76..9688e62f78 100644
--- a/xen/arch/arm/domain.c
+++ b/xen/arch/arm/domain.c
@@ -953,6 +953,13 @@  void vcpu_mark_events_pending(struct vcpu *v)
     vgic_inject_irq(v->domain, v, v->domain->arch.evtchn_irq, true);
 }
 
+void vcpu_update_evtchn_irq(struct vcpu *v)
+{
+    bool pending = vcpu_info(v, evtchn_upcall_pending);
+
+    vgic_inject_irq(v->domain, v, v->domain->arch.evtchn_irq, pending);
+}
+
 /* The ARM spec declares that even if local irqs are masked in
  * the CPSR register, an irq should wake up a cpu from WFI anyway.
  * For this reason we need to check for irqs that need delivery,
diff --git a/xen/arch/arm/traps.c b/xen/arch/arm/traps.c
index 2638446693..5c18e918b0 100644
--- a/xen/arch/arm/traps.c
+++ b/xen/arch/arm/traps.c
@@ -2033,6 +2033,7 @@  static void enter_hypervisor_head(struct cpu_user_regs *regs)
          * trap and how it can be optimised.
          */
         vtimer_update_irqs(current);
+        vcpu_update_evtchn_irq(current);
 #endif
 
         vgic_sync_from_lrs(current);
diff --git a/xen/include/asm-arm/event.h b/xen/include/asm-arm/event.h
index c7a415ef57..2f51864043 100644
--- a/xen/include/asm-arm/event.h
+++ b/xen/include/asm-arm/event.h
@@ -6,6 +6,7 @@ 
 
 void vcpu_kick(struct vcpu *v);
 void vcpu_mark_events_pending(struct vcpu *v);
+void vcpu_update_evtchn_irq(struct vcpu *v);
 void vcpu_block_unless_event_pending(struct vcpu *v);
 
 static inline int vcpu_event_delivery_is_enabled(struct vcpu *v)