Message ID | 20181204202651.8836-13-julien.grall@arm.com
---|---
State | New
Series | xen/arm: Implement Set/Way operations
On Tue, 4 Dec 2018, Julien Grall wrote:
> The function leave_hypervisor_tail is called before each return to the
> guest vCPU. It has two main purposes:
>  1) Process physical CPU work (e.g. rescheduling) if required
>  2) Prepare the physical CPU to run the guest vCPU
>
> 2) is always done once the physical CPU work has been processed. At the
> moment, it happens as part of the last iteration of 1), which adds
> extra indentation to the code.
>
> This can be streamlined by moving 2) out of the loop. At the same
> time, 1) is moved into a separate function, making the structure more
> obvious.
>
> All these changes will help a follow-up patch that introduces some
> vCPU work before returning to the guest vCPU.
>
> Signed-off-by: Julien Grall <julien.grall@arm.com>

Reviewed-by: Stefano Stabellini <sstabellini@kernel.org>

> ---
>     Changes in v2:
>         - Patch added
> ---
>  xen/arch/arm/traps.c | 61 ++++++++++++++++++++++++++++------------------------
>  1 file changed, 33 insertions(+), 28 deletions(-)
>
> diff --git a/xen/arch/arm/traps.c b/xen/arch/arm/traps.c
> index b00d0b8e1e..02665cc7b4 100644
> --- a/xen/arch/arm/traps.c
> +++ b/xen/arch/arm/traps.c
> @@ -2241,36 +2241,12 @@ void do_trap_fiq(struct cpu_user_regs *regs)
>      gic_interrupt(regs, 1);
>  }
>
> -void leave_hypervisor_tail(void)
> +static void check_for_pcpu_work(void)
>  {
> -    while (1)
> -    {
> -        local_irq_disable();
> -        if ( !softirq_pending(smp_processor_id()) )
> -        {
> -            vgic_sync_to_lrs();
> -
> -            /*
> -             * If the SErrors handle option is "DIVERSE", we have to prevent
> -             * slipping the hypervisor SError to guest. In this option, before
> -             * returning from trap, we have to synchronize SErrors to guarantee
> -             * that the pending SError would be caught in hypervisor.
> -             *
> -             * If option is NOT "DIVERSE", SKIP_SYNCHRONIZE_SERROR_ENTRY_EXIT
> -             * will be set to cpu_hwcaps. This means we can use the alternative
> -             * to skip synchronizing SErrors for other SErrors handle options.
> -             */
> -            SYNCHRONIZE_SERROR(SKIP_SYNCHRONIZE_SERROR_ENTRY_EXIT);
> -
> -            /*
> -             * The hypervisor runs with the workaround always present.
> -             * If the guest wants it disabled, so be it...
> -             */
> -            if ( needs_ssbd_flip(current) )
> -                arm_smccc_1_1_smc(ARM_SMCCC_ARCH_WORKAROUND_2_FID, 0, NULL);
> +    ASSERT(!local_irq_is_enabled());
>
> -            return;
> -        }
> +    while ( softirq_pending(smp_processor_id()) )
> +    {
>          local_irq_enable();
>          do_softirq();
>          /*
> @@ -2278,9 +2254,38 @@ void leave_hypervisor_tail(void)
>           * and we want to patch the hypervisor with almost no stack.
>           */
>          check_for_livepatch_work();
> +        local_irq_disable();
>      }
>  }
>
> +void leave_hypervisor_tail(void)
> +{
> +    local_irq_disable();
> +
> +    check_for_pcpu_work();
> +
> +    vgic_sync_to_lrs();
> +
> +    /*
> +     * If the SErrors handle option is "DIVERSE", we have to prevent
> +     * slipping the hypervisor SError to guest. In this option, before
> +     * returning from trap, we have to synchronize SErrors to guarantee
> +     * that the pending SError would be caught in hypervisor.
> +     *
> +     * If option is NOT "DIVERSE", SKIP_SYNCHRONIZE_SERROR_ENTRY_EXIT
> +     * will be set to cpu_hwcaps. This means we can use the alternative
> +     * to skip synchronizing SErrors for other SErrors handle options.
> +     */
> +    SYNCHRONIZE_SERROR(SKIP_SYNCHRONIZE_SERROR_ENTRY_EXIT);
> +
> +    /*
> +     * The hypervisor runs with the workaround always present.
> +     * If the guest wants it disabled, so be it...
> +     */
> +    if ( needs_ssbd_flip(current) )
> +        arm_smccc_1_1_smc(ARM_SMCCC_ARCH_WORKAROUND_2_FID, 0, NULL);
> +}
> +
>  /*
>   * Local variables:
>   * mode: C
> --
> 2.11.0
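As background on the SYNCHRONIZE_SERROR() invocation being moved: conceptually it is a DSB/ISB barrier pair that forces any pending SError to be taken while execution is still in the hypervisor, and the alternatives framework patches it out when SKIP_SYNCHRONIZE_SERROR_ENTRY_EXIT is set in cpu_hwcaps. A simplified sketch of the idea, not Xen's exact definition:

    /*
     * Simplified sketch only; the real macro lives in Xen's Arm headers
     * and may differ in detail.
     */
    #define SYNCHRONIZE_SERROR(feat)                                        \
        asm volatile(ALTERNATIVE("dsb sy; isb", /* take pending SError now */ \
                                 "nop; nop",    /* patched in when 'feat' is set */ \
                                 feat) : : : "memory")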
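For readers following the series: the follow-up patch mentioned in the commit message hooks per-vCPU work into this same path. A minimal sketch of the intended shape, where check_for_vcpu_work() is a hook name chosen here purely for illustration:

    void leave_hypervisor_tail(void)
    {
        /* Both checks run with interrupts disabled. */
        local_irq_disable();

        /* 1) Physical CPU work: drain softirqs, livepatching, ... */
        check_for_pcpu_work();

        /* Illustrative follow-up hook: deferred per-vCPU work before
         * re-entering the guest. */
        check_for_vcpu_work();

        /* 2) Prepare the physical CPU for the guest vCPU: sync the vGIC
         * LRs, synchronize SErrors, handle SSBD, as in the patch above. */
        vgic_sync_to_lrs();
        /* ... */
    }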