
[for,3.14.y,stable,47/47] arm/arm64: KVM: Keep elrsr/aisr in sync with software model

Message ID: 1430704362-6292-48-git-send-email-shannon.zhao@linaro.org
State: New

Commit Message

Shannon Zhao May 4, 2015, 1:52 a.m. UTC
From: Christoffer Dall <christoffer.dall@linaro.org>

commit ae705930fca6322600690df9dc1c7d0516145a93 upstream.

There is an interesting bug in the vgic code, which manifests itself
when the KVM run loop has a signal pending or needs a vmid generation
rollover after having disabled interrupts but before actually switching
to the guest.

In this case, we flush the vgic as usual, but we sync back the vgic
state and exit to userspace before entering the guest.  The consequence
is that we will be syncing the list registers back to the software model
using the GICH_ELRSR and GICH_EISR from the last execution of the guest,
potentially overwriting a list register containing an interrupt.

This showed up during migration testing where we would capture a state
where the VM has masked the arch timer but there were no interrupts,
resulting in a hung test.

Cc: Marc Zyngier <marc.zyngier@arm.com>
Reported-by: Alex Bennee <alex.bennee@linaro.org>
Signed-off-by: Christoffer Dall <christoffer.dall@linaro.org>
Signed-off-by: Alex Bennée <alex.bennee@linaro.org>
Acked-by: Marc Zyngier <marc.zyngier@arm.com>
Signed-off-by: Shannon Zhao <shannon.zhao@linaro.org>
---
 virt/kvm/arm/vgic.c | 10 ++++++++++
 1 file changed, 10 insertions(+)
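
The bug described in the commit message above boils down to an ordering problem in the vcpu run loop. The sketch below is a simplified illustration of that flow (the function names follow the 3.14-era arch/arm/kvm/arm.c run loop, but this is not the exact kernel source): when a pending signal or a vmid-generation rollover aborts the entry after the flush, the subsequent sync still consumes the elrsr/eisr values saved at the previous guest exit.

/*
 * Simplified sketch of the vcpu run loop ordering that triggers the bug.
 * Names follow the 3.14-era arch/arm/kvm/arm.c flow; this is an
 * illustration, not the exact kernel source.
 */
static void vcpu_run_iteration(struct kvm_vcpu *vcpu)
{
	/* Software model -> list registers */
	kvm_vgic_flush_hwstate(vcpu);

	local_irq_disable();

	if (signal_pending(current) || need_new_vmid_gen(vcpu->kvm)) {
		local_irq_enable();
		/*
		 * The guest never ran, so vgic_cpu->vgic_elrsr and
		 * vgic_cpu->vgic_eisr still hold the values saved at the
		 * *previous* guest exit.  Without this patch, the sync
		 * below treats a list register populated by the flush
		 * above as empty and drops the queued interrupt.
		 */
		kvm_vgic_sync_hwstate(vcpu);
		return;
	}

	/* ... enter the guest, exit, then sync against fresh hw state ... */
}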

Comments

Christoffer Dall May 11, 2015, 6:44 p.m. UTC | #1
Hi Greg,

On Mon, May 11, 2015 at 08:17:07AM -0700, Greg KH wrote:
> On Mon, May 04, 2015 at 09:52:42AM +0800, shannon.zhao@linaro.org wrote:
> > From: Christoffer Dall <christoffer.dall@linaro.org>
> > 
> > commit ae705930fca6322600690df9dc1c7d0516145a93 upstream.
> 
> No, that's not what the patch below really is.
> 
> Do I have to go back by hand and verify each one of these really is the
> patch you say it is?  That's a major pain...
> 
This is a backport of the referenced commit, but it couldn't be applied
directly because of the churn in the vgic code.

I believed that the commit X upstream notation would indicate the
equivalent fix upstream, not the *exact* commit for the relevant stable
kernel.

Apologies if that was an incorrect assumption.  I believe this is the
only patch which was significantly rewritten because of the churn in the
vgic code and enough users are seeing this in various distro kernels
that I figured it was important to refactor and backport.

What would you like me to do with this patch?

Note that I didn't understand that this is wrong from reading
Documentation/stable_kernel_rules.txt, which may just be because I'm
being stupid.  Is the procedure that we're violating documented
somewhere?

Thanks,
-Christoffer
--

Patch

diff --git a/virt/kvm/arm/vgic.c b/virt/kvm/arm/vgic.c
index c324a52..152ec76 100644
--- a/virt/kvm/arm/vgic.c
+++ b/virt/kvm/arm/vgic.c
@@ -1042,6 +1042,7 @@ static bool vgic_queue_irq(struct kvm_vcpu *vcpu, u8 sgi_source_id, int irq)
 			  lr, irq, vgic_cpu->vgic_lr[lr]);
 		BUG_ON(!test_bit(lr, vgic_cpu->lr_used));
 		vgic_cpu->vgic_lr[lr] |= GICH_LR_PENDING_BIT;
+		__clear_bit(lr, (unsigned long *)vgic_cpu->vgic_elrsr);
 		return true;
 	}
 
@@ -1055,6 +1056,7 @@ static bool vgic_queue_irq(struct kvm_vcpu *vcpu, u8 sgi_source_id, int irq)
 	vgic_cpu->vgic_lr[lr] = MK_LR_PEND(sgi_source_id, irq);
 	vgic_cpu->vgic_irq_lr_map[irq] = lr;
 	set_bit(lr, vgic_cpu->lr_used);
+	__clear_bit(lr, (unsigned long *)vgic_cpu->vgic_elrsr);
 
 	if (!vgic_irq_is_edge(vcpu, irq))
 		vgic_cpu->vgic_lr[lr] |= GICH_LR_EOI;
@@ -1209,6 +1211,14 @@ static bool vgic_process_maintenance(struct kvm_vcpu *vcpu)
 	if (vgic_cpu->vgic_misr & GICH_MISR_U)
 		vgic_cpu->vgic_hcr &= ~GICH_HCR_UIE;
 
+	/*
+	 * In the next iterations of the vcpu loop, if we sync the vgic state
+	 * after flushing it, but before entering the guest (this happens for
+	 * pending signals and vmid rollovers), then make sure we don't pick
+	 * up any old maintenance interrupts here.
+	 */
+	memset(vgic_cpu->vgic_eisr, 0, sizeof(vgic_cpu->vgic_eisr[0]) * 2);
+
 	return level_pending;
 }
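
For context on why clearing those bits matters, the sketch below illustrates, in simplified form, how the sync path consumes vgic_elrsr: every set bit is taken to mean the corresponding list register is empty, so a stale bit left over from the previous exit tears down a mapping that vgic_queue_irq() has just created. Field names follow the 3.14-era virt/kvm/arm/vgic.c structures, but this is an illustration, not the exact source.

/*
 * Simplified sketch of the vgic_elrsr consumer on the sync path.  Field
 * names follow the 3.14-era virt/kvm/arm/vgic.c structures; this is an
 * illustration, not the exact kernel source.
 */
static void sync_empty_lrs(struct kvm_vcpu *vcpu)
{
	struct vgic_cpu *vgic_cpu = &vcpu->arch.vgic_cpu;
	int lr;

	/* Every bit set in vgic_elrsr marks a list register believed empty. */
	for_each_set_bit(lr, (unsigned long *)vgic_cpu->vgic_elrsr,
			 vgic_cpu->nr_lr) {
		int irq;

		if (!test_and_clear_bit(lr, vgic_cpu->lr_used))
			continue;

		/*
		 * With a stale vgic_elrsr this discards an interrupt that
		 * was queued after the last exit -- which is why the patch
		 * clears the bit whenever an LR is (re)populated.
		 */
		irq = vgic_cpu->vgic_lr[lr] & GICH_LR_VIRTUALID;
		vgic_cpu->vgic_irq_lr_map[irq] = LR_EMPTY;
	}
}

Along the same lines, the memset of vgic_eisr in vgic_process_maintenance() keeps a later sync, performed without the guest having run, from acting on end-of-interrupt maintenance state left over from the previous exit.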