From patchwork Thu Apr 30 09:25:11 2015
X-Patchwork-Submitter: Jiri Slaby
X-Patchwork-Id: 47799
From: Jiri Slaby
To: stable@vger.kernel.org
Cc: Christoffer Dall, Marc Zyngier, Alex Bennée, Shannon Zhao, Jiri Slaby
Subject: [patch added to the 3.12 stable tree] arm/arm64: KVM: Keep elrsr/aisr in sync with software model
Date: Thu, 30 Apr 2015 11:25:11 +0200
Message-Id: <1430385911-20480-63-git-send-email-jslaby@suse.cz>
In-Reply-To: <1430385911-20480-1-git-send-email-jslaby@suse.cz>
References: <1430385911-20480-1-git-send-email-jslaby@suse.cz>
X-Mailer: git-send-email 2.3.5

From: Christoffer Dall

This patch has been added to the 3.12 stable tree. If you have any
objections, please let us know.

===============

commit ae705930fca6322600690df9dc1c7d0516145a93 upstream.

There is an interesting bug in the vgic code, which manifests itself when
the KVM run loop has a signal pending or needs a vmid generation rollover
after having disabled interrupts but before actually switching to the
guest.

In this case, we flush the vgic as usual, but we sync back the vgic state
and exit to userspace before entering the guest. The consequence is that
we will be syncing the list registers back to the software model using the
GICH_ELRSR and GICH_EISR from the last execution of the guest, potentially
overwriting a list register containing an interrupt.

This showed up during migration testing where we would capture a state
where the VM has masked the arch timer but there were no interrupts,
resulting in a hung test.
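
To make the ordering concrete, the flush-then-premature-sync sequence
described above can be modelled in a few lines of plain C. This is only an
illustrative userspace sketch: flush_vgic(), sync_vgic() and
elrsr_says_empty are invented names for the example, not the kernel's
actual functions or fields.

#include <stdbool.h>
#include <stdio.h>

#define LR_EMPTY   0
#define LR_PENDING 1

static int lr_state;           /* software copy of one hardware list register */
static bool elrsr_says_empty;  /* "LR is empty" status captured on the last guest exit */

/* Software model -> list register (grossly simplified). */
static void flush_vgic(void)
{
	lr_state = LR_PENDING;  /* queue an interrupt into the LR */
	/*
	 * Without the fix, elrsr_says_empty keeps the stale value read after
	 * the previous guest run.  The fix amounts to doing
	 * elrsr_says_empty = false; here (__clear_bit() in the patch below).
	 */
}

/* List register -> software model (grossly simplified). */
static void sync_vgic(void)
{
	if (elrsr_says_empty)
		lr_state = LR_EMPTY;  /* the queued interrupt is silently dropped */
}

int main(void)
{
	elrsr_says_empty = true;  /* stale status from the last time the guest ran */

	flush_vgic();
	/*
	 * A pending signal or a vmid rollover makes the run loop bail out
	 * here, before ever entering the guest, so the ELRSR/EISR values are
	 * never refreshed by hardware ...
	 */
	sync_vgic();

	printf("list register after premature sync: %s\n",
	       lr_state == LR_PENDING ? "still pending" : "lost");
	return 0;
}

With the hunks below applied, the equivalent of elrsr_says_empty is
cleared as soon as an interrupt is queued into a list register, so a sync
that happens before the guest runs can no longer wipe it out.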
Cc: Marc Zyngier
Reported-by: Alex Bennee
Signed-off-by: Christoffer Dall
Signed-off-by: Alex Bennée
Acked-by: Marc Zyngier
Signed-off-by: Shannon Zhao
Signed-off-by: Jiri Slaby
---
 virt/kvm/arm/vgic.c | 10 ++++++++++
 1 file changed, 10 insertions(+)

diff --git a/virt/kvm/arm/vgic.c b/virt/kvm/arm/vgic.c
index 865a89178c82..ecea20153b42 100644
--- a/virt/kvm/arm/vgic.c
+++ b/virt/kvm/arm/vgic.c
@@ -881,6 +881,7 @@ static bool vgic_queue_irq(struct kvm_vcpu *vcpu, u8 sgi_source_id, int irq)
 			  lr, irq, vgic_cpu->vgic_lr[lr]);
 		BUG_ON(!test_bit(lr, vgic_cpu->lr_used));
 		vgic_cpu->vgic_lr[lr] |= GICH_LR_PENDING_BIT;
+		__clear_bit(lr, (unsigned long *)vgic_cpu->vgic_elrsr);
 		return true;
 	}
 
@@ -894,6 +895,7 @@ static bool vgic_queue_irq(struct kvm_vcpu *vcpu, u8 sgi_source_id, int irq)
 	vgic_cpu->vgic_lr[lr] = MK_LR_PEND(sgi_source_id, irq);
 	vgic_cpu->vgic_irq_lr_map[irq] = lr;
 	set_bit(lr, vgic_cpu->lr_used);
+	__clear_bit(lr, (unsigned long *)vgic_cpu->vgic_elrsr);
 
 	if (!vgic_irq_is_edge(vcpu, irq))
 		vgic_cpu->vgic_lr[lr] |= GICH_LR_EOI;
@@ -1048,6 +1050,14 @@ static bool vgic_process_maintenance(struct kvm_vcpu *vcpu)
 	if (vgic_cpu->vgic_misr & GICH_MISR_U)
 		vgic_cpu->vgic_hcr &= ~GICH_HCR_UIE;
 
+	/*
+	 * In the next iterations of the vcpu loop, if we sync the vgic state
+	 * after flushing it, but before entering the guest (this happens for
+	 * pending signals and vmid rollovers), then make sure we don't pick
+	 * up any old maintenance interrupts here.
+	 */
+	memset(vgic_cpu->vgic_eisr, 0, sizeof(vgic_cpu->vgic_eisr[0]) * 2);
+
 	return level_pending;
 }
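
A short note on the mechanics of the fix: vgic_elrsr and vgic_eisr are
arrays of two 32-bit words, so the patch either treats them as one bitmap
(the (unsigned long *) cast handed to __clear_bit()) or wipes both words
at once (the memset over sizeof(vgic_cpu->vgic_eisr[0]) * 2 bytes). The
standalone sketch below shows the same two-word bitmap idiom in plain C;
clear_bit_in_words() is a hypothetical stand-in for __clear_bit(), not a
kernel API, and the initial register values are made up.

#include <inttypes.h>
#include <stdint.h>
#include <stdio.h>
#include <string.h>

/*
 * Clear bit 'bit' in a bitmap stored as consecutive 32-bit words
 * (a little-endian stand-in for the kernel's __clear_bit()).
 */
static void clear_bit_in_words(unsigned int bit, uint32_t *words)
{
	words[bit / 32] &= ~(UINT32_C(1) << (bit % 32));
}

int main(void)
{
	uint32_t elrsr[2] = { 0xffffffff, 0xffffffff };  /* "every LR is empty" */
	uint32_t eisr[2]  = { 0x00000001, 0x00000000 };  /* stale maintenance status */

	/* LR 3 now holds an interrupt, so it must no longer read as empty. */
	clear_bit_in_words(3, elrsr);
	printf("elrsr = %08" PRIx32 " %08" PRIx32 "\n", elrsr[0], elrsr[1]);

	/* Equivalent of the patch's memset: forget any old EISR bits. */
	memset(eisr, 0, sizeof(eisr[0]) * 2);
	printf("eisr  = %08" PRIx32 " %08" PRIx32 "\n", eisr[0], eisr[1]);

	return 0;
}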