From patchwork Fri Mar 13 17:02:56 2015
Content-Type: text/plain; charset="utf-8"
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit
X-Patchwork-Submitter: Alex Bennée
X-Patchwork-Id: 45802
From: Alex Bennée <alex.bennee@linaro.org>
To: kvm@vger.kernel.org, linux-arm-kernel@lists.infradead.org,
	kvmarm@lists.cs.columbia.edu, christoffer.dall@linaro.org,
	marc.zyngier@arm.com
Cc: Alex Bennée, Gleb Natapov, Paolo Bonzini,
	linux-kernel@vger.kernel.org (open list)
Subject: [PATCH v3 5/5] arm/arm64: KVM: Keep elrsr/aisr in sync with software model
Date: Fri, 13 Mar 2015 17:02:56 +0000
Message-Id: <1426266176-14933-6-git-send-email-alex.bennee@linaro.org>
X-Mailer: git-send-email 2.3.2
In-Reply-To: <1426266176-14933-1-git-send-email-alex.bennee@linaro.org>
References: <1426266176-14933-1-git-send-email-alex.bennee@linaro.org>

From: Christoffer Dall <christoffer.dall@linaro.org>

There is an interesting bug in the vgic code, which manifests itself
when the KVM run loop has a signal pending or needs a vmid generation
rollover after having disabled interrupts but before actually switching
to the guest.

In this case, we flush the vgic as usual, but we sync back the vgic
state and exit to userspace before entering the guest.  The consequence
is that we will be syncing the list registers back to the software
model using the GICH_ELRSR and GICH_EISR from the last execution of the
guest, potentially overwriting a list register containing an interrupt.

This showed up during migration testing, where we would capture a state
in which the VM had masked the arch timer but no interrupts were
pending, resulting in a hung test.
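To make the failing ordering concrete, here is a condensed sketch of
the vcpu run loop described above (illustrative only, not a verbatim
copy of arch/arm/kvm/arm.c; error handling and the timer flush/sync
calls are omitted):

	/* Condensed sketch of the run loop; ret > 0 means "keep running". */
	while (ret > 0) {
		kvm_vgic_flush_hwstate(vcpu);	/* software model -> list registers */

		local_irq_disable();

		if (signal_pending(current) || need_new_vmid_gen(vcpu->kvm)) {
			local_irq_enable();
			/*
			 * The guest never ran, but this sync still reads the
			 * hardware GICH_ELRSR/GICH_EISR left over from the
			 * previous guest exit, and can overwrite a list
			 * register that was just loaded with a new interrupt.
			 */
			kvm_vgic_sync_hwstate(vcpu);
			continue;
		}

		ret = kvm_call_hyp(__kvm_vcpu_run, vcpu);	/* enter the guest */

		local_irq_enable();
		kvm_vgic_sync_hwstate(vcpu);	/* normal path: hardware state is fresh */
	}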
Cc: Marc Zyngier
Reported-by: Alex Bennee
Signed-off-by: Christoffer Dall
Signed-off-by: Alex Bennée
Acked-by: Marc Zyngier
---
diff --git a/include/kvm/arm_vgic.h b/include/kvm/arm_vgic.h
index 7042251..e2a676e 100644
--- a/include/kvm/arm_vgic.h
+++ b/include/kvm/arm_vgic.h
@@ -114,6 +114,7 @@ struct vgic_ops {
 	void	(*sync_lr_elrsr)(struct kvm_vcpu *, int, struct vgic_lr);
 	u64	(*get_elrsr)(const struct kvm_vcpu *vcpu);
 	u64	(*get_eisr)(const struct kvm_vcpu *vcpu);
+	void	(*clear_eisr)(struct kvm_vcpu *vcpu);
 	u32	(*get_interrupt_status)(const struct kvm_vcpu *vcpu);
 	void	(*enable_underflow)(struct kvm_vcpu *vcpu);
 	void	(*disable_underflow)(struct kvm_vcpu *vcpu);
diff --git a/virt/kvm/arm/vgic-v2.c b/virt/kvm/arm/vgic-v2.c
index a0a7b5d..f9b9c7c 100644
--- a/virt/kvm/arm/vgic-v2.c
+++ b/virt/kvm/arm/vgic-v2.c
@@ -72,6 +72,8 @@ static void vgic_v2_sync_lr_elrsr(struct kvm_vcpu *vcpu, int lr,
 {
 	if (!(lr_desc.state & LR_STATE_MASK))
 		vcpu->arch.vgic_cpu.vgic_v2.vgic_elrsr |= (1ULL << lr);
+	else
+		vcpu->arch.vgic_cpu.vgic_v2.vgic_elrsr &= ~(1ULL << lr);
 }
 
 static u64 vgic_v2_get_elrsr(const struct kvm_vcpu *vcpu)
@@ -84,6 +86,11 @@ static u64 vgic_v2_get_eisr(const struct kvm_vcpu *vcpu)
 	return vcpu->arch.vgic_cpu.vgic_v2.vgic_eisr;
 }
 
+static void vgic_v2_clear_eisr(struct kvm_vcpu *vcpu)
+{
+	vcpu->arch.vgic_cpu.vgic_v2.vgic_eisr = 0;
+}
+
 static u32 vgic_v2_get_interrupt_status(const struct kvm_vcpu *vcpu)
 {
 	u32 misr = vcpu->arch.vgic_cpu.vgic_v2.vgic_misr;
@@ -148,6 +155,7 @@ static const struct vgic_ops vgic_v2_ops = {
 	.sync_lr_elrsr		= vgic_v2_sync_lr_elrsr,
 	.get_elrsr		= vgic_v2_get_elrsr,
 	.get_eisr		= vgic_v2_get_eisr,
+	.clear_eisr		= vgic_v2_clear_eisr,
 	.get_interrupt_status	= vgic_v2_get_interrupt_status,
 	.enable_underflow	= vgic_v2_enable_underflow,
 	.disable_underflow	= vgic_v2_disable_underflow,
diff --git a/virt/kvm/arm/vgic-v3.c b/virt/kvm/arm/vgic-v3.c
index 3a62d8a..dff0602 100644
--- a/virt/kvm/arm/vgic-v3.c
+++ b/virt/kvm/arm/vgic-v3.c
@@ -104,6 +104,8 @@ static void vgic_v3_sync_lr_elrsr(struct kvm_vcpu *vcpu, int lr,
 {
 	if (!(lr_desc.state & LR_STATE_MASK))
 		vcpu->arch.vgic_cpu.vgic_v3.vgic_elrsr |= (1U << lr);
+	else
+		vcpu->arch.vgic_cpu.vgic_v3.vgic_elrsr &= ~(1U << lr);
 }
 
 static u64 vgic_v3_get_elrsr(const struct kvm_vcpu *vcpu)
@@ -116,6 +118,11 @@ static u64 vgic_v3_get_eisr(const struct kvm_vcpu *vcpu)
 	return vcpu->arch.vgic_cpu.vgic_v3.vgic_eisr;
 }
 
+static void vgic_v3_clear_eisr(struct kvm_vcpu *vcpu)
+{
+	vcpu->arch.vgic_cpu.vgic_v3.vgic_eisr = 0;
+}
+
 static u32 vgic_v3_get_interrupt_status(const struct kvm_vcpu *vcpu)
 {
 	u32 misr = vcpu->arch.vgic_cpu.vgic_v3.vgic_misr;
@@ -192,6 +199,7 @@ static const struct vgic_ops vgic_v3_ops = {
 	.sync_lr_elrsr		= vgic_v3_sync_lr_elrsr,
 	.get_elrsr		= vgic_v3_get_elrsr,
 	.get_eisr		= vgic_v3_get_eisr,
+	.clear_eisr		= vgic_v3_clear_eisr,
 	.get_interrupt_status	= vgic_v3_get_interrupt_status,
 	.enable_underflow	= vgic_v3_enable_underflow,
 	.disable_underflow	= vgic_v3_disable_underflow,
diff --git a/virt/kvm/arm/vgic.c b/virt/kvm/arm/vgic.c
index 22ff3bc..27ae3d4 100644
--- a/virt/kvm/arm/vgic.c
+++ b/virt/kvm/arm/vgic.c
@@ -979,6 +979,11 @@ static inline u64 vgic_get_eisr(struct kvm_vcpu *vcpu)
 	return vgic_ops->get_eisr(vcpu);
 }
 
+static inline void vgic_clear_eisr(struct kvm_vcpu *vcpu)
+{
+	vgic_ops->clear_eisr(vcpu);
+}
+
 static inline u32 vgic_get_interrupt_status(struct kvm_vcpu *vcpu)
 {
 	return vgic_ops->get_interrupt_status(vcpu);
@@ -1018,6 +1023,7 @@ static void vgic_retire_lr(int lr_nr, int irq, struct kvm_vcpu *vcpu)
 	vgic_set_lr(vcpu, lr_nr, vlr);
 	clear_bit(lr_nr, vgic_cpu->lr_used);
 	vgic_cpu->vgic_irq_lr_map[irq] = LR_EMPTY;
+	vgic_sync_lr_elrsr(vcpu, lr_nr, vlr);
 }
 
 /*
@@ -1062,6 +1068,7 @@ static void vgic_queue_irq_to_lr(struct kvm_vcpu *vcpu, int irq,
 		vlr.state |= LR_EOI_INT;
 
 	vgic_set_lr(vcpu, lr_nr, vlr);
+	vgic_sync_lr_elrsr(vcpu, lr_nr, vlr);
 }
 
 /*
@@ -1257,6 +1264,14 @@ static bool vgic_process_maintenance(struct kvm_vcpu *vcpu)
 	if (status & INT_STATUS_UNDERFLOW)
 		vgic_disable_underflow(vcpu);
 
+	/*
+	 * In the next iterations of the vcpu loop, if we sync the vgic state
+	 * after flushing it, but before entering the guest (this happens for
+	 * pending signals and vmid rollovers), then make sure we don't pick
+	 * up any old maintenance interrupts here.
+	 */
+	vgic_clear_eisr(vcpu);
+
 	return level_pending;
 }
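The shape of the fix is worth spelling out: every software write of a
list register is now immediately mirrored into the software elrsr copy
via vgic_sync_lr_elrsr(), and the cached eisr is cleared once the
maintenance handler has consumed it, so a sync-without-guest-entry can
no longer resurrect stale hardware state. The pairing the patch
establishes looks like this (set_lr_and_track is a hypothetical helper
shown only to make the invariant explicit; it is not part of the
patch):

	/* Hypothetical wrapper illustrating the invariant the patch enforces. */
	static void set_lr_and_track(struct kvm_vcpu *vcpu, int lr_nr,
				     struct vgic_lr vlr)
	{
		vgic_set_lr(vcpu, lr_nr, vlr);		/* write the LR via vgic_ops */
		vgic_sync_lr_elrsr(vcpu, lr_nr, vlr);	/* mirror into software ELRSR */
	}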