From patchwork Thu Nov 19 14:52:27 2015
X-Patchwork-Submitter: Auger Eric
X-Patchwork-Id: 57001
Delivered-To: patches@linaro.org
From: Eric Auger <eric.auger@linaro.org>
To: eric.auger@st.com, eric.auger@linaro.org, christoffer.dall@linaro.org,
	marc.zyngier@arm.com, linux-arm-kernel@lists.infradead.org,
	kvmarm@lists.cs.columbia.edu, kvm@vger.kernel.org
Cc: andre.przywara@arm.com, linux-kernel@vger.kernel.org, patches@linaro.org
Subject: [PATCH] KVM: arm/arm64: vgic: leave the LR active state on GICD_ICENABLERn access
Date: Thu, 19 Nov 2015 14:52:27 +0000
Message-Id: <1447944747-17689-1-git-send-email-eric.auger@linaro.org>
X-Mailer: git-send-email 1.9.1

Currently, on a clear-enable MMIO access we retire the corresponding LR
whatever its state. More precisely, we do not sync the ACTIVE state back;
we simply erase the LR state. In the case of a forwarded IRQ, the physical
IRQ source is erased as well, meaning the physical IRQ will never be
deactivated. In the case of a non-forwarded IRQ, the LR can be reused
(since its state was reset) and the guest can end up deactivating an IRQ
that is no longer recorded in the LR.

This patch adds a parameter to vgic_retire_lr that makes it possible to
select which LR states must be retired: unqueue retires/syncs all LRs,
while disable leaves the active LRs in place.
Signed-off-by: Eric Auger <eric.auger@linaro.org>
---
 virt/kvm/arm/vgic.c | 45 +++++++++++++++++++++++----------------------
 1 file changed, 23 insertions(+), 22 deletions(-)

-- 
1.9.1

diff --git a/virt/kvm/arm/vgic.c b/virt/kvm/arm/vgic.c
index 5335383..bc30d93 100644
--- a/virt/kvm/arm/vgic.c
+++ b/virt/kvm/arm/vgic.c
@@ -105,7 +105,7 @@
 #include "vgic.h"
 
 static void vgic_retire_disabled_irqs(struct kvm_vcpu *vcpu);
-static void vgic_retire_lr(int lr_nr, struct kvm_vcpu *vcpu);
+static void vgic_retire_lr(int lr_nr, struct kvm_vcpu *vcpu, unsigned state);
 static struct vgic_lr vgic_get_lr(const struct kvm_vcpu *vcpu, int lr);
 static void vgic_set_lr(struct kvm_vcpu *vcpu, int lr, struct vgic_lr lr_desc);
 static u64 vgic_get_elrsr(struct kvm_vcpu *vcpu);
@@ -713,18 +713,10 @@ void vgic_unqueue_irqs(struct kvm_vcpu *vcpu)
 		add_sgi_source(vcpu, lr.irq, lr.source);
 
 		/*
-		 * If the LR holds an active (10) or a pending and active (11)
-		 * interrupt then move the active state to the
-		 * distributor tracking bit.
+		 * retire pending, active, active and pending LR's and
+		 * sync their state back to the distributor
 		 */
-		if (lr.state & LR_STATE_ACTIVE)
-			vgic_irq_set_active(vcpu, lr.irq);
-
-		/*
-		 * Reestablish the pending state on the distributor and the
-		 * CPU interface and mark the LR as free for other use.
-		 */
-		vgic_retire_lr(i, vcpu);
+		vgic_retire_lr(i, vcpu, LR_STATE_ACTIVE | LR_STATE_PENDING);
 
 		/* Finally update the VGIC state. */
 		vgic_update_state(vcpu->kvm);
@@ -1077,22 +1069,25 @@ static inline void vgic_enable(struct kvm_vcpu *vcpu)
 	vgic_ops->enable(vcpu);
 }
 
-static void vgic_retire_lr(int lr_nr, struct kvm_vcpu *vcpu)
+static void vgic_retire_lr(int lr_nr, struct kvm_vcpu *vcpu, unsigned state)
 {
 	struct vgic_lr vlr = vgic_get_lr(vcpu, lr_nr);
 
-	vgic_irq_clear_queued(vcpu, vlr.irq);
+	if (vlr.state & LR_STATE_ACTIVE & state) {
+		vgic_irq_set_active(vcpu, vlr.irq);
+		vlr.state &= ~LR_STATE_ACTIVE;
+	}
 
-	/*
-	 * We must transfer the pending state back to the distributor before
-	 * retiring the LR, otherwise we may loose edge-triggered interrupts.
-	 */
-	if (vlr.state & LR_STATE_PENDING) {
+	if (vlr.state & LR_STATE_PENDING & state) {
 		vgic_dist_irq_set_pending(vcpu, vlr.irq);
-		vlr.hwirq = 0;
+		vlr.state &= ~LR_STATE_PENDING;
 	}
 
-	vlr.state = 0;
+	if (!(vlr.state & LR_STATE_MASK)) {
+		vlr.hwirq = 0;
+		vlr.state = 0;
+		vgic_irq_clear_queued(vcpu, vlr.irq);
+	}
 	vgic_set_lr(vcpu, lr_nr, vlr);
 }
 
@@ -1114,8 +1109,14 @@ static void vgic_retire_disabled_irqs(struct kvm_vcpu *vcpu)
 	for_each_clear_bit(lr, elrsr_ptr, vgic->nr_lr) {
 		struct vgic_lr vlr = vgic_get_lr(vcpu, lr);
 
+		/*
+		 * retire pending only LR's and sync their state
+		 * back to the distributor. Active LR's cannot be
+		 * retired since the guest will attempt to deactivate
+		 * the IRQ.
+		 */
 		if (!vgic_irq_is_enabled(vcpu, vlr.irq))
-			vgic_retire_lr(lr, vcpu);
+			vgic_retire_lr(lr, vcpu, LR_STATE_PENDING);
 	}
 }