From patchwork Fri May 20 13:53:30 2016
Content-Type: text/plain; charset="utf-8"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
X-Patchwork-Submitter: Christoffer Dall
X-Patchwork-Id: 68245
From: Christoffer Dall
To: kvmarm@lists.cs.columbia.edu, kvm@vger.kernel.org, linux-arm-kernel@lists.infradead.org
Subject: [PATCH v2] KVM: arm/arm64: vgic-new: Synchronize changes to active state
Date: Fri, 20 May 2016 15:53:30 +0200
Message-Id: <1463752410-3800-1-git-send-email-christoffer.dall@linaro.org>
X-Mailer: git-send-email 2.1.2.330.g565301e.dirty
In-Reply-To: <1463751027-17517-1-git-send-email-christoffer.dall@linaro.org>
References: <1463751027-17517-1-git-send-email-christoffer.dall@linaro.org>
Cc: Marc Zyngier, Andre Przywara, Christoffer Dall, Eric Auger

When modifying the active state of an interrupt via the MMIO interface,
we should ensure that the write has the intended effect.

If a guest sets an interrupt to active, but that interrupt is already
flushed into a list register on a running VCPU, then that VCPU will
write the active state back into the struct vgic_irq upon returning from
the guest and syncing its state.  This is a non-benign race, because the
guest can observe that an interrupt is not active, and it can have a
reasonable expectation that other VCPUs will not ack any IRQs, and then
set the state to active, and expect it to stay that way.  Currently we
are not honoring this case.

Therefore, change both the SACTIVE and CACTIVE mmio handlers to stop the
world, change the irq state, potentially queue the irq if we're setting
it to active, and then continue.

We take this chance to slightly optimize these functions by not stopping
the world when touching private interrupts where there is inherently no
possible race.

Signed-off-by: Christoffer Dall
---
Changes since v1:
 - Don't stop the world for private IRQs

 virt/kvm/arm/vgic/vgic-mmio.c | 87 ++++++++++++++++++++++++-------------------
 1 file changed, 48 insertions(+), 39 deletions(-)

--
2.1.2.330.g565301e.dirty

diff --git a/virt/kvm/arm/vgic/vgic-mmio.c b/virt/kvm/arm/vgic/vgic-mmio.c
index 4ef3571..b014c8c 100644
--- a/virt/kvm/arm/vgic/vgic-mmio.c
+++ b/virt/kvm/arm/vgic/vgic-mmio.c
@@ -173,6 +173,36 @@ unsigned long vgic_mmio_read_active(struct kvm_vcpu *vcpu,
 	return value;
 }
 
+static void vgic_mmio_change_active(struct kvm_vcpu *vcpu, struct vgic_irq *irq,
+				    bool new_active_state)
+{
+	spin_lock(&irq->irq_lock);
+	/*
+	 * If this virtual IRQ was written into a list register, we
+	 * have to make sure the CPU that runs the VCPU thread has
+	 * synced back LR state to the struct vgic_irq. We can only
+	 * know this for sure, when either this irq is not assigned to
+	 * anyone's AP list anymore, or the VCPU thread is not
+	 * running on any CPUs.
+	 *
+	 * In the opposite case, we know the VCPU thread may be on its
+	 * way back from the guest and still has to sync back this
+	 * IRQ, so we release and re-acquire the spin_lock to let the
+	 * other thread sync back the IRQ.
+	 */
+	while (irq->vcpu && /* IRQ may have state in an LR somewhere */
+	       irq->vcpu->cpu != -1) { /* VCPU thread is running */
+		BUG_ON(irq->intid < VGIC_NR_PRIVATE_IRQS);
+		cond_resched_lock(&irq->irq_lock);
+	}
+
+	irq->active = new_active_state;
+	if (new_active_state)
+		vgic_queue_irq_unlock(vcpu->kvm, irq);
+	else
+		spin_unlock(&irq->irq_lock);
+}
+
 void vgic_mmio_write_cactive(struct kvm_vcpu *vcpu,
 			     gpa_t addr, unsigned int len,
 			     unsigned long val)
@@ -180,32 +210,18 @@ void vgic_mmio_write_cactive(struct kvm_vcpu *vcpu,
 	u32 intid = VGIC_ADDR_TO_INTID(addr, 1);
 	int i;
 
-	kvm_arm_halt_guest(vcpu->kvm);
+	/* Only the VCPU itself can access its active state regs */
+	if (intid >= VGIC_NR_PRIVATE_IRQS)
+		kvm_arm_halt_guest(vcpu->kvm);
+
 	for_each_set_bit(i, &val, len * 8) {
 		struct vgic_irq *irq = vgic_get_irq(vcpu->kvm, vcpu, intid + i);
-
-		spin_lock(&irq->irq_lock);
-		/*
-		 * If this virtual IRQ was written into a list register, we
-		 * have to make sure the CPU that runs the VCPU thread has
-		 * synced back LR state to the struct vgic_irq. We can only
-		 * know this for sure, when either this irq is not assigned to
-		 * anyone's AP list anymore, or the VCPU thread is not
-		 * running on any CPUs.
-		 *
-		 * In the opposite case, we know the VCPU thread may be on its
-		 * way back from the guest and still has to sync back this
-		 * IRQ, so we release and re-acquire the spin_lock to let the
-		 * other thread sync back the IRQ.
-		 */
-		while (irq->vcpu && /* IRQ may have state in an LR somewhere */
-		       irq->vcpu->cpu != -1) /* VCPU thread is running */
-			cond_resched_lock(&irq->irq_lock);
-
-		irq->active = false;
-		spin_unlock(&irq->irq_lock);
+		vgic_mmio_change_active(vcpu, irq, false);
 	}
-	kvm_arm_resume_guest(vcpu->kvm);
+
+	/* Only the VCPU itself can access its active state regs */
+	if (intid >= VGIC_NR_PRIVATE_IRQS)
+		kvm_arm_resume_guest(vcpu->kvm);
 }
 
 void vgic_mmio_write_sactive(struct kvm_vcpu *vcpu,
@@ -215,25 +231,18 @@ void vgic_mmio_write_sactive(struct kvm_vcpu *vcpu,
 	u32 intid = VGIC_ADDR_TO_INTID(addr, 1);
 	int i;
 
+	/* Only the VCPU itself can access its active state regs */
+	if (intid >= VGIC_NR_PRIVATE_IRQS)
+		kvm_arm_halt_guest(vcpu->kvm);
+
 	for_each_set_bit(i, &val, len * 8) {
 		struct vgic_irq *irq = vgic_get_irq(vcpu->kvm, vcpu, intid + i);
-
-		spin_lock(&irq->irq_lock);
-
-		/*
-		 * If the IRQ was already active or there is no target VCPU
-		 * assigned at the moment, then just proceed.
-		 */
-		if (irq->active || !irq->target_vcpu) {
-			irq->active = true;
-
-			spin_unlock(&irq->irq_lock);
-			continue;
-		}
-
-		irq->active = true;
-		vgic_queue_irq_unlock(vcpu->kvm, irq);
+		vgic_mmio_change_active(vcpu, irq, true);
 	}
+
+	/* Only the VCPU itself can access its active state regs */
+	if (intid >= VGIC_NR_PRIVATE_IRQS)
+		kvm_arm_resume_guest(vcpu->kvm);
 }
 
 unsigned long vgic_mmio_read_priority(struct kvm_vcpu *vcpu,
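
For anyone less familiar with cond_resched_lock(), the snippet below is a
minimal user-space analogue of the wait loop in vgic_mmio_change_active()
above: it only illustrates the "drop and re-take the lock while waiting for
the other side to make progress" pattern, and is not part of the patch. The
fake_irq struct, the pthread mutex, sched_yield() and the owner_running flag
are stand-ins for the kernel's irq_lock, cond_resched_lock() and the
irq->vcpu->cpu != -1 check.

#include <pthread.h>
#include <sched.h>
#include <stdbool.h>
#include <stdio.h>

/* Stand-in for struct vgic_irq: one lock, one "owner is running" flag. */
struct fake_irq {
	pthread_mutex_t lock;
	bool owner_running;	/* plays the role of irq->vcpu->cpu != -1 */
	bool active;
};

static void change_active(struct fake_irq *irq, bool new_state)
{
	pthread_mutex_lock(&irq->lock);

	/*
	 * Wait for the owner to stop running.  Releasing the lock inside
	 * the loop is the point: it lets the owner acquire the lock and
	 * clear owner_running, just like cond_resched_lock() lets the
	 * VCPU thread sync back its list register state.
	 */
	while (irq->owner_running) {
		pthread_mutex_unlock(&irq->lock);
		sched_yield();
		pthread_mutex_lock(&irq->lock);
	}

	irq->active = new_state;
	pthread_mutex_unlock(&irq->lock);
}

int main(void)
{
	struct fake_irq irq = {
		.lock = PTHREAD_MUTEX_INITIALIZER,
		.owner_running = false,
	};

	change_active(&irq, true);
	printf("active = %d\n", irq.active);
	return 0;
}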