From patchwork Wed Mar 21 16:32:16 2018
X-Patchwork-Submitter: Andre Przywara
X-Patchwork-Id: 132215
From: Andre Przywara
To: Julien Grall, Stefano Stabellini
Cc: xen-devel@lists.xenproject.org, Andre Przywara
Date: Wed, 21 Mar 2018 16:32:16 +0000
Message-Id: <20180321163235.12529-21-andre.przywara@linaro.org>
In-Reply-To: <20180321163235.12529-1-andre.przywara@linaro.org>
References: <20180321163235.12529-1-andre.przywara@linaro.org>
Subject: [Xen-devel] [PATCH v3 20/39] ARM: new VGIC: Add PENDING registers handlers

The pending register handlers are shared between the v2 and v3
emulation, so their implementation goes into vgic-mmio.c, to be easily
referenced from the v3 emulation as well later.

For level-triggered interrupts the real line level is unaffected by this
write, so we keep this state separate and combine it with the device's
level to get the actual pending state.

Hardware-mapped IRQs need some special handling, as their hardware state
has to be coordinated with the virtual pending bit to avoid hanging or
masked interrupts.

This is based on Linux commit 96b298000db4, written by Andre Przywara.

Signed-off-by: Andre Przywara
Reviewed-by: Julien Grall
---
 xen/arch/arm/vgic/vgic-mmio-v2.c |   4 +-
 xen/arch/arm/vgic/vgic-mmio.c    | 125 +++++++++++++++++++++++++++++++++++++++
 xen/arch/arm/vgic/vgic-mmio.h    |  11 ++++
 3 files changed, 138 insertions(+), 2 deletions(-)

diff --git a/xen/arch/arm/vgic/vgic-mmio-v2.c b/xen/arch/arm/vgic/vgic-mmio-v2.c
index 7efd1c4eb4..a48c554040 100644
--- a/xen/arch/arm/vgic/vgic-mmio-v2.c
+++ b/xen/arch/arm/vgic/vgic-mmio-v2.c
@@ -95,10 +95,10 @@ static const struct vgic_register_region vgic_v2_dist_registers[] = {
         vgic_mmio_read_enable, vgic_mmio_write_cenable, 1,
         VGIC_ACCESS_32bit),
     REGISTER_DESC_WITH_BITS_PER_IRQ(GICD_ISPENDR,
-        vgic_mmio_read_raz, vgic_mmio_write_wi, 1,
+        vgic_mmio_read_pending, vgic_mmio_write_spending, 1,
         VGIC_ACCESS_32bit),
     REGISTER_DESC_WITH_BITS_PER_IRQ(GICD_ICPENDR,
-        vgic_mmio_read_raz, vgic_mmio_write_wi, 1,
+        vgic_mmio_read_pending, vgic_mmio_write_cpending, 1,
         VGIC_ACCESS_32bit),
     REGISTER_DESC_WITH_BITS_PER_IRQ(GICD_ISACTIVER,
         vgic_mmio_read_raz, vgic_mmio_write_wi, 1,
diff --git a/xen/arch/arm/vgic/vgic-mmio.c b/xen/arch/arm/vgic/vgic-mmio.c
index f219b7c509..53b8978c02 100644
--- a/xen/arch/arm/vgic/vgic-mmio.c
+++ b/xen/arch/arm/vgic/vgic-mmio.c
@@ -156,6 +156,131 @@ void vgic_mmio_write_cenable(struct vcpu *vcpu,
     }
 }
 
+unsigned long vgic_mmio_read_pending(struct vcpu *vcpu,
+                                     paddr_t addr, unsigned int len)
+{
+    uint32_t intid = VGIC_ADDR_TO_INTID(addr, 1);
+    uint32_t value = 0;
+    unsigned int i;
+
+    /* Loop over all IRQs affected by this read
+     */
+    for ( i = 0; i < len * 8; i++ )
+    {
+        struct vgic_irq *irq = vgic_get_irq(vcpu->domain, vcpu, intid + i);
+
+        if ( irq_is_pending(irq) )
+            value |= (1U << i);
+
+        vgic_put_irq(vcpu->domain, irq);
+    }
+
+    return value;
+}
+
+void vgic_mmio_write_spending(struct vcpu *vcpu,
+                              paddr_t addr, unsigned int len,
+                              unsigned long val)
+{
+    uint32_t intid = VGIC_ADDR_TO_INTID(addr, 1);
+    unsigned int i;
+    unsigned long flags;
+    irq_desc_t *desc;
+
+    for_each_set_bit( i, &val, len * 8 )
+    {
+        struct vgic_irq *irq = vgic_get_irq(vcpu->domain, vcpu, intid + i);
+
+        spin_lock_irqsave(&irq->irq_lock, flags);
+        irq->pending_latch = true;
+
+        /* To observe the locking order, just take the irq_desc pointer here. */
+        if ( irq->hw )
+            desc = irq_to_desc(irq->hwintid);
+        else
+            desc = NULL;
+
+        vgic_queue_irq_unlock(vcpu->domain, irq, flags);
+
+        /*
+         * When the VM sets the pending state for a HW interrupt on the virtual
+         * distributor we set the active state on the physical distributor,
+         * because the virtual interrupt can become active and then the guest
+         * can deactivate it.
+         */
+        if ( desc )
+        {
+            spin_lock_irqsave(&desc->lock, flags);
+            spin_lock(&irq->irq_lock);
+
+            /* This h/w IRQ should still be assigned to the virtual IRQ. */
+            ASSERT(irq->hw && desc->irq == irq->hwintid);
+
+            gic_set_active_state(desc, true);
+
+            spin_unlock(&irq->irq_lock);
+            spin_unlock_irqrestore(&desc->lock, flags);
+        }
+
+        vgic_put_irq(vcpu->domain, irq);
+    }
+}
+
+void vgic_mmio_write_cpending(struct vcpu *vcpu,
+                              paddr_t addr, unsigned int len,
+                              unsigned long val)
+{
+    uint32_t intid = VGIC_ADDR_TO_INTID(addr, 1);
+    unsigned int i;
+    unsigned long flags;
+    irq_desc_t *desc;
+
+    for_each_set_bit( i, &val, len * 8 )
+    {
+        struct vgic_irq *irq = vgic_get_irq(vcpu->domain, vcpu, intid + i);
+
+        spin_lock_irqsave(&irq->irq_lock, flags);
+        irq->pending_latch = false;
+
+        /* To observe the locking order, just take the irq_desc pointer here.
+         */
+        if ( irq->hw )
+            desc = irq_to_desc(irq->hwintid);
+        else
+            desc = NULL;
+
+        spin_unlock_irqrestore(&irq->irq_lock, flags);
+
+        /*
+         * We don't want the guest to effectively mask the physical
+         * interrupt by doing a write to SPENDR followed by a write to
+         * CPENDR for HW interrupts, so we clear the active state on
+         * the physical side if the virtual interrupt is not active.
+         * This may lead to taking an additional interrupt on the
+         * host, but that should not be a problem as the worst that
+         * can happen is an additional vgic injection. We also clear
+         * the pending state to maintain proper semantics for edge HW
+         * interrupts.
+         */
+        if ( desc )
+        {
+            spin_lock_irqsave(&desc->lock, flags);
+            spin_lock(&irq->irq_lock);
+
+            /* This h/w IRQ should still be assigned to the virtual IRQ. */
+            ASSERT(irq->hw && desc->irq == irq->hwintid);
+
+            gic_set_pending_state(desc, false);
+            if ( !irq->active )
+                gic_set_active_state(desc, false);
+
+            spin_unlock(&irq->irq_lock);
+            spin_unlock_irqrestore(&desc->lock, flags);
+        }
+
+        vgic_put_irq(vcpu->domain, irq);
+    }
+}
+
 static int match_region(const void *key, const void *elt)
 {
     const unsigned int offset = (unsigned long)key;
diff --git a/xen/arch/arm/vgic/vgic-mmio.h b/xen/arch/arm/vgic/vgic-mmio.h
index a2cebd77f4..5c927f28b0 100644
--- a/xen/arch/arm/vgic/vgic-mmio.h
+++ b/xen/arch/arm/vgic/vgic-mmio.h
@@ -97,6 +97,17 @@ void vgic_mmio_write_cenable(struct vcpu *vcpu,
                              paddr_t addr, unsigned int len,
                              unsigned long val);
 
+unsigned long vgic_mmio_read_pending(struct vcpu *vcpu,
+                                     paddr_t addr, unsigned int len);
+
+void vgic_mmio_write_spending(struct vcpu *vcpu,
+                              paddr_t addr, unsigned int len,
+                              unsigned long val);
+
+void vgic_mmio_write_cpending(struct vcpu *vcpu,
+                              paddr_t addr, unsigned int len,
+                              unsigned long val);
+
 unsigned int vgic_v2_init_dist_iodev(struct vgic_io_device *dev);
 
 #endif