From patchwork Wed Mar 21 16:32:15 2018
X-Patchwork-Submitter: Andre Przywara
X-Patchwork-Id: 132210
From: Andre Przywara <andre.przywara@linaro.org>
To: Julien Grall, Stefano Stabellini
Cc: xen-devel@lists.xenproject.org, Andre Przywara
Date: Wed, 21 Mar 2018 16:32:15 +0000
Message-Id: <20180321163235.12529-20-andre.przywara@linaro.org>
In-Reply-To: <20180321163235.12529-1-andre.przywara@linaro.org>
References: <20180321163235.12529-1-andre.przywara@linaro.org>
X-Mailer: git-send-email 2.14.1
Subject: [Xen-devel] [PATCH v3 19/39] ARM: new VGIC: Add ENABLE registers handlers

As the enable register handlers are shared between the v2 and v3
emulation, their implementation goes into vgic-mmio.c, so that the v3
emulation can easily reference them later as well.

This introduces a vgic_sync_hardware_irq() function, which updates the
physical side of a hardware-mapped virtual IRQ. Because the established
locking order requires irq_desc->lock to be taken before
vgic_irq->irq_lock, we drop the irq_lock in the caller and retake both
locks in the proper order.

Signed-off-by: Andre Przywara
Reviewed-by: Julien Grall
Acked-by: Stefano Stabellini
---
Changelog v2 ... v3:
- fix indentation
- fix wording in comment
- add Reviewed-by:

Changelog v1 ... v2:
- ASSERT on h/w IRQ and vIRQ staying in sync

 xen/arch/arm/vgic/vgic-mmio-v2.c |   4 +-
 xen/arch/arm/vgic/vgic-mmio.c    | 117 +++++++++++++++++++++++++++++++++++++++
 xen/arch/arm/vgic/vgic-mmio.h    |  11 ++++
 xen/arch/arm/vgic/vgic.c         |  40 +++++++++++++
 xen/arch/arm/vgic/vgic.h         |   3 +
 5 files changed, 173 insertions(+), 2 deletions(-)
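For illustration only (not part of the submitted patch): the ENABLE
registers carry one bit per interrupt, so a 32-bit GICD_ISENABLERn or
GICD_ICENABLERn access at byte offset n*4 covers INTIDs 32*n to 32*n+31,
which is what the handlers below decode via VGIC_ADDR_TO_INTID(addr, 1).
The standalone C sketch below models that mapping under the assumption
that addr is the byte offset into the ISENABLER/ICENABLER range; the
helper names (isenabler_offset, isenabler_bit, first_intid) are made up
here, and the real VGIC_ADDR_TO_INTID() definition is not shown in this
patch.

/*
 * Standalone illustration (not part of the patch): how a guest's view of
 * GICD_ISENABLERn maps onto the INTIDs the handlers operate on. The GIC
 * architecture puts one enable bit per interrupt into 32-bit registers,
 * so register n covers INTIDs 32*n .. 32*n + 31. first_intid() models
 * the assumed semantics of VGIC_ADDR_TO_INTID(addr, 1).
 */
#include <inttypes.h>
#include <stdint.h>
#include <stdio.h>

/* Byte offset of the GICD_ISENABLERn register that contains "intid". */
static uint32_t isenabler_offset(uint32_t intid)
{
    return (intid / 32) * 4;        /* one 32-bit register per 32 interrupts */
}

/* Bit the guest sets in that register to enable "intid". */
static uint32_t isenabler_bit(uint32_t intid)
{
    return 1U << (intid % 32);
}

/* First INTID covered by an access at this offset, at 1 bit per IRQ. */
static uint32_t first_intid(uint32_t offset)
{
    return offset * 8;              /* 8 interrupts per byte */
}

int main(void)
{
    uint32_t intid = 40;            /* an arbitrary example SPI */
    uint32_t off = isenabler_offset(intid);

    printf("enable SPI %" PRIu32 ": write 0x%08" PRIx32
           " at ISENABLER offset 0x%" PRIx32 "\n",
           intid, isenabler_bit(intid), off);
    printf("handler decodes offset 0x%" PRIx32 " -> first INTID %" PRIu32
           ", bit %" PRIu32 " -> INTID %" PRIu32 "\n",
           off, first_intid(off), intid % 32, first_intid(off) + intid % 32);
    return 0;
}

Run, this prints that enabling SPI 40 means writing 0x00000100 at offset
0x4, which vgic_mmio_write_senable() would decode back to INTID 40 via
VGIC_ADDR_TO_INTID() plus the index of the set bit.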
diff --git a/xen/arch/arm/vgic/vgic-mmio-v2.c b/xen/arch/arm/vgic/vgic-mmio-v2.c
index 43c1ab5906..7efd1c4eb4 100644
--- a/xen/arch/arm/vgic/vgic-mmio-v2.c
+++ b/xen/arch/arm/vgic/vgic-mmio-v2.c
@@ -89,10 +89,10 @@ static const struct vgic_register_region vgic_v2_dist_registers[] = {
         vgic_mmio_read_rao, vgic_mmio_write_wi, 1,
         VGIC_ACCESS_32bit),
     REGISTER_DESC_WITH_BITS_PER_IRQ(GICD_ISENABLER,
-        vgic_mmio_read_raz, vgic_mmio_write_wi, 1,
+        vgic_mmio_read_enable, vgic_mmio_write_senable, 1,
         VGIC_ACCESS_32bit),
     REGISTER_DESC_WITH_BITS_PER_IRQ(GICD_ICENABLER,
-        vgic_mmio_read_raz, vgic_mmio_write_wi, 1,
+        vgic_mmio_read_enable, vgic_mmio_write_cenable, 1,
         VGIC_ACCESS_32bit),
     REGISTER_DESC_WITH_BITS_PER_IRQ(GICD_ISPENDR,
         vgic_mmio_read_raz, vgic_mmio_write_wi, 1,
diff --git a/xen/arch/arm/vgic/vgic-mmio.c b/xen/arch/arm/vgic/vgic-mmio.c
index a03e8d88b9..f219b7c509 100644
--- a/xen/arch/arm/vgic/vgic-mmio.c
+++ b/xen/arch/arm/vgic/vgic-mmio.c
@@ -39,6 +39,123 @@ void vgic_mmio_write_wi(struct vcpu *vcpu, paddr_t addr,
     /* Ignore */
 }
 
+/*
+ * Read accesses to both GICD_ICENABLER and GICD_ISENABLER return the value
+ * of the enabled bit, so there is only one function for both here.
+ */
+unsigned long vgic_mmio_read_enable(struct vcpu *vcpu,
+                                    paddr_t addr, unsigned int len)
+{
+    uint32_t intid = VGIC_ADDR_TO_INTID(addr, 1);
+    uint32_t value = 0;
+    unsigned int i;
+
+    /* Loop over all IRQs affected by this read */
+    for ( i = 0; i < len * 8; i++ )
+    {
+        struct vgic_irq *irq = vgic_get_irq(vcpu->domain, vcpu, intid + i);
+
+        if ( irq->enabled )
+            value |= (1U << i);
+
+        vgic_put_irq(vcpu->domain, irq);
+    }
+
+    return value;
+}
+
+void vgic_mmio_write_senable(struct vcpu *vcpu,
+                             paddr_t addr, unsigned int len,
+                             unsigned long val)
+{
+    uint32_t intid = VGIC_ADDR_TO_INTID(addr, 1);
+    unsigned int i;
+
+    for_each_set_bit( i, &val, len * 8 )
+    {
+        struct vgic_irq *irq = vgic_get_irq(vcpu->domain, vcpu, intid + i);
+        unsigned long flags;
+        irq_desc_t *desc;
+
+        spin_lock_irqsave(&irq->irq_lock, flags);
+
+        if ( irq->enabled )            /* skip already enabled IRQs */
+        {
+            spin_unlock_irqrestore(&irq->irq_lock, flags);
+            vgic_put_irq(vcpu->domain, irq);
+            continue;
+        }
+
+        irq->enabled = true;
+        if ( irq->hw )
+        {
+            /*
+             * The irq cannot be a PPI, we only support delivery
+             * of SPIs to guests.
+             */
+            ASSERT(irq->hwintid >= VGIC_NR_PRIVATE_IRQS);
+
+            desc = irq_to_desc(irq->hwintid);
+        }
+        else
+            desc = NULL;
+
+        vgic_queue_irq_unlock(vcpu->domain, irq, flags);
+
+        if ( desc )
+            vgic_sync_hardware_irq(vcpu->domain, desc, irq);
+
+        vgic_put_irq(vcpu->domain, irq);
+    }
+}
+
+void vgic_mmio_write_cenable(struct vcpu *vcpu,
+                             paddr_t addr, unsigned int len,
+                             unsigned long val)
+{
+    uint32_t intid = VGIC_ADDR_TO_INTID(addr, 1);
+    unsigned int i;
+
+    for_each_set_bit( i, &val, len * 8 )
+    {
+        struct vgic_irq *irq;
+        unsigned long flags;
+        irq_desc_t *desc;
+
+        irq = vgic_get_irq(vcpu->domain, vcpu, intid + i);
+        spin_lock_irqsave(&irq->irq_lock, flags);
+
+        if ( !irq->enabled )            /* skip already disabled IRQs */
+        {
+            spin_unlock_irqrestore(&irq->irq_lock, flags);
+            vgic_put_irq(vcpu->domain, irq);
+            continue;
+        }
+
+        irq->enabled = false;
+
+        if ( irq->hw )
+        {
+            /*
+             * The irq cannot be a PPI, we only support delivery
+             * of SPIs to guests.
+             */
+            ASSERT(irq->hwintid >= VGIC_NR_PRIVATE_IRQS);
+
+            desc = irq_to_desc(irq->hwintid);
+        }
+        else
+            desc = NULL;
+
+        spin_unlock_irqrestore(&irq->irq_lock, flags);
+
+        if ( desc )
+            vgic_sync_hardware_irq(vcpu->domain, desc, irq);
+
+        vgic_put_irq(vcpu->domain, irq);
+    }
+}
+
 static int match_region(const void *key, const void *elt)
 {
     const unsigned int offset = (unsigned long)key;
diff --git a/xen/arch/arm/vgic/vgic-mmio.h b/xen/arch/arm/vgic/vgic-mmio.h
index c280668694..a2cebd77f4 100644
--- a/xen/arch/arm/vgic/vgic-mmio.h
+++ b/xen/arch/arm/vgic/vgic-mmio.h
@@ -86,6 +86,17 @@ unsigned long vgic_mmio_read_rao(struct vcpu *vcpu,
 void vgic_mmio_write_wi(struct vcpu *vcpu, paddr_t addr,
                         unsigned int len, unsigned long val);
 
+unsigned long vgic_mmio_read_enable(struct vcpu *vcpu,
+                                    paddr_t addr, unsigned int len);
+
+void vgic_mmio_write_senable(struct vcpu *vcpu,
+                             paddr_t addr, unsigned int len,
+                             unsigned long val);
+
+void vgic_mmio_write_cenable(struct vcpu *vcpu,
+                             paddr_t addr, unsigned int len,
+                             unsigned long val);
+
 unsigned int vgic_v2_init_dist_iodev(struct vgic_io_device *dev);
 
 #endif
diff --git a/xen/arch/arm/vgic/vgic.c b/xen/arch/arm/vgic/vgic.c
index 37b425a16c..90041eb071 100644
--- a/xen/arch/arm/vgic/vgic.c
+++ b/xen/arch/arm/vgic/vgic.c
@@ -699,6 +699,46 @@ void vgic_kick_vcpus(struct domain *d)
     }
 }
 
+static unsigned int translate_irq_type(bool is_level)
+{
+    return is_level ? IRQ_TYPE_LEVEL_HIGH : IRQ_TYPE_EDGE_RISING;
+}
+
+void vgic_sync_hardware_irq(struct domain *d,
+                            irq_desc_t *desc, struct vgic_irq *irq)
+{
+    unsigned long flags;
+
+    spin_lock_irqsave(&desc->lock, flags);
+    spin_lock(&irq->irq_lock);
+
+    /*
+     * We forbid tinkering with the hardware IRQ association during
+     * a domain's lifetime.
+     */
+    ASSERT(irq->hw && desc->irq == irq->hwintid);
+
+    if ( irq->enabled )
+    {
+        /*
+         * We might end up here from various callers, so check that the
+         * interrupt is disabled before trying to change the config.
+         */
+        if ( irq_type_set_by_domain(d) &&
+             test_bit(_IRQ_DISABLED, &desc->status) )
+            gic_set_irq_type(desc, translate_irq_type(irq->config));
+
+        if ( irq->target_vcpu )
+            irq_set_affinity(desc, cpumask_of(irq->target_vcpu->processor));
+        desc->handler->enable(desc);
+    }
+    else
+        desc->handler->disable(desc);
+
+    spin_unlock(&irq->irq_lock);
+    spin_unlock_irqrestore(&desc->lock, flags);
+}
+
 /*
  * Local variables:
  * mode: C
diff --git a/xen/arch/arm/vgic/vgic.h b/xen/arch/arm/vgic/vgic.h
index aed7e4179a..071e061066 100644
--- a/xen/arch/arm/vgic/vgic.h
+++ b/xen/arch/arm/vgic/vgic.h
@@ -55,6 +55,9 @@ static inline void vgic_get_irq_kref(struct vgic_irq *irq)
     atomic_inc(&irq->refcount);
 }
 
+void vgic_sync_hardware_irq(struct domain *d,
+                            irq_desc_t *desc, struct vgic_irq *irq);
+
 void vgic_v2_fold_lr_state(struct vcpu *vcpu);
 void vgic_v2_populate_lr(struct vcpu *vcpu, struct vgic_irq *irq, int lr);
 void vgic_v2_set_underflow(struct vcpu *vcpu);
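For illustration only (not part of the patch): the locking order described
in the commit message, irq_desc->lock outer and vgic_irq->irq_lock inner,
is what vgic_sync_hardware_irq() above relies on. The minimal standalone
sketch below shows the drop-and-retake pattern in isolation, with pthread
mutexes standing in for Xen's spinlocks; all names in it (desc_lock,
irq_lock, sync_hardware_irq, handle_senable_write) are made up for the
example.

/*
 * Illustration only: the lock-ordering dance used by the ENABLE handlers.
 * "desc_lock" plays the role of irq_desc->lock and "irq_lock" that of
 * vgic_irq->irq_lock. The agreed order is desc_lock first, then irq_lock,
 * so a caller that already holds irq_lock must drop it before calling the
 * sync function, which then retakes both locks in the proper order.
 */
#include <pthread.h>
#include <stdbool.h>

static pthread_mutex_t desc_lock = PTHREAD_MUTEX_INITIALIZER;
static pthread_mutex_t irq_lock  = PTHREAD_MUTEX_INITIALIZER;
static bool irq_enabled;

/* Analogue of vgic_sync_hardware_irq(): takes both locks, outer first. */
static void sync_hardware_irq(void)
{
    pthread_mutex_lock(&desc_lock);     /* outer lock first */
    pthread_mutex_lock(&irq_lock);      /* then the per-IRQ lock */

    /* ... propagate irq_enabled to the (imaginary) physical side ... */

    pthread_mutex_unlock(&irq_lock);
    pthread_mutex_unlock(&desc_lock);
}

/* Analogue of the ISENABLER write path for one hardware-mapped IRQ. */
static void handle_senable_write(void)
{
    pthread_mutex_lock(&irq_lock);      /* update the virtual state ... */
    irq_enabled = true;
    pthread_mutex_unlock(&irq_lock);    /* ... then drop irq_lock ... */

    sync_hardware_irq();                /* ... so both can be retaken in order */
}

int main(void)
{
    handle_senable_write();
    return 0;
}

Keeping a single global order (desc_lock before irq_lock) is what lets the
MMIO handler, which starts out holding only the per-IRQ lock, reach the
hardware side without risking an ABBA deadlock.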