From patchwork Thu Mar 22 11:56:48 2018
X-Patchwork-Submitter: Andre Przywara
X-Patchwork-Id: 132266
From: Andre Przywara <andre.przywara@linaro.org>
To: Julien Grall, Stefano Stabellini
Cc: xen-devel@lists.xenproject.org
Date: Thu, 22 Mar 2018 11:56:48 +0000
Message-Id: <20180322115649.5283-3-andre.przywara@linaro.org>
X-Mailer: git-send-email 2.14.1
In-Reply-To: <20180322115649.5283-1-andre.przywara@linaro.org>
References: <20180322115649.5283-1-andre.przywara@linaro.org>
Subject: [Xen-devel] [PATCH v3a 14/39] ARM: new VGIC: Add GICv2 world switch backend

Processing maintenance interrupts and accessing the list registers
depend on the host's GIC version. Introduce vgic-v2.c to contain the
GICv2-specific functions and implement the GICv2-specific code for
syncing the emulation state into the VGIC list registers. This also
adds the hook that lets Xen set up the host GIC addresses.

This is based on Linux commit 140b086dd197, written by Marc Zyngier.

Signed-off-by: Andre Przywara <andre.przywara@linaro.org>
Acked-by: Julien Grall
---
diff --git a/xen/arch/arm/vgic/vgic-v2.c b/xen/arch/arm/vgic/vgic-v2.c
new file mode 100644
--- /dev/null
+++ b/xen/arch/arm/vgic/vgic-v2.c
+ */
+
+#include <asm/new_vgic.h>
+#include <asm/bug.h>
+#include <asm/gic.h>
+#include <xen/sched.h>
+#include <xen/sizes.h>
+
+#include "vgic.h"
+
+static struct {
+    bool enabled;
+    paddr_t dbase;          /* Distributor interface address */
+    paddr_t cbase;          /* CPU interface address & size */
+    paddr_t csize;
+    paddr_t vbase;          /* Virtual CPU interface address */
+
+    /* Offset to add to get an 8kB contiguous region if GIC is aliased */
+    uint32_t aliased_offset;
+} gic_v2_hw_data;
+
+void vgic_v2_setup_hw(paddr_t dbase, paddr_t cbase, paddr_t csize,
+                      paddr_t vbase, uint32_t aliased_offset)
+{
+    gic_v2_hw_data.enabled = true;
+    gic_v2_hw_data.dbase = dbase;
+    gic_v2_hw_data.cbase = cbase;
+    gic_v2_hw_data.csize = csize;
+    gic_v2_hw_data.vbase = vbase;
+    gic_v2_hw_data.aliased_offset = aliased_offset;
+
+    printk("Using the new VGIC implementation.\n");
+}
+
+/*
+ * transfer the content of the LRs back into the corresponding ap_list:
+ * - active bit is transferred as is
+ * - pending bit is
+ *   - transferred as is in case of edge sensitive IRQs
+ *   - set to the line-level (resample time) for level sensitive IRQs
+ */
+void vgic_v2_fold_lr_state(struct vcpu *vcpu)
+{
+    struct vgic_cpu *vgic_cpu = &vcpu->arch.vgic;
+    unsigned int used_lrs = vcpu->arch.vgic.used_lrs;
+    unsigned long flags;
+    unsigned int lr;
+
+    if ( !used_lrs )    /* No LRs used, so nothing to sync back here. */
+        return;
+
+    gic_hw_ops->update_hcr_status(GICH_HCR_UIE, false);
+
+    for ( lr = 0; lr < used_lrs; lr++ )
+    {
+        struct gic_lr lr_val;
+        uint32_t intid;
+        struct vgic_irq *irq;
+        struct irq_desc *desc = NULL;
+        bool have_desc_lock = false;
+
+        gic_hw_ops->read_lr(lr, &lr_val);
+
+        /*
+         * TODO: Possible optimization to avoid reading LRs:
+         * Read the ELRSR to find out which of our LRs have been cleared
+         * by the guest. We just need to know the IRQ number for those,
+         * which we could save in an array when populating the LRs.
+         * This trades one MMIO access (ELRSR) for possibly more than one
+         * (LRs), but requires some more code to save the IRQ number and
+         * to handle those finished IRQs according to the algorithm below.
+         * We need some numbers to justify this: chances are that we don't
+         * have many LRs in use most of the time, so we might not save much.
+         */
+        gic_hw_ops->clear_lr(lr);
+
+        intid = lr_val.virq;
+        irq = vgic_get_irq(vcpu->domain, vcpu, intid);
+
+        local_irq_save(flags);
+        spin_lock(&irq->irq_lock);
+
+        /* The locking order forces us to drop and re-take the locks here. */
+        if ( irq->hw )
+        {
+            spin_unlock(&irq->irq_lock);
+
+            desc = irq_to_desc(irq->hwintid);
+            spin_lock(&desc->lock);
+            spin_lock(&irq->irq_lock);
+
+            /* This h/w IRQ should still be assigned to the virtual IRQ. */
+            ASSERT(irq->hw && desc->irq == irq->hwintid);
+
+            have_desc_lock = true;
+        }
+
+        /*
+         * If a hardware mapped IRQ has been handled for good, we need to
+         * clear the _IRQ_INPROGRESS bit to allow handling of new IRQs.
+         *
+         * TODO: This is probably racy, but is so already in the existing
+         * VGIC. A fix does not seem to be trivial.
+         */
+        if ( irq->hw && !lr_val.active && !lr_val.pending )
+            clear_bit(_IRQ_INPROGRESS, &desc->status);
+
+        /* Always preserve the active bit */
+        irq->active = lr_val.active;
+
+        /* Edge is the only case where we preserve the pending bit */
+        if ( irq->config == VGIC_CONFIG_EDGE && lr_val.pending )
+        {
+            irq->pending_latch = true;
+
+            if ( vgic_irq_is_sgi(intid) )
+                irq->source |= (1U << lr_val.virt.source);
+        }
+
+        /* Clear soft pending state when level irqs have been acked. */
+        if ( irq->config == VGIC_CONFIG_LEVEL && !lr_val.pending )
+            irq->pending_latch = false;
+
+        /*
+         * Level-triggered mapped IRQs are special because we only
+         * observe rising edges as input to the VGIC.
+         *
+         * If the guest never acked the interrupt we have to sample
+         * the physical line and set the line level, because the
+         * device state could have changed or we simply need to
+         * process the still pending interrupt later.
+         *
+         * If this causes us to lower the level, we have to also clear
+         * the physical active state, since we will otherwise never be
+         * told when the interrupt becomes asserted again.
+         */
+        if ( vgic_irq_is_mapped_level(irq) && lr_val.pending )
+        {
+            ASSERT(irq->hwintid >= VGIC_NR_PRIVATE_IRQS);
+
+            irq->line_level = gic_read_pending_state(desc);
+
+            if ( !irq->line_level )
+                gic_set_active_state(desc, false);
+        }
+
+        spin_unlock(&irq->irq_lock);
+        if ( have_desc_lock )
+            spin_unlock(&desc->lock);
+        local_irq_restore(flags);
+
+        vgic_put_irq(vcpu->domain, irq);
+    }
+
+    gic_hw_ops->update_hcr_status(GICH_HCR_EN, false);
+    vgic_cpu->used_lrs = 0;
+}
+
+/**
+ * vgic_v2_populate_lr() - Populates an LR with the state of a given IRQ.
+ * @vcpu: The VCPU which the given @irq belongs to.
+ * @irq:  The IRQ to convert into an LR. The irq_lock must be held already.
+ * @lr:   The LR number to transfer the state into.
+ *
+ * This moves a virtual IRQ, represented by its vgic_irq, into a list
+ * register. Apart from translating the logical state into the LR bitfields,
+ * it also changes some state in the vgic_irq.
+ * For an edge sensitive IRQ the pending state is cleared in struct vgic_irq,
+ * for a level sensitive IRQ the pending state value is unchanged, as it is
+ * dictated directly by the input line level.
+ *
+ * If @irq describes an SGI with multiple sources, we choose the
+ * lowest-numbered source VCPU and clear that bit in the source bitmap.
+ *
+ * The irq_lock must be held by the caller.
+ */
+void vgic_v2_populate_lr(struct vcpu *vcpu, struct vgic_irq *irq, int lr)
+{
+    struct gic_lr lr_val = {0};
+
+    lr_val.virq = irq->intid;
+
+    if ( irq_is_pending(irq) )
+    {
+        lr_val.pending = true;
+
+        if ( irq->config == VGIC_CONFIG_EDGE )
+            irq->pending_latch = false;
+
+        if ( vgic_irq_is_sgi(irq->intid) )
+        {
+            uint32_t src = ffs(irq->source);
+
+            BUG_ON(!src);
+            lr_val.virt.source = (src - 1);
+            irq->source &= ~(1 << (src - 1));
+            if ( irq->source )
+                irq->pending_latch = true;
+        }
+    }
+
+    lr_val.active = irq->active;
+
+    if ( irq->hw )
+    {
+        lr_val.hw_status = true;
+        lr_val.hw.pirq = irq->hwintid;
+        /*
+         * Never set pending+active on a HW interrupt, as the
+         * pending state is kept at the physical distributor
+         * level.
+         */
+        if ( irq->active && irq_is_pending(irq) )
+            lr_val.pending = false;
+    }
+    else
+    {
+        if ( irq->config == VGIC_CONFIG_LEVEL )
+            lr_val.virt.eoi = true;
+    }
+
+    /*
+     * Level-triggered mapped IRQs are special because we only observe
+     * rising edges as input to the VGIC. We therefore lower the line
+     * level here, so that we can take new virtual IRQs. See
+     * vgic_v2_fold_lr_state for more info.
+     */
+    if ( vgic_irq_is_mapped_level(irq) && lr_val.pending )
+        irq->line_level = false;
+
+    /* The GICv2 LR only holds five bits of priority. */
+    lr_val.priority = irq->priority >> 3;
+
+    gic_hw_ops->write_lr(lr, &lr_val);
+}
+
+/*
+ * Local variables:
+ * mode: C
+ * c-file-style: "BSD"
+ * c-basic-offset: 4
+ * indent-tabs-mode: nil
+ * End:
+ */
diff --git a/xen/arch/arm/vgic/vgic.c b/xen/arch/arm/vgic/vgic.c
index d91ed29d96..214176c14e 100644
--- a/xen/arch/arm/vgic/vgic.c
+++ b/xen/arch/arm/vgic/vgic.c
@@ -520,6 +520,7 @@ retry:
 
 static void vgic_fold_lr_state(struct vcpu *vcpu)
 {
+    vgic_v2_fold_lr_state(vcpu);
 }
 
 /* Requires the irq_lock to be held. */
@@ -527,6 +528,8 @@ static void vgic_populate_lr(struct vcpu *vcpu,
                              struct vgic_irq *irq, int lr)
 {
     ASSERT(spin_is_locked(&irq->irq_lock));
+
+    vgic_v2_populate_lr(vcpu, irq, lr);
 }
 
 static void vgic_set_underflow(struct vcpu *vcpu)
@@ -640,7 +643,10 @@ void vgic_sync_to_lrs(void)
     spin_lock(&current->arch.vgic.ap_list_lock);
     vgic_flush_lr_state(current);
     spin_unlock(&current->arch.vgic.ap_list_lock);
+
+    gic_hw_ops->update_hcr_status(GICH_HCR_EN, 1);
 }
+
 /*
  * Local variables:
  * mode: C
diff --git a/xen/arch/arm/vgic/vgic.h b/xen/arch/arm/vgic/vgic.h
index 1547478518..e2b6d51e47 100644
--- a/xen/arch/arm/vgic/vgic.h
+++ b/xen/arch/arm/vgic/vgic.h
@@ -27,6 +27,11 @@ static inline bool irq_is_pending(struct vgic_irq *irq)
     return irq->pending_latch || irq->line_level;
 }
 
+static inline bool vgic_irq_is_mapped_level(struct vgic_irq *irq)
+{
+    return irq->config == VGIC_CONFIG_LEVEL && irq->hw;
+}
+
 struct vgic_irq *vgic_get_irq(struct domain *d, struct vcpu *vcpu,
                               uint32_t intid);
 void vgic_put_irq(struct domain *d, struct vgic_irq *irq);
@@ -41,6 +46,10 @@ static inline void vgic_get_irq_kref(struct vgic_irq *irq)
     atomic_inc(&irq->refcount);
 }
 
+void vgic_v2_fold_lr_state(struct vcpu *vcpu);
+void vgic_v2_populate_lr(struct vcpu *vcpu, struct vgic_irq *irq, int lr);
+void vgic_v2_set_underflow(struct vcpu *vcpu);
+
 #endif /*
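
To see how the new hook is meant to be wired up, here is a minimal,
hypothetical call-site sketch (not part of the patch): a host GICv2 driver
that has probed its MMIO regions hands them to this backend. The function
name and all addresses below are invented for illustration; the real values
come from DT/ACPI probing in the host GIC driver.

    /* Hypothetical call site -- the name and addresses are illustrative. */
    static void example_gicv2_register_vgic(void)
    {
        paddr_t dbase = 0x08000000;  /* distributor base (made up) */
        paddr_t cbase = 0x08010000;  /* CPU interface base (made up) */
        paddr_t csize = SZ_8K;       /* 8kB CPU interface window */
        paddr_t vbase = 0x08040000;  /* virtual CPU interface (made up) */

        /* Assume a non-aliased GICC mapping, so no offset to apply. */
        vgic_v2_setup_hw(dbase, cbase, csize, vbase, 0);
    }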
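
The "five bits of priority" comment deserves a concrete example: the LR
keeps only the top five bits of the 8-bit virtual priority, so two
priorities that differ only in their low three bits become indistinguishable
once latched into an LR. A standalone sketch of that truncation (plain C,
not Xen code):

    #include <stdio.h>
    #include <stdint.h>

    int main(void)
    {
        uint8_t prio_a = 0xa0, prio_b = 0xa7;

        /* Both print 0x14: the low three bits are dropped by the >> 3. */
        printf("0x%02x -> LR priority 0x%02x\n", prio_a, prio_a >> 3);
        printf("0x%02x -> LR priority 0x%02x\n", prio_b, prio_b >> 3);

        return 0;
    }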
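
Likewise, the SGI handling in vgic_v2_populate_lr() injects one source VCPU
per LR and re-latches the pending state while other sources remain, so each
source eventually gets an LR of its own. A standalone sketch of that drain
order, using the C library's ffs() in place of Xen's and an invented source
mask:

    #include <stdio.h>
    #include <stdint.h>
    #include <strings.h>   /* ffs() */

    int main(void)
    {
        uint8_t source = 0x05;  /* SGI sent by VCPUs 0 and 2 (made up) */

        while ( source )
        {
            int src = ffs(source);            /* lowest set bit, 1-based */

            printf("LR carries source VCPU %d\n", src - 1);
            source &= ~(1u << (src - 1));     /* mirrors irq->source update */
            /* While source is still non-zero, pending_latch stays set. */
        }

        return 0;
    }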