From patchwork Wed Mar 21 16:32:09 2018
X-Patchwork-Submitter: Andre Przywara
X-Patchwork-Id: 132234
From: Andre Przywara
To: Julien Grall, Stefano Stabellini
Cc: xen-devel@lists.xenproject.org, Andre Przywara
Date: Wed, 21 Mar 2018 16:32:09 +0000
Message-Id: <20180321163235.12529-14-andre.przywara@linaro.org>
In-Reply-To: <20180321163235.12529-1-andre.przywara@linaro.org>
References: <20180321163235.12529-1-andre.przywara@linaro.org>
X-Mailer: git-send-email 2.14.1
Subject: [Xen-devel] [PATCH v3 13/39] ARM: new VGIC: Add IRQ sync/flush framework

Implement the framework for syncing IRQs between our emulation and the
list registers, which represent the guest's view of IRQs. This is done
in vgic_sync_from_lrs() and vgic_sync_to_lrs(), which get called on
guest exit and entry, respectively.

The code talking to the actual GICv2/v3 hardware is added in the
following patches.

This is based on Linux commit 0919e84c0fc1, written by Marc Zyngier.

Signed-off-by: Andre Przywara
Reviewed-by: Julien Grall
Acked-by: Stefano Stabellini
---
Changelog v2 ... v3:
- use "true" instead of "1" for the boolean parameter

Changelog v1 ... v2:
- make functions void
- do underflow setting directly (no v2/v3 indirection)
- fix multiple SGI injections (as per the recent Linux bugfix)

 xen/arch/arm/vgic/vgic.c | 232 +++++++++++++++++++++++++++++++++++++++++++++++
 xen/arch/arm/vgic/vgic.h |   2 +
 2 files changed, 234 insertions(+)
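
For orientation before the diff: the two new entry points are meant to be
driven from the hypervisor's guest exit and entry paths. That wiring happens
later in this series; the hook names below are purely hypothetical
placeholders, a minimal sketch of the intended call sites:

/*
 * Illustrative sketch only -- not part of this patch. The hook names
 * are hypothetical; the actual calls are wired into Xen's entry/exit
 * paths by a later patch in this series.
 */
static void example_on_guest_exit(struct vcpu *v)
{
    /* The guest has just stopped running: read back the LRs and prune. */
    vgic_sync_from_lrs(v);
}

static void example_on_guest_entry(void)
{
    /* About to enter the guest: write pending IRQs into the LRs. */
    vgic_sync_to_lrs();    /* implicitly operates on 'current' */
}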
diff --git a/xen/arch/arm/vgic/vgic.c b/xen/arch/arm/vgic/vgic.c
index ee0de8d2e0..52e1669888 100644
--- a/xen/arch/arm/vgic/vgic.c
+++ b/xen/arch/arm/vgic/vgic.c
@@ -409,6 +409,238 @@ void vgic_inject_irq(struct domain *d, struct vcpu *vcpu, unsigned int intid,
     return;
 }
 
+/**
+ * vgic_prune_ap_list() - Remove non-relevant interrupts from the ap_list
+ *
+ * @vcpu: The VCPU of which the ap_list should be pruned.
+ *
+ * Go over the list of interrupts on a VCPU's ap_list, and prune those that
+ * we won't have to consider in the near future.
+ * This removes interrupts that have been successfully handled by the guest,
+ * or that have otherwise become obsolete (not pending anymore).
+ * This also moves interrupts between VCPUs, if their affinity has changed.
+ */
+static void vgic_prune_ap_list(struct vcpu *vcpu)
+{
+    struct vgic_cpu *vgic_cpu = &vcpu->arch.vgic;
+    struct vgic_irq *irq, *tmp;
+    unsigned long flags;
+
+retry:
+    spin_lock_irqsave(&vgic_cpu->ap_list_lock, flags);
+
+    list_for_each_entry_safe( irq, tmp, &vgic_cpu->ap_list_head, ap_list )
+    {
+        struct vcpu *target_vcpu, *vcpuA, *vcpuB;
+
+        spin_lock(&irq->irq_lock);
+
+        BUG_ON(vcpu != irq->vcpu);
+
+        target_vcpu = vgic_target_oracle(irq);
+
+        if ( !target_vcpu )
+        {
+            /*
+             * We don't need to process this interrupt any
+             * further, move it off the list.
+             */
+            list_del(&irq->ap_list);
+            irq->vcpu = NULL;
+            spin_unlock(&irq->irq_lock);
+
+            /*
+             * This vgic_put_irq call matches the
+             * vgic_get_irq_kref in vgic_queue_irq_unlock,
+             * where we added the LPI to the ap_list. As
+             * we remove the irq from the list, we also
+             * drop the refcount.
+             */
+            vgic_put_irq(vcpu->domain, irq);
+            continue;
+        }
+
+        if ( target_vcpu == vcpu )
+        {
+            /* We're on the right CPU */
+            spin_unlock(&irq->irq_lock);
+            continue;
+        }
+
+        /* This interrupt looks like it has to be migrated. */
+
+        spin_unlock(&irq->irq_lock);
+        spin_unlock_irqrestore(&vgic_cpu->ap_list_lock, flags);
+
+        /*
+         * Ensure locking order by always locking the smallest
+         * ID first.
+         */
+        if ( vcpu->vcpu_id < target_vcpu->vcpu_id )
+        {
+            vcpuA = vcpu;
+            vcpuB = target_vcpu;
+        }
+        else
+        {
+            vcpuA = target_vcpu;
+            vcpuB = vcpu;
+        }
+
+        spin_lock_irqsave(&vcpuA->arch.vgic.ap_list_lock, flags);
+        spin_lock(&vcpuB->arch.vgic.ap_list_lock);
+        spin_lock(&irq->irq_lock);
+
+        /*
+         * If the affinity has been preserved, move the
+         * interrupt around. Otherwise, it means things have
+         * changed while the interrupt was unlocked, and we
+         * need to replay this.
+         *
+         * In all cases, we cannot trust the list not to have
+         * changed, so we restart from the beginning.
+         */
+        if ( target_vcpu == vgic_target_oracle(irq) )
+        {
+            struct vgic_cpu *new_cpu = &target_vcpu->arch.vgic;
+
+            list_del(&irq->ap_list);
+            irq->vcpu = target_vcpu;
+            list_add_tail(&irq->ap_list, &new_cpu->ap_list_head);
+        }
+
+        spin_unlock(&irq->irq_lock);
+        spin_unlock(&vcpuB->arch.vgic.ap_list_lock);
+        spin_unlock_irqrestore(&vcpuA->arch.vgic.ap_list_lock, flags);
+        goto retry;
+    }
+
+    spin_unlock_irqrestore(&vgic_cpu->ap_list_lock, flags);
+}
+
+static void vgic_fold_lr_state(struct vcpu *vcpu)
+{
+}
+
+/* Requires the irq_lock to be held. */
+static void vgic_populate_lr(struct vcpu *vcpu,
+                             struct vgic_irq *irq, int lr)
+{
+    ASSERT(spin_is_locked(&irq->irq_lock));
+}
+
+static void vgic_set_underflow(struct vcpu *vcpu)
+{
+    ASSERT(vcpu == current);
+
+    gic_hw_ops->update_hcr_status(GICH_HCR_UIE, true);
+}
+
+/* Requires the ap_list_lock to be held. */
+static int compute_ap_list_depth(struct vcpu *vcpu)
+{
+    struct vgic_cpu *vgic_cpu = &vcpu->arch.vgic;
+    struct vgic_irq *irq;
+    int count = 0;
+
+    ASSERT(spin_is_locked(&vgic_cpu->ap_list_lock));
+
+    list_for_each_entry(irq, &vgic_cpu->ap_list_head, ap_list)
+    {
+        spin_lock(&irq->irq_lock);
+        /* GICv2 SGIs can count for more than one... */
+        if ( vgic_irq_is_sgi(irq->intid) && irq->source )
+            count += hweight8(irq->source);
+        else
+            count++;
+        spin_unlock(&irq->irq_lock);
+    }
+    return count;
+}
+
+/* Requires the VCPU's ap_list_lock to be held. */
+static void vgic_flush_lr_state(struct vcpu *vcpu)
+{
+    struct vgic_cpu *vgic_cpu = &vcpu->arch.vgic;
+    struct vgic_irq *irq;
+    int count = 0;
+
+    ASSERT(spin_is_locked(&vgic_cpu->ap_list_lock));
+
+    if ( compute_ap_list_depth(vcpu) > gic_get_nr_lrs() )
+        vgic_sort_ap_list(vcpu);
+
+    list_for_each_entry( irq, &vgic_cpu->ap_list_head, ap_list )
+    {
+        spin_lock(&irq->irq_lock);
+
+        if ( likely(vgic_target_oracle(irq) == vcpu) )
+            vgic_populate_lr(vcpu, irq, count++);
+
+        spin_unlock(&irq->irq_lock);
+
+        if ( count == gic_get_nr_lrs() )
+        {
+            if ( !list_is_last(&irq->ap_list, &vgic_cpu->ap_list_head) )
+                vgic_set_underflow(vcpu);
+            break;
+        }
+    }
+
+    vcpu->arch.vgic.used_lrs = count;
+}
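
A note on the counting rule above: a GICv2 SGI can be pending from several
source CPUs at once, and each source consumes its own list register (the LR
encodes the source CPU ID), which is why compute_ap_list_depth() adds
hweight8(irq->source) instead of 1. A self-contained sketch of that rule,
using a portable stand-in for Xen's hweight8():

#include <stdbool.h>
#include <stdint.h>

/* Portable stand-in for Xen's hweight8(): count set bits in a byte. */
static int popcount8(uint8_t x)
{
    int n = 0;

    for ( ; x; x &= x - 1 )  /* clears the lowest set bit per iteration */
        n++;

    return n;
}

/*
 * Example of the depth rule: an SGI with source bitmap 0b00000101 is
 * pending from CPU0 and CPU2, so it needs two LRs and contributes 2;
 * every other kind of interrupt contributes exactly 1.
 */
static int lr_slots_needed(bool is_sgi, uint8_t source_bitmap)
{
    return (is_sgi && source_bitmap) ? popcount8(source_bitmap) : 1;
}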

+/**
+ * vgic_sync_from_lrs() - Update VGIC state from hardware after a guest's run.
+ * @vcpu: the VCPU for which to transfer from the LRs to the IRQ list.
+ *
+ * Sync back the hardware VGIC state after the guest has run, into our
+ * VGIC emulation structures. It reads the LRs and updates the respective
+ * struct vgic_irq, taking level/edge into account.
+ * This is the high level function which takes care of the conditions and
+ * bails out early if there were no interrupts queued.
+ * Was: kvm_vgic_sync_hwstate()
+ */
+void vgic_sync_from_lrs(struct vcpu *vcpu)
+{
+    /* An empty ap_list_head implies used_lrs == 0 */
+    if ( list_empty(&vcpu->arch.vgic.ap_list_head) )
+        return;
+
+    vgic_fold_lr_state(vcpu);
+
+    vgic_prune_ap_list(vcpu);
+}
+
+/**
+ * vgic_sync_to_lrs() - flush emulation state into the hardware on guest entry
+ *
+ * Before we enter a guest, we have to translate the virtual GIC state of a
+ * VCPU into the GIC virtualization hardware registers, namely the LRs.
+ * This is the high level function which takes care of the conditions and
+ * the locking, and also bails out early if there are no interrupts queued.
+ * Was: kvm_vgic_flush_hwstate()
+ */
+void vgic_sync_to_lrs(void)
+{
+    /*
+     * If there are no virtual interrupts active or pending for this
+     * VCPU, then there is no work to do and we can bail out without
+     * taking any lock. There is a potential race with someone injecting
+     * interrupts to the VCPU, but it is a benign race as the VCPU will
+     * either observe the new interrupt before or after doing this check,
+     * and introducing an additional synchronization mechanism doesn't
+     * change this.
+     */
+    if ( list_empty(&current->arch.vgic.ap_list_head) )
+        return;
+
+    ASSERT(!local_irq_is_enabled());
+
+    spin_lock(&current->arch.vgic.ap_list_lock);
+    vgic_flush_lr_state(current);
+    spin_unlock(&current->arch.vgic.ap_list_lock);
+}
 /*
  * Local variables:
  * mode: C
diff --git a/xen/arch/arm/vgic/vgic.h b/xen/arch/arm/vgic/vgic.h
index f9e2eeb2d6..f530cfa078 100644
--- a/xen/arch/arm/vgic/vgic.h
+++ b/xen/arch/arm/vgic/vgic.h
@@ -17,6 +17,8 @@
 #ifndef __XEN_ARM_VGIC_VGIC_H__
 #define __XEN_ARM_VGIC_VGIC_H__
 
+#define vgic_irq_is_sgi(intid) ((intid) < VGIC_NR_SGIS)
+
 static inline bool irq_is_pending(struct vgic_irq *irq)
 {
     if ( irq->config == VGIC_CONFIG_EDGE )
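
Both the pruning and flushing loops defer the queueing decision to
vgic_target_oracle(), introduced by an earlier patch in this series. As a
reading aid, here is a simplified sketch of its contract, assuming struct
vgic_irq fields (active, enabled, target_vcpu) from that earlier patch; the
real function is more careful and must be called with the irq_lock held:

/*
 * Simplified sketch of vgic_target_oracle()'s contract -- not the
 * series' exact code. Returns the VCPU an interrupt should be queued
 * to, or NULL if no VCPU needs to see it (which is what allows
 * vgic_prune_ap_list() to drop it from the ap_list).
 */
static struct vcpu *target_oracle_sketch(struct vgic_irq *irq)
{
    /* An active IRQ must stay on its current VCPU until deactivated. */
    if ( irq->active )
        return irq->vcpu;

    /* Enabled and pending: queue it on its configured target VCPU. */
    if ( irq->enabled && irq_is_pending(irq) )
        return irq->target_vcpu;

    /* Neither active nor pending: no ap_list needs this IRQ. */
    return NULL;
}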