From patchwork Fri Feb 9 14:39:12 2018
X-Patchwork-Submitter: Andre Przywara
X-Patchwork-Id: 127818
From: Andre Przywara <andre.przywara@linaro.org>
To: Stefano Stabellini, Julien Grall, xen-devel@lists.xenproject.org
Date: Fri, 9 Feb 2018 14:39:12 +0000
Message-Id: <20180209143937.28866-25-andre.przywara@linaro.org>
X-Mailer: git-send-email 2.14.1
In-Reply-To: <20180209143937.28866-1-andre.przywara@linaro.org>
References: <20180209143937.28866-1-andre.przywara@linaro.org>
Subject: [Xen-devel] [RFC PATCH 24/49] ARM: new VGIC: Add IRQ sync/flush framework

Implement the framework for syncing IRQs between our emulation and the
list registers, which represent the guest's view of IRQs. This is done
in kvm_vgic_flush_hwstate and kvm_vgic_sync_hwstate, which get called
on guest entry and exit.
The code talking to the actual GICv2/v3 hardware is added in the
following patches.

This is based on Linux commit 0919e84c0fc1, written by Marc Zyngier.

Signed-off-by: Andre Przywara <andre.przywara@linaro.org>
---
 xen/arch/arm/vgic/vgic.c | 246 +++++++++++++++++++++++++++++++++++++++++++++++
 xen/arch/arm/vgic/vgic.h |   2 +
 2 files changed, 248 insertions(+)

diff --git a/xen/arch/arm/vgic/vgic.c b/xen/arch/arm/vgic/vgic.c
index a4efd1fd03..a1f77130d4 100644
--- a/xen/arch/arm/vgic/vgic.c
+++ b/xen/arch/arm/vgic/vgic.c
@@ -380,6 +380,252 @@ int vgic_inject_irq(struct domain *d, struct vcpu *vcpu, unsigned int intid,
     return 0;
 }
 
+/**
+ * vgic_prune_ap_list - Remove non-relevant interrupts from the list
+ *
+ * @vcpu: The VCPU pointer
+ *
+ * Go over the list of "interesting" interrupts, and prune those that we
+ * won't have to consider in the near future.
+ */
+static void vgic_prune_ap_list(struct vcpu *vcpu)
+{
+    struct vgic_cpu *vgic_cpu = &vcpu->arch.vgic_cpu;
+    struct vgic_irq *irq, *tmp;
+    unsigned long flags;
+
+retry:
+    spin_lock_irqsave(&vgic_cpu->ap_list_lock, flags);
+
+    list_for_each_entry_safe( irq, tmp, &vgic_cpu->ap_list_head, ap_list )
+    {
+        struct vcpu *target_vcpu, *vcpuA, *vcpuB;
+
+        spin_lock(&irq->irq_lock);
+
+        BUG_ON(vcpu != irq->vcpu);
+
+        target_vcpu = vgic_target_oracle(irq);
+
+        if ( !target_vcpu )
+        {
+            /*
+             * We don't need to process this interrupt any
+             * further, move it off the list.
+             */
+            list_del(&irq->ap_list);
+            irq->vcpu = NULL;
+            spin_unlock(&irq->irq_lock);
+
+            /*
+             * This vgic_put_irq call matches the
+             * vgic_get_irq_kref in vgic_queue_irq_unlock,
+             * where we added the LPI to the ap_list. As
+             * we remove the irq from the list, we also
+             * drop the refcount.
+             */
+            vgic_put_irq(vcpu->domain, irq);
+            continue;
+        }
+
+        if ( target_vcpu == vcpu )
+        {
+            /* We're on the right CPU */
+            spin_unlock(&irq->irq_lock);
+            continue;
+        }
+
+        /* This interrupt looks like it has to be migrated. */
+
+        spin_unlock(&irq->irq_lock);
+        spin_unlock_irqrestore(&vgic_cpu->ap_list_lock, flags);
+
+        /*
+         * Ensure locking order by always locking the smallest
+         * ID first.
+         */
+        if ( vcpu->vcpu_id < target_vcpu->vcpu_id )
+        {
+            vcpuA = vcpu;
+            vcpuB = target_vcpu;
+        }
+        else
+        {
+            vcpuA = target_vcpu;
+            vcpuB = vcpu;
+        }
+
+        spin_lock_irqsave(&vcpuA->arch.vgic_cpu.ap_list_lock, flags);
+        spin_lock(&vcpuB->arch.vgic_cpu.ap_list_lock);
+        spin_lock(&irq->irq_lock);
+
+        /*
+         * If the affinity has been preserved, move the
+         * interrupt around. Otherwise, it means things have
+         * changed while the interrupt was unlocked, and we
+         * need to replay this.
+         *
+         * In all cases, we cannot trust the list not to have
+         * changed, so we restart from the beginning.
+         */
+        if ( target_vcpu == vgic_target_oracle(irq) )
+        {
+            struct vgic_cpu *new_cpu = &target_vcpu->arch.vgic_cpu;
+
+            list_del(&irq->ap_list);
+            irq->vcpu = target_vcpu;
+            list_add_tail(&irq->ap_list, &new_cpu->ap_list_head);
+        }
+
+        spin_unlock(&irq->irq_lock);
+        spin_unlock(&vcpuB->arch.vgic_cpu.ap_list_lock);
+        spin_unlock_irqrestore(&vcpuA->arch.vgic_cpu.ap_list_lock, flags);
+        goto retry;
+    }
+
+    spin_unlock_irqrestore(&vgic_cpu->ap_list_lock, flags);
+}
+
+static inline void vgic_fold_lr_state(struct vcpu *vcpu)
+{
+}
+
+/* Requires the irq_lock to be held. */
+static inline void vgic_populate_lr(struct vcpu *vcpu,
+                                    struct vgic_irq *irq, int lr)
+{
+    ASSERT(spin_is_locked(&irq->irq_lock));
+}
+
+static inline void vgic_clear_lr(struct vcpu *vcpu, int lr)
+{
+}
+
+static inline void vgic_set_underflow(struct vcpu *vcpu)
+{
+}
+
+/* Requires the ap_list_lock to be held. */
+static int compute_ap_list_depth(struct vcpu *vcpu)
+{
+    struct vgic_cpu *vgic_cpu = &vcpu->arch.vgic_cpu;
+    struct vgic_irq *irq;
+    int count = 0;
+
+    ASSERT(spin_is_locked(&vgic_cpu->ap_list_lock));
+
+    list_for_each_entry(irq, &vgic_cpu->ap_list_head, ap_list)
+    {
+        spin_lock(&irq->irq_lock);
+        /* GICv2 SGIs can count for more than one... */
+        if ( vgic_irq_is_sgi(irq->intid) && irq->source )
+            count += hweight8(irq->source);
+        else
+            count++;
+        spin_unlock(&irq->irq_lock);
+    }
+    return count;
+}
+
+/* Requires the VCPU's ap_list_lock to be held. */
+static void vgic_flush_lr_state(struct vcpu *vcpu)
+{
+    struct vgic_cpu *vgic_cpu = &vcpu->arch.vgic_cpu;
+    struct vgic_irq *irq;
+    int count = 0;
+
+    ASSERT(spin_is_locked(&vgic_cpu->ap_list_lock));
+
+    if ( compute_ap_list_depth(vcpu) > gic_get_nr_lrs() )
+        vgic_sort_ap_list(vcpu);
+
+    list_for_each_entry( irq, &vgic_cpu->ap_list_head, ap_list )
+    {
+        spin_lock(&irq->irq_lock);
+
+        if ( unlikely(vgic_target_oracle(irq) != vcpu) )
+            goto next;
+
+        /*
+         * If we get an SGI with multiple sources, try to get
+         * them all in at once.
+         */
+        do
+        {
+            vgic_populate_lr(vcpu, irq, count++);
+        } while ( irq->source && count < gic_get_nr_lrs() );
+
+next:
+        spin_unlock(&irq->irq_lock);
+
+        if ( count == gic_get_nr_lrs() )
+        {
+            if ( !list_is_last(&irq->ap_list, &vgic_cpu->ap_list_head) )
+                vgic_set_underflow(vcpu);
+            break;
+        }
+    }
+
+    vcpu->arch.vgic_cpu.used_lrs = count;
+
+    /* Nuke remaining LRs */
+    for ( ; count < gic_get_nr_lrs(); count++ )
+        vgic_clear_lr(vcpu, count);
+}
+
+/*
+ * gic_clear_lrs() - Update the VGIC state from hardware after a guest's run.
+ * @vcpu: the VCPU.
+ *
+ * Sync the hardware VGIC state back into our VGIC emulation structures
+ * after the guest has run. It reads the LRs and updates the respective
+ * struct vgic_irq, taking level/edge into account.
+ * This is the high level function which takes care of the conditions,
+ * and also bails out early if there were no interrupts queued.
+ * Was: kvm_vgic_sync_hwstate()
+ */
+void gic_clear_lrs(struct vcpu *vcpu)
+{
+    struct vgic_cpu *vgic_cpu = &vcpu->arch.vgic_cpu;
+
+    /* An empty ap_list_head implies used_lrs == 0 */
+    if ( list_empty(&vcpu->arch.vgic_cpu.ap_list_head) )
+        return;
+
+    if ( vgic_cpu->used_lrs )
+        vgic_fold_lr_state(vcpu);
+    vgic_prune_ap_list(vcpu);
+}
+
+/*
+ * gic_inject() - flush the emulation state into the hardware on guest entry
+ *
+ * Before we enter a guest, we have to translate the virtual GIC state of a
+ * VCPU into the GIC virtualization hardware registers, namely the LRs.
+ * This is the high level function which takes care of the conditions and
+ * the locking, and also bails out early if there are no interrupts queued.
+ * Was: kvm_vgic_flush_hwstate()
+ */
+void gic_inject(void)
+{
+    /*
+     * If there are no virtual interrupts active or pending for this
+     * VCPU, then there is no work to do and we can bail out without
+     * taking any lock. There is a potential race with someone injecting
+     * interrupts to the VCPU, but it is a benign race as the VCPU will
+     * either observe the new interrupt before or after doing this check,
+     * and introducing an additional synchronization mechanism doesn't
+     * change this.
+     */
+    if ( list_empty(&current->arch.vgic_cpu.ap_list_head) )
+        return;
+
+    ASSERT(!local_irq_is_enabled());
+
+    spin_lock(&current->arch.vgic_cpu.ap_list_lock);
+    vgic_flush_lr_state(current);
+    spin_unlock(&current->arch.vgic_cpu.ap_list_lock);
+}
 /*
  * Local variables:
  * mode: C
diff --git a/xen/arch/arm/vgic/vgic.h b/xen/arch/arm/vgic/vgic.h
index 5127739f0f..47fc58b81e 100644
--- a/xen/arch/arm/vgic/vgic.h
+++ b/xen/arch/arm/vgic/vgic.h
@@ -17,6 +17,8 @@
 #ifndef __XEN_ARM_VGIC_NEW_H__
 #define __XEN_ARM_VGIC_NEW_H__
 
+#define vgic_irq_is_sgi(intid) ((intid) < VGIC_NR_SGIS)
+
 static inline bool irq_is_pending(struct vgic_irq *irq)
 {
     if ( irq->config == VGIC_CONFIG_EDGE )
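
A note for reviewers on the migration path in vgic_prune_ap_list(): deadlock
avoidance rests entirely on always taking the ap_list_lock of the vCPU with
the smaller vcpu_id first. The following self-contained sketch shows that
rule in isolation; the toy_* types and the pthread mutexes are stand-ins for
Xen's struct vcpu and spinlocks, not actual hypervisor API.

#include <pthread.h>

struct toy_vcpu {
    int vcpu_id;
    pthread_mutex_t ap_list_lock;
};

/* Always lock the vCPU with the smaller ID first, whoever the caller is. */
static void lock_vcpu_pair(struct toy_vcpu *a, struct toy_vcpu *b)
{
    struct toy_vcpu *first  = (a->vcpu_id < b->vcpu_id) ? a : b;
    struct toy_vcpu *second = (a->vcpu_id < b->vcpu_id) ? b : a;

    pthread_mutex_lock(&first->ap_list_lock);
    pthread_mutex_lock(&second->ap_list_lock);
}

static void unlock_vcpu_pair(struct toy_vcpu *a, struct toy_vcpu *b)
{
    /* Unlock order is irrelevant for deadlock avoidance. */
    pthread_mutex_unlock(&a->ap_list_lock);
    pthread_mutex_unlock(&b->ap_list_lock);
}

Two CPUs migrating interrupts in opposite directions thus acquire the two
locks in the same global order, so neither can hold one lock while waiting
forever for the other.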
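
Similarly, the GICv2 SGI accounting in compute_ap_list_depth() is just a
population count of the 8-bit source mask, since each pending source needs
its own list register slot. A minimal sketch, with toy_hweight8() standing
in for Xen's hweight8():

#include <assert.h>
#include <stdint.h>

/*
 * Count the set bits of an 8-bit value (Kernighan's trick: each
 * iteration of the loop clears the lowest set bit).
 */
static unsigned int toy_hweight8(uint8_t w)
{
    unsigned int count = 0;

    for ( ; w; w &= w - 1 )
        count++;

    return count;
}

int main(void)
{
    /*
     * An SGI with the source bits for CPUs 0, 3 and 5 set (0x29) counts
     * for three entries when sizing the ap_list against the LRs.
     */
    assert(toy_hweight8(0x29) == 3);

    return 0;
}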
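
Finally, the calling discipline of the two entry points, reduced to a
runnable toy. Every name below is invented for illustration; the real
callers are Xen's guest entry and exit paths, not this sketch.

#include <stdio.h>

struct toy_vcpu {
    int id;
};

/* Stand-in for gic_inject(): emulated state -> list registers. */
static void toy_flush(struct toy_vcpu *v)
{
    printf("vCPU %d: flush ap_list into LRs\n", v->id);
}

/* Stand-in for gic_clear_lrs(): list registers -> emulated state. */
static void toy_sync(struct toy_vcpu *v)
{
    printf("vCPU %d: fold LRs back, then prune ap_list\n", v->id);
}

int main(void)
{
    struct toy_vcpu v = { .id = 0 };

    /* Every guest run is bracketed: flush on entry, sync on exit. */
    toy_flush(&v);
    printf("vCPU %d: guest runs\n", v.id);
    toy_sync(&v);

    return 0;
}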