From patchwork Mon Aug 10 13:21:01 2015
X-Patchwork-Submitter: Eric Auger
X-Patchwork-Id: 52241
From: Eric Auger <eric.auger@linaro.org>
To: eric.auger@st.com, eric.auger@linaro.org, linux-arm-kernel@lists.infradead.org,
	kvmarm@lists.cs.columbia.edu, kvm@vger.kernel.org, christoffer.dall@linaro.org,
	marc.zyngier@arm.com, alex.williamson@redhat.com, feng.wu@intel.com
Cc: linux-kernel@vger.kernel.org, patches@linaro.org, pbonzini@redhat.com
Subject: [PATCH v3 07/10] KVM: arm/arm64: vgic: Allow HW interrupts for non-shared devices
Date: Mon, 10 Aug 2015 15:21:01 +0200
Message-Id: <1439212864-12954-8-git-send-email-eric.auger@linaro.org>
X-Mailer: git-send-email 1.9.1
In-Reply-To: <1439212864-12954-1-git-send-email-eric.auger@linaro.org>
References: <1439212864-12954-1-git-send-email-eric.auger@linaro.org>

From: Marc Zyngier <marc.zyngier@arm.com>

So far, the only use of the HW interrupt facility was the timer, implying
that the active state is context-switched for each vcpu, as the device is
shared across all vcpus.

This does not work for a device that has been assigned to a VM, as the
guest is entirely in control of that device (the HW is not shared). In
that case, it makes sense to bypass the whole active state switching.

The VGIC state machine is also adapted to support these assigned
(non-shared) HW IRQs:
- the IRQ can only be sampled when it is pending
- when queueing the IRQ (programming the LR), the pending state is
  removed, as for edge-sensitive IRQs
- the queued state is not modelled
- the level state is not modelled: injecting the IRQ is always valid,
  since it stems from the HW
Signed-off-by: Marc Zyngier <marc.zyngier@arm.com>
Signed-off-by: Eric Auger <eric.auger@linaro.org>
---
- a mix of
  [PATCH v4 11/11] KVM: arm/arm64: vgic: Allow HW interrupts for non-shared devices
  [RFC v2 2/4] KVM: arm: vgic: fix state machine for forwarded IRQ
---
 include/kvm/arm_vgic.h    |  6 +++--
 virt/kvm/arm/arch_timer.c |  3 ++-
 virt/kvm/arm/vgic.c       | 58 +++++++++++++++++++++++++++++++++++------------
 3 files changed, 49 insertions(+), 18 deletions(-)

diff --git a/include/kvm/arm_vgic.h b/include/kvm/arm_vgic.h
index d901f1a..7ef9ce0 100644
--- a/include/kvm/arm_vgic.h
+++ b/include/kvm/arm_vgic.h
@@ -163,7 +163,8 @@ struct irq_phys_map {
 	u32			virt_irq;
 	u32			phys_irq;
 	u32			irq;
-	bool			active;
+	bool			shared;
+	bool			active;	/* Only valid if shared */
 };
 
 struct irq_phys_map_entry {
@@ -356,7 +357,8 @@ void vgic_v3_dispatch_sgi(struct kvm_vcpu *vcpu, u64 reg);
 int kvm_vgic_vcpu_pending_irq(struct kvm_vcpu *vcpu);
 int kvm_vgic_vcpu_active_irq(struct kvm_vcpu *vcpu);
 struct irq_phys_map *kvm_vgic_map_phys_irq(struct kvm_vcpu *vcpu,
-					   int virt_irq, int irq);
+					   int virt_irq, int irq,
+					   bool shared);
 int kvm_vgic_unmap_phys_irq(struct kvm_vcpu *vcpu, struct irq_phys_map *map);
 bool kvm_vgic_get_phys_irq_active(struct irq_phys_map *map);
 void kvm_vgic_set_phys_irq_active(struct irq_phys_map *map, bool active);
diff --git a/virt/kvm/arm/arch_timer.c b/virt/kvm/arm/arch_timer.c
index 76e38d2..db21d8f 100644
--- a/virt/kvm/arm/arch_timer.c
+++ b/virt/kvm/arm/arch_timer.c
@@ -203,7 +203,8 @@ int kvm_timer_vcpu_reset(struct kvm_vcpu *vcpu,
 	 * Tell the VGIC that the virtual interrupt is tied to a
 	 * physical interrupt. We do that once per VCPU.
 	 */
-	map = kvm_vgic_map_phys_irq(vcpu, irq->irq, host_vtimer_irq);
+	map = kvm_vgic_map_phys_irq(vcpu, irq->irq,
+				    host_vtimer_irq, true);
 	if (WARN_ON(IS_ERR(map)))
 		return PTR_ERR(map);
 
diff --git a/virt/kvm/arm/vgic.c b/virt/kvm/arm/vgic.c
index 9eb489a..fbd5ba5 100644
--- a/virt/kvm/arm/vgic.c
+++ b/virt/kvm/arm/vgic.c
@@ -400,7 +400,11 @@ void vgic_cpu_irq_clear(struct kvm_vcpu *vcpu, int irq)
 
 static bool vgic_can_sample_irq(struct kvm_vcpu *vcpu, int irq)
 {
-	return !vgic_irq_is_queued(vcpu, irq);
+	struct irq_phys_map *map = vgic_irq_map_search(vcpu, irq);
+	bool shared_hw = map && !map->shared;
+
+	return !vgic_irq_is_queued(vcpu, irq) ||
+	       (shared_hw && vgic_dist_irq_is_pending(vcpu, irq));
 }
 
 /**
@@ -1150,19 +1154,26 @@ static void vgic_queue_irq_to_lr(struct kvm_vcpu *vcpu, int irq,
 			 * active in the physical world. Otherwise the
 			 * physical interrupt will fire and the guest will
 			 * exit before processing the virtual interrupt.
+			 *
+			 * This is of course only valid for a shared
+			 * interrupt. A non shared interrupt should already be
+			 * active.
 			 */
 			if (map) {
-				int ret;
-
-				BUG_ON(!map->active);
 				vlr.hwirq = map->phys_irq;
 				vlr.state |= LR_HW;
 				vlr.state &= ~LR_EOI_INT;
 
-				ret = irq_set_irqchip_state(map->irq,
-							    IRQCHIP_STATE_ACTIVE,
-							    true);
-				WARN_ON(ret);
+				if (map->shared) {
+					int ret;
+
+					BUG_ON(!map->active);
+					ret = irq_set_irqchip_state(
+						map->irq,
+						IRQCHIP_STATE_ACTIVE,
+						true);
+					WARN_ON(ret);
+				}
 
 				/*
 				 * Make sure we're not going to sample this
@@ -1229,10 +1240,13 @@ bool vgic_queue_irq(struct kvm_vcpu *vcpu, u8 sgi_source_id, int irq)
 
 static bool vgic_queue_hwirq(struct kvm_vcpu *vcpu, int irq)
 {
+	struct irq_phys_map *map = vgic_irq_map_search(vcpu, irq);
+	bool shared_hw = map && !map->shared;
+
 	if (!vgic_can_sample_irq(vcpu, irq))
 		return true; /* level interrupt, already queued */
 
-	if (vgic_queue_irq(vcpu, 0, irq)) {
+	if (vgic_queue_irq(vcpu, 0, irq) || shared_hw) {
 		if (vgic_irq_is_edge(vcpu, irq)) {
 			vgic_dist_irq_clear_pending(vcpu, irq);
 			vgic_cpu_irq_clear(vcpu, irq);
@@ -1411,7 +1425,12 @@ static int vgic_sync_hwirq(struct kvm_vcpu *vcpu, struct vgic_lr vlr)
 		return 0;
 
 	map = vgic_irq_map_search(vcpu, vlr.irq);
-	BUG_ON(!map || !map->active);
+	BUG_ON(!map);
+
+	if (!map->shared)
+		return 0;
+
+	BUG_ON(map->shared && !map->active);
 
 	ret = irq_get_irqchip_state(map->irq,
 				    IRQCHIP_STATE_ACTIVE,
@@ -1563,6 +1582,7 @@ static int vgic_update_irq_pending(struct kvm *kvm, int cpuid,
 	int edge_triggered, level_triggered;
 	int enabled;
 	bool ret = true, can_inject = true;
+	bool shared_hw = map && !map->shared;
 
 	if (irq_num >= min(kvm->arch.vgic.nr_irqs, 1020))
 		return -EINVAL;
@@ -1573,7 +1593,8 @@ static int vgic_update_irq_pending(struct kvm *kvm, int cpuid,
 	edge_triggered = vgic_irq_is_edge(vcpu, irq_num);
 	level_triggered = !edge_triggered;
 
-	if (!vgic_validate_injection(vcpu, irq_num, level)) {
+	if (!vgic_validate_injection(vcpu, irq_num, level) &&
+	    !shared_hw) {
 		ret = false;
 		goto out;
 	}
@@ -1742,16 +1763,21 @@ static struct list_head *vgic_get_irq_phys_map_list(struct kvm_vcpu *vcpu,
  * @vcpu: The VCPU pointer
  * @virt_irq: The virtual irq number
  * @irq: The Linux IRQ number
+ * @shared: Indicates if the interrupt has to be context-switched or
+ *          if it is private to a VM
  *
  * Establish a mapping between a guest visible irq (@virt_irq) and a
  * Linux irq (@irq). On injection, @virt_irq will be associated with
  * the physical interrupt represented by @irq. This mapping can be
  * established multiple times as long as the parameters are the same.
+ * If @shared is true, the active state of the interrupt will be
+ * context-switched.
 *
 * Returns a valid pointer on success, and an error pointer otherwise
 */
 struct irq_phys_map *kvm_vgic_map_phys_irq(struct kvm_vcpu *vcpu,
-					   int virt_irq, int irq)
+					   int virt_irq, int irq,
+					   bool shared)
 {
 	struct vgic_dist *dist = &vcpu->kvm->arch.vgic;
 	struct list_head *root = vgic_get_irq_phys_map_list(vcpu, virt_irq);
@@ -1785,7 +1811,8 @@ struct irq_phys_map *kvm_vgic_map_phys_irq(struct kvm_vcpu *vcpu,
 	if (map) {
 		/* Make sure this mapping matches */
 		if (map->phys_irq != phys_irq ||
-		    map->irq != irq)
+		    map->irq != irq ||
+		    map->shared != shared)
 			map = ERR_PTR(-EINVAL);
 
 		/* Found an existing, valid mapping */
@@ -1796,6 +1823,7 @@ struct irq_phys_map *kvm_vgic_map_phys_irq(struct kvm_vcpu *vcpu,
 	map->virt_irq = virt_irq;
 	map->phys_irq = phys_irq;
 	map->irq = irq;
+	map->shared = shared;
 
 	list_add_tail_rcu(&entry->entry, root);
 
@@ -1846,7 +1874,7 @@ static void vgic_free_phys_irq_map_rcu(struct rcu_head *rcu)
 */
 bool kvm_vgic_get_phys_irq_active(struct irq_phys_map *map)
 {
-	BUG_ON(!map);
+	BUG_ON(!map || !map->shared);
 	return map->active;
 }
 
@@ -1858,7 +1886,7 @@ bool kvm_vgic_get_phys_irq_active(struct irq_phys_map *map)
 */
 void kvm_vgic_set_phys_irq_active(struct irq_phys_map *map, bool active)
 {
-	BUG_ON(!map);
+	BUG_ON(!map || !map->shared);
 	map->active = active;
 }
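
[Editor's note: the following is a minimal, hypothetical caller sketch, not part of this
patch. It only illustrates the @shared parameter documented above: the kvm_vgic_* calls
are the API introduced or changed by this patch, while example_forward_irq() and its
parameters are made up for the example. It assumes the kernel/KVM-arm context of this
series (kvm/arm_vgic.h, linux/err.h).]

#include <linux/err.h>
#include <kvm/arm_vgic.h>

/*
 * Hypothetical device-assignment / IRQ-forwarding setup path registering a
 * physical IRQ that is private to the guest (illustrative only).
 */
static int example_forward_irq(struct kvm_vcpu *vcpu, int virt_irq, int host_irq)
{
	struct irq_phys_map *map;

	/*
	 * shared = false: the guest fully owns the device, so the VGIC
	 * neither context-switches nor tracks the physical active state
	 * for this interrupt.
	 */
	map = kvm_vgic_map_phys_irq(vcpu, virt_irq, host_irq, false);
	if (IS_ERR(map))
		return PTR_ERR(map);

	/*
	 * kvm_vgic_get_phys_irq_active()/kvm_vgic_set_phys_irq_active()
	 * must not be used on such a mapping; with this patch they BUG()
	 * when map->shared is false.
	 */
	return 0;
}

Contrast this with the arch_timer.c hunk above, which passes shared = true so the VGIC
keeps context-switching the active state of the shared timer interrupt.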