From patchwork Sun Nov 23 18:35:59 2014
X-Patchwork-Submitter: Auger Eric
X-Patchwork-Id: 41393
From: Eric Auger
To: eric.auger@st.com, eric.auger@linaro.org, christoffer.dall@linaro.org,
	marc.zyngier@arm.com, linux-arm-kernel@lists.infradead.org,
	kvmarm@lists.cs.columbia.edu, kvm@vger.kernel.org,
	alex.williamson@redhat.com, joel.schopp@amd.com,
	kim.phillips@freescale.com, paulus@samba.org, gleb@kernel.org,
	pbonzini@redhat.com, agraf@suse.de
Cc: linux-kernel@vger.kernel.org, patches@linaro.org, will.deacon@arm.com,
	a.motakis@virtualopensystems.com, a.rigo@virtualopensystems.com,
	john.liuli@huawei.com, ming.lei@canonical.com, feng.wu@intel.com
Subject: [PATCH v3 7/8] KVM: kvm-vfio: generic forwarding control
Date: Sun, 23 Nov 2014 19:35:59 +0100
Message-Id: <1416767760-14487-8-git-send-email-eric.auger@linaro.org>
X-Mailer: git-send-email 1.9.1
In-Reply-To: <1416767760-14487-1-git-send-email-eric.auger@linaro.org>
References: <1416767760-14487-1-git-send-email-eric.auger@linaro.org>

This patch introduces a new KVM_DEV_VFIO_DEVICE group. It is a new
control channel which enables KVM to cooperate with VFIO devices.
Functions are introduced to check the validity of a VFIO device file
descriptor and to increment/decrement the reference counter of the
VFIO device.
The patch introduces 2 attributes for this new device group:
KVM_DEV_VFIO_DEVICE_FORWARD_IRQ and KVM_DEV_VFIO_DEVICE_UNFORWARD_IRQ.
Their purpose is to turn a VFIO device IRQ into a forwarded IRQ and to
unset that state, respectively.

The VFIO device stores a list of registered forwarded IRQs. The
reference counter of the device is incremented each time a new IRQ is
forwarded, and decremented when the IRQ forwarding is unset.

The forwarding programming is architecture specific, implemented in the
kvm_arch_vfio_set_forward function. The architecture specific
implementation is enabled when __KVM_HAVE_ARCH_KVM_VFIO_FORWARD is set;
when it is not set, the function is a stub returning 0.

Signed-off-by: Eric Auger

---

v2 -> v3:
- add API comments in kvm_host.h
- improve the commit message
- create a private kvm_vfio_fwd_irq struct
- fwd_irq_action replaced by a bool and removal of VFIO_IRQ_CLEANUP. This
  latter action will be handled in vgic.
- add a vfio_device handle argument to kvm_arch_set_fwd_state. The goal is
  to move platform specific stuff into architecture specific code.
- kvm_arch_set_fwd_state renamed into kvm_arch_vfio_set_forward
- increment the ref counter each time we do an IRQ forwarding and decrement
  it each time an IRQ forwarding is unset. This simplifies the whole ref
  counting.
- simplification of list handling: create, search, removal

v1 -> v2:
- __KVM_HAVE_ARCH_KVM_VFIO renamed into __KVM_HAVE_ARCH_KVM_VFIO_FORWARD
- original patch file separated into 2 parts: generic part moved in vfio.c
  and ARM specific part (kvm_arch_set_fwd_state)
---
 include/linux/kvm_host.h |  28 ++++++
 virt/kvm/vfio.c          | 249 ++++++++++++++++++++++++++++++++++++++++++++++-
 2 files changed, 274 insertions(+), 3 deletions(-)

diff --git a/include/linux/kvm_host.h b/include/linux/kvm_host.h
index ea53b04..0b9659d 100644
--- a/include/linux/kvm_host.h
+++ b/include/linux/kvm_host.h
@@ -1076,6 +1076,15 @@ struct kvm_device_ops {
 		      unsigned long arg);
 };
 
+/* internal self-contained structure describing a forwarded IRQ */
+struct kvm_fwd_irq {
+	struct kvm *kvm; /* VM to inject the GSI into */
+	struct vfio_device *vdev; /* vfio device the IRQ belongs to */
+	__u32 index; /* VFIO device IRQ index */
+	__u32 subindex; /* VFIO device IRQ subindex */
+	__u32 gsi; /* gsi, ie. the virtual IRQ number */
+};
+
 void kvm_device_get(struct kvm_device *dev);
 void kvm_device_put(struct kvm_device *dev);
 struct kvm_device *kvm_device_from_filp(struct file *filp);
@@ -1085,6 +1094,25 @@ void kvm_unregister_device_ops(u32 type);
 extern struct kvm_device_ops kvm_mpic_ops;
 extern struct kvm_device_ops kvm_xics_ops;
 
+#ifdef __KVM_HAVE_ARCH_KVM_VFIO_FORWARD
+/**
+ * kvm_arch_vfio_set_forward - changes the forwarded state of an IRQ
+ *
+ * @fwd_irq: handle to the forwarded irq struct
+ * @forward: true means forwarded, false means not forwarded
+ * returns 0 on success, < 0 on failure
+ */
+int kvm_arch_vfio_set_forward(struct kvm_fwd_irq *fwd_irq,
+			      bool forward);
+
+#else
+static inline int kvm_arch_vfio_set_forward(struct kvm_fwd_irq *fwd_irq,
+					    bool forward)
+{
+	return 0;
+}
+#endif
+
 #ifdef CONFIG_HAVE_KVM_CPU_RELAX_INTERCEPT
 static inline void kvm_vcpu_set_in_spin_loop(struct kvm_vcpu *vcpu, bool val)
diff --git a/virt/kvm/vfio.c b/virt/kvm/vfio.c
index 6f0cc34..af178bb 100644
--- a/virt/kvm/vfio.c
+++ b/virt/kvm/vfio.c
@@ -25,8 +25,16 @@ struct kvm_vfio_group {
 	struct vfio_group *vfio_group;
 };
 
+/* private linkable kvm_fwd_irq struct */
+struct kvm_vfio_fwd_irq_node {
+	struct list_head link;
+	struct kvm_fwd_irq fwd_irq;
+};
+
 struct kvm_vfio {
 	struct list_head group_list;
+	/* list of registered VFIO forwarded IRQs */
+	struct list_head fwd_node_list;
 	struct mutex lock;
 	bool noncoherent;
 };
@@ -247,12 +255,239 @@ static int kvm_vfio_set_group(struct kvm_device *dev, long attr, u64 arg)
 	return -ENXIO;
 }
 
+/**
+ * kvm_vfio_get_vfio_device - Returns a handle to a vfio-device
+ *
+ * Checks it is a valid vfio device and increments its reference counter
+ * @fd: file descriptor of the vfio platform device
+ */
+static struct vfio_device *kvm_vfio_get_vfio_device(int fd)
+{
+	struct fd f = fdget(fd);
+	struct vfio_device *vdev;
+
+	if (!f.file)
+		return NULL;
+	vdev = kvm_vfio_device_get_external_user(f.file);
+	fdput(f);
+	return vdev;
+}
+
+/**
+ * kvm_vfio_put_vfio_device - decrements the reference counter of the
+ * vfio platform device
+ *
+ * @vdev: vfio_device handle to release
+ */
+static void kvm_vfio_put_vfio_device(struct vfio_device *vdev)
+{
+	kvm_vfio_device_put_external_user(vdev);
+}
+
+/**
+ * kvm_vfio_find_fwd_irq - checks whether a forwarded IRQ already is
+ * registered in the list of forwarded IRQs
+ *
+ * @kv: handle to the kvm-vfio device
+ * @fwd: handle to the forwarded irq struct
+ * In the positive returns the handle to its node in the kvm-vfio
+ * forwarded IRQ list, returns NULL otherwise.
+ * Must be called with kv->lock held.
+ */
+static struct kvm_vfio_fwd_irq_node *kvm_vfio_find_fwd_irq(
+				struct kvm_vfio *kv,
+				struct kvm_fwd_irq *fwd)
+{
+	struct kvm_vfio_fwd_irq_node *node;
+
+	list_for_each_entry(node, &kv->fwd_node_list, link) {
+		if ((node->fwd_irq.index == fwd->index) &&
+		    (node->fwd_irq.subindex == fwd->subindex) &&
+		    (node->fwd_irq.vdev == fwd->vdev))
+			return node;
+	}
+	return NULL;
+}
+
+/**
+ * kvm_vfio_register_fwd_irq - Allocates, populates and registers a
+ * forwarded IRQ
+ *
+ * @kv: handle to the kvm-vfio device
+ * @fwd: handle to the forwarded irq struct
+ * In case of success returns a handle to the new list node,
+ * NULL otherwise.
+ * Must be called with kv->lock held.
+ */
+static struct kvm_vfio_fwd_irq_node *kvm_vfio_register_fwd_irq(
+				struct kvm_vfio *kv,
+				struct kvm_fwd_irq *fwd)
+{
+	struct kvm_vfio_fwd_irq_node *node;
+
+	node = kmalloc(sizeof(*node), GFP_KERNEL);
+	if (!node)
+		return NULL;
+
+	node->fwd_irq = *fwd;
+
+	list_add(&node->link, &kv->fwd_node_list);
+
+	return node;
+}
+
+/**
+ * kvm_vfio_unregister_fwd_irq - unregisters and frees a forwarded IRQ
+ *
+ * @node: handle to the node struct
+ * Must be called with kv->lock held.
+ */
+static void kvm_vfio_unregister_fwd_irq(struct kvm_vfio_fwd_irq_node *node)
+{
+	list_del(&node->link);
+	kfree(node);
+}
+
+/**
+ * kvm_vfio_set_forward - turns a VFIO device IRQ into a forwarded IRQ
+ * @kv: handle to the kvm-vfio device
+ * @fd: file descriptor of the vfio device the IRQ belongs to
+ * @fwd: handle to the forwarded irq struct
+ *
+ * Registers an IRQ as forwarded and calls the architecture specific
+ * implementation of set_forward. In case of operation failure, the IRQ
+ * is unregistered. In case of success, the vfio device ref counter is
+ * incremented.
+ */
+static int kvm_vfio_set_forward(struct kvm_vfio *kv, int fd,
+				struct kvm_fwd_irq *fwd)
+{
+	int ret;
+	struct kvm_vfio_fwd_irq_node *node =
+		kvm_vfio_find_fwd_irq(kv, fwd);
+
+	if (node)
+		return -EINVAL;
+	node = kvm_vfio_register_fwd_irq(kv, fwd);
+	if (!node)
+		return -ENOMEM;
+	ret = kvm_arch_vfio_set_forward(fwd, true);
+	if (ret < 0) {
+		kvm_vfio_unregister_fwd_irq(node);
+		return ret;
+	}
+	/* increment the ref counter */
+	kvm_vfio_get_vfio_device(fd);
+	return ret;
+}
+
+/**
+ * kvm_vfio_unset_forward - sets a VFIO device IRQ as non-forwarded
+ * @kv: handle to the kvm-vfio device
+ * @fwd: handle to the forwarded irq struct
+ *
+ * Calls the architecture specific implementation of set_forward and
+ * unregisters the IRQ from the forwarded IRQ list. Decrements the vfio
+ * device reference counter.
+ */
+static int kvm_vfio_unset_forward(struct kvm_vfio *kv,
+				  struct kvm_fwd_irq *fwd)
+{
+	int ret;
+	struct kvm_vfio_fwd_irq_node *node =
+		kvm_vfio_find_fwd_irq(kv, fwd);
+	if (!node)
+		return -EINVAL;
+	ret = kvm_arch_vfio_set_forward(fwd, false);
+	kvm_vfio_unregister_fwd_irq(node);
+
+	/* decrement the ref counter */
+	kvm_vfio_put_vfio_device(fwd->vdev);
+	return ret;
+}
+
+static int kvm_vfio_control_irq_forward(struct kvm_device *kdev, long attr,
+					int32_t __user *argp)
+{
+	struct kvm_arch_forwarded_irq user_fwd_irq;
+	struct kvm_fwd_irq fwd;
+	struct vfio_device *vdev;
+	struct kvm_vfio *kv = kdev->private;
+	int ret;
+
+	if (copy_from_user(&user_fwd_irq, argp, sizeof(user_fwd_irq)))
+		return -EFAULT;
+
+	vdev = kvm_vfio_get_vfio_device(user_fwd_irq.fd);
+	if (IS_ERR(vdev)) {
+		ret = PTR_ERR(vdev);
+		goto out;
+	}
+
+	fwd.vdev = vdev;
+	fwd.kvm = kdev->kvm;
+	fwd.index = user_fwd_irq.index;
+	fwd.subindex = user_fwd_irq.subindex;
+	fwd.gsi = user_fwd_irq.gsi;
+
+	switch (attr) {
+	case KVM_DEV_VFIO_DEVICE_FORWARD_IRQ:
+		mutex_lock(&kv->lock);
+		ret = kvm_vfio_set_forward(kv, user_fwd_irq.fd, &fwd);
+		mutex_unlock(&kv->lock);
+		break;
+	case KVM_DEV_VFIO_DEVICE_UNFORWARD_IRQ:
+		mutex_lock(&kv->lock);
+		ret = kvm_vfio_unset_forward(kv, &fwd);
+		mutex_unlock(&kv->lock);
+		break;
+	}
+out:
+	kvm_vfio_put_vfio_device(vdev);
+	return ret;
+}
+
+static int kvm_vfio_set_device(struct kvm_device *kdev, long attr, u64 arg)
+{
+	int32_t __user *argp = (int32_t __user *)(unsigned long)arg;
+	int ret;
+
+	switch (attr) {
+	case KVM_DEV_VFIO_DEVICE_FORWARD_IRQ:
+	case KVM_DEV_VFIO_DEVICE_UNFORWARD_IRQ:
+		ret = kvm_vfio_control_irq_forward(kdev, attr, argp);
+		break;
+	default:
+		ret = -ENXIO;
+	}
+	return ret;
+}
+
+/**
+ * kvm_vfio_clean_fwd_irq - unsets the forwarding state of all
+ * registered forwarded IRQs and frees their list nodes
+ * @kv: kvm-vfio device
+ *
+ * Loops over all registered device/IRQ combos, resets their forwarded
+ * state, empties the list and releases the references.
+ */
+static int kvm_vfio_clean_fwd_irq(struct kvm_vfio *kv)
+{
+	struct kvm_vfio_fwd_irq_node *node, *tmp;
+
+	list_for_each_entry_safe(node, tmp, &kv->fwd_node_list, link) {
+		kvm_vfio_unset_forward(kv, &node->fwd_irq);
+	}
+	return 0;
+}
+
 static int kvm_vfio_set_attr(struct kvm_device *dev,
 			     struct kvm_device_attr *attr)
 {
 	switch (attr->group) {
 	case KVM_DEV_VFIO_GROUP:
 		return kvm_vfio_set_group(dev, attr->attr, attr->addr);
+	case KVM_DEV_VFIO_DEVICE:
+		return kvm_vfio_set_device(dev, attr->attr, attr->addr);
 	}
 
 	return -ENXIO;
@@ -268,10 +503,17 @@ static int kvm_vfio_has_attr(struct kvm_device *dev,
 		case KVM_DEV_VFIO_GROUP_DEL:
 			return 0;
 		}
-
 		break;
+#ifdef __KVM_HAVE_ARCH_KVM_VFIO_FORWARD
+	case KVM_DEV_VFIO_DEVICE:
+		switch (attr->attr) {
+		case KVM_DEV_VFIO_DEVICE_FORWARD_IRQ:
+		case KVM_DEV_VFIO_DEVICE_UNFORWARD_IRQ:
+			return 0;
+		}
+		break;
+#endif
 	}
-
 	return -ENXIO;
 }
@@ -285,7 +527,7 @@ static void kvm_vfio_destroy(struct kvm_device *dev)
 		list_del(&kvg->node);
 		kfree(kvg);
 	}
-
+	kvm_vfio_clean_fwd_irq(kv);
 	kvm_vfio_update_coherency(dev);
 
 	kfree(kv);
@@ -317,6 +559,7 @@ static int kvm_vfio_create(struct kvm_device *dev, u32 type)
 		return -ENOMEM;
 
 	INIT_LIST_HEAD(&kv->group_list);
+	INIT_LIST_HEAD(&kv->fwd_node_list);
 	mutex_init(&kv->lock);
 
 	dev->private = kv;