From patchwork Tue Jul 23 16:06:32 2019
From: Shameer Kolothum
Subject: [PATCH v8 1/6] vfio/type1: Introduce iova list and add iommu aperture validity check
Date: Tue, 23 Jul 2019 17:06:32 +0100
Message-ID: <20190723160637.8384-2-shameerali.kolothum.thodi@huawei.com>
In-Reply-To: <20190723160637.8384-1-shameerali.kolothum.thodi@huawei.com>
X-Patchwork-Id: 169548

This introduces an iova list that is valid for dma mappings. Make sure
the new iommu aperture window doesn't conflict with the current one or
with any existing dma mappings during attach.

Signed-off-by: Shameer Kolothum
Reviewed-by: Eric Auger
---
v7-->v8
 -Addressed suggestions by Eric to update comments.
---
 drivers/vfio/vfio_iommu_type1.c | 184 +++++++++++++++++++++++++++++++-
 1 file changed, 181 insertions(+), 3 deletions(-)

diff --git a/drivers/vfio/vfio_iommu_type1.c b/drivers/vfio/vfio_iommu_type1.c
index 054391f30fa8..6a69652b406b 100644
--- a/drivers/vfio/vfio_iommu_type1.c
+++ b/drivers/vfio/vfio_iommu_type1.c
@@ -62,6 +62,7 @@ MODULE_PARM_DESC(dma_entry_limit,
 
 struct vfio_iommu {
 	struct list_head	domain_list;
+	struct list_head	iova_list;
 	struct vfio_domain	*external_domain; /* domain for external user */
 	struct mutex		lock;
 	struct rb_root		dma_list;
@@ -97,6 +98,12 @@ struct vfio_group {
 	bool			mdev_group;	/* An mdev group */
 };
 
+struct vfio_iova {
+	struct list_head	list;
+	dma_addr_t		start;
+	dma_addr_t		end;
+};
+
 /*
  * Guest RAM pinning working set or DMA target
  */
@@ -1388,6 +1395,146 @@ static int vfio_mdev_iommu_device(struct device *dev, void *data)
 	return 0;
 }
 
+/*
+ * This is a helper function to insert an address range to iova list.
+ * The list is initially created with a single entry corresponding to
+ * the IOMMU domain geometry to which the device group is attached.
+ * The list aperture gets modified when a new domain is added to the
+ * container if the new aperture doesn't conflict with the current one
+ * or with any existing dma mappings. The list is also modified to
+ * exclude any reserved regions associated with the device group.
+ */
+static int vfio_iommu_iova_insert(struct list_head *head,
+				  dma_addr_t start, dma_addr_t end)
+{
+	struct vfio_iova *region;
+
+	region = kmalloc(sizeof(*region), GFP_KERNEL);
+	if (!region)
+		return -ENOMEM;
+
+	INIT_LIST_HEAD(&region->list);
+	region->start = start;
+	region->end = end;
+
+	list_add_tail(&region->list, head);
+	return 0;
+}
+
+/*
+ * Check the new iommu aperture conflicts with existing aper or with any
+ * existing dma mappings.
+ */
+static bool vfio_iommu_aper_conflict(struct vfio_iommu *iommu,
+				     dma_addr_t start, dma_addr_t end)
+{
+	struct vfio_iova *first, *last;
+	struct list_head *iova = &iommu->iova_list;
+
+	if (list_empty(iova))
+		return false;
+
+	/* Disjoint sets, return conflict */
+	first = list_first_entry(iova, struct vfio_iova, list);
+	last = list_last_entry(iova, struct vfio_iova, list);
+	if (start > last->end || end < first->start)
+		return true;
+
+	/* Check for any existing dma mappings below the new start */
+	if (start > first->start) {
+		if (vfio_find_dma(iommu, first->start, start - first->start))
+			return true;
+	}
+
+	/* Check for any existing dma mappings beyond the new end */
+	if (end < last->end) {
+		if (vfio_find_dma(iommu, end + 1, last->end - end))
+			return true;
+	}
+
+	return false;
+}
+
+/*
+ * Resize iommu iova aperture window. This is called only if the new
+ * aperture has no conflict with existing aperture and dma mappings.
+ */
+static int vfio_iommu_aper_resize(struct list_head *iova,
+				  dma_addr_t start, dma_addr_t end)
+{
+	struct vfio_iova *node, *next;
+
+	if (list_empty(iova))
+		return vfio_iommu_iova_insert(iova, start, end);
+
+	/* Adjust iova list start */
+	list_for_each_entry_safe(node, next, iova, list) {
+		if (start < node->start)
+			break;
+		if (start >= node->start && start < node->end) {
+			node->start = start;
+			break;
+		}
+		/* Delete nodes before new start */
+		list_del(&node->list);
+		kfree(node);
+	}
+
+	/* Adjust iova list end */
+	list_for_each_entry_safe(node, next, iova, list) {
+		if (end > node->end)
+			continue;
+		if (end > node->start && end <= node->end) {
+			node->end = end;
+			continue;
+		}
+		/* Delete nodes after new end */
+		list_del(&node->list);
+		kfree(node);
+	}
+
+	return 0;
+}
+
+static void vfio_iommu_iova_free(struct list_head *iova)
+{
+	struct vfio_iova *n, *next;
+
+	list_for_each_entry_safe(n, next, iova, list) {
+		list_del(&n->list);
+		kfree(n);
+	}
+}
+
+static int vfio_iommu_iova_get_copy(struct vfio_iommu *iommu,
+				    struct list_head *iova_copy)
+{
+	struct list_head *iova = &iommu->iova_list;
+	struct vfio_iova *n;
+	int ret;
+
+	list_for_each_entry(n, iova, list) {
+		ret = vfio_iommu_iova_insert(iova_copy, n->start, n->end);
+		if (ret)
+			goto out_free;
+	}
+
+	return 0;
+
+out_free:
+	vfio_iommu_iova_free(iova_copy);
+	return ret;
+}
+
+static void vfio_iommu_iova_insert_copy(struct vfio_iommu *iommu,
+					struct list_head *iova_copy)
+{
+	struct list_head *iova = &iommu->iova_list;
+
+	vfio_iommu_iova_free(iova);
+
+	list_splice_tail(iova_copy, iova);
+}
+
 static int vfio_iommu_type1_attach_group(void *iommu_data,
 					 struct iommu_group *iommu_group)
 {
@@ -1398,6 +1545,8 @@ static int vfio_iommu_type1_attach_group(void *iommu_data,
 	int ret;
 	bool resv_msi, msi_remap;
 	phys_addr_t resv_msi_base;
+	struct iommu_domain_geometry geo;
+	LIST_HEAD(iova_copy);
 
 	mutex_lock(&iommu->lock);
 
@@ -1474,6 +1623,29 @@ static int vfio_iommu_type1_attach_group(void *iommu_data,
 	if (ret)
 		goto out_domain;
 
+	/* Get aperture info */
+	iommu_domain_get_attr(domain->domain, DOMAIN_ATTR_GEOMETRY, &geo);
+
+	if (vfio_iommu_aper_conflict(iommu, geo.aperture_start,
+				     geo.aperture_end)) {
+		ret = -EINVAL;
+		goto out_detach;
+	}
+
+	/*
+	 * We don't want to work on the original iova list as the list
+	 * gets modified and in case of failure we have to retain the
+	 * original list. Get a copy here.
+	 */
+	ret = vfio_iommu_iova_get_copy(iommu, &iova_copy);
+	if (ret)
+		goto out_detach;
+
+	ret = vfio_iommu_aper_resize(&iova_copy, geo.aperture_start,
+				     geo.aperture_end);
+	if (ret)
+		goto out_detach;
+
 	resv_msi = vfio_iommu_has_sw_msi(iommu_group, &resv_msi_base);
 
 	INIT_LIST_HEAD(&domain->group_list);
@@ -1507,8 +1679,7 @@ static int vfio_iommu_type1_attach_group(void *iommu_data,
 			list_add(&group->next, &d->group_list);
 			iommu_domain_free(domain->domain);
 			kfree(domain);
-			mutex_unlock(&iommu->lock);
-			return 0;
+			goto done;
 		}
 
 		ret = vfio_iommu_attach_group(domain, group);
@@ -1531,7 +1702,9 @@ static int vfio_iommu_type1_attach_group(void *iommu_data,
 	}
 
 	list_add(&domain->next, &iommu->domain_list);
-
+done:
+	/* Delete the old one and insert new iova list */
+	vfio_iommu_iova_insert_copy(iommu, &iova_copy);
 	mutex_unlock(&iommu->lock);
 
 	return 0;
@@ -1540,6 +1713,7 @@ static int vfio_iommu_type1_attach_group(void *iommu_data,
 	vfio_iommu_detach_group(domain, group);
 out_domain:
 	iommu_domain_free(domain->domain);
+	vfio_iommu_iova_free(&iova_copy);
 out_free:
 	kfree(domain);
 	kfree(group);
@@ -1679,6 +1853,7 @@ static void *vfio_iommu_type1_open(unsigned long arg)
 	}
 
 	INIT_LIST_HEAD(&iommu->domain_list);
+	INIT_LIST_HEAD(&iommu->iova_list);
 	iommu->dma_list = RB_ROOT;
 	iommu->dma_avail = dma_entry_limit;
 	mutex_init(&iommu->lock);
@@ -1722,6 +1897,9 @@ static void vfio_iommu_type1_release(void *iommu_data)
 		list_del(&domain->next);
 		kfree(domain);
 	}
+
+	vfio_iommu_iova_free(&iommu->iova_list);
+
 	kfree(iommu);
 }
--
2.17.1
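The attach-time rule above is easiest to read as interval arithmetic: the new domain's geometry must overlap the container's current aperture, and no existing DMA mapping may be left outside the narrowed window. The stand-alone sketch below models that check with plain arrays instead of the kernel's linked list and red-black tree; all names and types here are illustrative and not part of the patch.

#include <stdbool.h>
#include <stdint.h>
#include <stddef.h>

struct range { uint64_t start, end; };	/* inclusive bounds */

/*
 * Model of the attach-time check: 'valid' is the current iova list
 * (sorted, non-overlapping), 'maps' are existing DMA mappings, and
 * [new_start, new_end] is the aperture of the domain being attached.
 * Returns true when the attach must be rejected.
 */
static bool aper_conflict(const struct range *valid, size_t nvalid,
			  const struct range *maps, size_t nmaps,
			  uint64_t new_start, uint64_t new_end)
{
	size_t i;

	if (!nvalid)
		return false;

	/* Disjoint apertures cannot be merged */
	if (new_start > valid[nvalid - 1].end || new_end < valid[0].start)
		return true;

	/* Any mapping that would fall outside the shrunk window is fatal */
	for (i = 0; i < nmaps; i++)
		if (maps[i].start < new_start || maps[i].end > new_end)
			return true;

	return false;
}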
From patchwork Tue Jul 23 16:06:33 2019
From: Shameer Kolothum
Subject: [PATCH v8 2/6] vfio/type1: Check reserved region conflict and update iova list
Date: Tue, 23 Jul 2019 17:06:33 +0100
Message-ID: <20190723160637.8384-3-shameerali.kolothum.thodi@huawei.com>
In-Reply-To: <20190723160637.8384-1-shameerali.kolothum.thodi@huawei.com>
X-Patchwork-Id: 169547

This retrieves the reserved regions associated with the device group
and checks for conflicts with any existing dma mappings. Also update
the iova list to exclude the reserved regions.

Reserved regions with type IOMMU_RESV_DIRECT_RELAXABLE are excluded
from the above checks as they are considered directly mapped regions
that are known to be relaxable.

Signed-off-by: Shameer Kolothum
Reviewed-by: Eric Auger
---
v7-->v8
 -Added check for iommu_get_group_resv_regions() error ret.
---
 drivers/vfio/vfio_iommu_type1.c | 98 +++++++++++++++++++++++++++++++++
 1 file changed, 98 insertions(+)

diff --git a/drivers/vfio/vfio_iommu_type1.c b/drivers/vfio/vfio_iommu_type1.c
index 6a69652b406b..a3c9794ccf83 100644
--- a/drivers/vfio/vfio_iommu_type1.c
+++ b/drivers/vfio/vfio_iommu_type1.c
@@ -1496,6 +1496,88 @@ static int vfio_iommu_aper_resize(struct list_head *iova,
 	return 0;
 }
 
+/*
+ * Check reserved region conflicts with existing dma mappings
+ */
+static bool vfio_iommu_resv_conflict(struct vfio_iommu *iommu,
+				     struct list_head *resv_regions)
+{
+	struct iommu_resv_region *region;
+
+	/* Check for conflict with existing dma mappings */
+	list_for_each_entry(region, resv_regions, list) {
+		if (region->type == IOMMU_RESV_DIRECT_RELAXABLE)
+			continue;
+
+		if (vfio_find_dma(iommu, region->start, region->length))
+			return true;
+	}
+
+	return false;
+}
+
+/*
+ * Check iova region overlap with reserved regions and
+ * exclude them from the iommu iova range
+ */
+static int vfio_iommu_resv_exclude(struct list_head *iova,
+				   struct list_head *resv_regions)
+{
+	struct iommu_resv_region *resv;
+	struct vfio_iova *n, *next;
+
+	list_for_each_entry(resv, resv_regions, list) {
+		phys_addr_t start, end;
+
+		if (resv->type == IOMMU_RESV_DIRECT_RELAXABLE)
+			continue;
+
+		start = resv->start;
+		end = resv->start + resv->length - 1;
+
+		list_for_each_entry_safe(n, next, iova, list) {
+			int ret = 0;
+
+			/* No overlap */
+			if (start > n->end || end < n->start)
+				continue;
+			/*
+			 * Insert a new node if current node overlaps with the
+			 * reserve region to exlude that from valid iova range.
+			 * Note that, new node is inserted before the current
+			 * node and finally the current node is deleted keeping
+			 * the list updated and sorted.
+			 */
+			if (start > n->start)
+				ret = vfio_iommu_iova_insert(&n->list, n->start,
+							     start - 1);
+			if (!ret && end < n->end)
+				ret = vfio_iommu_iova_insert(&n->list, end + 1,
+							     n->end);
+			if (ret)
+				return ret;
+
+			list_del(&n->list);
+			kfree(n);
+		}
+	}
+
+	if (list_empty(iova))
+		return -EINVAL;
+
+	return 0;
+}
+
+static void vfio_iommu_resv_free(struct list_head *resv_regions)
+{
+	struct iommu_resv_region *n, *next;
+
+	list_for_each_entry_safe(n, next, resv_regions, list) {
+		list_del(&n->list);
+		kfree(n);
+	}
+}
+
 static void vfio_iommu_iova_free(struct list_head *iova)
 {
 	struct vfio_iova *n, *next;
@@ -1547,6 +1629,7 @@ static int vfio_iommu_type1_attach_group(void *iommu_data,
 	phys_addr_t resv_msi_base;
 	struct iommu_domain_geometry geo;
 	LIST_HEAD(iova_copy);
+	LIST_HEAD(group_resv_regions);
 
 	mutex_lock(&iommu->lock);
 
@@ -1632,6 +1715,15 @@ static int vfio_iommu_type1_attach_group(void *iommu_data,
 		goto out_detach;
 	}
 
+	ret = iommu_get_group_resv_regions(iommu_group, &group_resv_regions);
+	if (ret)
+		goto out_detach;
+
+	if (vfio_iommu_resv_conflict(iommu, &group_resv_regions)) {
+		ret = -EINVAL;
+		goto out_detach;
+	}
+
 	/*
 	 * We don't want to work on the original iova list as the list
 	 * gets modified and in case of failure we have to retain the
@@ -1646,6 +1738,10 @@ static int vfio_iommu_type1_attach_group(void *iommu_data,
 	if (ret)
 		goto out_detach;
 
+	ret = vfio_iommu_resv_exclude(&iova_copy, &group_resv_regions);
+	if (ret)
+		goto out_detach;
+
 	resv_msi = vfio_iommu_has_sw_msi(iommu_group, &resv_msi_base);
 
 	INIT_LIST_HEAD(&domain->group_list);
@@ -1706,6 +1802,7 @@ static int vfio_iommu_type1_attach_group(void *iommu_data,
 	/* Delete the old one and insert new iova list */
 	vfio_iommu_iova_insert_copy(iommu, &iova_copy);
 	mutex_unlock(&iommu->lock);
+	vfio_iommu_resv_free(&group_resv_regions);
 
 	return 0;
 
@@ -1714,6 +1811,7 @@ static int vfio_iommu_type1_attach_group(void *iommu_data,
 out_domain:
 	iommu_domain_free(domain->domain);
 	vfio_iommu_iova_free(&iova_copy);
+	vfio_iommu_resv_free(&group_resv_regions);
 out_free:
 	kfree(domain);
 	kfree(group);
--
2.17.1
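The reserved-region handling above is interval subtraction: a reserved range either trims a valid node or splits it in two, which is why vfio_iommu_iova_insert() may be called twice per node before the node itself is deleted. A simplified, user-space model of that single-node step follows; the types and the helper name are illustrative only, not the kernel list code.

#include <stdint.h>
#include <stddef.h>

struct range { uint64_t start, end; };	/* inclusive bounds */

/*
 * Subtract reserved region 'resv' from valid range 'node'.
 * Writes up to two surviving sub-ranges into 'out' and returns
 * how many there are (0, 1 or 2).
 */
static size_t exclude_resv(struct range node, struct range resv,
			   struct range out[2])
{
	size_t n = 0;

	/* No overlap: the node survives untouched */
	if (resv.start > node.end || resv.end < node.start) {
		out[n++] = node;
		return n;
	}

	/* Keep whatever lies below and above the reserved region */
	if (resv.start > node.start)
		out[n++] = (struct range){ node.start, resv.start - 1 };
	if (resv.end < node.end)
		out[n++] = (struct range){ resv.end + 1, node.end };

	return n;
}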
From patchwork Tue Jul 23 16:06:34 2019
From: Shameer Kolothum
Subject: [PATCH v8 3/6] vfio/type1: Update iova list on detach
Date: Tue, 23 Jul 2019 17:06:34 +0100
Message-ID: <20190723160637.8384-4-shameerali.kolothum.thodi@huawei.com>
In-Reply-To: <20190723160637.8384-1-shameerali.kolothum.thodi@huawei.com>
X-Patchwork-Id: 169546

Get a copy of the iova list on group detach and try to update the
list. On success, replace the current one with the copy. Leave the
list as it is if the update fails.

Signed-off-by: Shameer Kolothum
Reviewed-by: Eric Auger
---
v7 --> v8
 -Fixed possible invalid holes in the iova list if there are no more
  reserved regions in vfio_iommu_resv_refresh().
 -Handled the iommu_get_group_resv_regions() error case in
  vfio_iommu_resv_refresh().
 -Tidied up the iova_copy list failure case.
---
 drivers/vfio/vfio_iommu_type1.c | 94 +++++++++++++++++++++++++++++++++
 1 file changed, 94 insertions(+)

diff --git a/drivers/vfio/vfio_iommu_type1.c b/drivers/vfio/vfio_iommu_type1.c
index a3c9794ccf83..7005a8cfca1b 100644
--- a/drivers/vfio/vfio_iommu_type1.c
+++ b/drivers/vfio/vfio_iommu_type1.c
@@ -1867,12 +1867,93 @@ static void vfio_sanity_check_pfn_list(struct vfio_iommu *iommu)
 	WARN_ON(iommu->notifier.head);
 }
 
+/*
+ * Called when a domain is removed in detach. It is possible that
+ * the removed domain decided the iova aperture window. Modify the
+ * iova aperture with the smallest window among existing domains.
+ */
+static void vfio_iommu_aper_expand(struct vfio_iommu *iommu,
+				   struct list_head *iova_copy)
+{
+	struct vfio_domain *domain;
+	struct iommu_domain_geometry geo;
+	struct vfio_iova *node;
+	dma_addr_t start = 0;
+	dma_addr_t end = (dma_addr_t)~0;
+
+	if (list_empty(iova_copy))
+		return;
+
+	list_for_each_entry(domain, &iommu->domain_list, next) {
+		iommu_domain_get_attr(domain->domain, DOMAIN_ATTR_GEOMETRY,
+				      &geo);
+		if (geo.aperture_start > start)
+			start = geo.aperture_start;
+		if (geo.aperture_end < end)
+			end = geo.aperture_end;
+	}
+
+	/* Modify aperture limits. The new aper is either same or bigger */
+	node = list_first_entry(iova_copy, struct vfio_iova, list);
+	node->start = start;
+	node = list_last_entry(iova_copy, struct vfio_iova, list);
+	node->end = end;
+}
+
+/*
+ * Called when a group is detached. The reserved regions for that
+ * group can be part of valid iova now. But since reserved regions
+ * may be duplicated among groups, populate the iova valid regions
+ * list again.
+ */
+static int vfio_iommu_resv_refresh(struct vfio_iommu *iommu,
+				   struct list_head *iova_copy)
+{
+	struct vfio_domain *d;
+	struct vfio_group *g;
+	struct vfio_iova *node;
+	dma_addr_t start, end;
+	LIST_HEAD(resv_regions);
+	int ret;
+
+	if (list_empty(iova_copy))
+		return -EINVAL;
+
+	list_for_each_entry(d, &iommu->domain_list, next) {
+		list_for_each_entry(g, &d->group_list, next) {
+			ret = iommu_get_group_resv_regions(g->iommu_group,
+							   &resv_regions);
+			if (ret)
+				goto done;
+		}
+	}
+
+	node = list_first_entry(iova_copy, struct vfio_iova, list);
+	start = node->start;
+	node = list_last_entry(iova_copy, struct vfio_iova, list);
+	end = node->end;
+
+	/* purge the iova list and create new one */
+	vfio_iommu_iova_free(iova_copy);
+
+	ret = vfio_iommu_aper_resize(iova_copy, start, end);
+	if (ret)
+		goto done;
+
+	/* Exclude current reserved regions from iova ranges */
+	ret = vfio_iommu_resv_exclude(iova_copy, &resv_regions);
+done:
+	vfio_iommu_resv_free(&resv_regions);
+	return ret;
+}
+
 static void vfio_iommu_type1_detach_group(void *iommu_data,
 					  struct iommu_group *iommu_group)
 {
 	struct vfio_iommu *iommu = iommu_data;
 	struct vfio_domain *domain;
 	struct vfio_group *group;
+	LIST_HEAD(iova_copy);
 
 	mutex_lock(&iommu->lock);
 
@@ -1895,6 +1976,13 @@ static void vfio_iommu_type1_detach_group(void *iommu_data,
 		}
 	}
 
+	/*
+	 * Get a copy of iova list. This will be used to update
+	 * and to replace the current one later. Please note that
+	 * we will leave the original list as it is if update fails.
+	 */
+	vfio_iommu_iova_get_copy(iommu, &iova_copy);
+
 	list_for_each_entry(domain, &iommu->domain_list, next) {
 		group = find_iommu_group(domain, iommu_group);
 		if (!group)
@@ -1920,10 +2008,16 @@ static void vfio_iommu_type1_detach_group(void *iommu_data,
 			iommu_domain_free(domain->domain);
 			list_del(&domain->next);
 			kfree(domain);
+			vfio_iommu_aper_expand(iommu, &iova_copy);
 		}
 		break;
 	}
 
+	if (!vfio_iommu_resv_refresh(iommu, &iova_copy))
+		vfio_iommu_iova_insert_copy(iommu, &iova_copy);
+	else
+		vfio_iommu_iova_free(&iova_copy);
+
 detach_group_done:
 	mutex_unlock(&iommu->lock);
 }
--
2.17.1
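On detach the aperture can only stay the same or grow: vfio_iommu_aper_expand() intersects the geometries of the domains that remain attached, i.e. the largest aperture_start and the smallest aperture_end win. A minimal model of that computation over an array of remaining geometries is sketched below; it is illustrative only, not the kernel code.

#include <stdint.h>
#include <stddef.h>

struct geometry { uint64_t aperture_start, aperture_end; };

/*
 * Intersect the geometries of the domains that remain attached.
 * With no remaining hardware domains the window is unrestricted.
 */
static struct geometry remaining_aperture(const struct geometry *doms,
					  size_t ndoms)
{
	struct geometry win = { 0, UINT64_MAX };
	size_t i;

	for (i = 0; i < ndoms; i++) {
		if (doms[i].aperture_start > win.aperture_start)
			win.aperture_start = doms[i].aperture_start;
		if (doms[i].aperture_end < win.aperture_end)
			win.aperture_end = doms[i].aperture_end;
	}
	return win;
}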
From patchwork Tue Jul 23 16:06:35 2019
From: Shameer Kolothum
Subject: [PATCH v8 4/6] vfio/type1: check dma map request is within a valid iova range
Date: Tue, 23 Jul 2019 17:06:35 +0100
Message-ID: <20190723160637.8384-5-shameerali.kolothum.thodi@huawei.com>
In-Reply-To: <20190723160637.8384-1-shameerali.kolothum.thodi@huawei.com>
X-Patchwork-Id: 169550

This checks and rejects any dma map request outside the valid iova
range.

Signed-off-by: Shameer Kolothum
Reviewed-by: Eric Auger
---
 drivers/vfio/vfio_iommu_type1.c | 26 ++++++++++++++++++++++++++
 1 file changed, 26 insertions(+)

diff --git a/drivers/vfio/vfio_iommu_type1.c b/drivers/vfio/vfio_iommu_type1.c
index 7005a8cfca1b..56cf55776d6c 100644
--- a/drivers/vfio/vfio_iommu_type1.c
+++ b/drivers/vfio/vfio_iommu_type1.c
@@ -1038,6 +1038,27 @@ static int vfio_pin_map_dma(struct vfio_iommu *iommu, struct vfio_dma *dma,
 	return ret;
 }
 
+/*
+ * Check dma map request is within a valid iova range
+ */
+static bool vfio_iommu_iova_dma_valid(struct vfio_iommu *iommu,
+				      dma_addr_t start, dma_addr_t end)
+{
+	struct list_head *iova = &iommu->iova_list;
+	struct vfio_iova *node;
+
+	list_for_each_entry(node, iova, list) {
+		if (start >= node->start && end <= node->end)
+			return true;
+	}
+
+	/*
+	 * Check for list_empty() as well since a container with
+	 * a single mdev device will have an empty list.
+	 */
+	return list_empty(iova);
+}
+
 static int vfio_dma_do_map(struct vfio_iommu *iommu,
 			   struct vfio_iommu_type1_dma_map *map)
 {
@@ -1081,6 +1102,11 @@ static int vfio_dma_do_map(struct vfio_iommu *iommu,
 		goto out_unlock;
 	}
 
+	if (!vfio_iommu_iova_dma_valid(iommu, iova, iova + size - 1)) {
+		ret = -EINVAL;
+		goto out_unlock;
+	}
+
 	dma = kzalloc(sizeof(*dma), GFP_KERNEL);
 	if (!dma) {
 		ret = -ENOMEM;
--
2.17.1
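From user space, the effect of this patch is that a VFIO_IOMMU_MAP_DMA request whose [iova, iova + size - 1] window is not fully contained in one of the valid ranges now fails with -EINVAL. A hedged illustration of such a request follows; the container fd is assumed to be set up elsewhere, and the helper itself is not part of the series.

#include <stdint.h>
#include <sys/ioctl.h>
#include <sys/mman.h>
#include <linux/vfio.h>

/*
 * Illustrative only: map one anonymous page at a caller-chosen IOVA.
 * With this patch, an IOVA outside the container's valid ranges makes
 * the ioctl fail with errno == EINVAL instead of being accepted.
 */
static int try_map_page(int container_fd, uint64_t iova)
{
	struct vfio_iommu_type1_dma_map map = {
		.argsz = sizeof(map),
		.flags = VFIO_DMA_MAP_FLAG_READ | VFIO_DMA_MAP_FLAG_WRITE,
		.iova = iova,
		.size = 4096,
	};
	void *buf = mmap(NULL, 4096, PROT_READ | PROT_WRITE,
			 MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);

	if (buf == MAP_FAILED)
		return -1;

	map.vaddr = (uintptr_t)buf;
	return ioctl(container_fd, VFIO_IOMMU_MAP_DMA, &map);
}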
From patchwork Tue Jul 23 16:06:36 2019
From: Shameer Kolothum
Subject: [PATCH v8 5/6] vfio/type1: Add IOVA range capability support
Date: Tue, 23 Jul 2019 17:06:36 +0100
Message-ID: <20190723160637.8384-6-shameerali.kolothum.thodi@huawei.com>
In-Reply-To: <20190723160637.8384-1-shameerali.kolothum.thodi@huawei.com>
X-Patchwork-Id: 169549

This allows user-space to retrieve the supported IOVA range(s),
excluding any non-relaxable reserved regions. The implementation is
based on capability chains, added to the VFIO_IOMMU_GET_INFO ioctl.

Signed-off-by: Shameer Kolothum
Reviewed-by: Eric Auger
---
 drivers/vfio/vfio_iommu_type1.c | 101 ++++++++++++++++++++++++++++++++
 include/uapi/linux/vfio.h       |  26 +++++++-
 2 files changed, 126 insertions(+), 1 deletion(-)

diff --git a/drivers/vfio/vfio_iommu_type1.c b/drivers/vfio/vfio_iommu_type1.c
index 56cf55776d6c..d0c5e768acb7 100644
--- a/drivers/vfio/vfio_iommu_type1.c
+++ b/drivers/vfio/vfio_iommu_type1.c
@@ -2138,6 +2138,73 @@ static int vfio_domains_have_iommu_cache(struct vfio_iommu *iommu)
 	return ret;
 }
 
+static int vfio_iommu_iova_add_cap(struct vfio_info_cap *caps,
+		 struct vfio_iommu_type1_info_cap_iova_range *cap_iovas,
+		 size_t size)
+{
+	struct vfio_info_cap_header *header;
+	struct vfio_iommu_type1_info_cap_iova_range *iova_cap;
+
+	header = vfio_info_cap_add(caps, size,
+				   VFIO_IOMMU_TYPE1_INFO_CAP_IOVA_RANGE, 1);
+	if (IS_ERR(header))
+		return PTR_ERR(header);
+
+	iova_cap = container_of(header,
+				struct vfio_iommu_type1_info_cap_iova_range,
+				header);
+	iova_cap->nr_iovas = cap_iovas->nr_iovas;
+	memcpy(iova_cap->iova_ranges, cap_iovas->iova_ranges,
+	       cap_iovas->nr_iovas * sizeof(*cap_iovas->iova_ranges));
+	return 0;
+}
+
+static int vfio_iommu_iova_build_caps(struct vfio_iommu *iommu,
+				      struct vfio_info_cap *caps)
+{
+	struct vfio_iommu_type1_info_cap_iova_range *cap_iovas;
+	struct vfio_iova *iova;
+	size_t size;
+	int iovas = 0, i = 0, ret;
+
+	mutex_lock(&iommu->lock);
+
+	list_for_each_entry(iova, &iommu->iova_list, list)
+		iovas++;
+
+	if (!iovas) {
+		/*
+		 * Return 0 as a container with a single mdev device
+		 * will have an empty list
+		 */
+		ret = 0;
+		goto out_unlock;
+	}
+
+	size = sizeof(*cap_iovas) + (iovas * sizeof(*cap_iovas->iova_ranges));
+
+	cap_iovas = kzalloc(size, GFP_KERNEL);
+	if (!cap_iovas) {
+		ret = -ENOMEM;
+		goto out_unlock;
+	}
+
+	cap_iovas->nr_iovas = iovas;
+
+	list_for_each_entry(iova, &iommu->iova_list, list) {
+		cap_iovas->iova_ranges[i].start = iova->start;
+		cap_iovas->iova_ranges[i].end = iova->end;
+		i++;
+	}
+
+	ret = vfio_iommu_iova_add_cap(caps, cap_iovas, size);
+
+	kfree(cap_iovas);
+out_unlock:
+	mutex_unlock(&iommu->lock);
+	return ret;
+}
+
 static long vfio_iommu_type1_ioctl(void *iommu_data,
 				   unsigned int cmd, unsigned long arg)
 {
@@ -2159,19 +2226,53 @@ static long vfio_iommu_type1_ioctl(void *iommu_data,
 		}
 	} else if (cmd == VFIO_IOMMU_GET_INFO) {
 		struct vfio_iommu_type1_info info;
+		struct vfio_info_cap caps = { .buf = NULL, .size = 0 };
+		unsigned long capsz;
+		int ret;
 
 		minsz = offsetofend(struct vfio_iommu_type1_info, iova_pgsizes);
 
+		/* For backward compatibility, cannot require this */
+		capsz = offsetofend(struct vfio_iommu_type1_info, cap_offset);
+
 		if (copy_from_user(&info, (void __user *)arg, minsz))
 			return -EFAULT;
 
 		if (info.argsz < minsz)
 			return -EINVAL;
 
+		if (info.argsz >= capsz) {
+			minsz = capsz;
+			info.cap_offset = 0; /* output, no-recopy necessary */
+		}
+
 		info.flags = VFIO_IOMMU_INFO_PGSIZES;
 
 		info.iova_pgsizes = vfio_pgsize_bitmap(iommu);
 
+		ret = vfio_iommu_iova_build_caps(iommu, &caps);
+		if (ret)
+			return ret;
+
+		if (caps.size) {
+			info.flags |= VFIO_IOMMU_INFO_CAPS;
+
+			if (info.argsz < sizeof(info) + caps.size) {
+				info.argsz = sizeof(info) + caps.size;
+			} else {
+				vfio_info_cap_shift(&caps, sizeof(info));
+				if (copy_to_user((void __user *)arg +
+						sizeof(info), caps.buf,
+						caps.size)) {
+					kfree(caps.buf);
+					return -EFAULT;
+				}
+				info.cap_offset = sizeof(info);
+			}
+
+			kfree(caps.buf);
+		}
+
 		return copy_to_user((void __user *)arg, &info, minsz) ?
 			-EFAULT : 0;
diff --git a/include/uapi/linux/vfio.h b/include/uapi/linux/vfio.h
index 8f10748dac79..1259dccd09d2 100644
--- a/include/uapi/linux/vfio.h
+++ b/include/uapi/linux/vfio.h
@@ -714,7 +714,31 @@ struct vfio_iommu_type1_info {
 	__u32	argsz;
 	__u32	flags;
 #define VFIO_IOMMU_INFO_PGSIZES (1 << 0)	/* supported page sizes info */
-	__u64	iova_pgsizes;		/* Bitmap of supported page sizes */
+#define VFIO_IOMMU_INFO_CAPS	(1 << 1)	/* Info supports caps */
+	__u64	iova_pgsizes;	/* Bitmap of supported page sizes */
+	__u32	cap_offset;	/* Offset within info struct of first cap */
+};
+
+/*
+ * The IOVA capability allows to report the valid IOVA range(s)
+ * excluding any non-relaxable reserved regions exposed by
+ * devices attached to the container. Any DMA map attempt
+ * outside the valid iova range will return error.
+ *
+ * The structures below define version 1 of this capability.
+ */
+#define VFIO_IOMMU_TYPE1_INFO_CAP_IOVA_RANGE  1
+
+struct vfio_iova_range {
+	__u64	start;
+	__u64	end;
+};
+
+struct vfio_iommu_type1_info_cap_iova_range {
+	struct	vfio_info_cap_header header;
+	__u32	nr_iovas;
+	__u32	reserved;
+	struct	vfio_iova_range iova_ranges[];
 };
 
 #define VFIO_IOMMU_GET_INFO _IO(VFIO_TYPE, VFIO_BASE + 12)
--
2.17.1
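A user-space consumer finds the new capability by calling VFIO_IOMMU_GET_INFO once to learn the required argsz, reallocating, and then walking the capability chain from cap_offset. The sketch below is a minimal, illustrative reader with error handling trimmed; it assumes a kernel with this series applied and a <linux/vfio.h> that exports the new structures.

#include <stdio.h>
#include <stdlib.h>
#include <sys/ioctl.h>
#include <linux/vfio.h>

/* Print the valid IOVA ranges advertised by a VFIO type1 container. */
static void print_iova_ranges(int container_fd)
{
	struct vfio_iommu_type1_info hdr = { .argsz = sizeof(hdr) };
	struct vfio_iommu_type1_info *info;
	struct vfio_info_cap_header *cap;
	__u32 i;

	/* First call reports how much space the capability chain needs */
	if (ioctl(container_fd, VFIO_IOMMU_GET_INFO, &hdr))
		return;

	info = calloc(1, hdr.argsz);
	if (!info)
		return;
	info->argsz = hdr.argsz;
	if (ioctl(container_fd, VFIO_IOMMU_GET_INFO, info) ||
	    !(info->flags & VFIO_IOMMU_INFO_CAPS) || !info->cap_offset)
		goto out;

	/* Walk the capability chain looking for the IOVA range capability */
	for (cap = (void *)((char *)info + info->cap_offset); ;
	     cap = (void *)((char *)info + cap->next)) {
		if (cap->id == VFIO_IOMMU_TYPE1_INFO_CAP_IOVA_RANGE) {
			struct vfio_iommu_type1_info_cap_iova_range *r =
				(void *)cap;

			for (i = 0; i < r->nr_iovas; i++)
				printf("iova range %u: 0x%llx - 0x%llx\n", i,
				       (unsigned long long)r->iova_ranges[i].start,
				       (unsigned long long)r->iova_ranges[i].end);
			break;
		}
		if (!cap->next)
			break;
	}
out:
	free(info);
}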
From patchwork Tue Jul 23 16:06:37 2019
From: Shameer Kolothum
Subject: [PATCH v8 6/6] vfio/type1: remove duplicate retrieval of reserved regions
Date: Tue, 23 Jul 2019 17:06:37 +0100
Message-ID: <20190723160637.8384-7-shameerali.kolothum.thodi@huawei.com>
In-Reply-To: <20190723160637.8384-1-shameerali.kolothum.thodi@huawei.com>
X-Patchwork-Id: 169551

As we now already have the reserved regions list, just pass it to
vfio_iommu_has_sw_msi().

Signed-off-by: Shameer Kolothum
Reviewed-by: Eric Auger
---
 drivers/vfio/vfio_iommu_type1.c | 15 ++++++---------
 1 file changed, 6 insertions(+), 9 deletions(-)

diff --git a/drivers/vfio/vfio_iommu_type1.c b/drivers/vfio/vfio_iommu_type1.c
index d0c5e768acb7..a68405f24fbf 100644
--- a/drivers/vfio/vfio_iommu_type1.c
+++ b/drivers/vfio/vfio_iommu_type1.c
@@ -1296,15 +1296,13 @@ static struct vfio_group *find_iommu_group(struct vfio_domain *domain,
 	return NULL;
 }
 
-static bool vfio_iommu_has_sw_msi(struct iommu_group *group, phys_addr_t *base)
+static bool vfio_iommu_has_sw_msi(struct list_head *group_resv_regions,
+				  phys_addr_t *base)
 {
-	struct list_head group_resv_regions;
-	struct iommu_resv_region *region, *next;
+	struct iommu_resv_region *region;
 	bool ret = false;
 
-	INIT_LIST_HEAD(&group_resv_regions);
-	iommu_get_group_resv_regions(group, &group_resv_regions);
-	list_for_each_entry(region, &group_resv_regions, list) {
+	list_for_each_entry(region, group_resv_regions, list) {
 		/*
 		 * The presence of any 'real' MSI regions should take
 		 * precedence over the software-managed one if the
@@ -1320,8 +1318,7 @@ static bool vfio_iommu_has_sw_msi(struct iommu_group *group, phys_addr_t *base)
 			ret = true;
 		}
 	}
-	list_for_each_entry_safe(region, next, &group_resv_regions, list)
-		kfree(region);
+
 	return ret;
 }
 
@@ -1768,7 +1765,7 @@ static int vfio_iommu_type1_attach_group(void *iommu_data,
 	if (ret)
 		goto out_detach;
 
-	resv_msi = vfio_iommu_has_sw_msi(iommu_group, &resv_msi_base);
+	resv_msi = vfio_iommu_has_sw_msi(&group_resv_regions, &resv_msi_base);
 
 	INIT_LIST_HEAD(&domain->group_list);
 	list_add(&group->next, &domain->group_list);
--
2.17.1