From patchwork Wed Jun 26 15:12:43 2019
X-Patchwork-Submitter: Shameerali Kolothum Thodi
X-Patchwork-Id: 167832
From: Shameer Kolothum
Subject: [PATCH v7 1/6] vfio/type1: Introduce iova list and add iommu aperture validity check
Date: Wed, 26 Jun 2019 16:12:43 +0100
Message-ID: <20190626151248.11776-2-shameerali.kolothum.thodi@huawei.com>
List-ID: X-Mailing-List: linux-kernel@vger.kernel.org

This introduces an iova list that is valid for dma mappings.
Make sure the new iommu aperture window doesn't conflict with the
current one or with any existing dma mappings during attach.

Signed-off-by: Shameer Kolothum
---
 drivers/vfio/vfio_iommu_type1.c | 181 +++++++++++++++++++++++++++++++-
 1 file changed, 177 insertions(+), 4 deletions(-)

-- 
2.17.1

diff --git a/drivers/vfio/vfio_iommu_type1.c b/drivers/vfio/vfio_iommu_type1.c
index add34adfadc7..970d1ec06aed 100644
--- a/drivers/vfio/vfio_iommu_type1.c
+++ b/drivers/vfio/vfio_iommu_type1.c
@@ -1,4 +1,3 @@
-// SPDX-License-Identifier: GPL-2.0-only
 /*
  * VFIO: IOMMU DMA mapping support for Type1 IOMMU
  *
@@ -62,6 +61,7 @@ MODULE_PARM_DESC(dma_entry_limit,
 struct vfio_iommu {
 	struct list_head	domain_list;
+	struct list_head	iova_list;
 	struct vfio_domain	*external_domain; /* domain for external user */
 	struct mutex		lock;
 	struct rb_root		dma_list;
@@ -97,6 +97,12 @@ struct vfio_group {
 	bool			mdev_group;	/* An mdev group */
 };
 
+struct vfio_iova {
+	struct list_head	list;
+	dma_addr_t		start;
+	dma_addr_t		end;
+};
+
 /*
  * Guest RAM pinning working set or DMA target
  */
@@ -1401,6 +1407,146 @@ static int vfio_mdev_iommu_device(struct device *dev, void *data)
 	return 0;
 }
 
+/*
+ * This is a helper function to insert an address range to iova list.
+ * The list starts with a single entry corresponding to the IOMMU
+ * domain geometry to which the device group is attached. The list
+ * aperture gets modified when a new domain is added to the container
+ * if the new aperture doesn't conflict with the current one or with
+ * any existing dma mappings. The list is also modified to exclude
+ * any reserved regions associated with the device group.
+ */
+static int vfio_iommu_iova_insert(struct list_head *head,
+				  dma_addr_t start, dma_addr_t end)
+{
+	struct vfio_iova *region;
+
+	region = kmalloc(sizeof(*region), GFP_KERNEL);
+	if (!region)
+		return -ENOMEM;
+
+	INIT_LIST_HEAD(&region->list);
+	region->start = start;
+	region->end = end;
+
+	list_add_tail(&region->list, head);
+	return 0;
+}
+
+/*
+ * Check whether the new iommu aperture conflicts with the existing
+ * aperture or with any existing dma mappings.
+ */
+static bool vfio_iommu_aper_conflict(struct vfio_iommu *iommu,
+				     dma_addr_t start, dma_addr_t end)
+{
+	struct vfio_iova *first, *last;
+	struct list_head *iova = &iommu->iova_list;
+
+	if (list_empty(iova))
+		return false;
+
+	/* Disjoint sets, return conflict */
+	first = list_first_entry(iova, struct vfio_iova, list);
+	last = list_last_entry(iova, struct vfio_iova, list);
+	if (start > last->end || end < first->start)
+		return true;
+
+	/* Check for any existing dma mappings below the new start */
+	if (start > first->start) {
+		if (vfio_find_dma(iommu, first->start, start - first->start))
+			return true;
+	}
+
+	/* Check for any existing dma mappings beyond the new end */
+	if (end < last->end) {
+		if (vfio_find_dma(iommu, end + 1, last->end - end))
+			return true;
+	}
+
+	return false;
+}
+
+/*
+ * Resize iommu iova aperture window. This is called only if the new
+ * aperture has no conflict with the existing aperture and dma mappings.
+ */
+static int vfio_iommu_aper_resize(struct list_head *iova,
+				  dma_addr_t start, dma_addr_t end)
+{
+	struct vfio_iova *node, *next;
+
+	if (list_empty(iova))
+		return vfio_iommu_iova_insert(iova, start, end);
+
+	/* Adjust iova list start */
+	list_for_each_entry_safe(node, next, iova, list) {
+		if (start < node->start)
+			break;
+		if (start >= node->start && start < node->end) {
+			node->start = start;
+			break;
+		}
+		/* Delete nodes before new start */
+		list_del(&node->list);
+		kfree(node);
+	}
+
+	/* Adjust iova list end */
+	list_for_each_entry_safe(node, next, iova, list) {
+		if (end > node->end)
+			continue;
+		if (end > node->start && end <= node->end) {
+			node->end = end;
+			continue;
+		}
+		/* Delete nodes after new end */
+		list_del(&node->list);
+		kfree(node);
+	}
+
+	return 0;
+}
+
+static void vfio_iommu_iova_free(struct list_head *iova)
+{
+	struct vfio_iova *n, *next;
+
+	list_for_each_entry_safe(n, next, iova, list) {
+		list_del(&n->list);
+		kfree(n);
+	}
+}
+
+static int vfio_iommu_iova_get_copy(struct vfio_iommu *iommu,
+				    struct list_head *iova_copy)
+{
+	struct list_head *iova = &iommu->iova_list;
+	struct vfio_iova *n;
+	int ret;
+
+	list_for_each_entry(n, iova, list) {
+		ret = vfio_iommu_iova_insert(iova_copy, n->start, n->end);
+		if (ret)
+			goto out_free;
+	}
+
+	return 0;
+
+out_free:
+	vfio_iommu_iova_free(iova_copy);
+	return ret;
+}
+
+static void vfio_iommu_iova_insert_copy(struct vfio_iommu *iommu,
+					struct list_head *iova_copy)
+{
+	struct list_head *iova = &iommu->iova_list;
+
+	vfio_iommu_iova_free(iova);
+
+	list_splice_tail(iova_copy, iova);
+}
 static int vfio_iommu_type1_attach_group(void *iommu_data,
 					 struct iommu_group *iommu_group)
 {
@@ -1411,6 +1557,8 @@ static int vfio_iommu_type1_attach_group(void *iommu_data,
 	int ret;
 	bool resv_msi, msi_remap;
 	phys_addr_t resv_msi_base;
+	struct iommu_domain_geometry geo;
+	LIST_HEAD(iova_copy);
 
 	mutex_lock(&iommu->lock);
 
@@ -1487,6 +1635,25 @@ static int vfio_iommu_type1_attach_group(void *iommu_data,
 	if (ret)
 		goto out_domain;
 
+	/* Get aperture info */
+	iommu_domain_get_attr(domain->domain, DOMAIN_ATTR_GEOMETRY, &geo);
+
+	if (vfio_iommu_aper_conflict(iommu, geo.aperture_start,
+				     geo.aperture_end)) {
+		ret = -EINVAL;
+		goto out_detach;
+	}
+
+	/* Get a copy of the current iova list and work on it */
+	ret = vfio_iommu_iova_get_copy(iommu, &iova_copy);
+	if (ret)
+		goto out_detach;
+
+	ret = vfio_iommu_aper_resize(&iova_copy, geo.aperture_start,
+				     geo.aperture_end);
+	if (ret)
+		goto out_detach;
+
 	resv_msi = vfio_iommu_has_sw_msi(iommu_group, &resv_msi_base);
 
 	INIT_LIST_HEAD(&domain->group_list);
@@ -1520,8 +1687,7 @@ static int vfio_iommu_type1_attach_group(void *iommu_data,
 		list_add(&group->next, &d->group_list);
 		iommu_domain_free(domain->domain);
 		kfree(domain);
-		mutex_unlock(&iommu->lock);
-		return 0;
+		goto done;
 	}
 
 	ret = vfio_iommu_attach_group(domain, group);
@@ -1544,7 +1710,9 @@ static int vfio_iommu_type1_attach_group(void *iommu_data,
 	}
 
 	list_add(&domain->next, &iommu->domain_list);
-
+done:
+	/* Delete the old one and insert new iova list */
+	vfio_iommu_iova_insert_copy(iommu, &iova_copy);
 	mutex_unlock(&iommu->lock);
 
 	return 0;
 
@@ -1553,6 +1721,7 @@ static int vfio_iommu_type1_attach_group(void *iommu_data,
 	vfio_iommu_detach_group(domain, group);
 out_domain:
 	iommu_domain_free(domain->domain);
+	vfio_iommu_iova_free(&iova_copy);
 out_free:
 	kfree(domain);
 	kfree(group);
@@ -1692,6 +1861,7 @@ static void *vfio_iommu_type1_open(unsigned long arg)
 	}
 
 	INIT_LIST_HEAD(&iommu->domain_list);
+	INIT_LIST_HEAD(&iommu->iova_list);
 	iommu->dma_list = RB_ROOT;
 	iommu->dma_avail = dma_entry_limit;
 	mutex_init(&iommu->lock);
@@ -1735,6 +1905,9 @@ static void vfio_iommu_type1_release(void *iommu_data)
 		list_del(&domain->next);
 		kfree(domain);
 	}
+
+	vfio_iommu_iova_free(&iommu->iova_list);
+
 	kfree(iommu);
 }

From patchwork Wed Jun 26 15:12:44 2019
X-Patchwork-Submitter:
Shameerali Kolothum Thodi
X-Patchwork-Id: 167833
Subject: [PATCH v7 2/6] vfio/type1: Check reserved region conflict and update iova list
Date: Wed, 26 Jun 2019 16:12:44 +0100
Message-ID: <20190626151248.11776-3-shameerali.kolothum.thodi@huawei.com>

This retrieves the reserved regions associated with the device group
and checks for conflicts with any existing dma mappings.
Also update the iova list, excluding the reserved regions.

Reserved regions with type IOMMU_RESV_DIRECT_RELAXABLE are excluded
from the above checks, as they are considered directly mapped regions
which are known to be relaxable.

Signed-off-by: Shameer Kolothum
---
 drivers/vfio/vfio_iommu_type1.c | 96 +++++++++++++++++++++++++++++++++
 1 file changed, 96 insertions(+)

-- 
2.17.1

diff --git a/drivers/vfio/vfio_iommu_type1.c b/drivers/vfio/vfio_iommu_type1.c
index 970d1ec06aed..b6bfdfa16c33 100644
--- a/drivers/vfio/vfio_iommu_type1.c
+++ b/drivers/vfio/vfio_iommu_type1.c
@@ -1508,6 +1508,88 @@ static int vfio_iommu_aper_resize(struct list_head *iova,
 	return 0;
 }
 
+/*
+ * Check reserved region conflicts with existing dma mappings
+ */
+static bool vfio_iommu_resv_conflict(struct vfio_iommu *iommu,
+				     struct list_head *resv_regions)
+{
+	struct iommu_resv_region *region;
+
+	/* Check for conflict with existing dma mappings */
+	list_for_each_entry(region, resv_regions, list) {
+		if (region->type == IOMMU_RESV_DIRECT_RELAXABLE)
+			continue;
+
+		if (vfio_find_dma(iommu, region->start, region->length))
+			return true;
+	}
+
+	return false;
+}
+
+/*
+ * Check iova region overlap with reserved regions and
+ * exclude them from the iommu iova range
+ */
+static int vfio_iommu_resv_exclude(struct list_head *iova,
+				   struct list_head *resv_regions)
+{
+	struct iommu_resv_region *resv;
+	struct vfio_iova *n, *next;
+
+	list_for_each_entry(resv, resv_regions, list) {
+		phys_addr_t start, end;
+
+		if (resv->type == IOMMU_RESV_DIRECT_RELAXABLE)
+			continue;
+
+		start = resv->start;
+		end = resv->start + resv->length - 1;
+
+		list_for_each_entry_safe(n, next, iova, list) {
+			int ret = 0;
+
+			/* No overlap */
+			if (start > n->end || end < n->start)
+				continue;
+			/*
+			 * Insert a new node if the current node overlaps with
+			 * the reserved region, to exclude that range from the
+			 * valid iova range. Note that the new node is inserted
+			 * before the current node and finally the current node
+			 * is deleted, keeping the list updated and sorted.
+			 */
+			if (start > n->start)
+				ret = vfio_iommu_iova_insert(&n->list, n->start,
+							     start - 1);
+			if (!ret && end < n->end)
+				ret = vfio_iommu_iova_insert(&n->list, end + 1,
+							     n->end);
+			if (ret)
+				return ret;
+
+			list_del(&n->list);
+			kfree(n);
+		}
+	}
+
+	if (list_empty(iova))
+		return -EINVAL;
+
+	return 0;
+}
+
+static void vfio_iommu_resv_free(struct list_head *resv_regions)
+{
+	struct iommu_resv_region *n, *next;
+
+	list_for_each_entry_safe(n, next, resv_regions, list) {
+		list_del(&n->list);
+		kfree(n);
+	}
+}
+
 static void vfio_iommu_iova_free(struct list_head *iova)
 {
 	struct vfio_iova *n, *next;
@@ -1559,6 +1641,7 @@ static int vfio_iommu_type1_attach_group(void *iommu_data,
 	phys_addr_t resv_msi_base;
 	struct iommu_domain_geometry geo;
 	LIST_HEAD(iova_copy);
+	LIST_HEAD(group_resv_regions);
 
 	mutex_lock(&iommu->lock);
 
@@ -1644,6 +1727,13 @@ static int vfio_iommu_type1_attach_group(void *iommu_data,
 		goto out_detach;
 	}
 
+	iommu_get_group_resv_regions(iommu_group, &group_resv_regions);
+
+	if (vfio_iommu_resv_conflict(iommu, &group_resv_regions)) {
+		ret = -EINVAL;
+		goto out_detach;
+	}
+
 	/* Get a copy of the current iova list and work on it */
 	ret = vfio_iommu_iova_get_copy(iommu, &iova_copy);
 	if (ret)
@@ -1654,6 +1744,10 @@ static int vfio_iommu_type1_attach_group(void *iommu_data,
 	if (ret)
 		goto out_detach;
 
+	ret = vfio_iommu_resv_exclude(&iova_copy, &group_resv_regions);
+	if (ret)
+		goto out_detach;
+
 	resv_msi = vfio_iommu_has_sw_msi(iommu_group, &resv_msi_base);
 
 	INIT_LIST_HEAD(&domain->group_list);
@@ -1714,6 +1808,7 @@ static int vfio_iommu_type1_attach_group(void *iommu_data,
 	/* Delete the old one and insert new iova list */
 	vfio_iommu_iova_insert_copy(iommu, &iova_copy);
 	mutex_unlock(&iommu->lock);
+	vfio_iommu_resv_free(&group_resv_regions);
 
 	return 0;
 
@@ -1722,6 +1817,7 @@ static int
vfio_iommu_type1_attach_group(void *iommu_data,
out_domain:
 	iommu_domain_free(domain->domain);
 	vfio_iommu_iova_free(&iova_copy);
+	vfio_iommu_resv_free(&group_resv_regions);
 out_free:
 	kfree(domain);
 	kfree(group);

From patchwork Wed Jun 26 15:12:45 2019
X-Patchwork-Submitter: Shameerali Kolothum Thodi
X-Patchwork-Id: 167834
Subject: [PATCH v7 3/6] vfio/type1: Update iova list on detach
Date: Wed, 26 Jun 2019 16:12:45 +0100
Message-ID: <20190626151248.11776-4-shameerali.kolothum.thodi@huawei.com>

Get a copy of iova list on _group_detach and
try to update the list. On success, replace the current one with
the copy. Leave the list as it is if the update fails.

Signed-off-by: Shameer Kolothum
---
 drivers/vfio/vfio_iommu_type1.c | 91 +++++++++++++++++++++++++++++++++
 1 file changed, 91 insertions(+)

-- 
2.17.1

diff --git a/drivers/vfio/vfio_iommu_type1.c b/drivers/vfio/vfio_iommu_type1.c
index b6bfdfa16c33..e872fb3a0f39 100644
--- a/drivers/vfio/vfio_iommu_type1.c
+++ b/drivers/vfio/vfio_iommu_type1.c
@@ -1873,12 +1873,88 @@ static void vfio_sanity_check_pfn_list(struct vfio_iommu *iommu)
 	WARN_ON(iommu->notifier.head);
 }
 
+/*
+ * Called when a domain is removed in detach. It is possible that
+ * the removed domain decided the iova aperture window. Modify the
+ * iova aperture with the smallest window among the existing domains.
+ */
+static void vfio_iommu_aper_expand(struct vfio_iommu *iommu,
+				   struct list_head *iova_copy)
+{
+	struct vfio_domain *domain;
+	struct iommu_domain_geometry geo;
+	struct vfio_iova *node;
+	dma_addr_t start = 0;
+	dma_addr_t end = (dma_addr_t)~0;
+
+	list_for_each_entry(domain, &iommu->domain_list, next) {
+		iommu_domain_get_attr(domain->domain, DOMAIN_ATTR_GEOMETRY,
+				      &geo);
+		if (geo.aperture_start > start)
+			start = geo.aperture_start;
+		if (geo.aperture_end < end)
+			end = geo.aperture_end;
+	}
+
+	/* Modify aperture limits. The new aperture is either the same or bigger */
+	node = list_first_entry(iova_copy, struct vfio_iova, list);
+	node->start = start;
+	node = list_last_entry(iova_copy, struct vfio_iova, list);
+	node->end = end;
+}
+
+/*
+ * Called when a group is detached. The reserved regions for that
+ * group can be part of the valid iova now. But since reserved regions
+ * may be duplicated among groups, populate the iova valid regions
+ * list again.
+ */
+static int vfio_iommu_resv_refresh(struct vfio_iommu *iommu,
+				   struct list_head *iova_copy)
+{
+	struct vfio_domain *d;
+	struct vfio_group *g;
+	struct vfio_iova *node;
+	dma_addr_t start, end;
+	LIST_HEAD(resv_regions);
+	int ret;
+
+	list_for_each_entry(d, &iommu->domain_list, next) {
+		list_for_each_entry(g, &d->group_list, next)
+			iommu_get_group_resv_regions(g->iommu_group,
+						     &resv_regions);
+	}
+
+	if (list_empty(&resv_regions))
+		return 0;
+
+	node = list_first_entry(iova_copy, struct vfio_iova, list);
+	start = node->start;
+	node = list_last_entry(iova_copy, struct vfio_iova, list);
+	end = node->end;
+
+	/* purge the iova list and create a new one */
+	vfio_iommu_iova_free(iova_copy);
+
+	ret = vfio_iommu_aper_resize(iova_copy, start, end);
+	if (ret)
+		goto done;
+
+	/* Exclude current reserved regions from the iova ranges */
+	ret = vfio_iommu_resv_exclude(iova_copy, &resv_regions);
+done:
+	vfio_iommu_resv_free(&resv_regions);
+	return ret;
+}
+
 static void vfio_iommu_type1_detach_group(void *iommu_data,
 					  struct iommu_group *iommu_group)
 {
 	struct vfio_iommu *iommu = iommu_data;
 	struct vfio_domain *domain;
 	struct vfio_group *group;
+	bool iova_copy_fail;
+	LIST_HEAD(iova_copy);
 
 	mutex_lock(&iommu->lock);
 
@@ -1901,6 +1977,12 @@ static void vfio_iommu_type1_detach_group(void *iommu_data,
 		}
 	}
 
+	/*
+	 * Get a copy of the iova list. On success, use the copy to update
+	 * the list and to replace the current one.
+ */
+	iova_copy_fail = !!vfio_iommu_iova_get_copy(iommu, &iova_copy);
+
 	list_for_each_entry(domain, &iommu->domain_list, next) {
 		group = find_iommu_group(domain, iommu_group);
 		if (!group)
@@ -1926,10 +2008,19 @@ static void vfio_iommu_type1_detach_group(void *iommu_data,
 			iommu_domain_free(domain->domain);
 			list_del(&domain->next);
 			kfree(domain);
+			if (!iova_copy_fail && !list_empty(&iommu->domain_list))
+				vfio_iommu_aper_expand(iommu, &iova_copy);
 		}
 		break;
 	}
 
+	if (!iova_copy_fail && !list_empty(&iommu->domain_list)) {
+		if (!vfio_iommu_resv_refresh(iommu, &iova_copy))
+			vfio_iommu_iova_insert_copy(iommu, &iova_copy);
+		else
+			vfio_iommu_iova_free(&iova_copy);
+	}
+
 detach_group_done:
 	mutex_unlock(&iommu->lock);
 }

From patchwork Wed Jun 26 15:12:46 2019
X-Patchwork-Submitter: Shameerali Kolothum Thodi
X-Patchwork-Id: 167836
Subject: [PATCH v7 4/6] vfio/type1: check dma map request is within a
valid iova range
Date: Wed, 26 Jun 2019 16:12:46 +0100
Message-ID: <20190626151248.11776-5-shameerali.kolothum.thodi@huawei.com>

This checks and rejects any dma map request outside the valid iova range.

Signed-off-by: Shameer Kolothum
---
v6 --> v7
Addressed the case where a container with only an mdev device will
have an empty list (Suggested by Alex).
---
 drivers/vfio/vfio_iommu_type1.c | 26 ++++++++++++++++++++++++++
 1 file changed, 26 insertions(+)

-- 
2.17.1

Reviewed-by: Eric Auger

diff --git a/drivers/vfio/vfio_iommu_type1.c b/drivers/vfio/vfio_iommu_type1.c
index e872fb3a0f39..89ad0da7152c 100644
--- a/drivers/vfio/vfio_iommu_type1.c
+++ b/drivers/vfio/vfio_iommu_type1.c
@@ -1050,6 +1050,27 @@ static int vfio_pin_map_dma(struct vfio_iommu *iommu, struct vfio_dma *dma,
 	return ret;
 }
 
+/*
+ * Check dma map request is within a valid iova range
+ */
+static bool vfio_iommu_iova_dma_valid(struct vfio_iommu *iommu,
+				      dma_addr_t start, dma_addr_t end)
+{
+	struct list_head *iova = &iommu->iova_list;
+	struct vfio_iova *node;
+
+	list_for_each_entry(node, iova, list) {
+		if (start >= node->start && end <= node->end)
+			return true;
+	}
+
+	/*
+	 * Check for list_empty() as well, since a container with
+	 * only an mdev device will have an empty list.
+	 */
+	return list_empty(&iommu->iova_list);
+}
+
 static int vfio_dma_do_map(struct vfio_iommu *iommu,
 			   struct vfio_iommu_type1_dma_map *map)
 {
@@ -1093,6 +1114,11 @@ static int vfio_dma_do_map(struct vfio_iommu *iommu,
 		goto out_unlock;
 	}
 
+	if (!vfio_iommu_iova_dma_valid(iommu, iova, iova + size - 1)) {
+		ret = -EINVAL;
+		goto out_unlock;
+	}
+
 	dma = kzalloc(sizeof(*dma), GFP_KERNEL);
 	if (!dma) {
 		ret = -ENOMEM;

From patchwork Wed Jun 26 15:12:47 2019
From: Shameer Kolothum
Subject: [PATCH v7 5/6] vfio/type1: Add IOVA range capability support
Date: Wed, 26 Jun 2019 16:12:47 +0100
Message-ID: <20190626151248.11776-6-shameerali.kolothum.thodi@huawei.com>
In-Reply-To: <20190626151248.11776-1-shameerali.kolothum.thodi@huawei.com>

This allows user space to retrieve the supported IOVA range(s), excluding any reserved regions. The implementation is based on capability chains added to the VFIO_IOMMU_GET_INFO ioctl.

Signed-off-by: Shameer Kolothum
---
v6 --> v7
 Addressed mdev case with empty iovas list (suggested by Alex)
---
 drivers/vfio/vfio_iommu_type1.c | 101 ++++++++++++++++++++++++++++++++
 include/uapi/linux/vfio.h       |  23 ++++++++
 2 files changed, 124 insertions(+)

-- 
2.17.1

Reviewed-by: Eric Auger

diff --git a/drivers/vfio/vfio_iommu_type1.c b/drivers/vfio/vfio_iommu_type1.c
index 89ad0da7152c..450081802dcd 100644
--- a/drivers/vfio/vfio_iommu_type1.c
+++ b/drivers/vfio/vfio_iommu_type1.c
@@ -2141,6 +2141,73 @@ static int vfio_domains_have_iommu_cache(struct vfio_iommu *iommu)
 	return ret;
 }
 
+static int vfio_iommu_iova_add_cap(struct vfio_info_cap *caps,
+		struct vfio_iommu_type1_info_cap_iova_range *cap_iovas,
+		size_t size)
+{
+	struct vfio_info_cap_header *header;
+	struct vfio_iommu_type1_info_cap_iova_range *iova_cap;
+
+	header = vfio_info_cap_add(caps, size,
+				   VFIO_IOMMU_TYPE1_INFO_CAP_IOVA_RANGE, 1);
+	if (IS_ERR(header))
+		return PTR_ERR(header);
+
+	iova_cap = container_of(header,
+				struct vfio_iommu_type1_info_cap_iova_range,
+				header);
+	iova_cap->nr_iovas = cap_iovas->nr_iovas;
+	memcpy(iova_cap->iova_ranges, cap_iovas->iova_ranges,
+	       cap_iovas->nr_iovas * sizeof(*cap_iovas->iova_ranges));
+	return 0;
+}
+
+static int vfio_iommu_iova_build_caps(struct vfio_iommu *iommu,
+				      struct vfio_info_cap *caps)
+{
+	struct vfio_iommu_type1_info_cap_iova_range *cap_iovas;
+	struct vfio_iova *iova;
+	size_t size;
+	int iovas = 0, i = 0, ret;
+
+	mutex_lock(&iommu->lock);
+
+	list_for_each_entry(iova, &iommu->iova_list, list)
+		iovas++;
+
+	if (!iovas) {
+		/*
+		 * Return 0 as a container with only an mdev device
+		 * will have an empty list
+		 */
+		ret = 0;
+		goto out_unlock;
+	}
+
+	size = sizeof(*cap_iovas) + (iovas * sizeof(*cap_iovas->iova_ranges));
+
+	cap_iovas = kzalloc(size, GFP_KERNEL);
+	if (!cap_iovas) {
+		ret = -ENOMEM;
+		goto out_unlock;
+	}
+
+	cap_iovas->nr_iovas = iovas;
+
+	list_for_each_entry(iova, &iommu->iova_list, list) {
+		cap_iovas->iova_ranges[i].start = iova->start;
+		cap_iovas->iova_ranges[i].end = iova->end;
+		i++;
+	}
+
+	ret = vfio_iommu_iova_add_cap(caps, cap_iovas, size);
+
+	kfree(cap_iovas);
+out_unlock:
+	mutex_unlock(&iommu->lock);
+	return ret;
+}
+
 static long vfio_iommu_type1_ioctl(void *iommu_data,
 				   unsigned int cmd, unsigned long arg)
 {
@@ -2162,19 +2229,53 @@ static long vfio_iommu_type1_ioctl(void *iommu_data,
 		}
 	} else if (cmd == VFIO_IOMMU_GET_INFO) {
 		struct vfio_iommu_type1_info info;
+		struct vfio_info_cap caps = { .buf = NULL, .size = 0 };
+		unsigned long capsz;
+		int ret;
 
 		minsz = offsetofend(struct vfio_iommu_type1_info, iova_pgsizes);
 
+		/* For backward compatibility, cannot require this */
+		capsz = offsetofend(struct vfio_iommu_type1_info, cap_offset);
+
 		if (copy_from_user(&info, (void __user *)arg, minsz))
 			return -EFAULT;
 
 		if (info.argsz < minsz)
 			return -EINVAL;
 
+		if (info.argsz >= capsz) {
+			minsz = capsz;
+			info.cap_offset = 0; /* output, no-recopy necessary */
+		}
+
 		info.flags = VFIO_IOMMU_INFO_PGSIZES;
 
 		info.iova_pgsizes = vfio_pgsize_bitmap(iommu);
 
+		ret = vfio_iommu_iova_build_caps(iommu, &caps);
+		if (ret)
+			return ret;
+
+		if (caps.size) {
+			info.flags |= VFIO_IOMMU_INFO_CAPS;
+
+			if (info.argsz < sizeof(info) + caps.size) {
+				info.argsz = sizeof(info) + caps.size;
+			} else {
+				vfio_info_cap_shift(&caps, sizeof(info));
+				if (copy_to_user((void __user *)arg +
+						 sizeof(info), caps.buf,
+						 caps.size)) {
+					kfree(caps.buf);
+					return -EFAULT;
+				}
+				info.cap_offset = sizeof(info);
+			}
+
+			kfree(caps.buf);
+		}
+
 		return copy_to_user((void __user *)arg, &info, minsz) ?
			-EFAULT : 0;

diff --git a/include/uapi/linux/vfio.h b/include/uapi/linux/vfio.h
index 8f10748dac79..1951d87115e8 100644
--- a/include/uapi/linux/vfio.h
+++ b/include/uapi/linux/vfio.h
@@ -714,7 +714,30 @@ struct vfio_iommu_type1_info {
 	__u32	argsz;
 	__u32	flags;
 #define VFIO_IOMMU_INFO_PGSIZES (1 << 0)	/* supported page sizes info */
+#define VFIO_IOMMU_INFO_CAPS	(1 << 1)	/* Info supports caps */
 	__u64	iova_pgsizes;	/* Bitmap of supported page sizes */
+	__u32	cap_offset;	/* Offset within info struct of first cap */
+};
+
+/*
+ * The IOVA capability allows to report the valid IOVA range(s)
+ * excluding any reserved regions associated with dev group. Any dma
+ * map attempt outside the valid iova range will return error.
+ *
+ * The structures below define version 1 of this capability.
+ */
+#define VFIO_IOMMU_TYPE1_INFO_CAP_IOVA_RANGE  1
+
+struct vfio_iova_range {
+	__u64	start;
+	__u64	end;
+};
+
+struct vfio_iommu_type1_info_cap_iova_range {
+	struct	vfio_info_cap_header header;
+	__u32	nr_iovas;
+	__u32	reserved;
+	struct	vfio_iova_range iova_ranges[];
 };
 
 #define VFIO_IOMMU_GET_INFO _IO(VFIO_TYPE, VFIO_BASE + 12)

From patchwork Wed Jun 26 15:12:48 2019
X-Patchwork-Submitter: Shameerali Kolothum Thodi
X-Patchwork-Id: 167837
From: Shameer Kolothum
Subject: [PATCH v7 6/6] vfio/type1: remove duplicate retrieval of reserved regions
Date: Wed, 26 Jun 2019 16:12:48 +0100
Message-ID: <20190626151248.11776-7-shameerali.kolothum.thodi@huawei.com>
In-Reply-To: <20190626151248.11776-1-shameerali.kolothum.thodi@huawei.com>

Now that we already have the reserved regions list, just pass it into vfio_iommu_has_sw_msi() instead of retrieving the regions a second time.
Signed-off-by: Shameer Kolothum
---
 drivers/vfio/vfio_iommu_type1.c | 15 ++++++---------
 1 file changed, 6 insertions(+), 9 deletions(-)

-- 
2.17.1

Reviewed-by: Eric Auger

diff --git a/drivers/vfio/vfio_iommu_type1.c b/drivers/vfio/vfio_iommu_type1.c
index 450081802dcd..43b1e68ebce9 100644
--- a/drivers/vfio/vfio_iommu_type1.c
+++ b/drivers/vfio/vfio_iommu_type1.c
@@ -1308,15 +1308,13 @@ static struct vfio_group *find_iommu_group(struct vfio_domain *domain,
 	return NULL;
 }
 
-static bool vfio_iommu_has_sw_msi(struct iommu_group *group, phys_addr_t *base)
+static bool vfio_iommu_has_sw_msi(struct list_head *group_resv_regions,
+				  phys_addr_t *base)
 {
-	struct list_head group_resv_regions;
-	struct iommu_resv_region *region, *next;
+	struct iommu_resv_region *region;
 	bool ret = false;
 
-	INIT_LIST_HEAD(&group_resv_regions);
-	iommu_get_group_resv_regions(group, &group_resv_regions);
-	list_for_each_entry(region, &group_resv_regions, list) {
+	list_for_each_entry(region, group_resv_regions, list) {
 		/*
 		 * The presence of any 'real' MSI regions should take
 		 * precedence over the software-managed one if the
@@ -1332,8 +1330,7 @@ static bool vfio_iommu_has_sw_msi(struct iommu_group *group, phys_addr_t *base)
 			ret = true;
 		}
 	}
-	list_for_each_entry_safe(region, next, &group_resv_regions, list)
-		kfree(region);
+
 	return ret;
 }
 
@@ -1774,7 +1771,7 @@ static int vfio_iommu_type1_attach_group(void *iommu_data,
 	if (ret)
 		goto out_detach;
 
-	resv_msi = vfio_iommu_has_sw_msi(iommu_group, &resv_msi_base);
+	resv_msi = vfio_iommu_has_sw_msi(&group_resv_regions, &resv_msi_base);
 
 	INIT_LIST_HEAD(&domain->group_list);
 	list_add(&group->next, &domain->group_list);