From patchwork Thu Feb 15 09:44:59 2018
X-Patchwork-Submitter: Shameerali Kolothum Thodi
X-Patchwork-Id: 128401
From: Shameer Kolothum
Subject: [PATCH v3 1/6] vfio/type1: Introduce iova list and add iommu aperture validity check
Date: Thu, 15 Feb 2018 09:44:59 +0000
Message-ID: <20180215094504.4972-2-shameerali.kolothum.thodi@huawei.com>
In-Reply-To: <20180215094504.4972-1-shameerali.kolothum.thodi@huawei.com>

This introduces an iova list that is valid for dma mappings. During
attach, make sure the new iommu aperture window does not conflict with
the current one or with any existing dma mappings.

Signed-off-by: Shameer Kolothum
---
 drivers/vfio/vfio_iommu_type1.c | 183 +++++++++++++++++++++++++++++++++++++++-
 1 file changed, 181 insertions(+), 2 deletions(-)

-- 
2.7.4

diff --git a/drivers/vfio/vfio_iommu_type1.c b/drivers/vfio/vfio_iommu_type1.c
index e30e29a..4726f55 100644
--- a/drivers/vfio/vfio_iommu_type1.c
+++ b/drivers/vfio/vfio_iommu_type1.c
@@ -60,6 +60,7 @@ MODULE_PARM_DESC(disable_hugepages,
 
 struct vfio_iommu {
 	struct list_head	domain_list;
+	struct list_head	iova_list;
 	struct vfio_domain	*external_domain; /* domain for external user */
 	struct mutex		lock;
 	struct rb_root		dma_list;
@@ -92,6 +93,12 @@ struct vfio_group {
 	struct list_head	next;
};
 
+struct vfio_iova {
+	struct list_head	list;
+	dma_addr_t		start;
+	dma_addr_t		end;
+};
+
 /*
  * Guest RAM pinning working set or DMA target
  */
@@ -1192,6 +1199,142 @@ static bool vfio_iommu_has_sw_msi(struct iommu_group *group, phys_addr_t *base)
 	return ret;
 }
 
+/*
+ * This is a helper function to insert an address range to the iova list.
+ * The list starts with a single entry corresponding to the IOMMU
+ * domain geometry to which the device group is attached. The list
+ * aperture gets modified when a new domain is added to the container
+ * if the new aperture doesn't conflict with the current one or with
+ * any existing dma mappings. The list is also modified to exclude
+ * any reserved regions associated with the device group.
+ */
+static int vfio_insert_iova(phys_addr_t start, phys_addr_t end,
+			    struct list_head *head)
+{
+	struct vfio_iova *region;
+
+	region = kmalloc(sizeof(*region), GFP_KERNEL);
+	if (!region)
+		return -ENOMEM;
+
+	INIT_LIST_HEAD(&region->list);
+	region->start = start;
+	region->end = end;
+
+	list_add_tail(&region->list, head);
+	return 0;
+}
+
+/*
+ * Check whether the new iommu aperture conflicts with the existing
+ * aperture or with any existing dma mappings.
+ */
+static bool vfio_iommu_aper_conflict(struct vfio_iommu *iommu,
+				     phys_addr_t start,
+				     phys_addr_t end)
+{
+	struct vfio_iova *first, *last;
+	struct list_head *iova = &iommu->iova_list;
+
+	if (list_empty(iova))
+		return false;
+
+	/* Disjoint sets, return conflict */
+	first = list_first_entry(iova, struct vfio_iova, list);
+	last = list_last_entry(iova, struct vfio_iova, list);
+	if ((start > last->end) || (end < first->start))
+		return true;
+
+	/* Check for any existing dma mappings outside the new start */
+	if (start > first->start) {
+		if (vfio_find_dma(iommu, first->start, start - first->start))
+			return true;
+	}
+
+	/* Check for any existing dma mappings outside the new end */
+	if (end < last->end) {
+		if (vfio_find_dma(iommu, end + 1, last->end - end))
+			return true;
+	}
+
+	return false;
+}
+
+/*
+ * Resize iommu iova aperture window. This is called only if the new
+ * aperture has no conflict with the existing aperture and dma mappings.
+ */
+static int vfio_iommu_aper_resize(struct list_head *iova,
+				  dma_addr_t start,
+				  dma_addr_t end)
+{
+	struct vfio_iova *node, *next;
+
+	if (list_empty(iova))
+		return vfio_insert_iova(start, end, iova);
+
+	/* Adjust iova list start */
+	list_for_each_entry_safe(node, next, iova, list) {
+		if (start < node->start)
+			break;
+		if ((start >= node->start) && (start < node->end)) {
+			node->start = start;
+			break;
+		}
+		/* Delete nodes before new start */
+		list_del(&node->list);
+		kfree(node);
+	}
+
+	/* Adjust iova list end */
+	list_for_each_entry_safe(node, next, iova, list) {
+		if (end > node->end)
+			continue;
+
+		if ((end >= node->start) && (end < node->end)) {
+			node->end = end;
+			continue;
+		}
+		/* Delete nodes after new end */
+		list_del(&node->list);
+		kfree(node);
+	}
+
+	return 0;
+}
+
+static int vfio_iommu_get_iova_copy(struct vfio_iommu *iommu,
+				    struct list_head *iova_copy)
+{
+	struct list_head *iova = &iommu->iova_list;
+	struct vfio_iova *n;
+
+	list_for_each_entry(n, iova, list) {
+		int ret;
+
+		ret = vfio_insert_iova(n->start, n->end, iova_copy);
+		if (ret)
+			return ret;
+	}
+
+	return 0;
+}
+
+static void vfio_iommu_insert_iova_copy(struct vfio_iommu *iommu,
+					struct list_head *iova_copy)
+{
+	struct list_head *iova = &iommu->iova_list;
+	struct vfio_iova *n, *next;
+
+	list_for_each_entry_safe(n, next, iova, list) {
+		list_del(&n->list);
+		kfree(n);
+	}
+
+	list_splice_tail(iova_copy, iova);
+}
+
 static int vfio_iommu_type1_attach_group(void *iommu_data,
 					 struct iommu_group *iommu_group)
 {
@@ -1202,6 +1345,9 @@ static int vfio_iommu_type1_attach_group(void *iommu_data,
 	int ret;
 	bool resv_msi, msi_remap;
 	phys_addr_t resv_msi_base;
+	struct iommu_domain_geometry geo;
+	struct list_head iova_copy;
+	struct vfio_iova *iova, *iova_next;
 
 	mutex_lock(&iommu->lock);
 
@@ -1271,6 +1417,26 @@ static int vfio_iommu_type1_attach_group(void *iommu_data,
 	if (ret)
 		goto out_domain;
 
+	/* Get aperture info */
+	iommu_domain_get_attr(domain->domain, DOMAIN_ATTR_GEOMETRY, &geo);
+
+	if (vfio_iommu_aper_conflict(iommu, geo.aperture_start,
+				     geo.aperture_end)) {
+		ret = -EINVAL;
+		goto out_detach;
+	}
+
+	/* Get a copy of the current iova list and work on it */
+	INIT_LIST_HEAD(&iova_copy);
+	ret = vfio_iommu_get_iova_copy(iommu, &iova_copy);
+	if (ret)
+		goto out_detach;
+
+	ret = vfio_iommu_aper_resize(&iova_copy, geo.aperture_start,
+				     geo.aperture_end);
+	if (ret)
+		goto out_detach;
+
 	resv_msi = vfio_iommu_has_sw_msi(iommu_group, &resv_msi_base);
 
 	INIT_LIST_HEAD(&domain->group_list);
@@ -1304,8 +1470,7 @@ static int vfio_iommu_type1_attach_group(void *iommu_data,
 			list_add(&group->next, &d->group_list);
 			iommu_domain_free(domain->domain);
 			kfree(domain);
-			mutex_unlock(&iommu->lock);
-			return 0;
+			goto done;
 		}
 
 		ret = iommu_attach_group(domain->domain, iommu_group);
@@ -1328,6 +1493,9 @@ static int vfio_iommu_type1_attach_group(void *iommu_data,
 	}
 
 	list_add(&domain->next, &iommu->domain_list);
+done:
+	/* Delete the old one and insert new iova list */
+	vfio_iommu_insert_iova_copy(iommu, &iova_copy);
 
 	mutex_unlock(&iommu->lock);
 
@@ -1337,6 +1505,8 @@ static int vfio_iommu_type1_attach_group(void *iommu_data,
 	iommu_detach_group(domain->domain, iommu_group);
 out_domain:
 	iommu_domain_free(domain->domain);
+	list_for_each_entry_safe(iova, iova_next, &iova_copy, list)
+		kfree(iova);
 out_free:
 	kfree(domain);
 	kfree(group);
@@ -1475,6 +1645,7 @@ static void *vfio_iommu_type1_open(unsigned long arg)
 	}
 
 	INIT_LIST_HEAD(&iommu->domain_list);
+	INIT_LIST_HEAD(&iommu->iova_list);
 	iommu->dma_list = RB_ROOT;
 	mutex_init(&iommu->lock);
 	BLOCKING_INIT_NOTIFIER_HEAD(&iommu->notifier);
@@ -1502,6 +1673,7 @@ static void vfio_iommu_type1_release(void *iommu_data)
 {
 	struct vfio_iommu *iommu = iommu_data;
 	struct vfio_domain *domain, *domain_tmp;
+	struct vfio_iova *iova, *iova_next;
 
 	if (iommu->external_domain) {
 		vfio_release_domain(iommu->external_domain, true);
@@ -1517,6 +1689,13 @@ static void vfio_iommu_type1_release(void *iommu_data)
 		list_del(&domain->next);
 		kfree(domain);
 	}
+
+	list_for_each_entry_safe(iova, iova_next,
+				 &iommu->iova_list, list) {
+		list_del(&iova->list);
+		kfree(iova);
+	}
+
 	kfree(iommu);
 }

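The aperture handling above is easiest to see in isolation. The following is a minimal, self-contained user-space sketch of the same idea, not the kernel code: the range values, the array-based list and the helper name are made up for illustration. It shows a sorted set of valid IOVA ranges being trimmed to a new, narrower aperture, which is what vfio_iommu_aper_resize() does to the vfio_iova list when another domain is attached to the container.

#include <stdio.h>
#include <stdint.h>

struct range { uint64_t start, end; };

/* Drop ranges entirely outside [start, end] and trim the ones that
 * straddle it; returns the new number of valid ranges. */
static int clamp_ranges(struct range *r, int n, uint64_t start, uint64_t end)
{
	int i, out = 0;

	for (i = 0; i < n; i++) {
		if (r[i].end < start || r[i].start > end)
			continue;	/* falls completely outside the aperture */
		r[out].start = r[i].start < start ? start : r[i].start;
		r[out].end = r[i].end > end ? end : r[i].end;
		out++;
	}
	return out;
}

int main(void)
{
	/* e.g. a previous aperture already split around a reserved hole */
	struct range r[] = { { 0x0, 0x7fffffff }, { 0x90000000, 0xffffffff } };
	int i, n = clamp_ranges(r, 2, 0x1000, 0xbfffffff);

	for (i = 0; i < n; i++)
		printf("valid: 0x%llx - 0x%llx\n",
		       (unsigned long long)r[i].start,
		       (unsigned long long)r[i].end);
	return 0;
}
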
From patchwork Thu Feb 15 09:45:00 2018
X-Patchwork-Submitter: Shameerali Kolothum Thodi
X-Patchwork-Id: 128400
From: Shameer Kolothum
Subject: [PATCH v3 2/6] vfio/type1: Check reserve region conflict and update iova list
Date: Thu, 15 Feb 2018 09:45:00 +0000
Message-ID: <20180215094504.4972-3-shameerali.kolothum.thodi@huawei.com>
In-Reply-To: <20180215094504.4972-1-shameerali.kolothum.thodi@huawei.com>

This retrieves the reserved regions associated with the device group
and checks for conflicts with any existing dma mappings. It also
updates the iova list to exclude the reserved regions.
Signed-off-by: Shameer Kolothum
---
 drivers/vfio/vfio_iommu_type1.c | 86 ++++++++++++++++++++++++++++++++++++++++-
 1 file changed, 85 insertions(+), 1 deletion(-)

-- 
2.7.4

diff --git a/drivers/vfio/vfio_iommu_type1.c b/drivers/vfio/vfio_iommu_type1.c
index 4726f55..4db87a9 100644
--- a/drivers/vfio/vfio_iommu_type1.c
+++ b/drivers/vfio/vfio_iommu_type1.c
@@ -1303,6 +1303,72 @@ static int vfio_iommu_aper_resize(struct list_head *iova,
 	return 0;
 }
 
+/*
+ * Check reserved region conflicts with existing dma mappings
+ */
+static bool vfio_iommu_resv_conflict(struct vfio_iommu *iommu,
+				     struct list_head *resv_regions)
+{
+	struct iommu_resv_region *region;
+
+	/* Check for conflict with existing dma mappings */
+	list_for_each_entry(region, resv_regions, list) {
+		if (vfio_find_dma(iommu, region->start, region->length))
+			return true;
+	}
+
+	return false;
+}
+
+/*
+ * Check iova region overlap with reserved regions and
+ * exclude them from the iommu iova range
+ */
+static int vfio_iommu_resv_exclude(struct list_head *iova,
+				   struct list_head *resv_regions)
+{
+	struct iommu_resv_region *resv;
+	struct vfio_iova *n, *next;
+
+	list_for_each_entry(resv, resv_regions, list) {
+		phys_addr_t start, end;
+
+		start = resv->start;
+		end = resv->start + resv->length - 1;
+
+		list_for_each_entry_safe(n, next, iova, list) {
+			int ret = 0;
+
+			/* No overlap */
+			if ((start > n->end) || (end < n->start))
+				continue;
+			/*
+			 * Insert a new node if the current node overlaps with
+			 * the reserved region, to exclude that region from the
+			 * valid iova range. Note that the new node is inserted
+			 * before the current node and finally the current node
+			 * is deleted, keeping the list updated and sorted.
+			 */
+			if (start > n->start)
+				ret = vfio_insert_iova(n->start, start - 1,
+						       &n->list);
+			if (!ret && end < n->end)
+				ret = vfio_insert_iova(end + 1, n->end,
+						       &n->list);
+			if (ret)
+				return ret;
+
+			list_del(&n->list);
+			kfree(n);
+		}
+	}
+
+	if (list_empty(iova))
+		return -EINVAL;
+
+	return 0;
+}
+
 static int vfio_iommu_get_iova_copy(struct vfio_iommu *iommu,
 				    struct list_head *iova_copy)
 {
@@ -1346,7 +1412,8 @@ static int vfio_iommu_type1_attach_group(void *iommu_data,
 	bool resv_msi, msi_remap;
 	phys_addr_t resv_msi_base;
 	struct iommu_domain_geometry geo;
-	struct list_head iova_copy;
+	struct list_head iova_copy, group_resv_regions;
+	struct iommu_resv_region *resv, *resv_next;
 	struct vfio_iova *iova, *iova_next;
 
 	mutex_lock(&iommu->lock);
@@ -1426,6 +1493,14 @@ static int vfio_iommu_type1_attach_group(void *iommu_data,
 		goto out_detach;
 	}
 
+	INIT_LIST_HEAD(&group_resv_regions);
+	iommu_get_group_resv_regions(iommu_group, &group_resv_regions);
+
+	if (vfio_iommu_resv_conflict(iommu, &group_resv_regions)) {
+		ret = -EINVAL;
+		goto out_detach;
+	}
+
 	/* Get a copy of the current iova list and work on it */
 	INIT_LIST_HEAD(&iova_copy);
 	ret = vfio_iommu_get_iova_copy(iommu, &iova_copy);
@@ -1437,6 +1512,10 @@ static int vfio_iommu_type1_attach_group(void *iommu_data,
 	if (ret)
 		goto out_detach;
 
+	ret = vfio_iommu_resv_exclude(&iova_copy, &group_resv_regions);
+	if (ret)
+		goto out_detach;
+
 	resv_msi = vfio_iommu_has_sw_msi(iommu_group, &resv_msi_base);
 
 	INIT_LIST_HEAD(&domain->group_list);
@@ -1497,6 +1576,9 @@ static int vfio_iommu_type1_attach_group(void *iommu_data,
 	/* Delete the old one and insert new iova list */
 	vfio_iommu_insert_iova_copy(iommu, &iova_copy);
 
+	list_for_each_entry_safe(resv, resv_next, &group_resv_regions, list)
+		kfree(resv);
+
 	mutex_unlock(&iommu->lock);
 
 	return 0;
@@ -1507,6 +1589,8 @@ static int vfio_iommu_type1_attach_group(void *iommu_data,
 	iommu_domain_free(domain->domain);
 	list_for_each_entry_safe(iova, iova_next, &iova_copy, list)
 		kfree(iova);
+	list_for_each_entry_safe(resv, resv_next, &group_resv_regions, list)
+		kfree(resv);
 out_free:
 	kfree(domain);
 	kfree(group);

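The exclusion logic above punches each reserved region out of the valid iova list, splitting any range it overlaps into at most two smaller ranges. A stand-alone sketch of that single-range case follows; it is illustrative only (made-up values, no kernel list handling), but it mirrors what vfio_iommu_resv_exclude() does per node.

#include <stdio.h>
#include <stdint.h>

struct range { uint64_t start, end; };

/* Remove [rs, re] from the valid range v; writes the surviving
 * sub-ranges to out[] and returns how many there are (0, 1 or 2). */
static int exclude_resv(struct range v, uint64_t rs, uint64_t re,
			struct range out[2])
{
	int n = 0;

	if (re < v.start || rs > v.end) {	/* no overlap at all */
		out[n++] = v;
		return n;
	}
	if (rs > v.start)
		out[n++] = (struct range){ v.start, rs - 1 };
	if (re < v.end)
		out[n++] = (struct range){ re + 1, v.end };
	return n;
}

int main(void)
{
	struct range out[2];
	/* e.g. an MSI reserved window at 0x8000000 - 0x80fffff */
	int i, n = exclude_resv((struct range){ 0x0, 0xffffffff },
				0x8000000, 0x80fffff, out);

	for (i = 0; i < n; i++)
		printf("valid: 0x%llx - 0x%llx\n",
		       (unsigned long long)out[i].start,
		       (unsigned long long)out[i].end);
	return 0;
}
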
From patchwork Thu Feb 15 09:45:01 2018
X-Patchwork-Submitter: Shameerali Kolothum Thodi
X-Patchwork-Id: 128396
From: Shameer Kolothum
Subject: [PATCH v3 3/6] vfio/type1: Update iova list on detach
Date: Thu, 15 Feb 2018 09:45:01 +0000
Message-ID: <20180215094504.4972-4-shameerali.kolothum.thodi@huawei.com>
In-Reply-To: <20180215094504.4972-1-shameerali.kolothum.thodi@huawei.com>

Get a copy of the iova list on group detach and try to update it. On
success, replace the current list with the copy. Leave the list as it
is if the update fails.

Signed-off-by: Shameer Kolothum
---
 drivers/vfio/vfio_iommu_type1.c | 103 ++++++++++++++++++++++++++++++++++++++++
 1 file changed, 103 insertions(+)

-- 
2.7.4

diff --git a/drivers/vfio/vfio_iommu_type1.c b/drivers/vfio/vfio_iommu_type1.c
index 4db87a9..8d8ddd7 100644
--- a/drivers/vfio/vfio_iommu_type1.c
+++ b/drivers/vfio/vfio_iommu_type1.c
@@ -1646,12 +1646,96 @@ static void vfio_sanity_check_pfn_list(struct vfio_iommu *iommu)
 	WARN_ON(iommu->notifier.head);
 }
 
+/*
+ * Called when a domain is removed in detach. It is possible that
+ * the removed domain determined the iova aperture window. Modify the
+ * iova aperture to the smallest window among the remaining domains.
+ */
+static void vfio_iommu_aper_expand(struct vfio_iommu *iommu,
+				   struct list_head *iova_copy)
+{
+	struct vfio_domain *domain;
+	struct iommu_domain_geometry geo;
+	struct vfio_iova *node;
+	phys_addr_t start = 0;
+	phys_addr_t end = (phys_addr_t)~0;
+
+	list_for_each_entry(domain, &iommu->domain_list, next) {
+		iommu_domain_get_attr(domain->domain, DOMAIN_ATTR_GEOMETRY,
+				      &geo);
+		if (geo.aperture_start > start)
+			start = geo.aperture_start;
+		if (geo.aperture_end < end)
+			end = geo.aperture_end;
+	}
+
+	/* Modify aperture limits. The new aperture is either the same or bigger */
+	node = list_first_entry(iova_copy, struct vfio_iova, list);
+	node->start = start;
+	node = list_last_entry(iova_copy, struct vfio_iova, list);
+	node->end = end;
+}
+
+/*
+ * Called when a group is detached. The reserved regions for that
+ * group can be part of the valid iova now. But since reserved regions
+ * may be duplicated among groups, populate the iova valid regions
+ * list again.
+ */
+static int vfio_iommu_resv_refresh(struct vfio_iommu *iommu,
+				   struct list_head *iova_copy)
+{
+	struct vfio_domain *d;
+	struct vfio_group *g;
+	struct vfio_iova *node, *tmp;
+	struct iommu_resv_region *resv, *resv_next;
+	struct list_head resv_regions;
+	phys_addr_t start, end;
+	int ret;
+
+	INIT_LIST_HEAD(&resv_regions);
+
+	list_for_each_entry(d, &iommu->domain_list, next) {
+		list_for_each_entry(g, &d->group_list, next)
+			iommu_get_group_resv_regions(g->iommu_group,
+						     &resv_regions);
+	}
+
+	if (list_empty(&resv_regions))
+		return 0;
+
+	node = list_first_entry(iova_copy, struct vfio_iova, list);
+	start = node->start;
+	node = list_last_entry(iova_copy, struct vfio_iova, list);
+	end = node->end;
+
+	/* purge the iova list and create a new one */
+	list_for_each_entry_safe(node, tmp, iova_copy, list) {
+		list_del(&node->list);
+		kfree(node);
+	}
+
+	ret = vfio_iommu_aper_resize(iova_copy, start, end);
+	if (ret)
+		goto done;
+
+	/* Exclude current reserved regions from iova ranges */
+	ret = vfio_iommu_resv_exclude(iova_copy, &resv_regions);
done:
+	list_for_each_entry_safe(resv, resv_next, &resv_regions, list)
+		kfree(resv);
+	return ret;
+}
+
 static void vfio_iommu_type1_detach_group(void *iommu_data,
 					  struct iommu_group *iommu_group)
 {
 	struct vfio_iommu *iommu = iommu_data;
 	struct vfio_domain *domain;
 	struct vfio_group *group;
+	struct list_head iova_copy;
+	struct vfio_iova *iova, *iova_next;
+	bool iova_copy_fail;
 
 	mutex_lock(&iommu->lock);
 
@@ -1674,6 +1758,13 @@ static void vfio_iommu_type1_detach_group(void *iommu_data,
 		}
 	}
 
+	/*
+	 * Get a copy of the iova list. If successful, use the copy to update
+	 * the list and to replace the current one.
+	 */
+	INIT_LIST_HEAD(&iova_copy);
+	iova_copy_fail = !!vfio_iommu_get_iova_copy(iommu, &iova_copy);
+
 	list_for_each_entry(domain, &iommu->domain_list, next) {
 		group = find_iommu_group(domain, iommu_group);
 		if (!group)
@@ -1699,10 +1790,22 @@ static void vfio_iommu_type1_detach_group(void *iommu_data,
 			iommu_domain_free(domain->domain);
 			list_del(&domain->next);
 			kfree(domain);
+			if (!iova_copy_fail)
+				vfio_iommu_aper_expand(iommu, &iova_copy);
 		}
 		break;
 	}
 
+	if (!iova_copy_fail) {
+		if (!vfio_iommu_resv_refresh(iommu, &iova_copy)) {
+			/* Delete the current one and insert new iova list */
+			vfio_iommu_insert_iova_copy(iommu, &iova_copy);
+			goto detach_group_done;
+		}
+	}
+
+	list_for_each_entry_safe(iova, iova_next, &iova_copy, list)
+		kfree(iova);
 detach_group_done:
 	mutex_unlock(&iommu->lock);
 }

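For reference, vfio_iommu_aper_expand() above recomputes the aperture as the intersection of the geometries of the domains that remain after the detach: the largest aperture_start and the smallest aperture_end win. A small stand-alone sketch of that computation is shown below; the structure and values are illustrative, not the kernel types.

#include <stdio.h>
#include <stdint.h>

struct geometry { uint64_t aperture_start, aperture_end; };

/* Intersection of the remaining domains' geometries. */
static struct geometry effective_aperture(const struct geometry *g, int n)
{
	struct geometry eff = { 0, UINT64_MAX };
	int i;

	for (i = 0; i < n; i++) {
		if (g[i].aperture_start > eff.aperture_start)
			eff.aperture_start = g[i].aperture_start;
		if (g[i].aperture_end < eff.aperture_end)
			eff.aperture_end = g[i].aperture_end;
	}
	return eff;
}

int main(void)
{
	struct geometry doms[] = {
		{ 0x0,    0xffffffffffff },
		{ 0x1000, 0xffffffff },
	};
	struct geometry eff = effective_aperture(doms, 2);

	printf("aperture: 0x%llx - 0x%llx\n",
	       (unsigned long long)eff.aperture_start,
	       (unsigned long long)eff.aperture_end);
	return 0;
}
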
From patchwork Thu Feb 15 09:45:02 2018
X-Patchwork-Submitter: Shameerali Kolothum Thodi
X-Patchwork-Id: 128398
From: Shameer Kolothum
Subject: [PATCH v3 4/6] vfio/type1: check dma map request is within a valid iova range
Date: Thu, 15 Feb 2018 09:45:02 +0000
Message-ID: <20180215094504.4972-5-shameerali.kolothum.thodi@huawei.com>
In-Reply-To: <20180215094504.4972-1-shameerali.kolothum.thodi@huawei.com>

This checks and rejects any dma map request that falls outside the
valid iova ranges.
Signed-off-by: Shameer Kolothum
---
 drivers/vfio/vfio_iommu_type1.c | 22 ++++++++++++++++++++++
 1 file changed, 22 insertions(+)

-- 
2.7.4

diff --git a/drivers/vfio/vfio_iommu_type1.c b/drivers/vfio/vfio_iommu_type1.c
index 8d8ddd7..dae01c5 100644
--- a/drivers/vfio/vfio_iommu_type1.c
+++ b/drivers/vfio/vfio_iommu_type1.c
@@ -970,6 +970,23 @@ static int vfio_pin_map_dma(struct vfio_iommu *iommu, struct vfio_dma *dma,
 	return ret;
 }
 
+/*
+ * Check dma map request is within a valid iova range
+ */
+static bool vfio_iommu_iova_dma_valid(struct vfio_iommu *iommu,
+				      dma_addr_t start, dma_addr_t end)
+{
+	struct list_head *iova = &iommu->iova_list;
+	struct vfio_iova *node;
+
+	list_for_each_entry(node, iova, list) {
+		if ((start >= node->start) && (end <= node->end))
+			return true;
+	}
+
+	return false;
+}
+
 static int vfio_dma_do_map(struct vfio_iommu *iommu,
 			   struct vfio_iommu_type1_dma_map *map)
 {
@@ -1008,6 +1025,11 @@ static int vfio_dma_do_map(struct vfio_iommu *iommu,
 		goto out_unlock;
 	}
 
+	if (!vfio_iommu_iova_dma_valid(iommu, iova, iova + size - 1)) {
+		ret = -EINVAL;
+		goto out_unlock;
+	}
+
 	dma = kzalloc(sizeof(*dma), GFP_KERNEL);
 	if (!dma) {
 		ret = -ENOMEM;

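From user space, the practical effect of this patch is that VFIO_IOMMU_MAP_DMA now fails with -EINVAL for an IOVA that sits outside the valid ranges (for example inside a reserved region), instead of accepting a mapping the hardware cannot use. The following is a rough usage sketch, assuming 'container' is an already opened and configured VFIO container fd and the chosen IOVA is only an illustrative value.

#include <errno.h>
#include <stdio.h>
#include <string.h>
#include <sys/ioctl.h>
#include <sys/mman.h>
#include <linux/vfio.h>

/* Sketch only: error handling is trimmed and the iova is arbitrary. */
static int try_map(int container, unsigned long long iova, size_t len)
{
	struct vfio_iommu_type1_dma_map map = {
		.argsz = sizeof(map),
		.flags = VFIO_DMA_MAP_FLAG_READ | VFIO_DMA_MAP_FLAG_WRITE,
		.iova = iova,
		.size = len,
	};
	void *buf = mmap(NULL, len, PROT_READ | PROT_WRITE,
			 MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
	int err;

	if (buf == MAP_FAILED)
		return -errno;
	map.vaddr = (unsigned long long)(unsigned long)buf;

	if (ioctl(container, VFIO_IOMMU_MAP_DMA, &map)) {
		err = errno;	/* EINVAL if iova is outside the valid ranges */
		fprintf(stderr, "DMA map at iova 0x%llx failed: %s\n",
			iova, strerror(err));
		munmap(buf, len);
		return -err;
	}

	/* Mapping succeeded; a real caller keeps buf alive while the iova
	 * is in use and tears it down with VFIO_IOMMU_UNMAP_DMA. */
	return 0;
}
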
From patchwork Thu Feb 15 09:45:03 2018
X-Patchwork-Submitter: Shameerali Kolothum Thodi
X-Patchwork-Id: 128399
From: Shameer Kolothum
Subject: [PATCH v3 5/6] vfio/type1: Add IOVA range capability support
Date: Thu, 15 Feb 2018 09:45:03 +0000
Message-ID: <20180215094504.4972-6-shameerali.kolothum.thodi@huawei.com>
In-Reply-To: <20180215094504.4972-1-shameerali.kolothum.thodi@huawei.com>

This allows user space to retrieve the supported IOVA range(s),
excluding any reserved regions. The implementation is based on
capability chains added to the VFIO_IOMMU_GET_INFO ioctl.
Signed-off-by: Shameer Kolothum
---
 drivers/vfio/vfio_iommu_type1.c | 92 +++++++++++++++++++++++++++++++++++++++++
 include/uapi/linux/vfio.h       | 23 +++++++++++
 2 files changed, 115 insertions(+)

-- 
2.7.4

diff --git a/drivers/vfio/vfio_iommu_type1.c b/drivers/vfio/vfio_iommu_type1.c
index dae01c5..21e575c 100644
--- a/drivers/vfio/vfio_iommu_type1.c
+++ b/drivers/vfio/vfio_iommu_type1.c
@@ -1925,6 +1925,68 @@ static int vfio_domains_have_iommu_cache(struct vfio_iommu *iommu)
 	return ret;
 }
 
+static int vfio_add_iova_cap(struct vfio_info_cap *caps,
+		struct vfio_iommu_type1_info_cap_iova_range *cap_iovas,
+		size_t size)
+{
+	struct vfio_info_cap_header *header;
+	struct vfio_iommu_type1_info_cap_iova_range *iova_cap;
+
+	header = vfio_info_cap_add(caps, size,
+				   VFIO_IOMMU_TYPE1_INFO_CAP_IOVA_RANGE, 1);
+	if (IS_ERR(header))
+		return PTR_ERR(header);
+
+	iova_cap = container_of(header,
+			struct vfio_iommu_type1_info_cap_iova_range, header);
+	iova_cap->nr_iovas = cap_iovas->nr_iovas;
+	memcpy(iova_cap->iova_ranges, cap_iovas->iova_ranges,
+	       cap_iovas->nr_iovas * sizeof(*cap_iovas->iova_ranges));
+	return 0;
+}
+
+static int vfio_build_iommu_iova_caps(struct vfio_iommu *iommu,
+				      struct vfio_info_cap *caps)
+{
+	struct vfio_iommu_type1_info_cap_iova_range *cap_iovas;
+	struct vfio_iova *iova;
+	size_t size;
+	int iovas = 0, i = 0, ret;
+
+	mutex_lock(&iommu->lock);
+
+	list_for_each_entry(iova, &iommu->iova_list, list)
+		iovas++;
+
+	if (!iovas) {
+		ret = -EINVAL;
+		goto out_unlock;
+	}
+
+	size = sizeof(*cap_iovas) + (iovas * sizeof(*cap_iovas->iova_ranges));
+
+	cap_iovas = kzalloc(size, GFP_KERNEL);
+	if (!cap_iovas) {
+		ret = -ENOMEM;
+		goto out_unlock;
+	}
+
+	cap_iovas->nr_iovas = iovas;
+
+	list_for_each_entry(iova, &iommu->iova_list, list) {
+		cap_iovas->iova_ranges[i].start = iova->start;
+		cap_iovas->iova_ranges[i].end = iova->end;
+		i++;
+	}
+
+	ret = vfio_add_iova_cap(caps, cap_iovas, size);
+
+	kfree(cap_iovas);
+out_unlock:
+	mutex_unlock(&iommu->lock);
+	return ret;
+}
+
 static long vfio_iommu_type1_ioctl(void *iommu_data,
 				   unsigned int cmd, unsigned long arg)
 {
@@ -1946,6 +2008,8 @@ static long vfio_iommu_type1_ioctl(void *iommu_data,
 		}
 	} else if (cmd == VFIO_IOMMU_GET_INFO) {
 		struct vfio_iommu_type1_info info;
+		struct vfio_info_cap caps = { .buf = NULL, .size = 0 };
+		int ret;
 
 		minsz = offsetofend(struct vfio_iommu_type1_info,
 				    iova_pgsizes);
@@ -1959,6 +2023,34 @@ static long vfio_iommu_type1_ioctl(void *iommu_data,
 
 		info.iova_pgsizes = vfio_pgsize_bitmap(iommu);
 
+		if (info.argsz == minsz)
+			goto done;
+
+		ret = vfio_build_iommu_iova_caps(iommu, &caps);
+		if (ret)
+			return ret;
+
+		if (caps.size) {
+			info.flags |= VFIO_IOMMU_INFO_CAPS;
+			minsz = offsetofend(struct vfio_iommu_type1_info,
+					    cap_offset);
+			if (info.argsz < sizeof(info) + caps.size) {
+				info.argsz = sizeof(info) + caps.size;
+				info.cap_offset = 0;
+			} else {
+				vfio_info_cap_shift(&caps, sizeof(info));
+				if (copy_to_user((void __user *)arg +
+						 sizeof(info), caps.buf,
+						 caps.size)) {
+					kfree(caps.buf);
+					return -EFAULT;
+				}
+				info.cap_offset = sizeof(info);
+			}
+
+			kfree(caps.buf);
+		}
+done:
 		return copy_to_user((void __user *)arg, &info, minsz) ?
			-EFAULT : 0;
 
diff --git a/include/uapi/linux/vfio.h b/include/uapi/linux/vfio.h
index c743721..46b49e9 100644
--- a/include/uapi/linux/vfio.h
+++ b/include/uapi/linux/vfio.h
@@ -589,7 +589,30 @@ struct vfio_iommu_type1_info {
 	__u32	argsz;
 	__u32	flags;
 #define VFIO_IOMMU_INFO_PGSIZES (1 << 0)	/* supported page sizes info */
+#define VFIO_IOMMU_INFO_CAPS	(1 << 1)	/* Info supports caps */
 	__u64	iova_pgsizes;		/* Bitmap of supported page sizes */
+	__u32	cap_offset;	/* Offset within info struct of first cap */
+};
+
+/*
+ * The IOVA capability allows reporting the valid IOVA range(s),
+ * excluding any reserved regions associated with the device group.
+ * Any dma map attempt outside the valid iova range will return an error.
+ *
+ * The structures below define version 1 of this capability.
+ */
+#define VFIO_IOMMU_TYPE1_INFO_CAP_IOVA_RANGE 1
+
+struct vfio_iova_range {
+	__u64	start;
+	__u64	end;
+};
+
+struct vfio_iommu_type1_info_cap_iova_range {
+	struct vfio_info_cap_header header;
+	__u32	nr_iovas;
+	__u32	reserved;
+	struct vfio_iova_range iova_ranges[];
 };
 
 #define VFIO_IOMMU_GET_INFO _IO(VFIO_TYPE, VFIO_BASE + 12)

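On the consumer side, the new capability is read with the usual two-call VFIO pattern: call VFIO_IOMMU_GET_INFO once to learn the required argsz, reallocate, call again, then walk the capability chain via cap_offset and each header's next offset. A rough sketch is below; it assumes 'container' is an already configured container fd, error handling is trimmed, and the capability id and structures it uses only exist once this series' uapi change is applied.

#include <stdio.h>
#include <stdlib.h>
#include <sys/ioctl.h>
#include <linux/vfio.h>

static void dump_valid_iovas(int container)
{
	struct vfio_iommu_type1_info hdr = { .argsz = sizeof(hdr) };
	struct vfio_iommu_type1_info *info;
	__u32 off;

	/* First call reports how large info + capability chain needs to be. */
	if (ioctl(container, VFIO_IOMMU_GET_INFO, &hdr))
		return;

	info = calloc(1, hdr.argsz);
	if (!info)
		return;
	info->argsz = hdr.argsz;

	if (ioctl(container, VFIO_IOMMU_GET_INFO, info) ||
	    !(info->flags & VFIO_IOMMU_INFO_CAPS))
		goto out;

	/* Walk the capability chain; 'next' is an offset from &info. */
	for (off = info->cap_offset; off; ) {
		struct vfio_info_cap_header *h =
			(struct vfio_info_cap_header *)((char *)info + off);

		if (h->id == VFIO_IOMMU_TYPE1_INFO_CAP_IOVA_RANGE) {
			struct vfio_iommu_type1_info_cap_iova_range *cap =
				(void *)h;
			__u32 i;

			for (i = 0; i < cap->nr_iovas; i++)
				printf("valid iova: 0x%llx - 0x%llx\n",
				       (unsigned long long)cap->iova_ranges[i].start,
				       (unsigned long long)cap->iova_ranges[i].end);
			break;
		}
		off = h->next;
	}
out:
	free(info);
}
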
From patchwork Thu Feb 15 09:45:04 2018
X-Patchwork-Submitter: Shameerali Kolothum Thodi
X-Patchwork-Id: 128397
From: Shameer Kolothum
Subject: [PATCH v3 6/6] vfio/type1: remove duplicate retrieval of reserved regions
Date: Thu, 15 Feb 2018 09:45:04 +0000
Message-ID: <20180215094504.4972-7-shameerali.kolothum.thodi@huawei.com>
In-Reply-To: <20180215094504.4972-1-shameerali.kolothum.thodi@huawei.com>

As the attach path now already has the reserved regions list, just
pass that into vfio_iommu_has_sw_msi() instead of retrieving it again.
Signed-off-by: Shameer Kolothum
---
 drivers/vfio/vfio_iommu_type1.c | 15 ++++++---------
 1 file changed, 6 insertions(+), 9 deletions(-)

-- 
2.7.4

diff --git a/drivers/vfio/vfio_iommu_type1.c b/drivers/vfio/vfio_iommu_type1.c
index 21e575c..ff9818b 100644
--- a/drivers/vfio/vfio_iommu_type1.c
+++ b/drivers/vfio/vfio_iommu_type1.c
@@ -1192,15 +1192,13 @@ static struct vfio_group *find_iommu_group(struct vfio_domain *domain,
 	return NULL;
 }
 
-static bool vfio_iommu_has_sw_msi(struct iommu_group *group, phys_addr_t *base)
+static bool vfio_iommu_has_sw_msi(struct list_head *group_resv_regions,
+				  phys_addr_t *base)
 {
-	struct list_head group_resv_regions;
-	struct iommu_resv_region *region, *next;
+	struct iommu_resv_region *region;
 	bool ret = false;
 
-	INIT_LIST_HEAD(&group_resv_regions);
-	iommu_get_group_resv_regions(group, &group_resv_regions);
-	list_for_each_entry(region, &group_resv_regions, list) {
+	list_for_each_entry(region, group_resv_regions, list) {
 		/*
 		 * The presence of any 'real' MSI regions should take
 		 * precedence over the software-managed one if the
@@ -1216,8 +1214,7 @@ static bool vfio_iommu_has_sw_msi(struct iommu_group *group, phys_addr_t *base)
 			ret = true;
 		}
 	}
-	list_for_each_entry_safe(region, next, &group_resv_regions, list)
-		kfree(region);
+
 	return ret;
 }
 
@@ -1538,7 +1535,7 @@ static int vfio_iommu_type1_attach_group(void *iommu_data,
 	if (ret)
 		goto out_detach;
 
-	resv_msi = vfio_iommu_has_sw_msi(iommu_group, &resv_msi_base);
+	resv_msi = vfio_iommu_has_sw_msi(&group_resv_regions, &resv_msi_base);
 
 	INIT_LIST_HEAD(&domain->group_list);
 	list_add(&group->next, &domain->group_list);