From patchwork Thu Feb 15 09:44:59 2018
X-Patchwork-Submitter: Shameerali Kolothum Thodi
X-Patchwork-Id: 128401
From: Shameer Kolothum <shameerali.kolothum.thodi@huawei.com>
Subject: [PATCH v3 1/6] vfio/type1: Introduce iova list and add iommu aperture validity check
Date: Thu, 15 Feb 2018 09:44:59 +0000
Message-ID: <20180215094504.4972-2-shameerali.kolothum.thodi@huawei.com>
In-Reply-To: <20180215094504.4972-1-shameerali.kolothum.thodi@huawei.com>
References: <20180215094504.4972-1-shameerali.kolothum.thodi@huawei.com>

This introduces an iova list that is valid for dma mappings. Make sure
the new iommu aperture window doesn't conflict with the current one or
with any existing dma mappings during attach.

Signed-off-by: Shameer Kolothum <shameerali.kolothum.thodi@huawei.com>
---
 drivers/vfio/vfio_iommu_type1.c | 183 +++++++++++++++++++++++++++++++++++++++-
 1 file changed, 181 insertions(+), 2 deletions(-)

-- 
2.7.4

diff --git a/drivers/vfio/vfio_iommu_type1.c b/drivers/vfio/vfio_iommu_type1.c
index e30e29a..4726f55 100644
--- a/drivers/vfio/vfio_iommu_type1.c
+++ b/drivers/vfio/vfio_iommu_type1.c
@@ -60,6 +60,7 @@ MODULE_PARM_DESC(disable_hugepages,
 
 struct vfio_iommu {
 	struct list_head	domain_list;
+	struct list_head	iova_list;
 	struct vfio_domain	*external_domain; /* domain for external user */
 	struct mutex		lock;
 	struct rb_root		dma_list;
@@ -92,6 +93,12 @@ struct vfio_group {
 	struct list_head	next;
 };
 
+struct vfio_iova {
+	struct list_head	list;
+	dma_addr_t		start;
+	dma_addr_t		end;
+};
+
 /*
  * Guest RAM pinning working set or DMA target
  */
@@ -1192,6 +1199,142 @@ static bool vfio_iommu_has_sw_msi(struct iommu_group *group, phys_addr_t *base)
 	return ret;
 }
 
+/*
+ * This is a helper function to insert an address range into the iova
+ * list. The list starts with a single entry corresponding to the IOMMU
+ * domain geometry to which the device group is attached. The list
+ * aperture gets modified when a new domain is added to the container
+ * if the new aperture doesn't conflict with the current one or with
+ * any existing dma mappings. The list is also modified to exclude
+ * any reserved regions associated with the device group.
+ */
+static int vfio_insert_iova(phys_addr_t start, phys_addr_t end,
+			    struct list_head *head)
+{
+	struct vfio_iova *region;
+
+	region = kmalloc(sizeof(*region), GFP_KERNEL);
+	if (!region)
+		return -ENOMEM;
+
+	INIT_LIST_HEAD(&region->list);
+	region->start = start;
+	region->end = end;
+
+	list_add_tail(&region->list, head);
+	return 0;
+}
+
+/*
+ * Check whether the new iommu aperture conflicts with the existing
+ * aperture or with any existing dma mappings.
+ */
+static bool vfio_iommu_aper_conflict(struct vfio_iommu *iommu,
+				     phys_addr_t start,
+				     phys_addr_t end)
+{
+	struct vfio_iova *first, *last;
+	struct list_head *iova = &iommu->iova_list;
+
+	if (list_empty(iova))
+		return false;
+
+	/* Disjoint sets, return conflict */
+	first = list_first_entry(iova, struct vfio_iova, list);
+	last = list_last_entry(iova, struct vfio_iova, list);
+	if ((start > last->end) || (end < first->start))
+		return true;
+
+	/* Check for any existing dma mappings outside the new start */
+	if (start > first->start) {
+		if (vfio_find_dma(iommu, first->start, start - first->start))
+			return true;
+	}
+
+	/* Check for any existing dma mappings outside the new end */
+	if (end < last->end) {
+		if (vfio_find_dma(iommu, end + 1, last->end - end))
+			return true;
+	}
+
+	return false;
+}
+
+/*
+ * Resize iommu iova aperture window. This is called only if the new
+ * aperture has no conflict with the existing aperture and dma mappings.
+ */
+static int vfio_iommu_aper_resize(struct list_head *iova,
+				  dma_addr_t start,
+				  dma_addr_t end)
+{
+	struct vfio_iova *node, *next;
+
+	if (list_empty(iova))
+		return vfio_insert_iova(start, end, iova);
+
+	/* Adjust iova list start */
+	list_for_each_entry_safe(node, next, iova, list) {
+		if (start < node->start)
+			break;
+		if ((start >= node->start) && (start < node->end)) {
+			node->start = start;
+			break;
+		}
+		/* Delete nodes before new start */
+		list_del(&node->list);
+		kfree(node);
+	}
+
+	/* Adjust iova list end */
+	list_for_each_entry_safe(node, next, iova, list) {
+		if (end > node->end)
+			continue;
+
+		if ((end >= node->start) && (end < node->end)) {
+			node->end = end;
+			continue;
+		}
+		/* Delete nodes after new end */
+		list_del(&node->list);
+		kfree(node);
+	}
+
+	return 0;
+}
+
+static int vfio_iommu_get_iova_copy(struct vfio_iommu *iommu,
+				    struct list_head *iova_copy)
+{
+
+	struct list_head *iova = &iommu->iova_list;
+	struct vfio_iova *n;
+
+	list_for_each_entry(n, iova, list) {
+		int ret;
+
+		ret = vfio_insert_iova(n->start, n->end, iova_copy);
+		if (ret)
+			return ret;
+	}
+
+	return 0;
+}
+
+static void vfio_iommu_insert_iova_copy(struct vfio_iommu *iommu,
+					struct list_head *iova_copy)
+{
+	struct list_head *iova = &iommu->iova_list;
+	struct vfio_iova *n, *next;
+
+	list_for_each_entry_safe(n, next, iova, list) {
+		list_del(&n->list);
+		kfree(n);
+	}
+
+	list_splice_tail(iova_copy, iova);
+}
+
 static int vfio_iommu_type1_attach_group(void *iommu_data,
 					 struct iommu_group *iommu_group)
 {
@@ -1202,6 +1345,9 @@ static int vfio_iommu_type1_attach_group(void *iommu_data,
 	int ret;
 	bool resv_msi, msi_remap;
 	phys_addr_t resv_msi_base;
+	struct iommu_domain_geometry geo;
+	LIST_HEAD(iova_copy);
+	struct vfio_iova *iova, *iova_next;
 
 	mutex_lock(&iommu->lock);
 
@@ -1271,6 +1417,26 @@ static int vfio_iommu_type1_attach_group(void *iommu_data,
 	if (ret)
 		goto out_domain;
 
+	/* Get aperture info */
+	iommu_domain_get_attr(domain->domain, DOMAIN_ATTR_GEOMETRY, &geo);
+
+	if (vfio_iommu_aper_conflict(iommu, geo.aperture_start,
+				     geo.aperture_end)) {
+		ret = -EINVAL;
+		goto out_detach;
+	}
+
+	/* Get a copy of the current iova list and work on it */
+	INIT_LIST_HEAD(&iova_copy);
+	ret = vfio_iommu_get_iova_copy(iommu, &iova_copy);
+	if (ret)
+		goto out_detach;
+
+	ret = vfio_iommu_aper_resize(&iova_copy, geo.aperture_start,
+				     geo.aperture_end);
+	if (ret)
+		goto out_detach;
+
 	resv_msi = vfio_iommu_has_sw_msi(iommu_group, &resv_msi_base);
 
 	INIT_LIST_HEAD(&domain->group_list);
@@ -1304,8 +1470,7 @@ static int vfio_iommu_type1_attach_group(void *iommu_data,
 			list_add(&group->next, &d->group_list);
 			iommu_domain_free(domain->domain);
 			kfree(domain);
-			mutex_unlock(&iommu->lock);
-			return 0;
+			goto done;
 		}
 
 		ret = iommu_attach_group(domain->domain, iommu_group);
@@ -1328,6 +1493,9 @@ static int vfio_iommu_type1_attach_group(void *iommu_data,
 	}
 
 	list_add(&domain->next, &iommu->domain_list);
+done:
+	/* Delete the old iova list and insert the new one */
+	vfio_iommu_insert_iova_copy(iommu, &iova_copy);
 
 	mutex_unlock(&iommu->lock);
 
@@ -1337,6 +1505,8 @@ static int vfio_iommu_type1_attach_group(void *iommu_data,
 	iommu_detach_group(domain->domain, iommu_group);
 out_domain:
 	iommu_domain_free(domain->domain);
+	list_for_each_entry_safe(iova, iova_next, &iova_copy, list)
+		kfree(iova);
 out_free:
 	kfree(domain);
 	kfree(group);
@@ -1475,6 +1645,7 @@ static void *vfio_iommu_type1_open(unsigned long arg)
 	}
 
 	INIT_LIST_HEAD(&iommu->domain_list);
+	INIT_LIST_HEAD(&iommu->iova_list);
 	iommu->dma_list = RB_ROOT;
 	mutex_init(&iommu->lock);
 	BLOCKING_INIT_NOTIFIER_HEAD(&iommu->notifier);
@@ -1502,6 +1673,7 @@ static void vfio_iommu_type1_release(void *iommu_data)
 {
 	struct vfio_iommu *iommu = iommu_data;
 	struct vfio_domain *domain, *domain_tmp;
+	struct vfio_iova *iova, *iova_next;
 
 	if (iommu->external_domain) {
 		vfio_release_domain(iommu->external_domain, true);
@@ -1517,6 +1689,13 @@ static void vfio_iommu_type1_release(void *iommu_data)
 		list_del(&domain->next);
 		kfree(domain);
 	}
+
+	list_for_each_entry_safe(iova, iova_next,
+				 &iommu->iova_list, list) {
+		list_del(&iova->list);
+		kfree(iova);
+	}
+
 	kfree(iommu);
 }
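
For readers tracing the logic, the sketch below is a standalone
userspace model of the interval clipping that vfio_iommu_aper_resize()
performs on the valid-iova list when a new, narrower aperture is
applied. It is illustrative only, not kernel code: struct iova_range
and aper_resize() are hypothetical stand-ins that use a plain array
rather than a kernel list_head, and the empty-list insertion case is
omitted.

#include <stdio.h>
#include <stdint.h>

struct iova_range {
	uint64_t start;
	uint64_t end;
};

/*
 * Clip a sorted set of valid-iova ranges against a new aperture
 * [start, end]: ranges wholly outside it are dropped, ranges that
 * straddle a boundary are shrunk. Returns the surviving count.
 */
static int aper_resize(struct iova_range *r, int n,
		       uint64_t start, uint64_t end)
{
	int i, out = 0;

	for (i = 0; i < n; i++) {
		if (r[i].end < start || r[i].start > end)
			continue;		/* wholly outside: drop */
		if (r[i].start < start)
			r[i].start = start;	/* trim leading edge */
		if (r[i].end > end)
			r[i].end = end;		/* trim trailing edge */
		r[out++] = r[i];
	}
	return out;
}

int main(void)
{
	/* One entry covering the initial domain geometry. */
	struct iova_range r[] = { { 0x0, 0xffffffffULL } };
	int i, n;

	/* A second domain narrows the aperture at both ends. */
	n = aper_resize(r, 1, 0x1000, 0xfffff000ULL);

	for (i = 0; i < n; i++)
		printf("valid iova: [0x%llx - 0x%llx]\n",
		       (unsigned long long)r[i].start,
		       (unsigned long long)r[i].end);
	return 0;
}

Built with "cc -std=c99 demo.c", this prints "valid iova: [0x1000 -
0xfffff000]": the single geometry entry survives, trimmed at both ends.
Clipping rather than rejecting is safe at this point because
vfio_iommu_aper_conflict() has already verified that no existing dma
mapping lies in the regions being trimmed away.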