From patchwork Fri Jan 12 16:45:27 2018
X-Patchwork-Submitter: Shameerali Kolothum Thodi
X-Patchwork-Id: 124376
From: Shameer Kolothum
Subject: [RFC v2 1/5] vfio/type1: Introduce iova list and add iommu aperture validity check
Date: Fri, 12 Jan 2018 16:45:27 +0000
Message-ID: <20180112164531.93712-2-shameerali.kolothum.thodi@huawei.com>
In-Reply-To: <20180112164531.93712-1-shameerali.kolothum.thodi@huawei.com>
References: <20180112164531.93712-1-shameerali.kolothum.thodi@huawei.com>
X-Mailing-List: linux-kernel@vger.kernel.org

This introduces an iova list of the ranges that are valid for DMA mappings.
On attach, make sure the new iommu aperture window is valid and does not
conflict with any existing DMA mappings. Also update the iova list with the
new aperture window during attach/detach.
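For illustration only (not part of this patch): a minimal user-space sketch of
the range arithmetic the patch applies, where the tracked iova window shrinks
to the intersection of each attached domain's aperture and an attach is
rejected if an existing mapping would fall outside the new window. The struct
and helper names below are simplified stand-ins, not the kernel API.

#include <stdbool.h>
#include <stdint.h>
#include <stdio.h>

/* Simplified stand-ins for the kernel structures (hypothetical). */
struct range { uint64_t start, end; };             /* tracked iova window */
struct mapping { uint64_t iova; uint64_t size; };  /* an existing dma map */

/*
 * Reject the new aperture if it does not overlap the current window at
 * all, or if an existing mapping would end up outside the shrunken window.
 */
static bool aperture_is_valid(struct range cur, struct range ap,
			      const struct mapping *maps, int nmaps)
{
	if (ap.start > cur.end || ap.end < cur.start)
		return false;	/* no overlap with the current window */

	for (int i = 0; i < nmaps; i++) {
		uint64_t m_start = maps[i].iova;
		uint64_t m_end = maps[i].iova + maps[i].size - 1;

		if (m_start < ap.start || m_end > ap.end)
			return false;	/* mapping would be cut off */
	}
	return true;
}

/* Shrink the tracked window to its intersection with the new aperture. */
static struct range aperture_adjust(struct range cur, struct range ap)
{
	struct range r = {
		.start = ap.start > cur.start ? ap.start : cur.start,
		.end   = ap.end   < cur.end   ? ap.end   : cur.end,
	};
	return r;
}

int main(void)
{
	struct range window = { 0x0, 0xffffffffULL };	  /* first domain  */
	struct range ap2    = { 0x1000, 0x7fffffffULL };  /* second domain */
	struct mapping maps[] = { { 0x2000, 0x1000 } };	  /* one dma map   */

	if (aperture_is_valid(window, ap2, maps, 1)) {
		window = aperture_adjust(window, ap2);
		printf("window now [0x%llx, 0x%llx]\n",
		       (unsigned long long)window.start,
		       (unsigned long long)window.end);
	} else {
		printf("attach rejected\n");
	}
	return 0;
}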
Signed-off-by: Shameer Kolothum
---
 drivers/vfio/vfio_iommu_type1.c | 177 ++++++++++++++++++++++++++++++++++++++++
 1 file changed, 177 insertions(+)

-- 
1.9.1

diff --git a/drivers/vfio/vfio_iommu_type1.c b/drivers/vfio/vfio_iommu_type1.c
index e30e29a..11cbd49 100644
--- a/drivers/vfio/vfio_iommu_type1.c
+++ b/drivers/vfio/vfio_iommu_type1.c
@@ -60,6 +60,7 @@ struct vfio_iommu {
 	struct list_head	domain_list;
+	struct list_head	iova_list;
 	struct vfio_domain	*external_domain; /* domain for external user */
 	struct mutex		lock;
 	struct rb_root		dma_list;
@@ -92,6 +93,12 @@ struct vfio_group {
 	struct list_head	next;
 };
 
+struct vfio_iova {
+	struct list_head	list;
+	phys_addr_t		start;
+	phys_addr_t		end;
+};
+
 /*
  * Guest RAM pinning working set or DMA target
  */
@@ -1192,6 +1199,123 @@ static bool vfio_iommu_has_sw_msi(struct iommu_group *group, phys_addr_t *base)
 	return ret;
 }
 
+static int vfio_insert_iova(phys_addr_t start, phys_addr_t end,
+			    struct list_head *head)
+{
+	struct vfio_iova *region;
+
+	region = kmalloc(sizeof(*region), GFP_KERNEL);
+	if (!region)
+		return -ENOMEM;
+
+	INIT_LIST_HEAD(&region->list);
+	region->start = start;
+	region->end = end;
+
+	list_add_tail(&region->list, head);
+	return 0;
+}
+
+/*
+ * Find whether a mem region overlaps with existing dma mappings
+ */
+static bool vfio_find_dma_overlap(struct vfio_iommu *iommu,
+				  phys_addr_t start, phys_addr_t end)
+{
+	struct rb_node *n = rb_first(&iommu->dma_list);
+
+	for (; n; n = rb_next(n)) {
+		struct vfio_dma *dma;
+
+		dma = rb_entry(n, struct vfio_dma, node);
+
+		if (end < dma->iova)
+			break;
+		if (start >= dma->iova + dma->size)
+			continue;
+		return true;
+	}
+
+	return false;
+}
+
+/*
+ * Check the new iommu aperture is a valid one
+ */
+static int vfio_iommu_valid_aperture(struct vfio_iommu *iommu,
+				     phys_addr_t start,
+				     phys_addr_t end)
+{
+	struct vfio_iova *first, *last;
+	struct list_head *iova = &iommu->iova_list;
+
+	if (list_empty(iova))
+		return 0;
+
+	/* Check if new one is outside the current aperture */
+	first = list_first_entry(iova, struct vfio_iova, list);
+	last = list_last_entry(iova, struct vfio_iova, list);
+	if ((start > last->end) || (end < first->start))
+		return -EINVAL;
+
+	/* Check for any existing dma mappings outside the new start */
+	if (start > first->start) {
+		if (vfio_find_dma_overlap(iommu, first->start, start - 1))
+			return -EINVAL;
+	}
+
+	/* Check for any existing dma mappings outside the new end */
+	if (end < last->end) {
+		if (vfio_find_dma_overlap(iommu, end + 1, last->end))
+			return -EINVAL;
+	}
+
+	return 0;
+}
+
+/*
+ * Adjust the iommu aperture window if new aperture is a valid one
+ */
+static int vfio_iommu_iova_aper_adjust(struct vfio_iommu *iommu,
+				       phys_addr_t start,
+				       phys_addr_t end)
+{
+	struct vfio_iova *node, *next;
+	struct list_head *iova = &iommu->iova_list;
+
+	if (list_empty(iova))
+		return vfio_insert_iova(start, end, iova);
+
+	/* Adjust iova list start */
+	list_for_each_entry_safe(node, next, iova, list) {
+		if (start < node->start)
+			break;
+		if ((start >= node->start) && (start <= node->end)) {
+			node->start = start;
+			break;
+		}
+		/* Delete nodes before new start */
+		list_del(&node->list);
+		kfree(node);
+	}
+
+	/* Adjust iova list end */
+	list_for_each_entry_safe(node, next, iova, list) {
+		if (end > node->end)
+			continue;
+
+		if ((end >= node->start) && (end <= node->end)) {
+			node->end = end;
+			continue;
+		}
+		/* Delete nodes after new end */
+		list_del(&node->list);
+		kfree(node);
+	}
+
+	return 0;
+}
+
 static int vfio_iommu_type1_attach_group(void *iommu_data,
 					 struct iommu_group *iommu_group)
 {
@@ -1202,6 +1326,7 @@ static int vfio_iommu_type1_attach_group(void *iommu_data,
 	int ret;
 	bool resv_msi, msi_remap;
 	phys_addr_t resv_msi_base;
+	struct iommu_domain_geometry geo;
 
 	mutex_lock(&iommu->lock);
 
@@ -1271,6 +1396,14 @@ static int vfio_iommu_type1_attach_group(void *iommu_data,
 	if (ret)
 		goto out_domain;
 
+	/* Get aperture info */
+	iommu_domain_get_attr(domain->domain, DOMAIN_ATTR_GEOMETRY, &geo);
+
+	ret = vfio_iommu_valid_aperture(iommu, geo.aperture_start,
+					geo.aperture_end);
+	if (ret)
+		goto out_detach;
+
 	resv_msi = vfio_iommu_has_sw_msi(iommu_group, &resv_msi_base);
 
 	INIT_LIST_HEAD(&domain->group_list);
@@ -1327,6 +1460,11 @@ static int vfio_iommu_type1_attach_group(void *iommu_data,
 			goto out_detach;
 	}
 
+	ret = vfio_iommu_iova_aper_adjust(iommu, geo.aperture_start,
+					  geo.aperture_end);
+	if (ret)
+		goto out_detach;
+
 	list_add(&domain->next, &iommu->domain_list);
 
 	mutex_unlock(&iommu->lock);
@@ -1392,6 +1530,35 @@ static void vfio_sanity_check_pfn_list(struct vfio_iommu *iommu)
 	WARN_ON(iommu->notifier.head);
 }
 
+/*
+ * Called when a domain is removed in detach. It is possible that
+ * the removed domain decided the iova aperture window. Modify the
+ * iova aperture with the smallest window among existing domains.
+ */
+static void vfio_iommu_iova_aper_refresh(struct vfio_iommu *iommu)
+{
+	struct vfio_domain *domain;
+	struct iommu_domain_geometry geo;
+	struct vfio_iova *node;
+	phys_addr_t start = 0;
+	phys_addr_t end = (phys_addr_t)~0;
+
+	list_for_each_entry(domain, &iommu->domain_list, next) {
+		iommu_domain_get_attr(domain->domain, DOMAIN_ATTR_GEOMETRY,
+				      &geo);
+		if (geo.aperture_start > start)
+			start = geo.aperture_start;
+		if (geo.aperture_end < end)
+			end = geo.aperture_end;
+	}
+
+	/* modify iova aperture limits */
+	node = list_first_entry(&iommu->iova_list, struct vfio_iova, list);
+	node->start = start;
+	node = list_last_entry(&iommu->iova_list, struct vfio_iova, list);
+	node->end = end;
+}
+
 static void vfio_iommu_type1_detach_group(void *iommu_data,
 					  struct iommu_group *iommu_group)
 {
@@ -1445,6 +1612,7 @@ static void vfio_iommu_type1_detach_group(void *iommu_data,
 			iommu_domain_free(domain->domain);
 			list_del(&domain->next);
 			kfree(domain);
+			vfio_iommu_iova_aper_refresh(iommu);
 		}
 		break;
 	}
@@ -1475,6 +1643,7 @@ static void *vfio_iommu_type1_open(unsigned long arg)
 	}
 
 	INIT_LIST_HEAD(&iommu->domain_list);
+	INIT_LIST_HEAD(&iommu->iova_list);
 	iommu->dma_list = RB_ROOT;
 	mutex_init(&iommu->lock);
 	BLOCKING_INIT_NOTIFIER_HEAD(&iommu->notifier);
@@ -1502,6 +1671,7 @@ static void vfio_iommu_type1_release(void *iommu_data)
 {
 	struct vfio_iommu *iommu = iommu_data;
 	struct vfio_domain *domain, *domain_tmp;
+	struct vfio_iova *iova, *iova_tmp;
 
 	if (iommu->external_domain) {
 		vfio_release_domain(iommu->external_domain, true);
@@ -1517,6 +1687,13 @@ static void vfio_iommu_type1_release(void *iommu_data)
 		list_del(&domain->next);
 		kfree(domain);
 	}
+
+	list_for_each_entry_safe(iova, iova_tmp,
+				 &iommu->iova_list, list) {
+		list_del(&iova->list);
+		kfree(iova);
+	}
+
 	kfree(iommu);
 }
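To make the intended behaviour concrete, an illustrative walk-through with
hypothetical apertures: if domain A reports [0x0, 0xffffffff] and domain B
reports [0x1000, 0x7fffffff], the first attach seeds the iova list with A's
range; the second attach passes vfio_iommu_valid_aperture() provided no
existing mapping lies below 0x1000 or above 0x7fffffff, and
vfio_iommu_iova_aper_adjust() shrinks the tracked range to
[0x1000, 0x7fffffff]. When B is later detached,
vfio_iommu_iova_aper_refresh() recomputes the largest aperture_start and
smallest aperture_end across the remaining domains, so the window widens
back to [0x0, 0xffffffff]; the intersection over fewer domains can only
leave the window the same or wider.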