From patchwork Tue Jul 23 16:06:34 2019
X-Patchwork-Submitter: Shameerali Kolothum Thodi
X-Patchwork-Id: 169546
From: Shameer Kolothum
Subject: [PATCH v8 3/6] vfio/type1: Update iova list on detach
Date: Tue, 23 Jul 2019 17:06:34 +0100
Message-ID: <20190723160637.8384-4-shameerali.kolothum.thodi@huawei.com>
In-Reply-To: <20190723160637.8384-1-shameerali.kolothum.thodi@huawei.com>
References: <20190723160637.8384-1-shameerali.kolothum.thodi@huawei.com>
X-Mailing-List: linux-kernel@vger.kernel.org

Get a copy of the iova list on _group_detach and try to update the
list. On success, replace the current one with the copy. Leave the
list as it is if the update fails.

Signed-off-by: Shameer Kolothum
---
v7 --> v8
 - Fixed possible invalid holes in the iova list if there are no more
   reserved regions in vfio_iommu_resv_refresh().
 - Handled the iommu_get_group_resv_regions() error case in
   vfio_iommu_resv_refresh().
 - Tidied up the iova_copy list failure case.
---
 drivers/vfio/vfio_iommu_type1.c | 94 +++++++++++++++++++++++++++++++++
 1 file changed, 94 insertions(+)

--
2.17.1

Reviewed-by: Eric Auger

diff --git a/drivers/vfio/vfio_iommu_type1.c b/drivers/vfio/vfio_iommu_type1.c
index a3c9794ccf83..7005a8cfca1b 100644
--- a/drivers/vfio/vfio_iommu_type1.c
+++ b/drivers/vfio/vfio_iommu_type1.c
@@ -1867,12 +1867,93 @@ static void vfio_sanity_check_pfn_list(struct vfio_iommu *iommu)
         WARN_ON(iommu->notifier.head);
 }

+/*
+ * Called when a domain is removed in detach. It is possible that
+ * the removed domain decided the iova aperture window. Modify the
+ * iova aperture with the smallest window among existing domains.
+ */
+static void vfio_iommu_aper_expand(struct vfio_iommu *iommu,
+                                   struct list_head *iova_copy)
+{
+        struct vfio_domain *domain;
+        struct iommu_domain_geometry geo;
+        struct vfio_iova *node;
+        dma_addr_t start = 0;
+        dma_addr_t end = (dma_addr_t)~0;
+
+        if (list_empty(iova_copy))
+                return;
+
+        list_for_each_entry(domain, &iommu->domain_list, next) {
+                iommu_domain_get_attr(domain->domain, DOMAIN_ATTR_GEOMETRY,
+                                      &geo);
+                if (geo.aperture_start > start)
+                        start = geo.aperture_start;
+                if (geo.aperture_end < end)
+                        end = geo.aperture_end;
+        }
+
+        /* Modify aperture limits. The new aper is either same or bigger */
+        node = list_first_entry(iova_copy, struct vfio_iova, list);
+        node->start = start;
+        node = list_last_entry(iova_copy, struct vfio_iova, list);
+        node->end = end;
+}
+
+/*
+ * Called when a group is detached. The reserved regions for that
+ * group can be part of valid iova now. But since reserved regions
+ * may be duplicated among groups, populate the iova valid regions
+ * list again.
+ */
+static int vfio_iommu_resv_refresh(struct vfio_iommu *iommu,
+                                   struct list_head *iova_copy)
+{
+        struct vfio_domain *d;
+        struct vfio_group *g;
+        struct vfio_iova *node;
+        dma_addr_t start, end;
+        LIST_HEAD(resv_regions);
+        int ret;
+
+        if (list_empty(iova_copy))
+                return -EINVAL;
+
+        list_for_each_entry(d, &iommu->domain_list, next) {
+                list_for_each_entry(g, &d->group_list, next) {
+                        ret = iommu_get_group_resv_regions(g->iommu_group,
+                                                           &resv_regions);
+                        if (ret)
+                                goto done;
+                }
+        }
+
+        node = list_first_entry(iova_copy, struct vfio_iova, list);
+        start = node->start;
+        node = list_last_entry(iova_copy, struct vfio_iova, list);
+        end = node->end;
+
+        /* purge the iova list and create new one */
+        vfio_iommu_iova_free(iova_copy);
+
+        ret = vfio_iommu_aper_resize(iova_copy, start, end);
+        if (ret)
+                goto done;
+
+        /* Exclude current reserved regions from iova ranges */
+        ret = vfio_iommu_resv_exclude(iova_copy, &resv_regions);
+done:
+        vfio_iommu_resv_free(&resv_regions);
+        return ret;
+}
+
 static void vfio_iommu_type1_detach_group(void *iommu_data,
                                           struct iommu_group *iommu_group)
 {
         struct vfio_iommu *iommu = iommu_data;
         struct vfio_domain *domain;
         struct vfio_group *group;
+        LIST_HEAD(iova_copy);

         mutex_lock(&iommu->lock);

@@ -1895,6 +1976,13 @@ static void vfio_iommu_type1_detach_group(void *iommu_data,
                 }
         }

+        /*
+         * Get a copy of iova list. This will be used to update
+         * and to replace the current one later. Please note that
+         * we will leave the original list as it is if update fails.
+         */
+        vfio_iommu_iova_get_copy(iommu, &iova_copy);
+
         list_for_each_entry(domain, &iommu->domain_list, next) {
                 group = find_iommu_group(domain, iommu_group);
                 if (!group)
@@ -1920,10 +2008,16 @@ static void vfio_iommu_type1_detach_group(void *iommu_data,
                         iommu_domain_free(domain->domain);
                         list_del(&domain->next);
                         kfree(domain);
+                        vfio_iommu_aper_expand(iommu, &iova_copy);
                 }
                 break;
         }

+        if (!vfio_iommu_resv_refresh(iommu, &iova_copy))
+                vfio_iommu_iova_insert_copy(iommu, &iova_copy);
+        else
+                vfio_iommu_iova_free(&iova_copy);
+
 detach_group_done:
         mutex_unlock(&iommu->lock);
 }
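
The detach path added above boils down to a copy-update-or-keep pattern: take a
copy of the valid-iova list, recompute the copy against the remaining domains'
apertures and reserved regions, and swap it in only if every step succeeded;
otherwise free the copy so the live list is never left half-updated. Below is a
minimal, self-contained C sketch of that pattern, for illustration only. It is
not the kernel code: struct iova_range, iova_list_copy(), iova_list_clamp() and
iova_list_free() are hypothetical stand-ins for struct vfio_iova and the
vfio_iommu_iova_get_copy() / vfio_iommu_iova_insert_copy() /
vfio_iommu_iova_free() helpers introduced earlier in this series.

/*
 * Minimal sketch of the copy-update-or-keep pattern (illustration only;
 * the types and helpers below are hypothetical stand-ins, not kernel code).
 */
#include <stdbool.h>
#include <stdint.h>
#include <stdio.h>
#include <stdlib.h>

/* Simplified stand-in for the kernel's struct vfio_iova range node. */
struct iova_range {
        uint64_t start;
        uint64_t end;
        struct iova_range *next;
};

static void iova_list_free(struct iova_range **head)
{
        while (*head) {
                struct iova_range *n = *head;
                *head = n->next;
                free(n);
        }
}

/* Deep-copy @src into @dst; on allocation failure free the partial copy. */
static bool iova_list_copy(const struct iova_range *src, struct iova_range **dst)
{
        struct iova_range **tail = dst;

        *dst = NULL;
        for (; src; src = src->next) {
                struct iova_range *n = malloc(sizeof(*n));
                if (!n) {
                        iova_list_free(dst);
                        return false;
                }
                n->start = src->start;
                n->end = src->end;
                n->next = NULL;
                *tail = n;
                tail = &n->next;
        }
        return true;
}

/*
 * Clamp the copy to a new aperture, dropping ranges entirely outside it.
 * Stands in for the aperture-expand/reserved-region refresh step; returns
 * false if nothing valid remains, i.e. the "update failed" path.
 */
static bool iova_list_clamp(struct iova_range **head, uint64_t lo, uint64_t hi)
{
        struct iova_range **pp = head;

        while (*pp) {
                struct iova_range *n = *pp;
                if (n->end < lo || n->start > hi) {
                        *pp = n->next;          /* entirely outside: unlink */
                        free(n);
                        continue;
                }
                if (n->start < lo)
                        n->start = lo;
                if (n->end > hi)
                        n->end = hi;
                pp = &n->next;
        }
        return *head != NULL;
}

int main(void)
{
        struct iova_range r2 = { 0x80000000, 0xffffffff, NULL };
        struct iova_range r1 = { 0x1000, 0x7fffffff, &r2 };
        struct iova_range *live = &r1;          /* stands in for iommu->iova_list */
        struct iova_range *copy;

        /* 1. Work on a copy so the live list is never half-updated. */
        if (!iova_list_copy(live, &copy))
                return 1;

        /* 2. Recompute the copy against a (made-up) new aperture. */
        if (iova_list_clamp(&copy, 0x0, 0x9fffffff)) {
                /* 3a. Success: here the copy would replace the live list. */
                for (struct iova_range *n = copy; n; n = n->next)
                        printf("valid: 0x%llx - 0x%llx\n",
                               (unsigned long long)n->start,
                               (unsigned long long)n->end);
        }
        /* 3b. On failure (or when done here): drop the copy; live list untouched. */
        iova_list_free(&copy);
        return 0;
}

The point of working on a copy is failure atomicity rather than concurrency
(the real code already runs under iommu->lock): if the refresh fails part-way,
the original list is still intact and remains in use, which is exactly the
behaviour described in the commit message.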