From patchwork Thu Feb 15 09:45:01 2018
X-Patchwork-Submitter: Shameerali Kolothum Thodi
X-Patchwork-Id: 128396
From: Shameer Kolothum
To: , ,
CC: , , , , , Shameer Kolothum
Subject: [PATCH v3 3/6] vfio/type1: Update iova list on detach
Date: Thu, 15 Feb 2018 09:45:01 +0000
Message-ID: <20180215094504.4972-4-shameerali.kolothum.thodi@huawei.com>
X-Mailer: git-send-email 2.12.0.windows.1
In-Reply-To: <20180215094504.4972-1-shameerali.kolothum.thodi@huawei.com>
References: <20180215094504.4972-1-shameerali.kolothum.thodi@huawei.com>
X-Mailing-List: linux-kernel@vger.kernel.org

Get a copy of the iova list on _group_detach and try to update the
list. On success, replace the current one with the copy; leave the
list as it is if the update fails.

Signed-off-by: Shameer Kolothum
---
 drivers/vfio/vfio_iommu_type1.c | 103 ++++++++++++++++++++++++++++++++++++++++
 1 file changed, 103 insertions(+)

--
2.7.4

diff --git a/drivers/vfio/vfio_iommu_type1.c b/drivers/vfio/vfio_iommu_type1.c
index 4db87a9..8d8ddd7 100644
--- a/drivers/vfio/vfio_iommu_type1.c
+++ b/drivers/vfio/vfio_iommu_type1.c
@@ -1646,12 +1646,96 @@ static void vfio_sanity_check_pfn_list(struct vfio_iommu *iommu)
 	WARN_ON(iommu->notifier.head);
 }
 
+/*
+ * Called when a domain is removed in detach. It is possible that
+ * the removed domain decided the iova aperture window. Modify the
+ * iova aperture with the smallest window among existing domains.
+ */
+static void vfio_iommu_aper_expand(struct vfio_iommu *iommu,
+				   struct list_head *iova_copy)
+{
+	struct vfio_domain *domain;
+	struct iommu_domain_geometry geo;
+	struct vfio_iova *node;
+	phys_addr_t start = 0;
+	phys_addr_t end = (phys_addr_t)~0;
+
+	list_for_each_entry(domain, &iommu->domain_list, next) {
+		iommu_domain_get_attr(domain->domain, DOMAIN_ATTR_GEOMETRY,
+				      &geo);
+		if (geo.aperture_start > start)
+			start = geo.aperture_start;
+		if (geo.aperture_end < end)
+			end = geo.aperture_end;
+	}
+
+	/* Modify aperture limits. The new aper is either same or bigger */
+	node = list_first_entry(iova_copy, struct vfio_iova, list);
+	node->start = start;
+	node = list_last_entry(iova_copy, struct vfio_iova, list);
+	node->end = end;
+}
+
+/*
+ * Called when a group is detached. The reserved regions for that
+ * group can be part of valid iova now. But since reserved regions
+ * may be duplicated among groups, populate the iova valid regions
+ * list again.
+ */
+static int vfio_iommu_resv_refresh(struct vfio_iommu *iommu,
+				   struct list_head *iova_copy)
+{
+	struct vfio_domain *d;
+	struct vfio_group *g;
+	struct vfio_iova *node, *tmp;
+	struct iommu_resv_region *resv, *resv_next;
+	struct list_head resv_regions;
+	phys_addr_t start, end;
+	int ret;
+
+	INIT_LIST_HEAD(&resv_regions);
+
+	list_for_each_entry(d, &iommu->domain_list, next) {
+		list_for_each_entry(g, &d->group_list, next)
+			iommu_get_group_resv_regions(g->iommu_group,
+						     &resv_regions);
+	}
+
+	if (list_empty(&resv_regions))
+		return 0;
+
+	node = list_first_entry(iova_copy, struct vfio_iova, list);
+	start = node->start;
+	node = list_last_entry(iova_copy, struct vfio_iova, list);
+	end = node->end;
+
+	/* purge the iova list and create new one */
+	list_for_each_entry_safe(node, tmp, iova_copy, list) {
+		list_del(&node->list);
+		kfree(node);
+	}
+
+	ret = vfio_iommu_aper_resize(iova_copy, start, end);
+	if (ret)
+		goto done;
+
+	/* Exclude current reserved regions from iova ranges */
+	ret = vfio_iommu_resv_exclude(iova_copy, &resv_regions);
+done:
+	list_for_each_entry_safe(resv, resv_next, &resv_regions, list)
+		kfree(resv);
+	return ret;
+}
+
 static void vfio_iommu_type1_detach_group(void *iommu_data,
 					  struct iommu_group *iommu_group)
 {
 	struct vfio_iommu *iommu = iommu_data;
 	struct vfio_domain *domain;
 	struct vfio_group *group;
+	struct list_head iova_copy;
+	struct vfio_iova *iova, *iova_next;
+	bool iova_copy_fail;
 
 	mutex_lock(&iommu->lock);
 
@@ -1674,6 +1758,13 @@ static void vfio_iommu_type1_detach_group(void *iommu_data,
 		}
 	}
 
+	/*
+	 * Get a copy of iova list. If success, use copy to update the
+	 * list and to replace the current one.
+	 */
+	INIT_LIST_HEAD(&iova_copy);
+	iova_copy_fail = !!vfio_iommu_get_iova_copy(iommu, &iova_copy);
+
 	list_for_each_entry(domain, &iommu->domain_list, next) {
 		group = find_iommu_group(domain, iommu_group);
 		if (!group)
@@ -1699,10 +1790,22 @@ static void vfio_iommu_type1_detach_group(void *iommu_data,
 			iommu_domain_free(domain->domain);
 			list_del(&domain->next);
 			kfree(domain);
+			if (!iova_copy_fail)
+				vfio_iommu_aper_expand(iommu, &iova_copy);
 		}
 		break;
 	}
 
+	if (!iova_copy_fail) {
+		if (!vfio_iommu_resv_refresh(iommu, &iova_copy)) {
+			/* Delete the current one and insert new iova list */
+			vfio_iommu_insert_iova_copy(iommu, &iova_copy);
+			goto detach_group_done;
+		}
+	}
+
+	list_for_each_entry_safe(iova, iova_next, &iova_copy, list)
+		kfree(iova);
 detach_group_done:
 	mutex_unlock(&iommu->lock);
 }
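
For readers tracing the control flow above: the detach path builds a throwaway
copy of the iova list, updates the copy (aperture expand, then reserved-region
refresh), and only swaps it in once every step has succeeded; on any failure
the copy is freed and the live list stays untouched. Below is a minimal
standalone C sketch of that copy/update/swap-on-success pattern; the struct
range type and range_list_* helpers are hypothetical illustrations, not part
of the kernel code in this patch.

/*
 * Standalone sketch (not kernel code): the copy/update/swap-on-success
 * pattern used by vfio_iommu_type1_detach_group() above, shown with a
 * plain singly linked list. All names here are hypothetical.
 */
#include <stdio.h>
#include <stdlib.h>

struct range {
	unsigned long long start, end;
	struct range *next;
};

/* Deep-copy a range list; returns 0 on success, -1 on allocation failure. */
static int range_list_copy(const struct range *src, struct range **out)
{
	struct range *head = NULL, **tail = &head;

	for (; src; src = src->next) {
		struct range *n = malloc(sizeof(*n));

		if (!n) {
			while (head) {		/* free the partial copy */
				struct range *t = head->next;

				free(head);
				head = t;
			}
			return -1;
		}
		*n = *src;
		n->next = NULL;
		*tail = n;
		tail = &n->next;
	}
	*out = head;
	return 0;
}

static void range_list_free(struct range *r)
{
	while (r) {
		struct range *t = r->next;

		free(r);
		r = t;
	}
}

int main(void)
{
	struct range last = { 0x8000, 0xffff, NULL };
	struct range first = { 0x0, 0x7fff, &last };
	struct range *live = &first, *copy = NULL;
	const struct range *r;

	if (range_list_copy(live, &copy) == 0) {
		/*
		 * Update the copy only, e.g. widen the last range the way
		 * vfio_iommu_aper_expand() widens the aperture ends.
		 */
		copy->next->end = 0xfffff;

		/*
		 * Every step succeeded: the copy replaces the live list
		 * (the vfio_iommu_insert_iova_copy() step in the patch).
		 * On any failure we would simply free the copy and keep
		 * the original list untouched.
		 */
		live = copy;
	}

	for (r = live; r; r = r->next)
		printf("0x%llx - 0x%llx\n", r->start, r->end);

	range_list_free(copy);
	return 0;
}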