From patchwork Fri Mar 19 13:25:43 2021
X-Patchwork-Submitter: John Garry
X-Patchwork-Id: 404794
From: John Garry
Subject: [PATCH 1/6] iommu: Move IOVA power-of-2 roundup into allocator
Date: Fri, 19 Mar 2021 21:25:43 +0800
Message-ID: <1616160348-29451-2-git-send-email-john.garry@huawei.com>
In-Reply-To: <1616160348-29451-1-git-send-email-john.garry@huawei.com>
References: <1616160348-29451-1-git-send-email-john.garry@huawei.com>
X-Mailing-List: linux-scsi@vger.kernel.org

Move the IOVA size power-of-2 rcache roundup into the IOVA allocator.

This is to eventually make it possible to configure the upper limit of
the IOVA rcache range.
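For illustration only (not part of the patch), here is a minimal user-space
sketch of the rounding rule that this patch relocates into the allocator's
fast path. The constant mirrors the current default ceiling of 6 (log2 of
the largest cached IOVA range, in pages) and the helper is a simplified
stand-in for the kernel's roundup_pow_of_two():

/* Illustration only: user-space model of the roundup now done on the
 * "fast" (rcache) path of __alloc_and_insert_iova_range(). */
#include <stdio.h>

#define IOVA_RANGE_CACHE_MAX_SIZE 6	/* log of max cached range size (pages) */

static unsigned long roundup_pow_of_two(unsigned long n)
{
	unsigned long p = 1;

	while (p < n)
		p <<= 1;
	return p;
}

static unsigned long iova_alloc_len(unsigned long size, int fast)
{
	/* Only cacheable (fast-path) sizes below the ceiling are padded. */
	if (fast && size < (1 << (IOVA_RANGE_CACHE_MAX_SIZE - 1)))
		size = roundup_pow_of_two(size);
	return size;
}

int main(void)
{
	printf("%lu\n", iova_alloc_len(5, 1));	/* 8: padded, cacheable      */
	printf("%lu\n", iova_alloc_len(40, 1));	/* 40: above the ceiling     */
	printf("%lu\n", iova_alloc_len(5, 0));	/* 5: slow path, kept exact  */
	return 0;
}

Sizes below the ceiling are padded to the next power of two so they can be
returned to an rcache bin on free; slow-path callers keep the exact size.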
Signed-off-by: John Garry
---
 drivers/iommu/dma-iommu.c |  8 ------
 drivers/iommu/iova.c      | 51 ++++++++++++++++++++++++++-------------
 2 files changed, 34 insertions(+), 25 deletions(-)

--
2.26.2

diff --git a/drivers/iommu/dma-iommu.c b/drivers/iommu/dma-iommu.c
index af765c813cc8..15b7270a5c2a 100644
--- a/drivers/iommu/dma-iommu.c
+++ b/drivers/iommu/dma-iommu.c
@@ -429,14 +429,6 @@ static dma_addr_t iommu_dma_alloc_iova(struct iommu_domain *domain,
 
 	shift = iova_shift(iovad);
 	iova_len = size >> shift;
-	/*
-	 * Freeing non-power-of-two-sized allocations back into the IOVA caches
-	 * will come back to bite us badly, so we have to waste a bit of space
-	 * rounding up anything cacheable to make sure that can't happen. The
-	 * order of the unadjusted size will still match upon freeing.
-	 */
-	if (iova_len < (1 << (IOVA_RANGE_CACHE_MAX_SIZE - 1)))
-		iova_len = roundup_pow_of_two(iova_len);
 
 	dma_limit = min_not_zero(dma_limit, dev->bus_dma_limit);
 
diff --git a/drivers/iommu/iova.c b/drivers/iommu/iova.c
index e6e2fa85271c..e62e9e30b30c 100644
--- a/drivers/iommu/iova.c
+++ b/drivers/iommu/iova.c
@@ -179,7 +179,7 @@ iova_insert_rbtree(struct rb_root *root, struct iova *iova,
 
 static int __alloc_and_insert_iova_range(struct iova_domain *iovad,
 		unsigned long size, unsigned long limit_pfn,
-			struct iova *new, bool size_aligned)
+			struct iova *new, bool size_aligned, bool fast)
 {
 	struct rb_node *curr, *prev;
 	struct iova *curr_iova;
@@ -188,6 +188,15 @@ static int __alloc_and_insert_iova_range(struct iova_domain *iovad,
 	unsigned long align_mask = ~0UL;
 	unsigned long high_pfn = limit_pfn, low_pfn = iovad->start_pfn;
 
+	/*
+	 * Freeing non-power-of-two-sized allocations back into the IOVA caches
+	 * will come back to bite us badly, so we have to waste a bit of space
+	 * rounding up anything cacheable to make sure that can't happen. The
+	 * order of the unadjusted size will still match upon freeing.
+	 */
+	if (fast && size < (1 << (IOVA_RANGE_CACHE_MAX_SIZE - 1)))
+		size = roundup_pow_of_two(size);
+
 	if (size_aligned)
 		align_mask <<= fls_long(size - 1);
 
@@ -288,21 +297,10 @@ void iova_cache_put(void)
 }
 EXPORT_SYMBOL_GPL(iova_cache_put);
 
-/**
- * alloc_iova - allocates an iova
- * @iovad: - iova domain in question
- * @size: - size of page frames to allocate
- * @limit_pfn: - max limit address
- * @size_aligned: - set if size_aligned address range is required
- * This function allocates an iova in the range iovad->start_pfn to limit_pfn,
- * searching top-down from limit_pfn to iovad->start_pfn. If the size_aligned
- * flag is set then the allocated address iova->pfn_lo will be naturally
- * aligned on roundup_power_of_two(size).
- */
-struct iova *
-alloc_iova(struct iova_domain *iovad, unsigned long size,
+static struct iova *
+__alloc_iova(struct iova_domain *iovad, unsigned long size,
 	unsigned long limit_pfn,
-	bool size_aligned)
+	bool size_aligned, bool fast)
 {
 	struct iova *new_iova;
 	int ret;
@@ -312,7 +310,7 @@ alloc_iova(struct iova_domain *iovad, unsigned long size,
 		return NULL;
 
 	ret = __alloc_and_insert_iova_range(iovad, size, limit_pfn + 1,
-			new_iova, size_aligned);
+			new_iova, size_aligned, fast);
 
 	if (ret) {
 		free_iova_mem(new_iova);
@@ -321,6 +319,25 @@ alloc_iova(struct iova_domain *iovad, unsigned long size,
 
 	return new_iova;
 }
+
+/**
+ * alloc_iova - allocates an iova
+ * @iovad: - iova domain in question
+ * @size: - size of page frames to allocate
+ * @limit_pfn: - max limit address
+ * @size_aligned: - set if size_aligned address range is required
+ * This function allocates an iova in the range iovad->start_pfn to limit_pfn,
+ * searching top-down from limit_pfn to iovad->start_pfn. If the size_aligned
+ * flag is set then the allocated address iova->pfn_lo will be naturally
+ * aligned on roundup_power_of_two(size).
+ */
+struct iova *
+alloc_iova(struct iova_domain *iovad, unsigned long size,
+	unsigned long limit_pfn,
+	bool size_aligned)
+{
+	return __alloc_iova(iovad, size, limit_pfn, size_aligned, false);
+}
 EXPORT_SYMBOL_GPL(alloc_iova);
 
 static struct iova *
@@ -433,7 +450,7 @@ alloc_iova_fast(struct iova_domain *iovad, unsigned long size,
 		return iova_pfn;
 
 retry:
-	new_iova = alloc_iova(iovad, size, limit_pfn, true);
+	new_iova = __alloc_iova(iovad, size, limit_pfn, true, true);
 	if (!new_iova) {
 		unsigned int cpu;

From patchwork Fri Mar 19 13:25:44 2021
X-Patchwork-Submitter: John Garry
X-Patchwork-Id: 404789
From: John Garry
Subject: [PATCH 2/6] iova: Add a per-domain count of reserved nodes
Date: Fri, 19 Mar 2021 21:25:44 +0800
Message-ID: <1616160348-29451-3-git-send-email-john.garry@huawei.com>
In-Reply-To: <1616160348-29451-1-git-send-email-john.garry@huawei.com>
References: <1616160348-29451-1-git-send-email-john.garry@huawei.com>
X-Mailing-List: linux-scsi@vger.kernel.org

To help learn whether the domain holds any regular IOVA nodes, add a
count of reserved nodes, calculated at init time.
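For illustration only (not part of the patch), a small user-space model of
what the counter enables: later in this series the number of rbtree nodes is
compared against it, so a domain holding only reserved ranges can still be
treated as not yet live. The struct and helper names below are invented for
the sketch:

/* Illustration only: why tracking reserved nodes separately is useful. */
#include <stdbool.h>
#include <stdio.h>

struct toy_domain {
	int node_count;			/* every node currently in the rbtree */
	int reserved_node_count;	/* nodes added via reserve_iova()     */
};

/* True while the rbtree holds nothing but reserved ranges. */
static bool only_reserved_nodes(const struct toy_domain *d)
{
	return d->node_count == d->reserved_node_count;
}

int main(void)
{
	struct toy_domain d = { .node_count = 2, .reserved_node_count = 2 };

	printf("%d\n", only_reserved_nodes(&d));	/* 1: no regular IOVAs yet */
	d.node_count++;					/* a regular IOVA arrives  */
	printf("%d\n", only_reserved_nodes(&d));	/* 0: domain is now live   */
	return 0;
}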
Signed-off-by: John Garry
---
 drivers/iommu/iova.c | 2 ++
 include/linux/iova.h | 1 +
 2 files changed, 3 insertions(+)

--
2.26.2

diff --git a/drivers/iommu/iova.c b/drivers/iommu/iova.c
index e62e9e30b30c..cecc74fb8663 100644
--- a/drivers/iommu/iova.c
+++ b/drivers/iommu/iova.c
@@ -717,6 +717,8 @@ reserve_iova(struct iova_domain *iovad,
 	 * or need to insert remaining non overlap addr range
 	 */
 	iova = __insert_new_range(iovad, pfn_lo, pfn_hi);
+	if (iova)
+		iovad->reserved_node_count++;
 finish:
 
 	spin_unlock_irqrestore(&iovad->iova_rbtree_lock, flags);
diff --git a/include/linux/iova.h b/include/linux/iova.h
index c834c01c0a5b..fd3217a605b2 100644
--- a/include/linux/iova.h
+++ b/include/linux/iova.h
@@ -95,6 +95,7 @@ struct iova_domain {
 						   flush-queues */
 	atomic_t fq_timer_on;			/* 1 when timer is active, 0
 						   when not */
+	int reserved_node_count;
 };
 
 static inline unsigned long iova_size(struct iova *iova)

From patchwork Fri Mar 19 13:25:45 2021
X-Patchwork-Submitter: John Garry
X-Patchwork-Id: 404791
From: John Garry
Subject: [PATCH 3/6] iova: Allow rcache range upper limit to be configurable
Date: Fri, 19 Mar 2021 21:25:45 +0800
Message-ID: <1616160348-29451-4-git-send-email-john.garry@huawei.com>
In-Reply-To: <1616160348-29451-1-git-send-email-john.garry@huawei.com>
References: <1616160348-29451-1-git-send-email-john.garry@huawei.com>
X-Mailing-List: linux-scsi@vger.kernel.org

Some LLDs may request DMA mappings whose IOVA length exceeds the current
rcache upper limit. This means that allocations for those IOVAs will
never be cached, and must always be allocated and freed from the RB tree
on each DMA mapping cycle. This has a significant effect on performance,
more so since commit 4e89dce72521 ("iommu/iova: Retry from last rb tree
node if iova search fails"), as discussed at [0].

Allow the range of cached IOVAs to be increased by providing an API to
set the upper limit, which is capped at IOVA_RANGE_CACHE_MAX_SIZE.

For simplicity, the full range of IOVA rcaches is allocated and
initialized at IOVA domain init time.

Setting the range upper limit will fail if the domain is already live,
i.e. it must be done before the tree contains any regular IOVA nodes.
This is required to ensure that any cached IOVAs comply with the rule
that their size is a power of two.

[0] https://lore.kernel.org/linux-iommu/20210129092120.1482-1-thunder.leizhen@huawei.com/
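For illustration only (not part of the patch), a sketch of the calling order
implied by the constraint above: raise the ceiling after init_iova_domain()
and any reserve_iova() calls, but before the first regular allocation. The
function name and the 256-page value are assumptions made up for the example:

/* Illustration only: assumed calling order for the new API. */
#include <linux/iova.h>

static void example_domain_setup(struct iova_domain *iovad,
				 unsigned long granule,
				 unsigned long start_pfn)
{
	init_iova_domain(iovad, granule, start_pfn);

	/* Reserved ranges do not count as regular IOVA nodes. */
	reserve_iova(iovad, start_pfn, start_pfn + 0xff);

	/*
	 * Cache IOVA lengths up to 256 pages; the resulting index is
	 * clamped to IOVA_RANGE_CACHE_MAX_SIZE inside the helper.
	 */
	iova_rcache_set_upper_limit(iovad, 256);

	/* alloc_iova_fast() callers now benefit from the larger rcache range. */
}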
Signed-off-by: John Garry
---
 drivers/iommu/iova.c | 37 +++++++++++++++++++++++++++++++++++--
 include/linux/iova.h | 11 ++++++++++-
 2 files changed, 45 insertions(+), 3 deletions(-)

--
2.26.2

diff --git a/drivers/iommu/iova.c b/drivers/iommu/iova.c
index cecc74fb8663..d4f2f7fbbd84 100644
--- a/drivers/iommu/iova.c
+++ b/drivers/iommu/iova.c
@@ -49,6 +49,7 @@ init_iova_domain(struct iova_domain *iovad, unsigned long granule,
 	iovad->flush_cb = NULL;
 	iovad->fq = NULL;
 	iovad->anchor.pfn_lo = iovad->anchor.pfn_hi = IOVA_ANCHOR;
+	iovad->rcache_max_size = IOVA_RANGE_CACHE_DEFAULT_SIZE;
 	rb_link_node(&iovad->anchor.node, NULL, &iovad->rbroot.rb_node);
 	rb_insert_color(&iovad->anchor.node, &iovad->rbroot);
 	init_iova_rcaches(iovad);
@@ -194,7 +195,7 @@ static int __alloc_and_insert_iova_range(struct iova_domain *iovad,
 	 * rounding up anything cacheable to make sure that can't happen. The
 	 * order of the unadjusted size will still match upon freeing.
 	 */
-	if (fast && size < (1 << (IOVA_RANGE_CACHE_MAX_SIZE - 1)))
+	if (fast && size < (1 << (iovad->rcache_max_size - 1)))
 		size = roundup_pow_of_two(size);
 
 	if (size_aligned)
@@ -901,7 +902,7 @@ static bool iova_rcache_insert(struct iova_domain *iovad, unsigned long pfn,
 {
 	unsigned int log_size = order_base_2(size);
 
-	if (log_size >= IOVA_RANGE_CACHE_MAX_SIZE)
+	if (log_size >= iovad->rcache_max_size)
 		return false;
 
 	return __iova_rcache_insert(iovad, &iovad->rcaches[log_size], pfn);
@@ -946,6 +947,38 @@ static unsigned long __iova_rcache_get(struct iova_rcache *rcache,
 	return iova_pfn;
 }
 
+void iova_rcache_set_upper_limit(struct iova_domain *iovad,
+				 unsigned long iova_len)
+{
+	unsigned int rcache_index = order_base_2(iova_len) + 1;
+	struct rb_node *rb_node = &iovad->anchor.node;
+	unsigned long flags;
+	int count = 0;
+
+	spin_lock_irqsave(&iovad->iova_rbtree_lock, flags);
+	if (rcache_index <= iovad->rcache_max_size)
+		goto out;
+
+	while (1) {
+		rb_node = rb_prev(rb_node);
+		if (!rb_node)
+			break;
+		count++;
+	}
+
+	/*
+	 * If there are already IOVA nodes present in the tree, then don't
+	 * allow range upper limit to be set.
+	 */
+	if (count != iovad->reserved_node_count)
+		goto out;
+
+	iovad->rcache_max_size = min_t(unsigned long, rcache_index,
+				       IOVA_RANGE_CACHE_MAX_SIZE);
+out:
+	spin_unlock_irqrestore(&iovad->iova_rbtree_lock, flags);
+}
+
 /*
  * Try to satisfy IOVA allocation range from rcache.  Fail if requested
 * size is too big or the DMA limit we are given isn't satisfied by the
diff --git a/include/linux/iova.h b/include/linux/iova.h
index fd3217a605b2..952b81b54ef7 100644
--- a/include/linux/iova.h
+++ b/include/linux/iova.h
@@ -25,7 +25,8 @@ struct iova {
 struct iova_magazine;
 struct iova_cpu_rcache;
 
-#define IOVA_RANGE_CACHE_MAX_SIZE 6	/* log of max cached IOVA range size (in pages) */
+#define IOVA_RANGE_CACHE_DEFAULT_SIZE 6
+#define IOVA_RANGE_CACHE_MAX_SIZE 10 /* log of max cached IOVA range size (in pages) */
 #define MAX_GLOBAL_MAGS 32	/* magazines per bin */
 
 struct iova_rcache {
@@ -74,6 +75,7 @@ struct iova_domain {
 	unsigned long	start_pfn;	/* Lower limit for this domain */
 	unsigned long	dma_32bit_pfn;
 	unsigned long	max32_alloc_size; /* Size of last failed allocation */
+	unsigned long	rcache_max_size; /* Upper limit of cached IOVA RANGE */
 	struct iova_fq __percpu *fq;	/* Flush Queue */
 
 	atomic64_t	fq_flush_start_cnt;	/* Number of TLB flushes that
@@ -158,6 +160,8 @@ int init_iova_flush_queue(struct iova_domain *iovad,
 struct iova *find_iova(struct iova_domain *iovad, unsigned long pfn);
 void put_iova_domain(struct iova_domain *iovad);
 void free_cpu_cached_iovas(unsigned int cpu, struct iova_domain *iovad);
+void iova_rcache_set_upper_limit(struct iova_domain *iovad,
+				 unsigned long iova_len);
 #else
 static inline int iova_cache_get(void)
 {
@@ -238,6 +242,11 @@ static inline void free_cpu_cached_iovas(unsigned int cpu,
 					  struct iova_domain *iovad)
 {
 }
+
+static inline void iova_rcache_set_upper_limit(struct iova_domain *iovad,
+					       unsigned long iova_len)
+{
+}
 #endif
 
 #endif

From patchwork Fri Mar 19 13:25:46 2021
X-Patchwork-Submitter: John Garry
X-Patchwork-Id: 404788
From: John Garry
Subject: [PATCH 4/6] iommu: Add iommu_dma_set_opt_size()
Date: Fri, 19 Mar 2021 21:25:46 +0800
Message-ID: <1616160348-29451-5-git-send-email-john.garry@huawei.com>
In-Reply-To: <1616160348-29451-1-git-send-email-john.garry@huawei.com>
References: <1616160348-29451-1-git-send-email-john.garry@huawei.com>
X-Mailing-List: linux-scsi@vger.kernel.org

Add a function which allows the maximum optimised IOMMU DMA mapping size
to be set.
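For illustration only (not part of the patch), a user-space model of the
arithmetic this function performs before handing off to
iova_rcache_set_upper_limit(): bytes are converted to an IOVA length in pages
and rounded up to a power of two. The 4K granule and the 1 MB size are
assumed values for the example:

/* Illustration only: byte size -> IOVA length -> rcache ceiling index. */
#include <stdio.h>

static unsigned long roundup_pow_of_two(unsigned long n)
{
	unsigned long p = 1;

	while (p < n)
		p <<= 1;
	return p;
}

/* order_base_2(n) == ceil(log2(n)), as in the kernel helper. */
static unsigned int order_base_2(unsigned long n)
{
	unsigned int order = 0;

	while ((1UL << order) < n)
		order++;
	return order;
}

int main(void)
{
	unsigned long shift = 12;		/* iova_shift() for a 4K granule */
	unsigned long size = 1UL << 20;		/* assume a 1 MB optimal mapping */
	unsigned long iova_len = roundup_pow_of_two(size >> shift);

	/* iova_rcache_set_upper_limit() stores order_base_2(iova_len) + 1. */
	printf("iova_len=%lu pages, rcache index=%u\n",
	       iova_len, order_base_2(iova_len) + 1);	/* 256 pages, index 9 */
	return 0;
}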
Signed-off-by: John Garry
---
 drivers/iommu/dma-iommu.c | 15 +++++++++++++++
 1 file changed, 15 insertions(+)

--
2.26.2

diff --git a/drivers/iommu/dma-iommu.c b/drivers/iommu/dma-iommu.c
index 15b7270a5c2a..a5dfbd6c0496 100644
--- a/drivers/iommu/dma-iommu.c
+++ b/drivers/iommu/dma-iommu.c
@@ -447,6 +447,21 @@ static dma_addr_t iommu_dma_alloc_iova(struct iommu_domain *domain,
 	return (dma_addr_t)iova << shift;
 }
 
+__maybe_unused
+static void iommu_dma_set_opt_size(struct device *dev, size_t size)
+{
+	struct iommu_domain *domain = iommu_get_dma_domain(dev);
+	struct iommu_dma_cookie *cookie = domain->iova_cookie;
+	struct iova_domain *iovad = &cookie->iovad;
+	unsigned long shift, iova_len;
+
+	shift = iova_shift(iovad);
+	iova_len = size >> shift;
+	iova_len = roundup_pow_of_two(iova_len);
+
+	iova_rcache_set_upper_limit(iovad, iova_len);
+}
+
 static void iommu_dma_free_iova(struct iommu_dma_cookie *cookie,
 		dma_addr_t iova, size_t size, struct page *freelist)
 {

From patchwork Fri Mar 19 13:25:47 2021
X-Patchwork-Submitter: John Garry
X-Patchwork-Id: 404790
From: John Garry
Subject: [PATCH 5/6] dma-mapping/iommu: Add dma_set_max_opt_size()
Date: Fri, 19 Mar 2021 21:25:47 +0800
Message-ID: <1616160348-29451-6-git-send-email-john.garry@huawei.com>
In-Reply-To: <1616160348-29451-1-git-send-email-john.garry@huawei.com>
References: <1616160348-29451-1-git-send-email-john.garry@huawei.com>
X-Mailing-List: linux-scsi@vger.kernel.org

Add a function to set the maximum size for which we want DMA mappings
to be optimised.
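For illustration only (not part of the patch), a sketch of how a driver would
be expected to call the new API, modelled on the hisi_sas change later in
this series; the probe function and the 1 MB size are assumptions, not real
driver code:

/* Illustration only: assumed driver-side use of dma_set_max_opt_size(). */
#include <linux/dma-mapping.h>
#include <linux/pci.h>
#include <linux/sizes.h>

static int example_probe(struct pci_dev *pdev, const struct pci_device_id *id)
{
	struct device *dev = &pdev->dev;

	/*
	 * Declare the largest mapping issued per command so the IOMMU DMA
	 * layer can size its IOVA rcache range to cover it. This is a no-op
	 * when the dma_map_ops in use do not implement .set_max_opt_size.
	 */
	dma_set_max_opt_size(dev, SZ_1M);

	return 0;
}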
Signed-off-by: John Garry
---
 drivers/iommu/dma-iommu.c   |  2 +-
 include/linux/dma-map-ops.h |  1 +
 include/linux/dma-mapping.h |  5 +++++
 kernel/dma/mapping.c        | 11 +++++++++++
 4 files changed, 18 insertions(+), 1 deletion(-)

--
2.26.2

diff --git a/drivers/iommu/dma-iommu.c b/drivers/iommu/dma-iommu.c
index a5dfbd6c0496..d35881fcfb9c 100644
--- a/drivers/iommu/dma-iommu.c
+++ b/drivers/iommu/dma-iommu.c
@@ -447,7 +447,6 @@ static dma_addr_t iommu_dma_alloc_iova(struct iommu_domain *domain,
 	return (dma_addr_t)iova << shift;
 }
 
-__maybe_unused
 static void iommu_dma_set_opt_size(struct device *dev, size_t size)
 {
 	struct iommu_domain *domain = iommu_get_dma_domain(dev);
@@ -1278,6 +1277,7 @@ static const struct dma_map_ops iommu_dma_ops = {
 	.map_resource		= iommu_dma_map_resource,
 	.unmap_resource		= iommu_dma_unmap_resource,
 	.get_merge_boundary	= iommu_dma_get_merge_boundary,
+	.set_max_opt_size	= iommu_dma_set_opt_size,
 };
 
 /*
diff --git a/include/linux/dma-map-ops.h b/include/linux/dma-map-ops.h
index 51872e736e7b..fed7a183b3b9 100644
--- a/include/linux/dma-map-ops.h
+++ b/include/linux/dma-map-ops.h
@@ -64,6 +64,7 @@ struct dma_map_ops {
 	u64 (*get_required_mask)(struct device *dev);
 	size_t (*max_mapping_size)(struct device *dev);
 	unsigned long (*get_merge_boundary)(struct device *dev);
+	void (*set_max_opt_size)(struct device *dev, size_t size);
 };
 
 #ifdef CONFIG_DMA_OPS
diff --git a/include/linux/dma-mapping.h b/include/linux/dma-mapping.h
index 2a984cb4d1e0..91fe770145d4 100644
--- a/include/linux/dma-mapping.h
+++ b/include/linux/dma-mapping.h
@@ -144,6 +144,7 @@ u64 dma_get_required_mask(struct device *dev);
 size_t dma_max_mapping_size(struct device *dev);
 bool dma_need_sync(struct device *dev, dma_addr_t dma_addr);
 unsigned long dma_get_merge_boundary(struct device *dev);
+void dma_set_max_opt_size(struct device *dev, size_t size);
 #else /* CONFIG_HAS_DMA */
 static inline dma_addr_t dma_map_page_attrs(struct device *dev,
 		struct page *page, size_t offset, size_t size,
@@ -257,6 +258,10 @@ static inline unsigned long dma_get_merge_boundary(struct device *dev)
 {
 	return 0;
 }
+static inline void dma_set_max_opt_size(struct device *dev, size_t size)
+{
+}
+
 #endif /* CONFIG_HAS_DMA */
 
 struct page *dma_alloc_pages(struct device *dev, size_t size,
diff --git a/kernel/dma/mapping.c b/kernel/dma/mapping.c
index b6a633679933..59e6acb1c471 100644
--- a/kernel/dma/mapping.c
+++ b/kernel/dma/mapping.c
@@ -608,3 +608,14 @@ unsigned long dma_get_merge_boundary(struct device *dev)
 	return ops->get_merge_boundary(dev);
 }
 EXPORT_SYMBOL_GPL(dma_get_merge_boundary);
+
+void dma_set_max_opt_size(struct device *dev, size_t size)
+{
+	const struct dma_map_ops *ops = get_dma_ops(dev);
+
+	if (!ops || !ops->set_max_opt_size)
+		return;
+
+	ops->set_max_opt_size(dev, size);
+}
+EXPORT_SYMBOL_GPL(dma_set_max_opt_size);

From patchwork Fri Mar 19 13:25:48 2021
X-Patchwork-Submitter: John Garry
X-Patchwork-Id: 404792
From: John Garry
Subject: [PATCH 6/6] scsi: hisi_sas: Set max optimal DMA size for v3 hw
Date: Fri, 19 Mar 2021 21:25:48 +0800
Message-ID: <1616160348-29451-7-git-send-email-john.garry@huawei.com>
In-Reply-To: <1616160348-29451-1-git-send-email-john.garry@huawei.com>
References: <1616160348-29451-1-git-send-email-john.garry@huawei.com>
X-Mailing-List: linux-scsi@vger.kernel.org

For IOMMU strict mode, this more than doubles throughput in some
scenarios.
Signed-off-by: John Garry
---
 drivers/scsi/hisi_sas/hisi_sas_v3_hw.c | 2 ++
 1 file changed, 2 insertions(+)

--
2.26.2

diff --git a/drivers/scsi/hisi_sas/hisi_sas_v3_hw.c b/drivers/scsi/hisi_sas/hisi_sas_v3_hw.c
index 4580e081e489..2f77b418bbeb 100644
--- a/drivers/scsi/hisi_sas/hisi_sas_v3_hw.c
+++ b/drivers/scsi/hisi_sas/hisi_sas_v3_hw.c
@@ -4684,6 +4684,8 @@ hisi_sas_v3_probe(struct pci_dev *pdev, const struct pci_device_id *id)
 		goto err_out_regions;
 	}
 
+	dma_set_max_opt_size(dev, PAGE_SIZE * HISI_SAS_SGE_PAGE_CNT);
+
 	shost = hisi_sas_shost_alloc_pci(pdev);
 	if (!shost) {
 		rc = -ENOMEM;