From patchwork Thu May 18 07:59:52 2017
X-Patchwork-Submitter: "Leizhen (ThunderTown)"
X-Patchwork-Id: 100048
From: Zhen Lei
To: Joerg Roedel, iommu, Robin Murphy, David Woodhouse, Sudeep Dutt,
	Ashutosh Dixit, linux-kernel
CC: Zefan Li, Xinwei Hu, Tianhong Ding, Hanjun Guo, Zhen Lei
Subject: [PATCH v3 1/6] iommu/iova: cut down judgement times
Date: Thu, 18 May 2017 15:59:52 +0800
Message-ID: <1495094397-9132-2-git-send-email-thunder.leizhen@huawei.com>
In-Reply-To: <1495094397-9132-1-git-send-email-thunder.leizhen@huawei.com>
References: <1495094397-9132-1-git-send-email-thunder.leizhen@huawei.com>

The judgement below can only be satisfied on the final iteration of the
search loop, so the 2N comparisons it costs along the way (N iterations
fail, 0 or 1 succeeds) are performed in vain:
	if ((pfn >= iova->pfn_lo) && (pfn <= iova->pfn_hi)) {
		return iova;
	}

Signed-off-by: Zhen Lei
Reviewed-by: Robin Murphy
---
 drivers/iommu/iova.c | 9 +++------
 1 file changed, 3 insertions(+), 6 deletions(-)

--
2.5.0

diff --git a/drivers/iommu/iova.c b/drivers/iommu/iova.c
index 5c88ba7..333a9cc 100644
--- a/drivers/iommu/iova.c
+++ b/drivers/iommu/iova.c
@@ -291,15 +291,12 @@ private_find_iova(struct iova_domain *iovad, unsigned long pfn)
 	while (node) {
 		struct iova *iova = rb_entry(node, struct iova, node);

-		/* If pfn falls within iova's range, return iova */
-		if ((pfn >= iova->pfn_lo) && (pfn <= iova->pfn_hi)) {
-			return iova;
-		}
-
 		if (pfn < iova->pfn_lo)
 			node = node->rb_left;
-		else if (pfn > iova->pfn_lo)
+		else if (pfn > iova->pfn_hi)
 			node = node->rb_right;
+		else
+			return iova;	/* pfn falls within iova's range */
 	}

 	return NULL;
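To see the shape of the rewrite outside the kernel, here is a minimal
userspace sketch of the post-patch lookup. It is not the kernel code:
struct range and find_range() are illustrative stand-ins for struct iova,
the rb-tree node pointers and private_find_iova(), and it assumes the
stored ranges are disjoint, as they are in the iova rb-tree.

#include <stddef.h>

struct range {
	unsigned long pfn_lo, pfn_hi;
	struct range *left, *right;	/* stand-ins for rb_left/rb_right */
};

static struct range *find_range(struct range *node, unsigned long pfn)
{
	while (node) {
		if (pfn < node->pfn_lo)
			node = node->left;
		else if (pfn > node->pfn_hi)
			node = node->right;
		else
			return node;	/* pfn falls within node's range */
	}

	return NULL;
}

Each visited node now costs at most two comparisons instead of up to four,
because the match case falls out as the final else; that is where the 2N
saving claimed in the commit message comes from.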
From patchwork Thu May 18 07:59:53 2017
X-Patchwork-Submitter: "Leizhen (ThunderTown)"
X-Patchwork-Id: 100053
From: Zhen Lei
To: Joerg Roedel, iommu, Robin Murphy, David Woodhouse, Sudeep Dutt,
	Ashutosh Dixit, linux-kernel
CC: Zefan Li, Xinwei Hu, Tianhong Ding, Hanjun Guo, Zhen Lei
Subject: [PATCH v3 2/6] iommu/iova: insert start_pfn boundary of dma32
Date: Thu, 18 May 2017 15:59:53 +0800
Message-ID: <1495094397-9132-3-git-send-email-thunder.leizhen@huawei.com>
In-Reply-To: <1495094397-9132-1-git-send-email-thunder.leizhen@huawei.com>
References: <1495094397-9132-1-git-send-email-thunder.leizhen@huawei.com>

Reserve the first granule-sized chunk of IOVA space (starting at
start_pfn) as a boundary iova, to make sure that iovad->cached32_node can
never become NULL in future. Meanwhile, change the assignment of
iovad->cached32_node from rb_next to rb_prev of &free->node in
__cached_rbnode_delete_update().
Signed-off-by: Zhen Lei
---
 drivers/iommu/iova.c | 63 ++++++++++++++++++++++++++++++----------------------
 1 file changed, 37 insertions(+), 26 deletions(-)

--
2.5.0

diff --git a/drivers/iommu/iova.c b/drivers/iommu/iova.c
index 333a9cc..d0c19ec 100644
--- a/drivers/iommu/iova.c
+++ b/drivers/iommu/iova.c
@@ -32,6 +32,17 @@ static unsigned long iova_rcache_get(struct iova_domain *iovad,
 static void init_iova_rcaches(struct iova_domain *iovad);
 static void free_iova_rcaches(struct iova_domain *iovad);

+static void
+insert_iova_boundary(struct iova_domain *iovad)
+{
+	struct iova *iova;
+	unsigned long start_pfn_32bit = iovad->start_pfn;
+
+	iova = reserve_iova(iovad, start_pfn_32bit, start_pfn_32bit);
+	BUG_ON(!iova);
+	iovad->cached32_node = &iova->node;
+}
+
 void
 init_iova_domain(struct iova_domain *iovad, unsigned long granule,
 	unsigned long start_pfn, unsigned long pfn_32bit)
@@ -45,27 +56,38 @@ init_iova_domain(struct iova_domain *iovad, unsigned long granule,

 	spin_lock_init(&iovad->iova_rbtree_lock);
 	iovad->rbroot = RB_ROOT;
-	iovad->cached32_node = NULL;
 	iovad->granule = granule;
 	iovad->start_pfn = start_pfn;
 	iovad->dma_32bit_pfn = pfn_32bit;
 	init_iova_rcaches(iovad);
+
+	/*
+	 * Insert boundary nodes for dma32. So cached32_node can not be NULL in
+	 * future.
+	 */
+	insert_iova_boundary(iovad);
 }
 EXPORT_SYMBOL_GPL(init_iova_domain);

 static struct rb_node *
 __get_cached_rbnode(struct iova_domain *iovad, unsigned long *limit_pfn)
 {
-	if ((*limit_pfn > iovad->dma_32bit_pfn) ||
-		(iovad->cached32_node == NULL))
+	struct rb_node *cached_node;
+	struct rb_node *next_node;
+
+	if (*limit_pfn > iovad->dma_32bit_pfn)
 		return rb_last(&iovad->rbroot);
-	else {
-		struct rb_node *prev_node = rb_prev(iovad->cached32_node);
-		struct iova *curr_iova =
-			rb_entry(iovad->cached32_node, struct iova, node);
-		*limit_pfn = curr_iova->pfn_lo - 1;
-		return prev_node;
+	else
+		cached_node = iovad->cached32_node;
+
+	next_node = rb_next(cached_node);
+	if (next_node) {
+		struct iova *next_iova = rb_entry(next_node, struct iova, node);
+
+		*limit_pfn = min(*limit_pfn, next_iova->pfn_lo - 1);
 	}
+
+	return cached_node;
 }

 static void
@@ -83,20 +105,13 @@ __cached_rbnode_delete_update(struct iova_domain *iovad, struct iova *free)
 	struct iova *cached_iova;
 	struct rb_node *curr;

-	if (!iovad->cached32_node)
-		return;
 	curr = iovad->cached32_node;
 	cached_iova = rb_entry(curr, struct iova, node);

 	if (free->pfn_lo >= cached_iova->pfn_lo) {
-		struct rb_node *node = rb_next(&free->node);
-		struct iova *iova = rb_entry(node, struct iova, node);
-
 		/* only cache if it's below 32bit pfn */
-		if (node && iova->pfn_lo < iovad->dma_32bit_pfn)
-			iovad->cached32_node = node;
-		else
-			iovad->cached32_node = NULL;
+		if (free->pfn_hi <= iovad->dma_32bit_pfn)
+			iovad->cached32_node = rb_prev(&free->node);
 	}
 }

@@ -142,7 +157,7 @@ static int __alloc_and_insert_iova_range(struct iova_domain *iovad,
 		unsigned long size, unsigned long limit_pfn,
 			struct iova *new, bool size_aligned)
 {
-	struct rb_node *prev, *curr = NULL;
+	struct rb_node *prev, *curr;
 	unsigned long flags;
 	unsigned long saved_pfn;
 	unsigned int pad_size = 0;
@@ -172,13 +187,9 @@ static int __alloc_and_insert_iova_range(struct iova_domain *iovad,
 		curr = rb_prev(curr);
 	}

-	if (!curr) {
-		if (size_aligned)
-			pad_size = iova_get_pad_size(size, limit_pfn);
-		if ((iovad->start_pfn + size + pad_size) > limit_pfn) {
-			spin_unlock_irqrestore(&iovad->iova_rbtree_lock, flags);
-			return -ENOMEM;
-		}
+	if (unlikely(!curr)) {
+		spin_unlock_irqrestore(&iovad->iova_rbtree_lock, flags);
+		return -ENOMEM;
 	}

 	/* pfn_lo will point to size aligned address if size_aligned is set */
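To make the sentinel idea concrete, here is a small userspace model of the
boundary reservation. It assumes reserve_range() behaves like a simplified
reserve_iova(); struct domain, struct node, reserve_range() and
insert_boundary() are illustrative names, not the kernel API.

#include <assert.h>

struct node { unsigned long pfn_lo, pfn_hi; };

struct domain {
	struct node boundary;		/* storage for the sentinel */
	struct node *cached32_node;	/* never NULL once initialised */
	unsigned long start_pfn;
};

/* Hypothetical stand-in for reserve_iova(): marks [lo, hi] as in use. */
static struct node *reserve_range(struct domain *d, unsigned long lo,
				  unsigned long hi)
{
	d->boundary.pfn_lo = lo;
	d->boundary.pfn_hi = hi;
	return &d->boundary;
}

/* Mirrors insert_iova_boundary(): a one-granule reservation at start_pfn
 * gives the cache pointer a permanent target. */
static void insert_boundary(struct domain *d)
{
	struct node *n = reserve_range(d, d->start_pfn, d->start_pfn);

	assert(n);	/* the kernel patch uses BUG_ON(!iova) here */
	d->cached32_node = n;
}

Because the sentinel can never be freed as a normal allocation, every NULL
check on cached32_node in the update paths becomes dead code and can go.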
From patchwork Thu May 18 07:59:54 2017
X-Patchwork-Submitter: "Leizhen (ThunderTown)"
X-Patchwork-Id: 100047
From: Zhen Lei
To: Joerg Roedel, iommu, Robin Murphy, David Woodhouse, Sudeep Dutt,
	Ashutosh Dixit, linux-kernel
CC: Zefan Li, Xinwei Hu, Tianhong Ding, Hanjun Guo, Zhen Lei
Subject: [PATCH v3 3/6] iommu/iova: adjust __cached_rbnode_insert_update
Date: Thu, 18 May 2017 15:59:54 +0800
Message-ID: <1495094397-9132-4-git-send-email-thunder.leizhen@huawei.com>
In-Reply-To: <1495094397-9132-1-git-send-email-thunder.leizhen@huawei.com>
References: <1495094397-9132-1-git-send-email-thunder.leizhen@huawei.com>

For cases 2 and 3 below, adjust cached32_node to the new position; in
case 1 it stays unchanged.
For example:

case1: (the right part was allocated)

	|------------------------------|
	|<-----free---->|<--new_iova-->|
	|
	|
	cached32_node

case2: (all was allocated)

	|------------------------------|
	|<---------new_iova----------->|
	|
	|
	cached32_node

case3:

	|-----------------------|......|---------|
	|..free..|<--new_iova-->|      |
	|                              |
	|                              |
	cached32_node(new)             cached32_node(old)

Signed-off-by: Zhen Lei
---
 drivers/iommu/iova.c | 17 +++++++++--------
 1 file changed, 9 insertions(+), 8 deletions(-)

--
2.5.0

diff --git a/drivers/iommu/iova.c b/drivers/iommu/iova.c
index d0c19ec..1b8e136 100644
--- a/drivers/iommu/iova.c
+++ b/drivers/iommu/iova.c
@@ -91,12 +91,16 @@ __get_cached_rbnode(struct iova_domain *iovad, unsigned long *limit_pfn)
 }

 static void
-__cached_rbnode_insert_update(struct iova_domain *iovad,
-	unsigned long limit_pfn, struct iova *new)
+__cached_rbnode_insert_update(struct iova_domain *iovad, struct iova *new)
 {
-	if (limit_pfn != iovad->dma_32bit_pfn)
+	struct iova *cached_iova;
+
+	if (new->pfn_hi > iovad->dma_32bit_pfn)
 		return;
-	iovad->cached32_node = &new->node;
+
+	cached_iova = rb_entry(iovad->cached32_node, struct iova, node);
+	if (new->pfn_lo <= cached_iova->pfn_lo)
+		iovad->cached32_node = rb_prev(&new->node);
 }

 static void
@@ -159,12 +163,10 @@ static int __alloc_and_insert_iova_range(struct iova_domain *iovad,
 {
 	struct rb_node *prev, *curr;
 	unsigned long flags;
-	unsigned long saved_pfn;
 	unsigned int pad_size = 0;

 	/* Walk the tree backwards */
 	spin_lock_irqsave(&iovad->iova_rbtree_lock, flags);
-	saved_pfn = limit_pfn;
 	curr = __get_cached_rbnode(iovad, &limit_pfn);
 	prev = curr;
 	while (curr) {
@@ -198,11 +200,10 @@ static int __alloc_and_insert_iova_range(struct iova_domain *iovad,

 	/* If we have 'prev', it's a valid place to start the insertion. */
 	iova_insert_rbtree(&iovad->rbroot, new, prev);
-	__cached_rbnode_insert_update(iovad, saved_pfn, new);
+	__cached_rbnode_insert_update(iovad, new);

 	spin_unlock_irqrestore(&iovad->iova_rbtree_lock, flags);
-
 	return 0;
 }
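The case analysis above fits in a few lines. The sketch below is a
userspace model of the new __cached_rbnode_insert_update() decision, with
simplified stand-in types; rather than manipulating rb-tree nodes it
returns the entry the cache should track after the insertion.

struct entry { unsigned long pfn_lo, pfn_hi; };

struct cache {
	const struct entry *cached32;	/* never NULL, thanks to the boundary */
	unsigned long dma_32bit_pfn;
};

/*
 * Decide what the cache should track after inserting @new. The kernel sets
 * cached32_node to rb_prev(&new->node); here we simply report which entry
 * the cache now follows.
 */
static const struct entry *
cache_after_insert(const struct cache *c, const struct entry *new)
{
	if (new->pfn_hi > c->dma_32bit_pfn)
		return c->cached32;	/* above dma32: cache untouched */

	if (new->pfn_lo <= c->cached32->pfn_lo)
		return new;		/* cases 2 and 3: cache moves down */

	return c->cached32;		/* case 1: no change */
}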
From patchwork Thu May 18 07:59:55 2017
X-Patchwork-Submitter: "Leizhen (ThunderTown)"
X-Patchwork-Id: 100050
From: Zhen Lei
To: Joerg Roedel, iommu, Robin Murphy, David Woodhouse, Sudeep Dutt,
	Ashutosh Dixit, linux-kernel
CC: Zefan Li, Xinwei Hu, Tianhong Ding, Hanjun Guo, Zhen Lei
Subject: [PATCH v3 4/6] iommu/iova: optimize the allocation performance of dma64
Date: Thu, 18 May 2017 15:59:55 +0800
Message-ID: <1495094397-9132-5-git-send-email-thunder.leizhen@huawei.com>
In-Reply-To: <1495094397-9132-1-git-send-email-thunder.leizhen@huawei.com>
References: <1495094397-9132-1-git-send-email-thunder.leizhen@huawei.com>

Currently the search for free dma64 iova space always begins at the last
node of the iovad rb-tree. In the worst case there may be many nodes at
the tail, so the first loop in __alloc_and_insert_iova_range has to
traverse a long way; in our traces, more than 10K iterations for an iperf
run.

__alloc_and_insert_iova_range:
	......
	curr = __get_cached_rbnode(iovad, &limit_pfn);
		//--> return rb_last(&iovad->rbroot);
	while (curr) {
		......
		curr = rb_prev(curr);
	}

So add cached64_node, which works the same way as cached32_node, and add
a start_pfn boundary for dma64, to prevent an iova from crossing both the
dma32 and dma64 areas.
	|-------------------|------------------------------|
	|<--cached32_node-->|<--------cached64_node------->|
	|                   |
	start_pfn           dma_32bit_pfn + 1

Signed-off-by: Zhen Lei
---
 drivers/iommu/iova.c | 46 +++++++++++++++++++++++++++-------------------
 include/linux/iova.h |  5 +++--
 2 files changed, 30 insertions(+), 21 deletions(-)

--
2.5.0

diff --git a/drivers/iommu/iova.c b/drivers/iommu/iova.c
index 1b8e136..711b10a 100644
--- a/drivers/iommu/iova.c
+++ b/drivers/iommu/iova.c
@@ -37,10 +37,15 @@ insert_iova_boundary(struct iova_domain *iovad)
 {
 	struct iova *iova;
 	unsigned long start_pfn_32bit = iovad->start_pfn;
+	unsigned long start_pfn_64bit = iovad->dma_32bit_pfn + 1;

 	iova = reserve_iova(iovad, start_pfn_32bit, start_pfn_32bit);
 	BUG_ON(!iova);
 	iovad->cached32_node = &iova->node;
+
+	iova = reserve_iova(iovad, start_pfn_64bit, start_pfn_64bit);
+	BUG_ON(!iova);
+	iovad->cached64_node = &iova->node;
 }

 void
@@ -62,8 +67,8 @@ init_iova_domain(struct iova_domain *iovad, unsigned long granule,
 	init_iova_rcaches(iovad);

 	/*
-	 * Insert boundary nodes for dma32. So cached32_node can not be NULL in
-	 * future.
+	 * Insert boundary nodes for dma32 and dma64. So cached32_node and
+	 * cached64_node can not be NULL in future.
 	 */
 	insert_iova_boundary(iovad);
 }
@@ -75,10 +80,10 @@ __get_cached_rbnode(struct iova_domain *iovad, unsigned long *limit_pfn)
 	struct rb_node *cached_node;
 	struct rb_node *next_node;

-	if (*limit_pfn > iovad->dma_32bit_pfn)
-		return rb_last(&iovad->rbroot);
-	else
+	if (*limit_pfn <= iovad->dma_32bit_pfn)
 		cached_node = iovad->cached32_node;
+	else
+		cached_node = iovad->cached64_node;

 	next_node = rb_next(cached_node);
 	if (next_node) {
@@ -94,29 +99,32 @@ static void
 __cached_rbnode_insert_update(struct iova_domain *iovad, struct iova *new)
 {
 	struct iova *cached_iova;
+	struct rb_node **cached_node;

-	if (new->pfn_hi > iovad->dma_32bit_pfn)
-		return;
+	if (new->pfn_hi <= iovad->dma_32bit_pfn)
+		cached_node = &iovad->cached32_node;
+	else
+		cached_node = &iovad->cached64_node;

-	cached_iova = rb_entry(iovad->cached32_node, struct iova, node);
+	cached_iova = rb_entry(*cached_node, struct iova, node);
 	if (new->pfn_lo <= cached_iova->pfn_lo)
-		iovad->cached32_node = rb_prev(&new->node);
+		*cached_node = rb_prev(&new->node);
 }

 static void
 __cached_rbnode_delete_update(struct iova_domain *iovad, struct iova *free)
 {
 	struct iova *cached_iova;
-	struct rb_node *curr;
+	struct rb_node **cached_node;

-	curr = iovad->cached32_node;
-	cached_iova = rb_entry(curr, struct iova, node);
+	if (free->pfn_hi <= iovad->dma_32bit_pfn)
+		cached_node = &iovad->cached32_node;
+	else
+		cached_node = &iovad->cached64_node;

-	if (free->pfn_lo >= cached_iova->pfn_lo) {
-		/* only cache if it's below 32bit pfn */
-		if (free->pfn_hi <= iovad->dma_32bit_pfn)
-			iovad->cached32_node = rb_prev(&free->node);
-	}
+	cached_iova = rb_entry(*cached_node, struct iova, node);
+	if (free->pfn_lo >= cached_iova->pfn_lo)
+		*cached_node = rb_prev(&free->node);
 }

 /* Insert the iova into domain rbtree by holding writer lock */
@@ -262,7 +270,7 @@ EXPORT_SYMBOL_GPL(iova_cache_put);
  * alloc_iova - allocates an iova
  * @iovad: - iova domain in question
  * @size: - size of page frames to allocate
- * @limit_pfn: - max limit address
+ * @limit_pfn: - max limit address(included)
  * @size_aligned: - set if size_aligned address range is required
  * This function allocates an iova in the range iovad->start_pfn to limit_pfn,
  * searching top-down from limit_pfn to iovad->start_pfn. If the size_aligned
@@ -381,7 +389,7 @@ EXPORT_SYMBOL_GPL(free_iova);
  * alloc_iova_fast - allocates an iova from rcache
  * @iovad: - iova domain in question
  * @size: - size of page frames to allocate
- * @limit_pfn: - max limit address
+ * @limit_pfn: - max limit address(included)
  * This function tries to satisfy an iova allocation from the rcache,
  * and falls back to regular allocation on failure.
  */
diff --git a/include/linux/iova.h b/include/linux/iova.h
index e0a892a..2d34112 100644
--- a/include/linux/iova.h
+++ b/include/linux/iova.h
@@ -40,10 +40,11 @@ struct iova_rcache {
 struct iova_domain {
 	spinlock_t	iova_rbtree_lock; /* Lock to protect update of rbtree */
 	struct rb_root	rbroot;		/* iova domain rbtree root */
-	struct rb_node	*cached32_node; /* Save last alloced node */
+	struct rb_node	*cached32_node; /* Save last alloced node, 32bits */
+	struct rb_node	*cached64_node; /* Save last alloced node, 64bits */
 	unsigned long	granule;	/* pfn granularity for this domain */
 	unsigned long	start_pfn;	/* Lower limit for this domain */
-	unsigned long	dma_32bit_pfn;
+	unsigned long	dma_32bit_pfn;	/* max dma32 limit address(included) */
 	struct iova_rcache rcaches[IOVA_RANGE_CACHE_MAX_SIZE];	/* IOVA range caches */
 };
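The symmetry between the two caches comes down to one selection, which the
patch repeats in __get_cached_rbnode(), __cached_rbnode_insert_update()
and __cached_rbnode_delete_update(). Below is a userspace sketch of that
selection; struct two_cache, struct rbnode and pick_cache() are
illustrative names only, not the kernel types.

struct rbnode;	/* opaque stand-in for struct rb_node */

struct two_cache {
	struct rbnode *cached32_node;
	struct rbnode *cached64_node;
	unsigned long dma_32bit_pfn;	/* last pfn of the dma32 area */
};

/* dma32 allocations sit at or below the limit, dma64 ones above it. */
static struct rbnode **pick_cache(struct two_cache *c, unsigned long pfn)
{
	return pfn <= c->dma_32bit_pfn ? &c->cached32_node
				       : &c->cached64_node;
}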
From patchwork Thu May 18 07:59:56 2017
X-Patchwork-Submitter: "Leizhen (ThunderTown)"
X-Patchwork-Id: 100051
From: Zhen Lei
To: Joerg Roedel, iommu, Robin Murphy, David Woodhouse, Sudeep Dutt,
	Ashutosh Dixit, linux-kernel
CC: Zefan Li, Xinwei Hu, Tianhong Ding, Hanjun Guo, Zhen Lei
Subject: [PATCH v3 5/6] iommu/iova: move the calculation of pad mask out of loop
Date: Thu, 18 May 2017 15:59:56 +0800
Message-ID: <1495094397-9132-6-git-send-email-thunder.leizhen@huawei.com>
In-Reply-To: <1495094397-9132-1-git-send-email-thunder.leizhen@huawei.com>
References: <1495094397-9132-1-git-send-email-thunder.leizhen@huawei.com>

I'm not sure whether the compiler can optimize this on its own, but
moving the calculation out of the loop is better regardless. At the
least, it then does not need to run under the lock.
Signed-off-by: Zhen Lei
---
 drivers/iommu/iova.c | 22 ++++++++++------------
 1 file changed, 10 insertions(+), 12 deletions(-)

--
2.5.0

diff --git a/drivers/iommu/iova.c b/drivers/iommu/iova.c
index 711b10a..338930b 100644
--- a/drivers/iommu/iova.c
+++ b/drivers/iommu/iova.c
@@ -155,23 +155,16 @@ iova_insert_rbtree(struct rb_root *root, struct iova *iova,
 	rb_insert_color(&iova->node, root);
 }

-/*
- * Computes the padding size required, to make the start address
- * naturally aligned on the power-of-two order of its size
- */
-static unsigned int
-iova_get_pad_size(unsigned int size, unsigned int limit_pfn)
-{
-	return (limit_pfn + 1 - size) & (__roundup_pow_of_two(size) - 1);
-}
-
 static int __alloc_and_insert_iova_range(struct iova_domain *iovad,
 		unsigned long size, unsigned long limit_pfn,
 			struct iova *new, bool size_aligned)
 {
 	struct rb_node *prev, *curr;
 	unsigned long flags;
-	unsigned int pad_size = 0;
+	unsigned long pad_mask, pad_size = 0;
+
+	if (size_aligned)
+		pad_mask = __roundup_pow_of_two(size) - 1;

 	/* Walk the tree backwards */
 	spin_lock_irqsave(&iovad->iova_rbtree_lock, flags);
@@ -185,8 +178,13 @@ static int __alloc_and_insert_iova_range(struct iova_domain *iovad,
 		else if (limit_pfn < curr_iova->pfn_hi)
 			goto adjust_limit_pfn;
 		else {
+			/*
+			 * Computes the padding size required, to make the start
+			 * address naturally aligned on the power-of-two order
+			 * of its size
+			 */
 			if (size_aligned)
-				pad_size = iova_get_pad_size(size, limit_pfn);
+				pad_size = (limit_pfn + 1 - size) & pad_mask;
 			if ((curr_iova->pfn_hi + size + pad_size) <= limit_pfn)
 				break;	/* found a free slot */
 		}
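A userspace model of the hoisting is below. roundup_pow_of_two() here is
a plain C stand-in for the kernel's __roundup_pow_of_two(), and
pad_for_slot() only mimics the arithmetic, not the locking or the tree
walk around it.

#include <stdbool.h>

/* Stand-in for the kernel's __roundup_pow_of_two(). */
static unsigned long roundup_pow_of_two(unsigned long n)
{
	unsigned long p = 1;

	while (p < n)
		p <<= 1;
	return p;
}

static unsigned long pad_for_slot(unsigned long size, unsigned long limit_pfn,
				  bool size_aligned)
{
	unsigned long pad_mask = 0, pad_size = 0;

	/* Depends only on @size: computed once, before taking the lock. */
	if (size_aligned)
		pad_mask = roundup_pow_of_two(size) - 1;

	/* Per-candidate work inside the locked walk shrinks to one AND. */
	if (size_aligned)
		pad_size = (limit_pfn + 1 - size) & pad_mask;

	return pad_size;
}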
From patchwork Thu May 18 07:59:57 2017
X-Patchwork-Submitter: "Leizhen (ThunderTown)"
X-Patchwork-Id: 100052
From: Zhen Lei
To: Joerg Roedel, iommu, Robin Murphy, David Woodhouse, Sudeep Dutt,
	Ashutosh Dixit, linux-kernel
CC: Zefan Li, Xinwei Hu, Tianhong Ding, Hanjun Guo, Zhen Lei
Subject: [PATCH v3 6/6] iommu/iova: fix iovad->dma_32bit_pfn as the last pfn of dma32
Date: Thu, 18 May 2017 15:59:57 +0800
Message-ID: <1495094397-9132-7-git-send-email-thunder.leizhen@huawei.com>
In-Reply-To: <1495094397-9132-1-git-send-email-thunder.leizhen@huawei.com>
References: <1495094397-9132-1-git-send-email-thunder.leizhen@huawei.com>

Fix iovad->dma_32bit_pfn to be the last pfn of the dma32 range, so that
iovad->cached32_node and iovad->cached64_node exactly delimit the dma32
and dma64 areas. This also lets us remove the pfn_32bit parameter of
init_iova_domain().
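For reference, the fixed value this patch computes in init_iova_domain()
can be checked with a trivial userspace calculation; ilog2_ul() below is
a stand-in for the kernel's ilog2(). With a 4K granule the result is
0xfffff, the pfn of the last page below 4GB.

#include <stdio.h>

#define DMA_BIT_MASK(n)	((n) == 64 ? ~0ULL : ((1ULL << (n)) - 1))

static unsigned int ilog2_ul(unsigned long long v)
{
	unsigned int r = 0;

	while (v >>= 1)
		r++;
	return r;
}

int main(void)
{
	unsigned long long granule = 4096;	/* typical IOVA granule */
	unsigned long long dma_32bit_pfn = DMA_BIT_MASK(32) >> ilog2_ul(granule);

	printf("dma_32bit_pfn = %#llx\n", dma_32bit_pfn);	/* 0xfffff */
	return 0;
}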
Signed-off-by: Zhen Lei
---
 drivers/iommu/amd_iommu.c        |  7 ++-----
 drivers/iommu/dma-iommu.c        | 21 ++++-----------------
 drivers/iommu/intel-iommu.c      | 11 +++--------
 drivers/iommu/iova.c             |  4 ++--
 drivers/misc/mic/scif/scif_rma.c |  3 +--
 include/linux/iova.h             |  2 +-
 6 files changed, 13 insertions(+), 35 deletions(-)

--
2.5.0

diff --git a/drivers/iommu/amd_iommu.c b/drivers/iommu/amd_iommu.c
index 63cacf5..9aebfa6 100644
--- a/drivers/iommu/amd_iommu.c
+++ b/drivers/iommu/amd_iommu.c
@@ -61,7 +61,6 @@
 /* IO virtual address start page frame number */
 #define IOVA_START_PFN		(1)
 #define IOVA_PFN(addr)		((addr) >> PAGE_SHIFT)
-#define DMA_32BIT_PFN		IOVA_PFN(DMA_BIT_MASK(32))

 /* Reserved IOVA ranges */
 #define MSI_RANGE_START		(0xfee00000)
@@ -1776,8 +1775,7 @@ static struct dma_ops_domain *dma_ops_domain_alloc(void)
 	if (!dma_dom->domain.pt_root)
 		goto free_dma_dom;

-	init_iova_domain(&dma_dom->iovad, PAGE_SIZE,
-			 IOVA_START_PFN, DMA_32BIT_PFN);
+	init_iova_domain(&dma_dom->iovad, PAGE_SIZE, IOVA_START_PFN);

 	/* Initialize reserved ranges */
 	copy_reserved_iova(&reserved_iova_ranges, &dma_dom->iovad);
@@ -2747,8 +2745,7 @@ static int init_reserved_iova_ranges(void)
 	struct pci_dev *pdev = NULL;
 	struct iova *val;

-	init_iova_domain(&reserved_iova_ranges, PAGE_SIZE,
-			 IOVA_START_PFN, DMA_32BIT_PFN);
+	init_iova_domain(&reserved_iova_ranges, PAGE_SIZE, IOVA_START_PFN);

 	lockdep_set_class(&reserved_iova_ranges.iova_rbtree_lock,
 			  &reserved_rbtree_key);
diff --git a/drivers/iommu/dma-iommu.c b/drivers/iommu/dma-iommu.c
index 8348f366..b3455d4 100644
--- a/drivers/iommu/dma-iommu.c
+++ b/drivers/iommu/dma-iommu.c
@@ -290,18 +290,7 @@ int iommu_dma_init_domain(struct iommu_domain *domain, dma_addr_t base,
 		/* ...then finally give it a kicking to make sure it fits */
 		base_pfn = max_t(unsigned long, base_pfn,
 				domain->geometry.aperture_start >> order);
-		end_pfn = min_t(unsigned long, end_pfn,
-				domain->geometry.aperture_end >> order);
 	}
-	/*
-	 * PCI devices may have larger DMA masks, but still prefer allocating
-	 * within a 32-bit mask to avoid DAC addressing. Such limitations don't
-	 * apply to the typical platform device, so for those we may as well
-	 * leave the cache limit at the top of their range to save an rb_last()
-	 * traversal on every allocation.
-	 */
-	if (dev && dev_is_pci(dev))
-		end_pfn &= DMA_BIT_MASK(32) >> order;

 	/* start_pfn is always nonzero for an already-initialised domain */
 	if (iovad->start_pfn) {
@@ -310,19 +299,17 @@ int iommu_dma_init_domain(struct iommu_domain *domain, dma_addr_t base,
 			pr_warn("Incompatible range for DMA domain\n");
 			return -EFAULT;
 		}
-		/*
-		 * If we have devices with different DMA masks, move the free
-		 * area cache limit down for the benefit of the smaller one.
-		 */
-		iovad->dma_32bit_pfn = min(end_pfn, iovad->dma_32bit_pfn);

 		return 0;
 	}

-	init_iova_domain(iovad, 1UL << order, base_pfn, end_pfn);
+	init_iova_domain(iovad, 1UL << order, base_pfn);
 	if (!dev)
 		return 0;

+	if (end_pfn < iovad->dma_32bit_pfn)
+		dev_dbg(dev, "ancient device or dma range missed some bits?");
+
 	return iova_reserve_iommu_regions(dev, domain);
 }
 EXPORT_SYMBOL(iommu_dma_init_domain);
diff --git a/drivers/iommu/intel-iommu.c b/drivers/iommu/intel-iommu.c
index 90ab011..266b96b 100644
--- a/drivers/iommu/intel-iommu.c
+++ b/drivers/iommu/intel-iommu.c
@@ -82,8 +82,6 @@
 #define IOVA_START_PFN		(1)

 #define IOVA_PFN(addr)		((addr) >> PAGE_SHIFT)
-#define DMA_32BIT_PFN		IOVA_PFN(DMA_BIT_MASK(32))
-#define DMA_64BIT_PFN		IOVA_PFN(DMA_BIT_MASK(64))

 /* page table handling */
 #define LEVEL_STRIDE		(9)
@@ -1874,8 +1872,7 @@ static int dmar_init_reserved_ranges(void)
 	struct iova *iova;
 	int i;

-	init_iova_domain(&reserved_iova_list, VTD_PAGE_SIZE, IOVA_START_PFN,
-			DMA_32BIT_PFN);
+	init_iova_domain(&reserved_iova_list, VTD_PAGE_SIZE, IOVA_START_PFN);

 	lockdep_set_class(&reserved_iova_list.iova_rbtree_lock,
 		&reserved_rbtree_key);
@@ -1933,8 +1930,7 @@ static int domain_init(struct dmar_domain *domain, struct intel_iommu *iommu,
 	int adjust_width, agaw;
 	unsigned long sagaw;

-	init_iova_domain(&domain->iovad, VTD_PAGE_SIZE, IOVA_START_PFN,
-			DMA_32BIT_PFN);
+	init_iova_domain(&domain->iovad, VTD_PAGE_SIZE, IOVA_START_PFN);
 	domain_reserve_special_ranges(domain);

 	/* calculate AGAW */
@@ -4999,8 +4995,7 @@ static int md_domain_init(struct dmar_domain *domain, int guest_width)
 {
 	int adjust_width;

-	init_iova_domain(&domain->iovad, VTD_PAGE_SIZE, IOVA_START_PFN,
-			DMA_32BIT_PFN);
+	init_iova_domain(&domain->iovad, VTD_PAGE_SIZE, IOVA_START_PFN);
 	domain_reserve_special_ranges(domain);

 	/* calculate AGAW */
diff --git a/drivers/iommu/iova.c b/drivers/iommu/iova.c
index 338930b..c400f6b 100644
--- a/drivers/iommu/iova.c
+++ b/drivers/iommu/iova.c
@@ -50,7 +50,7 @@ insert_iova_boundary(struct iova_domain *iovad)

 void
 init_iova_domain(struct iova_domain *iovad, unsigned long granule,
-	unsigned long start_pfn, unsigned long pfn_32bit)
+	unsigned long start_pfn)
 {
 	/*
 	 * IOVA granularity will normally be equal to the smallest
@@ -63,7 +63,7 @@ init_iova_domain(struct iova_domain *iovad, unsigned long granule,
 	iovad->rbroot = RB_ROOT;
 	iovad->granule = granule;
 	iovad->start_pfn = start_pfn;
-	iovad->dma_32bit_pfn = pfn_32bit;
+	iovad->dma_32bit_pfn = DMA_BIT_MASK(32) >> ilog2(granule);
 	init_iova_rcaches(iovad);

 	/*
diff --git a/drivers/misc/mic/scif/scif_rma.c b/drivers/misc/mic/scif/scif_rma.c
index 329727e..c824329 100644
--- a/drivers/misc/mic/scif/scif_rma.c
+++ b/drivers/misc/mic/scif/scif_rma.c
@@ -39,8 +39,7 @@ void scif_rma_ep_init(struct scif_endpt *ep)
 	struct scif_endpt_rma_info *rma = &ep->rma_info;

 	mutex_init(&rma->rma_lock);
-	init_iova_domain(&rma->iovad, PAGE_SIZE, SCIF_IOVA_START_PFN,
-			 SCIF_DMA_64BIT_PFN);
+	init_iova_domain(&rma->iovad, PAGE_SIZE, SCIF_IOVA_START_PFN);
 	spin_lock_init(&rma->tc_lock);
 	mutex_init(&rma->mmn_lock);
 	INIT_LIST_HEAD(&rma->reg_list);
diff --git a/include/linux/iova.h b/include/linux/iova.h
index 2d34112..c0fcd18 100644
--- a/include/linux/iova.h
+++ b/include/linux/iova.h
@@ -102,7 +102,7 @@ struct iova *reserve_iova(struct iova_domain *iovad, unsigned long pfn_lo,
 	unsigned long pfn_hi);
 void copy_reserved_iova(struct iova_domain *from, struct iova_domain *to);
 void init_iova_domain(struct iova_domain *iovad, unsigned long granule,
-	unsigned long start_pfn, unsigned long pfn_32bit);
+	unsigned long start_pfn);
 struct iova *find_iova(struct iova_domain *iovad, unsigned long pfn);
 void put_iova_domain(struct iova_domain *iovad);
 struct iova *split_and_remove_iova(struct iova_domain *iovad,