From patchwork Mon Jun 26 13:38:46 2017
X-Patchwork-Submitter: "Leizhen \(ThunderTown\)"
X-Patchwork-Id: 106326
From: Zhen Lei
To: Will Deacon, Joerg Roedel, linux-arm-kernel, iommu, Robin Murphy, linux-kernel
CC: Zefan Li, Xinwei Hu, Tianhong Ding, Hanjun Guo, Zhen Lei, John Garry
Subject: [PATCH 1/5] iommu/arm-smmu-v3: put off the execution of TLBI* to reduce lock contention
Date: Mon, 26 Jun 2017 21:38:46 +0800
Message-ID: <1498484330-10840-2-git-send-email-thunder.leizhen@huawei.com>
In-Reply-To: <1498484330-10840-1-git-send-email-thunder.leizhen@huawei.com>
References: <1498484330-10840-1-git-send-email-thunder.leizhen@huawei.com>

Every TLBI command has to be followed by a SYNC command before the
invalidation can be considered complete. We can therefore simply add TLBI
commands to the command queue and put off publishing them (updating the
hardware producer pointer) until a SYNC or any other command is issued.
To prevent the following SYNC command from waiting too long because too
many commands have been delayed, the maximum number of delayed commands
is restricted. In my tests, this gives the same performance as replacing
writel with writel_relaxed in queue_inc_prod.
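To make the scheme concrete, here is a minimal sketch of the delayed-doorbell
idea described above. It is illustrative only and is not the driver code from
the patch below: the structure, the cmdq_insert() helper and EXAMPLE_MAX_DELAYED
are hypothetical, and queue wrap-around handling is omitted.

#include <linux/io.h>
#include <linux/string.h>
#include <linux/types.h>

#define EXAMPLE_MAX_DELAYED	32	/* plays the role of CMDQ_MAX_DELAYED */

struct example_cmdq {
	u64		*base;		/* command ring, 2 dwords per entry */
	u32		prod;		/* software producer index */
	u32		nr_delay;	/* entries written but not yet published */
	void __iomem	*prod_reg;	/* hardware PROD register (doorbell) */
};

static void cmdq_insert(struct example_cmdq *q, u64 *cmd, bool is_tlbi)
{
	/* copy the command into the ring at the software producer index */
	memcpy(&q->base[q->prod * 2], cmd, 2 * sizeof(u64));
	q->prod++;				/* wrap handling omitted */

	/*
	 * A TLBI only needs to be visible to the SMMU before the next
	 * CMD_SYNC, so the doorbell write can be batched. Any other command,
	 * or too long a run of TLBIs, publishes immediately.
	 */
	if (is_tlbi && ++q->nr_delay < EXAMPLE_MAX_DELAYED)
		return;				/* leave the doorbell untouched */

	writel(q->prod, q->prod_reg);		/* publish all pending entries */
	q->nr_delay = 0;
}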
Signed-off-by: Zhen Lei
---
 drivers/iommu/arm-smmu-v3.c | 42 +++++++++++++++++++++++++++++++++++++-----
 1 file changed, 37 insertions(+), 5 deletions(-)

--
2.5.0

diff --git a/drivers/iommu/arm-smmu-v3.c b/drivers/iommu/arm-smmu-v3.c
index 291da5f..4481123 100644
--- a/drivers/iommu/arm-smmu-v3.c
+++ b/drivers/iommu/arm-smmu-v3.c
@@ -337,6 +337,7 @@
 /* Command queue */
 #define CMDQ_ENT_DWORDS			2
 #define CMDQ_MAX_SZ_SHIFT		8
+#define CMDQ_MAX_DELAYED		32
 
 #define CMDQ_ERR_SHIFT			24
 #define CMDQ_ERR_MASK			0x7f
@@ -472,6 +473,7 @@ struct arm_smmu_cmdq_ent {
 		};
 	} cfgi;
 
+	#define CMDQ_OP_TLBI_NH_ALL	0x10
 	#define CMDQ_OP_TLBI_NH_ASID	0x11
 	#define CMDQ_OP_TLBI_NH_VA	0x12
 	#define CMDQ_OP_TLBI_EL2_ALL	0x20
@@ -499,6 +501,7 @@ struct arm_smmu_cmdq_ent {
 
 struct arm_smmu_queue {
 	int				irq; /* Wired interrupt */
+	u32				nr_delay;
 
 	__le64				*base;
 	dma_addr_t			base_dma;
@@ -722,11 +725,16 @@ static int queue_sync_prod(struct arm_smmu_queue *q)
 	return ret;
 }
 
-static void queue_inc_prod(struct arm_smmu_queue *q)
+static void queue_inc_swprod(struct arm_smmu_queue *q)
 {
-	u32 prod = (Q_WRP(q, q->prod) | Q_IDX(q, q->prod)) + 1;
+	u32 prod = q->prod + 1;
 
 	q->prod = Q_OVF(q, q->prod) | Q_WRP(q, prod) | Q_IDX(q, prod);
+}
+
+static void queue_inc_prod(struct arm_smmu_queue *q)
+{
+	queue_inc_swprod(q);
 	writel(q->prod, q->prod_reg);
 }
 
@@ -761,13 +769,24 @@ static void queue_write(__le64 *dst, u64 *src, size_t n_dwords)
 		*dst++ = cpu_to_le64(*src++);
 }
 
-static int queue_insert_raw(struct arm_smmu_queue *q, u64 *ent)
+static int queue_insert_raw(struct arm_smmu_queue *q, u64 *ent, int optimize)
 {
 	if (queue_full(q))
 		return -ENOSPC;
 
 	queue_write(Q_ENT(q, q->prod), ent, q->ent_dwords);
-	queue_inc_prod(q);
+
+	/*
+	 * We don't want too many commands to be delayed, this may lead the
+	 * followed sync command to wait for a long time.
+	 */
+	if (optimize && (++q->nr_delay < CMDQ_MAX_DELAYED)) {
+		queue_inc_swprod(q);
+	} else {
+		queue_inc_prod(q);
+		q->nr_delay = 0;
+	}
+
 	return 0;
 }
 
@@ -909,6 +928,7 @@ static void arm_smmu_cmdq_skip_err(struct arm_smmu_device *smmu)
 static void arm_smmu_cmdq_issue_cmd(struct arm_smmu_device *smmu,
 				    struct arm_smmu_cmdq_ent *ent)
 {
+	int optimize = 0;
 	u64 cmd[CMDQ_ENT_DWORDS];
 	unsigned long flags;
 	bool wfe = !!(smmu->features & ARM_SMMU_FEAT_SEV);
@@ -920,8 +940,17 @@ static void arm_smmu_cmdq_issue_cmd(struct arm_smmu_device *smmu,
 		return;
 	}
 
+	/*
+	 * All TLBI commands should be followed by a sync command later.
+	 * The CFGI commands is the same, but they are rarely executed.
+	 * So just optimize TLBI commands now, to reduce the "if" judgement.
+	 */
+	if ((ent->opcode >= CMDQ_OP_TLBI_NH_ALL) &&
+	    (ent->opcode <= CMDQ_OP_TLBI_NSNH_ALL))
+		optimize = 1;
+
 	spin_lock_irqsave(&smmu->cmdq.lock, flags);
-	while (queue_insert_raw(q, cmd) == -ENOSPC) {
+	while (queue_insert_raw(q, cmd, optimize) == -ENOSPC) {
 		if (queue_poll_cons(q, false, wfe))
 			dev_err_ratelimited(smmu->dev, "CMDQ timeout\n");
 	}
@@ -1953,6 +1982,8 @@ static int arm_smmu_init_one_queue(struct arm_smmu_device *smmu,
 		     << Q_BASE_LOG2SIZE_SHIFT;
 
 	q->prod = q->cons = 0;
+	q->nr_delay = 0;
+
 	return 0;
 }
 
@@ -2512,6 +2543,7 @@ static int arm_smmu_device_hw_probe(struct arm_smmu_device *smmu)
 		dev_err(smmu->dev, "unit-length command queue not supported\n");
 		return -ENXIO;
 	}
+	BUILD_BUG_ON(CMDQ_MAX_DELAYED >= (1 << CMDQ_MAX_SZ_SHIFT));
 
 	smmu->evtq.q.max_n_shift = min((u32)EVTQ_MAX_SZ_SHIFT,
 				       reg >> IDR1_EVTQ_SHIFT & IDR1_EVTQ_MASK);

From patchwork Mon Jun 26 13:38:47 2017
X-Patchwork-Submitter: "Leizhen \(ThunderTown\)"
X-Patchwork-Id: 106327
From: Zhen Lei
To: Will Deacon, Joerg Roedel, linux-arm-kernel, iommu, Robin Murphy, linux-kernel
CC: Zefan Li, Xinwei Hu, Tianhong Ding, Hanjun Guo, Zhen Lei, John Garry
Subject: [PATCH 2/5] iommu: add a new member unmap_tlb_sync into struct iommu_ops
Date: Mon, 26 Jun 2017 21:38:47 +0800
Message-ID: <1498484330-10840-3-git-send-email-thunder.leizhen@huawei.com>
In-Reply-To: <1498484330-10840-1-git-send-email-thunder.leizhen@huawei.com>
References: <1498484330-10840-1-git-send-email-thunder.leizhen@huawei.com>

An iova range may contain many pages/blocks, especially in the unmap_sg
case. Currently, each page/block unmapping is followed by a TLB
invalidation that then waits (the tlb_sync) until the invalidation has
finished. In fact, only one tlb_sync is needed, in the last stage.

Look at the loop in iommu_unmap:

	while (unmapped < size) {
		...
		unmapped_page = domain->ops->unmap(domain, iova, pgsize);
		...
	}

It is therefore not a good idea to issue the tlb_sync inside
domain->ops->unmap. Deferring the sync to the end of iommu_unmap brings
two benefits, because the following costs are paid once per iommu_unmap
instead of once per page/block:
1. The IOMMU hardware is shared by all CPUs, so the tlb_sync operation
   needs lock protection.
2. The IOMMU hardware sits outside the CPU, so starting a tlb_sync and
   polling for its completion can take a long time.

Some people might ask: is it safe to do so? The answer is yes. The
standard processing flow is:

	alloc iova
	map
	process data
	unmap
	tlb invalidation and sync
	free iova

What must be guaranteed is that "free iova" happens after both "unmap"
and the TLBI operation, and that is exactly what we do right now. This
ensures that all TLB entries for an iova range have been invalidated
before the iova can be reallocated. An illustrative caller-side sketch
follows.
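The flow above, seen from a caller, might look like the sketch below. This is
only an illustration of the contract, not code from this series:
my_alloc_iova(), my_free_iova() and do_device_dma() are hypothetical stand-ins
for whatever iova allocator and device driver the caller actually uses.

#include <linux/iommu.h>

/* hypothetical helpers, not part of this series */
extern unsigned long my_alloc_iova(size_t size);
extern void my_free_iova(unsigned long iova, size_t size);
extern void do_device_dma(unsigned long iova, size_t size);

static int example_dma_cycle(struct iommu_domain *domain, phys_addr_t paddr,
			     size_t size, int prot)
{
	unsigned long iova = my_alloc_iova(size);	/* hypothetical allocator */
	int ret;

	ret = iommu_map(domain, iova, paddr, size, prot);
	if (ret)
		goto out_free;

	do_device_dma(iova, size);		/* device works on the mapping */

	/*
	 * With this series, iommu_unmap() issues one TLB invalidation per
	 * page/block and a single ->unmap_tlb_sync() at the end, so when it
	 * returns no TLB entry for [iova, iova + size) remains.
	 */
	iommu_unmap(domain, iova, size);

out_free:
	my_free_iova(iova, size);	/* safe: only reached after unmap + sync */
	return ret;
}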
Signed-off-by: Zhen Lei
---
 drivers/iommu/iommu.c | 3 +++
 include/linux/iommu.h | 1 +
 2 files changed, 4 insertions(+)

--
2.5.0

diff --git a/drivers/iommu/iommu.c b/drivers/iommu/iommu.c
index cf7ca7e..01e91a8 100644
--- a/drivers/iommu/iommu.c
+++ b/drivers/iommu/iommu.c
@@ -1610,6 +1610,9 @@ size_t iommu_unmap(struct iommu_domain *domain, unsigned long iova, size_t size)
 		unmapped += unmapped_page;
 	}
 
+	if (domain->ops->unmap_tlb_sync)
+		domain->ops->unmap_tlb_sync(domain);
+
 	trace_unmap(orig_iova, size, unmapped);
 	return unmapped;
 }
diff --git a/include/linux/iommu.h b/include/linux/iommu.h
index 2cb54ad..5964121 100644
--- a/include/linux/iommu.h
+++ b/include/linux/iommu.h
@@ -197,6 +197,7 @@ struct iommu_ops {
 		   phys_addr_t paddr, size_t size, int prot);
 	size_t (*unmap)(struct iommu_domain *domain, unsigned long iova,
 		     size_t size);
+	void (*unmap_tlb_sync)(struct iommu_domain *domain);
 	size_t (*map_sg)(struct iommu_domain *domain, unsigned long iova,
 			 struct scatterlist *sg, unsigned int nents, int prot);
 	phys_addr_t (*iova_to_phys)(struct iommu_domain *domain, dma_addr_t iova);

From patchwork Mon Jun 26 13:38:48 2017
X-Patchwork-Submitter: "Leizhen \(ThunderTown\)"
X-Patchwork-Id: 106328
From: Zhen Lei
To: Will Deacon, Joerg Roedel, linux-arm-kernel, iommu, Robin Murphy, linux-kernel
CC: Zefan Li, Xinwei Hu, Tianhong Ding, Hanjun Guo, Zhen Lei, John Garry
Subject: [PATCH 3/5] iommu/arm-smmu-v3: add support for unmapping an iova range with only one tlb sync
Date: Mon, 26 Jun 2017 21:38:48 +0800
Message-ID: <1498484330-10840-4-git-send-email-thunder.leizhen@huawei.com>
In-Reply-To: <1498484330-10840-1-git-send-email-thunder.leizhen@huawei.com>
References: <1498484330-10840-1-git-send-email-thunder.leizhen@huawei.com>

1. Remove the tlb_sync operation from "unmap".
2. Make sure each "unmap" is still always followed by a tlb sync operation.

The resulting command sequence is as below (see the usage sketch that
follows):

	unmap memory page-1
	tlb invalidate page-1
	...
	unmap memory page-n
	tlb invalidate page-n
	tlb sync
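As a usage sketch (not code from this patch), a direct user of io_pgtable_ops,
such as the selftests changed below, now has to pair the unmap with an explicit
sync. The example_unmap_and_sync() wrapper is hypothetical and only assumes the
struct io_pgtable_ops from io-pgtable.h as extended by this series.

#include "io-pgtable.h"		/* struct io_pgtable_ops with ->unmap_tlb_sync */

static int example_unmap_and_sync(struct io_pgtable_ops *ops,
				  unsigned long iova, size_t size)
{
	size_t unmapped;

	unmapped = ops->unmap(ops, iova, size);	/* queues the TLBI(s) only */
	ops->unmap_tlb_sync(ops);		/* single wait for completion */

	return unmapped == size ? 0 : -EINVAL;
}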
Signed-off-by: Zhen Lei
---
 drivers/iommu/arm-smmu-v3.c    | 10 ++++++++++
 drivers/iommu/io-pgtable-arm.c | 30 ++++++++++++++++++++----------
 drivers/iommu/io-pgtable.h    |  1 +
 3 files changed, 31 insertions(+), 10 deletions(-)

--
2.5.0

diff --git a/drivers/iommu/arm-smmu-v3.c b/drivers/iommu/arm-smmu-v3.c
index 4481123..328b9d7 100644
--- a/drivers/iommu/arm-smmu-v3.c
+++ b/drivers/iommu/arm-smmu-v3.c
@@ -1724,6 +1724,15 @@ arm_smmu_unmap(struct iommu_domain *domain, unsigned long iova, size_t size)
 	return ops->unmap(ops, iova, size);
 }
 
+static void arm_smmu_unmap_tlb_sync(struct iommu_domain *domain)
+{
+	struct arm_smmu_domain *smmu_domain = to_smmu_domain(domain);
+	struct io_pgtable_ops *ops = smmu_domain->pgtbl_ops;
+
+	if (ops && ops->unmap_tlb_sync)
+		ops->unmap_tlb_sync(ops);
+}
+
 static phys_addr_t
 arm_smmu_iova_to_phys(struct iommu_domain *domain, dma_addr_t iova)
 {
@@ -1943,6 +1952,7 @@ static struct iommu_ops arm_smmu_ops = {
 	.attach_dev		= arm_smmu_attach_dev,
 	.map			= arm_smmu_map,
 	.unmap			= arm_smmu_unmap,
+	.unmap_tlb_sync		= arm_smmu_unmap_tlb_sync,
 	.map_sg			= default_iommu_map_sg,
 	.iova_to_phys		= arm_smmu_iova_to_phys,
 	.add_device		= arm_smmu_add_device,
diff --git a/drivers/iommu/io-pgtable-arm.c b/drivers/iommu/io-pgtable-arm.c
index 52700fa..8137e62 100644
--- a/drivers/iommu/io-pgtable-arm.c
+++ b/drivers/iommu/io-pgtable-arm.c
@@ -304,6 +304,8 @@ static int arm_lpae_init_pte(struct arm_lpae_io_pgtable *data,
 		WARN_ON(!selftest_running);
 		return -EEXIST;
 	} else if (iopte_type(pte, lvl) == ARM_LPAE_PTE_TYPE_TABLE) {
+		size_t unmapped;
+
 		/*
 		 * We need to unmap and free the old table before
 		 * overwriting it with a block entry.
@@ -312,7 +314,9 @@ static int arm_lpae_init_pte(struct arm_lpae_io_pgtable *data,
 		size_t sz = ARM_LPAE_BLOCK_SIZE(lvl, data);
 
 		tblp = ptep - ARM_LPAE_LVL_IDX(iova, lvl, data);
-		if (WARN_ON(__arm_lpae_unmap(data, iova, sz, lvl, tblp) != sz))
+		unmapped = __arm_lpae_unmap(data, iova, sz, lvl, tblp);
+		io_pgtable_tlb_sync(&data->iop);
+		if (WARN_ON(unmapped != sz))
 			return -EINVAL;
 	}
 
@@ -576,7 +580,6 @@ static int __arm_lpae_unmap(struct arm_lpae_io_pgtable *data,
 			/* Also flush any partial walks */
 			io_pgtable_tlb_add_flush(iop, iova, size,
 						 ARM_LPAE_GRANULE(data), false);
-			io_pgtable_tlb_sync(iop);
 			ptep = iopte_deref(pte, data);
 			__arm_lpae_free_pgtable(data, lvl + 1, ptep);
 		} else {
@@ -601,16 +604,18 @@ static int __arm_lpae_unmap(struct arm_lpae_io_pgtable *data,
 static int arm_lpae_unmap(struct io_pgtable_ops *ops, unsigned long iova,
 			  size_t size)
 {
-	size_t unmapped;
 	struct arm_lpae_io_pgtable *data = io_pgtable_ops_to_data(ops);
 	arm_lpae_iopte *ptep = data->pgd;
 	int lvl = ARM_LPAE_START_LVL(data);
 
-	unmapped = __arm_lpae_unmap(data, iova, size, lvl, ptep);
-	if (unmapped)
-		io_pgtable_tlb_sync(&data->iop);
+	return __arm_lpae_unmap(data, iova, size, lvl, ptep);
+}
+
+static void arm_lpae_unmap_tlb_sync(struct io_pgtable_ops *ops)
+{
+	struct arm_lpae_io_pgtable *data = io_pgtable_ops_to_data(ops);
 
-	return unmapped;
+	io_pgtable_tlb_sync(&data->iop);
 }
 
 static phys_addr_t arm_lpae_iova_to_phys(struct io_pgtable_ops *ops,
@@ -723,6 +728,7 @@ arm_lpae_alloc_pgtable(struct io_pgtable_cfg *cfg)
 	data->iop.ops = (struct io_pgtable_ops) {
 		.map		= arm_lpae_map,
 		.unmap		= arm_lpae_unmap,
+		.unmap_tlb_sync	= arm_lpae_unmap_tlb_sync,
 		.iova_to_phys	= arm_lpae_iova_to_phys,
 	};
 
@@ -1019,7 +1025,7 @@ static int __init arm_lpae_run_tests(struct io_pgtable_cfg *cfg)
 	int i, j;
 	unsigned long iova;
-	size_t size;
+	size_t size, unmapped;
 	struct io_pgtable_ops *ops;
 
 	selftest_running = true;
@@ -1071,7 +1077,9 @@ static int __init arm_lpae_run_tests(struct io_pgtable_cfg *cfg)
 
 	/* Partial unmap */
 	size = 1UL << __ffs(cfg->pgsize_bitmap);
-	if (ops->unmap(ops, SZ_1G + size, size) != size)
+	unmapped = ops->unmap(ops, SZ_1G + size, size);
+	ops->unmap_tlb_sync(ops);
+	if (unmapped != size)
 		return __FAIL(ops, i);
 
 	/* Remap of partial unmap */
@@ -1087,7 +1095,9 @@ static int __init arm_lpae_run_tests(struct io_pgtable_cfg *cfg)
 	while (j != BITS_PER_LONG) {
 		size = 1UL << j;
 
-		if (ops->unmap(ops, iova, size) != size)
+		unmapped = ops->unmap(ops, iova, size);
+		ops->unmap_tlb_sync(ops);
+		if (unmapped != size)
 			return __FAIL(ops, i);
 
 		if (ops->iova_to_phys(ops, iova + 42))
diff --git a/drivers/iommu/io-pgtable.h b/drivers/iommu/io-pgtable.h
index 524263a..7b3fc04 100644
--- a/drivers/iommu/io-pgtable.h
+++ b/drivers/iommu/io-pgtable.h
@@ -120,6 +120,7 @@ struct io_pgtable_ops {
 		   phys_addr_t paddr, size_t size, int prot);
 	int (*unmap)(struct io_pgtable_ops *ops, unsigned long iova,
 		     size_t size);
+	void (*unmap_tlb_sync)(struct io_pgtable_ops *ops);
 	phys_addr_t (*iova_to_phys)(struct io_pgtable_ops *ops,
 				    unsigned long iova);
 };

From patchwork Mon Jun 26 13:38:49 2017
X-Patchwork-Submitter: "Leizhen \(ThunderTown\)"
X-Patchwork-Id: 106329
From: Zhen Lei
To: Will Deacon, Joerg Roedel, linux-arm-kernel, iommu, Robin Murphy, linux-kernel
CC: Zefan Li, Xinwei Hu, Tianhong Ding, Hanjun Guo, Zhen Lei, John Garry
Subject: [PATCH 4/5] iommu/arm-smmu: add support for unmapping a memory range with only one tlb sync
Date: Mon, 26 Jun 2017 21:38:49 +0800
Message-ID: <1498484330-10840-5-git-send-email-thunder.leizhen@huawei.com>
In-Reply-To: <1498484330-10840-1-git-send-email-thunder.leizhen@huawei.com>
References: <1498484330-10840-1-git-send-email-thunder.leizhen@huawei.com>

1. Remove the tlb_sync operation from "unmap".
2. Make sure each "unmap" is still always followed by a tlb sync operation.

The resulting command sequence is as below:

	unmap memory page-1
	tlb invalidate page-1
	...
	unmap memory page-n
	tlb invalidate page-n
	tlb sync
Signed-off-by: Zhen Lei
---
 drivers/iommu/arm-smmu.c           | 10 ++++++++++
 drivers/iommu/io-pgtable-arm-v7s.c | 32 +++++++++++++++++++++-----------
 2 files changed, 31 insertions(+), 11 deletions(-)

--
2.5.0

diff --git a/drivers/iommu/arm-smmu.c b/drivers/iommu/arm-smmu.c
index b8d069a..74ca6eb 100644
--- a/drivers/iommu/arm-smmu.c
+++ b/drivers/iommu/arm-smmu.c
@@ -1402,6 +1402,15 @@ static size_t arm_smmu_unmap(struct iommu_domain *domain, unsigned long iova,
 	return ops->unmap(ops, iova, size);
 }
 
+static void arm_smmu_unmap_tlb_sync(struct iommu_domain *domain)
+{
+	struct arm_smmu_domain *smmu_domain = to_smmu_domain(domain);
+	struct io_pgtable_ops *ops = smmu_domain->pgtbl_ops;
+
+	if (ops && ops->unmap_tlb_sync)
+		ops->unmap_tlb_sync(ops);
+}
+
 static phys_addr_t arm_smmu_iova_to_phys_hard(struct iommu_domain *domain,
 					      dma_addr_t iova)
 {
@@ -1698,6 +1707,7 @@ static struct iommu_ops arm_smmu_ops = {
 	.attach_dev		= arm_smmu_attach_dev,
 	.map			= arm_smmu_map,
 	.unmap			= arm_smmu_unmap,
+	.unmap_tlb_sync		= arm_smmu_unmap_tlb_sync,
 	.map_sg			= default_iommu_map_sg,
 	.iova_to_phys		= arm_smmu_iova_to_phys,
 	.add_device		= arm_smmu_add_device,
diff --git a/drivers/iommu/io-pgtable-arm-v7s.c b/drivers/iommu/io-pgtable-arm-v7s.c
index a55fd38..325c1c6 100644
--- a/drivers/iommu/io-pgtable-arm-v7s.c
+++ b/drivers/iommu/io-pgtable-arm-v7s.c
@@ -370,6 +370,8 @@ static int arm_v7s_init_pte(struct arm_v7s_io_pgtable *data,
 
 	for (i = 0; i < num_entries; i++)
 		if (ARM_V7S_PTE_IS_TABLE(ptep[i], lvl)) {
+			size_t unmapped;
+
 			/*
 			 * We need to unmap and free the old table before
 			 * overwriting it with a block entry.
@@ -378,8 +380,10 @@ static int arm_v7s_init_pte(struct arm_v7s_io_pgtable *data,
 			size_t sz = ARM_V7S_BLOCK_SIZE(lvl);
 
 			tblp = ptep - ARM_V7S_LVL_IDX(iova, lvl);
-			if (WARN_ON(__arm_v7s_unmap(data, iova + i * sz,
-						    sz, lvl, tblp) != sz))
+			unmapped = __arm_v7s_unmap(data, iova + i * sz,
+						   sz, lvl, tblp);
+			io_pgtable_tlb_sync(&data->iop);
+			if (WARN_ON(unmapped != sz))
 				return -EINVAL;
 		} else if (ptep[i]) {
 			/* We require an unmap first */
@@ -626,7 +630,6 @@ static int __arm_v7s_unmap(struct arm_v7s_io_pgtable *data,
 			/* Also flush any partial walks */
 			io_pgtable_tlb_add_flush(iop, iova, blk_size,
 				ARM_V7S_BLOCK_SIZE(lvl + 1), false);
-			io_pgtable_tlb_sync(iop);
 			ptep = iopte_deref(pte[i], lvl);
 			__arm_v7s_free_table(ptep, lvl + 1, data);
 		} else {
@@ -653,13 +656,15 @@ static int arm_v7s_unmap(struct io_pgtable_ops *ops, unsigned long iova,
 			 size_t size)
 {
 	struct arm_v7s_io_pgtable *data = io_pgtable_ops_to_data(ops);
-	size_t unmapped;
 
-	unmapped = __arm_v7s_unmap(data, iova, size, 1, data->pgd);
-	if (unmapped)
-		io_pgtable_tlb_sync(&data->iop);
+	return __arm_v7s_unmap(data, iova, size, 1, data->pgd);
+}
+
+static void arm_v7s_unmap_tlb_sync(struct io_pgtable_ops *ops)
+{
+	struct arm_v7s_io_pgtable *data = io_pgtable_ops_to_data(ops);
 
-	return unmapped;
+	io_pgtable_tlb_sync(&data->iop);
 }
 
 static phys_addr_t arm_v7s_iova_to_phys(struct io_pgtable_ops *ops,
@@ -724,6 +729,7 @@ static struct io_pgtable *arm_v7s_alloc_pgtable(struct io_pgtable_cfg *cfg,
 	data->iop.ops = (struct io_pgtable_ops) {
 		.map		= arm_v7s_map,
 		.unmap		= arm_v7s_unmap,
+		.unmap_tlb_sync	= arm_v7s_unmap_tlb_sync,
 		.iova_to_phys	= arm_v7s_iova_to_phys,
 	};
 
@@ -822,7 +828,7 @@ static int __init arm_v7s_do_selftests(void)
 		.quirks = IO_PGTABLE_QUIRK_ARM_NS | IO_PGTABLE_QUIRK_NO_DMA,
 		.pgsize_bitmap = SZ_4K | SZ_64K | SZ_1M | SZ_16M,
 	};
-	unsigned int iova, size, iova_start;
+	unsigned int iova, size, unmapped, iova_start;
 	unsigned int i, loopnr = 0;
 
 	selftest_running = true;
@@ -877,7 +883,9 @@ static int __init arm_v7s_do_selftests(void)
 	size = 1UL << __ffs(cfg.pgsize_bitmap);
 	while (i < loopnr) {
 		iova_start = i * SZ_16M;
-		if (ops->unmap(ops, iova_start + size, size) != size)
+		unmapped = ops->unmap(ops, iova_start + size, size);
+		ops->unmap_tlb_sync(ops);
+		if (unmapped != size)
 			return __FAIL(ops);
 
 		/* Remap of partial unmap */
@@ -896,7 +904,9 @@ static int __init arm_v7s_do_selftests(void)
 	while (i != BITS_PER_LONG) {
 		size = 1UL << i;
 
-		if (ops->unmap(ops, iova, size) != size)
+		unmapped = ops->unmap(ops, iova, size);
+		ops->unmap_tlb_sync(ops);
+		if (unmapped != size)
 			return __FAIL(ops);
 
 		if (ops->iova_to_phys(ops, iova + 42))

From patchwork Mon Jun 26 13:38:50 2017
X-Patchwork-Submitter: "Leizhen \(ThunderTown\)"
X-Patchwork-Id: 106330
From: Zhen Lei
To: Will Deacon, Joerg Roedel, linux-arm-kernel, iommu, Robin Murphy, linux-kernel
CC: Zefan Li, Xinwei Hu, Tianhong Ding, Hanjun Guo, Zhen Lei, John Garry
Subject: [PATCH 5/5] iommu/io-pgtable: delete member tlb_sync_pending of struct io_pgtable
Date: Mon, 26 Jun 2017 21:38:50 +0800
Message-ID: <1498484330-10840-6-git-send-email-thunder.leizhen@huawei.com>
In-Reply-To: <1498484330-10840-1-git-send-email-thunder.leizhen@huawei.com>
References: <1498484330-10840-1-git-send-email-thunder.leizhen@huawei.com>

This member is now unused, because the previous patches ensure that every
unmap is always followed by a tlb sync operation. Note also that
->tlb_flush_all performs the sync by itself.
Signed-off-by: Zhen Lei
---
 drivers/iommu/io-pgtable.h | 8 +-------
 1 file changed, 1 insertion(+), 7 deletions(-)

--
2.5.0

diff --git a/drivers/iommu/io-pgtable.h b/drivers/iommu/io-pgtable.h
index 7b3fc04..43ddf1f 100644
--- a/drivers/iommu/io-pgtable.h
+++ b/drivers/iommu/io-pgtable.h
@@ -166,7 +166,6 @@ void free_io_pgtable_ops(struct io_pgtable_ops *ops);
 struct io_pgtable {
 	enum io_pgtable_fmt	fmt;
 	void			*cookie;
-	bool			tlb_sync_pending;
 	struct io_pgtable_cfg	cfg;
 	struct io_pgtable_ops	ops;
 };
@@ -176,22 +175,17 @@ struct io_pgtable {
 static inline void io_pgtable_tlb_flush_all(struct io_pgtable *iop)
 {
 	iop->cfg.tlb->tlb_flush_all(iop->cookie);
-	iop->tlb_sync_pending = true;
 }
 
 static inline void io_pgtable_tlb_add_flush(struct io_pgtable *iop,
 		unsigned long iova, size_t size, size_t granule, bool leaf)
 {
 	iop->cfg.tlb->tlb_add_flush(iova, size, granule, leaf, iop->cookie);
-	iop->tlb_sync_pending = true;
 }
 
 static inline void io_pgtable_tlb_sync(struct io_pgtable *iop)
 {
-	if (iop->tlb_sync_pending) {
-		iop->cfg.tlb->tlb_sync(iop->cookie);
-		iop->tlb_sync_pending = false;
-	}
+	iop->cfg.tlb->tlb_sync(iop->cookie);
 }
 
 /**