From patchwork Tue Sep 12 13:00:36 2017
X-Patchwork-Submitter: "Leizhen (ThunderTown)"
X-Patchwork-Id: 112306
From: Zhen Lei
To: Will Deacon, Joerg Roedel, linux-arm-kernel, iommu, Robin Murphy, linux-kernel
Cc: Hanjun Guo, Libin, Zhen Lei, Jinyue Li, Kefeng Wang
Subject: [PATCH v2 1/3] iommu/arm-smmu-v3: put off the execution of TLBI* to reduce lock contention
Date: Tue, 12 Sep 2017 21:00:36 +0800
Message-ID: <1505221238-9428-2-git-send-email-thunder.leizhen@huawei.com>
In-Reply-To: <1505221238-9428-1-git-send-email-thunder.leizhen@huawei.com>
References: <1505221238-9428-1-git-send-email-thunder.leizhen@huawei.com>

Every TLBI command must eventually be followed by a SYNC command to
ensure that it has completely finished. We can therefore simply write
TLBI commands into the queue and put off publishing them to the
hardware until a SYNC (or any other command) arrives. To prevent a
subsequent SYNC from waiting too long because too many commands have
been delayed, the maximum number of delayed commands is capped.

In my tests, this gave the same performance improvement as replacing
writel with writel_relaxed in queue_inc_prod.
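The idea, as a minimal standalone sketch (the struct and function
names below are illustrative only, not the driver's actual API; the
real change is in the diff that follows):

	/*
	 * Hypothetical sketch of the deferred-doorbell scheme: TLBI
	 * commands only advance a software copy of the producer index,
	 * and the MMIO write that publishes new entries to the hardware
	 * is batched until a SYNC (or any other command) arrives, or
	 * until the delay cap is reached.
	 */
	#define MAX_DELAYED	32

	struct sketch_queue {
		unsigned int		sw_prod;   /* software producer index */
		unsigned int		nr_delay;  /* written but unpublished */
		volatile unsigned int	*prod_reg; /* hardware doorbell */
	};

	static void sketch_publish(struct sketch_queue *q)
	{
		*q->prod_reg = q->sw_prod; /* hardware now sees all entries */
		q->nr_delay = 0;
	}

	static void sketch_issue(struct sketch_queue *q, int is_tlbi)
	{
		/* ... write the command into queue memory at sw_prod ... */
		q->sw_prod++;

		if (is_tlbi && ++q->nr_delay < MAX_DELAYED)
			return;		/* defer the costly MMIO write */

		sketch_publish(q);	/* SYNC/other command, or cap hit */
	}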
Signed-off-by: Zhen Lei <thunder.leizhen@huawei.com>
---
 drivers/iommu/arm-smmu-v3.c | 42 +++++++++++++++++++++++++++++++++++++-----
 1 file changed, 37 insertions(+), 5 deletions(-)

--
2.5.0

diff --git a/drivers/iommu/arm-smmu-v3.c b/drivers/iommu/arm-smmu-v3.c
index e67ba6c..ef42c4b 100644
--- a/drivers/iommu/arm-smmu-v3.c
+++ b/drivers/iommu/arm-smmu-v3.c
@@ -337,6 +337,7 @@
 /* Command queue */
 #define CMDQ_ENT_DWORDS			2
 #define CMDQ_MAX_SZ_SHIFT		8
+#define CMDQ_MAX_DELAYED		32

 #define CMDQ_ERR_SHIFT			24
 #define CMDQ_ERR_MASK			0x7f
@@ -482,6 +483,7 @@ struct arm_smmu_cmdq_ent {
 		};
 	} cfgi;

+	#define CMDQ_OP_TLBI_NH_ALL	0x10
 	#define CMDQ_OP_TLBI_NH_ASID	0x11
 	#define CMDQ_OP_TLBI_NH_VA	0x12
 	#define CMDQ_OP_TLBI_EL2_ALL	0x20
@@ -509,6 +511,7 @@ struct arm_smmu_cmdq_ent {
 struct arm_smmu_queue {
 	int				irq; /* Wired interrupt */

+	u32				nr_delay;
 	__le64				*base;
 	dma_addr_t			base_dma;
@@ -745,11 +748,16 @@ static int queue_sync_prod(struct arm_smmu_queue *q)
 	return ret;
 }

-static void queue_inc_prod(struct arm_smmu_queue *q)
+static void queue_inc_swprod(struct arm_smmu_queue *q)
 {
-	u32 prod = (Q_WRP(q, q->prod) | Q_IDX(q, q->prod)) + 1;
+	u32 prod = q->prod + 1;

 	q->prod = Q_OVF(q, q->prod) | Q_WRP(q, prod) | Q_IDX(q, prod);
+}
+
+static void queue_inc_prod(struct arm_smmu_queue *q)
+{
+	queue_inc_swprod(q);
 	writel(q->prod, q->prod_reg);
 }

@@ -791,13 +799,24 @@ static void queue_write(__le64 *dst, u64 *src, size_t n_dwords)
 		*dst++ = cpu_to_le64(*src++);
 }

-static int queue_insert_raw(struct arm_smmu_queue *q, u64 *ent)
+static int queue_insert_raw(struct arm_smmu_queue *q, u64 *ent, int optimize)
 {
 	if (queue_full(q))
 		return -ENOSPC;

 	queue_write(Q_ENT(q, q->prod), ent, q->ent_dwords);
-	queue_inc_prod(q);
+
+	/*
+	 * We don't want too many commands to be delayed, this may lead the
+	 * followed sync command to wait for a long time.
+	 */
+	if (optimize && (++q->nr_delay < CMDQ_MAX_DELAYED)) {
+		queue_inc_swprod(q);
+	} else {
+		queue_inc_prod(q);
+		q->nr_delay = 0;
+	}
+
 	return 0;
 }

@@ -939,6 +958,7 @@ static void arm_smmu_cmdq_skip_err(struct arm_smmu_device *smmu)
 static void arm_smmu_cmdq_issue_cmd(struct arm_smmu_device *smmu,
 				    struct arm_smmu_cmdq_ent *ent)
 {
+	int optimize = 0;
 	u64 cmd[CMDQ_ENT_DWORDS];
 	unsigned long flags;
 	bool wfe = !!(smmu->features & ARM_SMMU_FEAT_SEV);
@@ -950,8 +970,17 @@ static void arm_smmu_cmdq_issue_cmd(struct arm_smmu_device *smmu,
 		return;
 	}

+	/*
+	 * All TLBI commands should be followed by a sync command later.
+	 * The CFGI commands is the same, but they are rarely executed.
+	 * So just optimize TLBI commands now, to reduce the "if" judgement.
+	 */
+	if ((ent->opcode >= CMDQ_OP_TLBI_NH_ALL) &&
+	    (ent->opcode <= CMDQ_OP_TLBI_NSNH_ALL))
+		optimize = 1;
+
 	spin_lock_irqsave(&smmu->cmdq.lock, flags);
-	while (queue_insert_raw(q, cmd) == -ENOSPC) {
+	while (queue_insert_raw(q, cmd, optimize) == -ENOSPC) {
 		if (queue_poll_cons(q, false, wfe))
 			dev_err_ratelimited(smmu->dev, "CMDQ timeout\n");
 	}
@@ -2001,6 +2030,8 @@ static int arm_smmu_init_one_queue(struct arm_smmu_device *smmu,
 		     << Q_BASE_LOG2SIZE_SHIFT;

 	q->prod = q->cons = 0;
+	q->nr_delay = 0;
+
 	return 0;
 }

@@ -2584,6 +2615,7 @@ static int arm_smmu_device_hw_probe(struct arm_smmu_device *smmu)
 		dev_err(smmu->dev, "unit-length command queue not supported\n");
 		return -ENXIO;
 	}
+	BUILD_BUG_ON(CMDQ_MAX_DELAYED >= (1 << CMDQ_MAX_SZ_SHIFT));

 	smmu->evtq.q.max_n_shift = min((u32)EVTQ_MAX_SZ_SHIFT,
 				       reg >> IDR1_EVTQ_SHIFT & IDR1_EVTQ_MASK);

From patchwork Tue Sep 12 13:00:37 2017
X-Patchwork-Submitter: "Leizhen (ThunderTown)"
X-Patchwork-Id: 112309
From: Zhen Lei
To: Will Deacon, Joerg Roedel, linux-arm-kernel, iommu, Robin Murphy, linux-kernel
Cc: Hanjun Guo, Libin, Zhen Lei, Jinyue Li, Kefeng Wang
Subject: [PATCH v2 2/3] iommu/arm-smmu-v3: add support for unmapping an iova range with only one tlb sync
Date: Tue, 12 Sep 2017 21:00:37 +0800
Message-ID: <1505221238-9428-3-git-send-email-thunder.leizhen@huawei.com>
In-Reply-To: <1505221238-9428-1-git-send-email-thunder.leizhen@huawei.com>
References: <1505221238-9428-1-git-send-email-thunder.leizhen@huawei.com>

This patch is based on commit add02cfdc9bc2 ("iommu: Introduce
Interface for IOMMU TLB Flushing").

Because iotlb_sync has been moved out of ".unmap = arm_smmu_unmap",
some internal ".unmap" calls must now be explicitly followed by an
iotlb_sync operation.
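For callers, the pattern this enables looks roughly as follows (a
sketch against the interface introduced by the commit above; the
iommu_unmap_fast()/iommu_tlb_sync() names come from that commit, and
error handling is omitted):

	#include <linux/iommu.h>

	/*
	 * Unmap several IOVA ranges but pay for only a single TLB
	 * synchronisation at the end, instead of one sync per unmap.
	 */
	static void unmap_batch(struct iommu_domain *domain,
				unsigned long *iovas, size_t *sizes, int n)
	{
		int i;

		for (i = 0; i < n; i++)
			iommu_unmap_fast(domain, iovas[i], sizes[i]);

		/* One sync covers all of the TLBI commands queued above */
		iommu_tlb_sync(domain);
	}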
Signed-off-by: Zhen Lei <thunder.leizhen@huawei.com>
---
 drivers/iommu/arm-smmu-v3.c    | 10 ++++++++++
 drivers/iommu/io-pgtable-arm.c | 30 ++++++++++++++++++++----------
 drivers/iommu/io-pgtable.h     |  1 +
 3 files changed, 31 insertions(+), 10 deletions(-)

--
2.5.0

diff --git a/drivers/iommu/arm-smmu-v3.c b/drivers/iommu/arm-smmu-v3.c
index ef42c4b..e92828e 100644
--- a/drivers/iommu/arm-smmu-v3.c
+++ b/drivers/iommu/arm-smmu-v3.c
@@ -1772,6 +1772,15 @@ arm_smmu_unmap(struct iommu_domain *domain, unsigned long iova, size_t size)
 	return ops->unmap(ops, iova, size);
 }

+static void arm_smmu_iotlb_sync(struct iommu_domain *domain)
+{
+	struct arm_smmu_domain *smmu_domain = to_smmu_domain(domain);
+	struct io_pgtable_ops *ops = smmu_domain->pgtbl_ops;
+
+	if (ops && ops->iotlb_sync)
+		ops->iotlb_sync(ops);
+}
+
 static phys_addr_t
 arm_smmu_iova_to_phys(struct iommu_domain *domain, dma_addr_t iova)
 {
@@ -1991,6 +2000,7 @@ static struct iommu_ops arm_smmu_ops = {
 	.attach_dev		= arm_smmu_attach_dev,
 	.map			= arm_smmu_map,
 	.unmap			= arm_smmu_unmap,
+	.iotlb_sync		= arm_smmu_iotlb_sync,
 	.map_sg			= default_iommu_map_sg,
 	.iova_to_phys		= arm_smmu_iova_to_phys,
 	.add_device		= arm_smmu_add_device,
diff --git a/drivers/iommu/io-pgtable-arm.c b/drivers/iommu/io-pgtable-arm.c
index e8018a3..805efc9 100644
--- a/drivers/iommu/io-pgtable-arm.c
+++ b/drivers/iommu/io-pgtable-arm.c
@@ -304,6 +304,8 @@ static int arm_lpae_init_pte(struct arm_lpae_io_pgtable *data,
 		WARN_ON(!selftest_running);
 		return -EEXIST;
 	} else if (iopte_type(pte, lvl) == ARM_LPAE_PTE_TYPE_TABLE) {
+		size_t unmapped;
+
 		/*
 		 * We need to unmap and free the old table before
 		 * overwriting it with a block entry.
@@ -312,7 +314,9 @@ static int arm_lpae_init_pte(struct arm_lpae_io_pgtable *data,
 		 */
 		size_t sz = ARM_LPAE_BLOCK_SIZE(lvl, data);

 		tblp = ptep - ARM_LPAE_LVL_IDX(iova, lvl, data);
-		if (WARN_ON(__arm_lpae_unmap(data, iova, sz, lvl, tblp) != sz))
+		unmapped = __arm_lpae_unmap(data, iova, sz, lvl, tblp);
+		io_pgtable_tlb_sync(&data->iop);
+		if (WARN_ON(unmapped != sz))
 			return -EINVAL;
 	}

@@ -584,7 +588,6 @@ static int __arm_lpae_unmap(struct arm_lpae_io_pgtable *data,
 			/* Also flush any partial walks */
 			io_pgtable_tlb_add_flush(iop, iova, size,
 						 ARM_LPAE_GRANULE(data), false);
-			io_pgtable_tlb_sync(iop);
 			ptep = iopte_deref(pte, data);
 			__arm_lpae_free_pgtable(data, lvl + 1, ptep);
 		} else {
@@ -609,7 +612,6 @@ static int __arm_lpae_unmap(struct arm_lpae_io_pgtable *data,
 static int arm_lpae_unmap(struct io_pgtable_ops *ops, unsigned long iova,
 			  size_t size)
 {
-	size_t unmapped;
 	struct arm_lpae_io_pgtable *data = io_pgtable_ops_to_data(ops);
 	arm_lpae_iopte *ptep = data->pgd;
 	int lvl = ARM_LPAE_START_LVL(data);
@@ -617,11 +619,14 @@ static int arm_lpae_unmap(struct io_pgtable_ops *ops, unsigned long iova,
 	if (WARN_ON(iova >= (1ULL << data->iop.cfg.ias)))
 		return 0;

-	unmapped = __arm_lpae_unmap(data, iova, size, lvl, ptep);
-	if (unmapped)
-		io_pgtable_tlb_sync(&data->iop);
+	return __arm_lpae_unmap(data, iova, size, lvl, ptep);
+}
+
+static void arm_lpae_iotlb_sync(struct io_pgtable_ops *ops)
+{
+	struct arm_lpae_io_pgtable *data = io_pgtable_ops_to_data(ops);

-	return unmapped;
+	io_pgtable_tlb_sync(&data->iop);
 }

 static phys_addr_t arm_lpae_iova_to_phys(struct io_pgtable_ops *ops,
@@ -734,6 +739,7 @@ arm_lpae_alloc_pgtable(struct io_pgtable_cfg *cfg)
 	data->iop.ops = (struct io_pgtable_ops) {
 		.map		= arm_lpae_map,
 		.unmap		= arm_lpae_unmap,
+		.iotlb_sync	= arm_lpae_iotlb_sync,
 		.iova_to_phys	= arm_lpae_iova_to_phys,
 	};

@@ -1030,7 +1036,7 @@ static int __init arm_lpae_run_tests(struct io_pgtable_cfg *cfg)
 	int i, j;
 	unsigned long iova;
-	size_t size;
+	size_t size, unmapped;
 	struct io_pgtable_ops *ops;

 	selftest_running = true;
@@ -1082,7 +1088,9 @@ static int __init arm_lpae_run_tests(struct io_pgtable_cfg *cfg)

 		/* Partial unmap */
 		size = 1UL << __ffs(cfg->pgsize_bitmap);
-		if (ops->unmap(ops, SZ_1G + size, size) != size)
+		unmapped = ops->unmap(ops, SZ_1G + size, size);
+		ops->iotlb_sync(ops);
+		if (unmapped != size)
 			return __FAIL(ops, i);

 		/* Remap of partial unmap */
@@ -1098,7 +1106,9 @@ static int __init arm_lpae_run_tests(struct io_pgtable_cfg *cfg)
 		while (j != BITS_PER_LONG) {
 			size = 1UL << j;

-			if (ops->unmap(ops, iova, size) != size)
+			unmapped = ops->unmap(ops, iova, size);
+			ops->iotlb_sync(ops);
+			if (unmapped != size)
 				return __FAIL(ops, i);

 			if (ops->iova_to_phys(ops, iova + 42))
diff --git a/drivers/iommu/io-pgtable.h b/drivers/iommu/io-pgtable.h
index a3e6670..3a72e08 100644
--- a/drivers/iommu/io-pgtable.h
+++ b/drivers/iommu/io-pgtable.h
@@ -120,6 +120,7 @@ struct io_pgtable_ops {
 		   phys_addr_t paddr, size_t size, int prot);
 	int (*unmap)(struct io_pgtable_ops *ops, unsigned long iova,
 		     size_t size);
+	void (*iotlb_sync)(struct io_pgtable_ops *ops);
 	phys_addr_t (*iova_to_phys)(struct io_pgtable_ops *ops,
 				    unsigned long iova);
 };

From patchwork Tue Sep 12 13:00:38 2017
X-Patchwork-Submitter: "Leizhen (ThunderTown)"
X-Patchwork-Id: 112307
From: Zhen Lei
To: Will Deacon, Joerg Roedel, linux-arm-kernel, iommu, Robin Murphy, linux-kernel
Cc: Hanjun Guo, Libin, Zhen Lei, Jinyue Li, Kefeng Wang
Subject: [PATCH v2 3/3] iommu/arm-smmu: add support for unmapping a memory range with only one tlb sync
Date: Tue, 12 Sep 2017 21:00:38 +0800
Message-ID: <1505221238-9428-4-git-send-email-thunder.leizhen@huawei.com>
In-Reply-To: <1505221238-9428-1-git-send-email-thunder.leizhen@huawei.com>
References: <1505221238-9428-1-git-send-email-thunder.leizhen@huawei.com>

This patch is based on commit add02cfdc9bc2 ("iommu: Introduce
Interface for IOMMU TLB Flushing").

As in the previous patch: because iotlb_sync has been moved out of
".unmap = arm_smmu_unmap", some internal ".unmap" calls must now be
explicitly followed by an iotlb_sync operation.
Signed-off-by: Zhen Lei <thunder.leizhen@huawei.com>
---
 drivers/iommu/arm-smmu.c           | 10 ++++++++++
 drivers/iommu/io-pgtable-arm-v7s.c | 32 +++++++++++++++++++++-----------
 2 files changed, 31 insertions(+), 11 deletions(-)

--
2.5.0

diff --git a/drivers/iommu/arm-smmu.c b/drivers/iommu/arm-smmu.c
index 3bdb799..bb57d67 100644
--- a/drivers/iommu/arm-smmu.c
+++ b/drivers/iommu/arm-smmu.c
@@ -1259,6 +1259,15 @@ static size_t arm_smmu_unmap(struct iommu_domain *domain, unsigned long iova,
 	return ops->unmap(ops, iova, size);
 }

+static void arm_smmu_iotlb_sync(struct iommu_domain *domain)
+{
+	struct arm_smmu_domain *smmu_domain = to_smmu_domain(domain);
+	struct io_pgtable_ops *ops = smmu_domain->pgtbl_ops;
+
+	if (ops && ops->iotlb_sync)
+		ops->iotlb_sync(ops);
+}
+
 static phys_addr_t arm_smmu_iova_to_phys_hard(struct iommu_domain *domain,
 					      dma_addr_t iova)
 {
@@ -1561,6 +1570,7 @@ static struct iommu_ops arm_smmu_ops = {
 	.attach_dev		= arm_smmu_attach_dev,
 	.map			= arm_smmu_map,
 	.unmap			= arm_smmu_unmap,
+	.iotlb_sync		= arm_smmu_iotlb_sync,
 	.map_sg			= default_iommu_map_sg,
 	.iova_to_phys		= arm_smmu_iova_to_phys,
 	.add_device		= arm_smmu_add_device,
diff --git a/drivers/iommu/io-pgtable-arm-v7s.c b/drivers/iommu/io-pgtable-arm-v7s.c
index d665d0d..457ad29 100644
--- a/drivers/iommu/io-pgtable-arm-v7s.c
+++ b/drivers/iommu/io-pgtable-arm-v7s.c
@@ -370,6 +370,8 @@ static int arm_v7s_init_pte(struct arm_v7s_io_pgtable *data,
 	for (i = 0; i < num_entries; i++)
 		if (ARM_V7S_PTE_IS_TABLE(ptep[i], lvl)) {
+			size_t unmapped;
+
 			/*
 			 * We need to unmap and free the old table before
 			 * overwriting it with a block entry.
@@ -378,8 +380,10 @@ static int arm_v7s_init_pte(struct arm_v7s_io_pgtable *data,
 			 */
 			size_t sz = ARM_V7S_BLOCK_SIZE(lvl);

 			tblp = ptep - ARM_V7S_LVL_IDX(iova, lvl);
-			if (WARN_ON(__arm_v7s_unmap(data, iova + i * sz,
-						    sz, lvl, tblp) != sz))
+			unmapped = __arm_v7s_unmap(data, iova + i * sz,
+						   sz, lvl, tblp);
+			io_pgtable_tlb_sync(&data->iop);
+			if (WARN_ON(unmapped != sz))
 				return -EINVAL;
 		} else if (ptep[i]) {
 			/* We require an unmap first */
@@ -633,7 +637,6 @@ static int __arm_v7s_unmap(struct arm_v7s_io_pgtable *data,
 				/* Also flush any partial walks */
 				io_pgtable_tlb_add_flush(iop, iova, blk_size,
 					ARM_V7S_BLOCK_SIZE(lvl + 1), false);
-				io_pgtable_tlb_sync(iop);
 				ptep = iopte_deref(pte[i], lvl);
 				__arm_v7s_free_table(ptep, lvl + 1, data);
 			} else {
@@ -660,16 +663,18 @@ static int arm_v7s_unmap(struct io_pgtable_ops *ops, unsigned long iova,
 			 size_t size)
 {
 	struct arm_v7s_io_pgtable *data = io_pgtable_ops_to_data(ops);
-	size_t unmapped;

 	if (WARN_ON(upper_32_bits(iova)))
 		return 0;

-	unmapped = __arm_v7s_unmap(data, iova, size, 1, data->pgd);
-	if (unmapped)
-		io_pgtable_tlb_sync(&data->iop);
+	return __arm_v7s_unmap(data, iova, size, 1, data->pgd);
+}
+
+static void arm_v7s_iotlb_sync(struct io_pgtable_ops *ops)
+{
+	struct arm_v7s_io_pgtable *data = io_pgtable_ops_to_data(ops);

-	return unmapped;
+	io_pgtable_tlb_sync(&data->iop);
 }

 static phys_addr_t arm_v7s_iova_to_phys(struct io_pgtable_ops *ops,
@@ -734,6 +739,7 @@ static struct io_pgtable *arm_v7s_alloc_pgtable(struct io_pgtable_cfg *cfg,
 	data->iop.ops = (struct io_pgtable_ops) {
 		.map		= arm_v7s_map,
 		.unmap		= arm_v7s_unmap,
+		.iotlb_sync	= arm_v7s_iotlb_sync,
 		.iova_to_phys	= arm_v7s_iova_to_phys,
 	};

@@ -832,7 +838,7 @@ static int __init arm_v7s_do_selftests(void)
 		.quirks = IO_PGTABLE_QUIRK_ARM_NS | IO_PGTABLE_QUIRK_NO_DMA,
 		.pgsize_bitmap = SZ_4K | SZ_64K | SZ_1M | SZ_16M,
 	};
-	unsigned int iova, size, iova_start;
+	unsigned int iova, size, unmapped, iova_start;
 	unsigned int i, loopnr = 0;

 	selftest_running = true;

@@ -887,7 +893,9 @@ static int __init arm_v7s_do_selftests(void)
 	size = 1UL << __ffs(cfg.pgsize_bitmap);
 	while (i < loopnr) {
 		iova_start = i * SZ_16M;
-		if (ops->unmap(ops, iova_start + size, size) != size)
+		unmapped = ops->unmap(ops, iova_start + size, size);
+		ops->iotlb_sync(ops);
+		if (unmapped != size)
 			return __FAIL(ops);

 		/* Remap of partial unmap */
@@ -906,7 +914,9 @@ static int __init arm_v7s_do_selftests(void)
 	while (i != BITS_PER_LONG) {
 		size = 1UL << i;

-		if (ops->unmap(ops, iova, size) != size)
+		unmapped = ops->unmap(ops, iova, size);
+		ops->iotlb_sync(ops);
+		if (unmapped != size)
 			return __FAIL(ops);

 		if (ops->iova_to_phys(ops, iova + 42))