From patchwork Thu Sep 20 16:10:21 2018
X-Patchwork-Submitter: Robin Murphy
X-Patchwork-Id: 147103
From: Robin Murphy
To: joro@8bytes.org, will.deacon@arm.com, thunder.leizhen@huawei.com,
    iommu@lists.linux-foundation.org, linux-arm-kernel@lists.infradead.org,
    linux-kernel@vger.kernel.org
Cc: linuxarm@huawei.com, guohanjun@huawei.com, huawei.libin@huawei.com,
    john.garry@huawei.com
Subject: [PATCH v8 1/7] iommu/arm-smmu-v3: Implement flush_iotlb_all hook
Date: Thu, 20 Sep 2018 17:10:21 +0100
Message-Id: <396dac165e343dec0cbe0a1f4d311839ff768f95.1537458163.git.robin.murphy@arm.com>
X-Mailing-List: linux-kernel@vger.kernel.org

From: Zhen Lei

.flush_iotlb_all is currently stubbed to
arm_smmu_iotlb_sync(), since the only time it would ever need to actually
do anything is for callers doing their own explicit batching, e.g.:

	iommu_unmap_fast(domain, ...);
	iommu_unmap_fast(domain, ...);
	iommu_iotlb_flush_all(domain, ...);

where, since io-pgtable still issues the TLBI commands implicitly in the
unmap instead of implementing .iotlb_range_add, the "flush" only needs to
ensure completion of those already-in-flight invalidations. However, we're
about to start using it in anger with flush queues, so let's get a proper
implementation wired up.

Signed-off-by: Zhen Lei
Reviewed-by: Robin Murphy
[rm: document why it wasn't a bug]
Signed-off-by: Robin Murphy
---
 drivers/iommu/arm-smmu-v3.c | 10 +++++++++-
 1 file changed, 9 insertions(+), 1 deletion(-)

-- 
2.19.0.dirty

diff --git a/drivers/iommu/arm-smmu-v3.c b/drivers/iommu/arm-smmu-v3.c
index e395f1ff3f81..f10c852479fc 100644
--- a/drivers/iommu/arm-smmu-v3.c
+++ b/drivers/iommu/arm-smmu-v3.c
@@ -1781,6 +1781,14 @@ arm_smmu_unmap(struct iommu_domain *domain, unsigned long iova, size_t size)
 	return ops->unmap(ops, iova, size);
 }
 
+static void arm_smmu_flush_iotlb_all(struct iommu_domain *domain)
+{
+	struct arm_smmu_domain *smmu_domain = to_smmu_domain(domain);
+
+	if (smmu_domain->smmu)
+		arm_smmu_tlb_inv_context(smmu_domain);
+}
+
 static void arm_smmu_iotlb_sync(struct iommu_domain *domain)
 {
 	struct arm_smmu_device *smmu = to_smmu_domain(domain)->smmu;
@@ -2008,7 +2016,7 @@ static struct iommu_ops arm_smmu_ops = {
 	.attach_dev		= arm_smmu_attach_dev,
 	.map			= arm_smmu_map,
 	.unmap			= arm_smmu_unmap,
-	.flush_iotlb_all	= arm_smmu_iotlb_sync,
+	.flush_iotlb_all	= arm_smmu_flush_iotlb_all,
 	.iotlb_sync		= arm_smmu_iotlb_sync,
 	.iova_to_phys		= arm_smmu_iova_to_phys,
 	.add_device		= arm_smmu_add_device,

From patchwork Thu Sep 20 16:10:22 2018
X-Patchwork-Submitter: Robin Murphy
X-Patchwork-Id: 147104
From: Robin Murphy
To: joro@8bytes.org, will.deacon@arm.com, thunder.leizhen@huawei.com,
    iommu@lists.linux-foundation.org, linux-arm-kernel@lists.infradead.org,
    linux-kernel@vger.kernel.org
Cc: linuxarm@huawei.com, guohanjun@huawei.com, huawei.libin@huawei.com,
    john.garry@huawei.com
Subject: [PATCH v8 2/7] iommu/dma: Add support for non-strict mode
Date: Thu, 20 Sep 2018 17:10:22 +0100
X-Mailing-List: linux-kernel@vger.kernel.org

From: Zhen Lei

With the flush queue infrastructure already abstracted into IOVA
domains, hooking it up in iommu-dma is pretty simple.
Since there is a degree of dependency on the IOMMU driver knowing what to
do to play along, we key the whole thing off a domain attribute which will
be set on default DMA ops domains to request non-strict invalidation. That
way, drivers can indicate the appropriate support by acknowledging the
attribute, and we can easily fall back to strict invalidation otherwise.

The flush queue callback needs a handle on the iommu_domain which owns
our cookie, so we have to add a pointer back to that, but neatly, that's
also sufficient to indicate whether we're using a flush queue or not, and
thus which way to release IOVAs. The only slight subtlety is switching
__iommu_dma_unmap() from calling iommu_unmap() to explicit
iommu_unmap_fast()/iommu_tlb_sync() so that we can elide the sync
entirely in non-strict mode.

Signed-off-by: Zhen Lei
[rm: convert to domain attribute, tweak comments and commit message]
Signed-off-by: Robin Murphy
---
v8:
 - Rewrite commit message/comments
 - Don't initialise "attr" unnecessarily
 - Rename "domain" to "fq_domain" for clarity
 - Don't let init_iova_flush_queue() be called more than once

 drivers/iommu/dma-iommu.c | 32 +++++++++++++++++++++++++++++++-
 include/linux/iommu.h     |  1 +
 2 files changed, 32 insertions(+), 1 deletion(-)

-- 
2.19.0.dirty

diff --git a/drivers/iommu/dma-iommu.c b/drivers/iommu/dma-iommu.c
index 511ff9a1d6d9..cc1bf786cfac 100644
--- a/drivers/iommu/dma-iommu.c
+++ b/drivers/iommu/dma-iommu.c
@@ -55,6 +55,9 @@ struct iommu_dma_cookie {
 	};
 	struct list_head	msi_page_list;
 	spinlock_t		msi_lock;
+
+	/* Domain for flush queue callback; NULL if flush queue not in use */
+	struct iommu_domain	*fq_domain;
 };
 
 static inline size_t cookie_msi_granule(struct iommu_dma_cookie *cookie)
@@ -257,6 +260,20 @@ static int iova_reserve_iommu_regions(struct device *dev,
 	return ret;
 }
 
+static void iommu_dma_flush_iotlb_all(struct iova_domain *iovad)
+{
+	struct iommu_dma_cookie *cookie;
+	struct iommu_domain *domain;
+
+	cookie = container_of(iovad, struct iommu_dma_cookie, iovad);
+	domain = cookie->fq_domain;
+	/*
+	 * The IOMMU driver supporting DOMAIN_ATTR_DMA_USE_FLUSH_QUEUE
+	 * implies that ops->flush_iotlb_all must be non-NULL.
+	 */
+	domain->ops->flush_iotlb_all(domain);
+}
+
 /**
  * iommu_dma_init_domain - Initialise a DMA mapping domain
  * @domain: IOMMU domain previously prepared by iommu_get_dma_cookie()
@@ -275,6 +292,7 @@ int iommu_dma_init_domain(struct iommu_domain *domain, dma_addr_t base,
 	struct iommu_dma_cookie *cookie = domain->iova_cookie;
 	struct iova_domain *iovad = &cookie->iovad;
 	unsigned long order, base_pfn, end_pfn;
+	int attr;
 
 	if (!cookie || cookie->type != IOMMU_DMA_IOVA_COOKIE)
 		return -EINVAL;
@@ -308,6 +326,13 @@ int iommu_dma_init_domain(struct iommu_domain *domain, dma_addr_t base,
 	}
 
 	init_iova_domain(iovad, 1UL << order, base_pfn);
+
+	if (!cookie->fq_domain && !iommu_domain_get_attr(domain,
+			DOMAIN_ATTR_DMA_USE_FLUSH_QUEUE, &attr) && attr) {
+		cookie->fq_domain = domain;
+		init_iova_flush_queue(iovad, iommu_dma_flush_iotlb_all, NULL);
+	}
+
 	if (!dev)
 		return 0;
@@ -393,6 +418,9 @@ static void iommu_dma_free_iova(struct iommu_dma_cookie *cookie,
 	/* The MSI case is only ever cleaning up its most recent allocation */
 	if (cookie->type == IOMMU_DMA_MSI_COOKIE)
 		cookie->msi_iova -= size;
+	else if (cookie->fq_domain)	/* non-strict mode */
+		queue_iova(iovad, iova_pfn(iovad, iova),
+				size >> iova_shift(iovad), 0);
 	else
 		free_iova_fast(iovad, iova_pfn(iovad, iova),
 				size >> iova_shift(iovad));
@@ -408,7 +436,9 @@ static void __iommu_dma_unmap(struct iommu_domain *domain, dma_addr_t dma_addr,
 	dma_addr -= iova_off;
 	size = iova_align(iovad, size + iova_off);
 
-	WARN_ON(iommu_unmap(domain, dma_addr, size) != size);
+	WARN_ON(iommu_unmap_fast(domain, dma_addr, size) != size);
+	if (!cookie->fq_domain)
+		iommu_tlb_sync(domain);
 	iommu_dma_free_iova(cookie, dma_addr, size);
 }

diff --git a/include/linux/iommu.h b/include/linux/iommu.h
index 87994c265bf5..decabe8e8dbe 100644
--- a/include/linux/iommu.h
+++ b/include/linux/iommu.h
@@ -124,6 +124,7 @@ enum iommu_attr {
 	DOMAIN_ATTR_FSL_PAMU_ENABLE,
 	DOMAIN_ATTR_FSL_PAMUV1,
 	DOMAIN_ATTR_NESTING,	/* two stages of translation */
+	DOMAIN_ATTR_DMA_USE_FLUSH_QUEUE,
 	DOMAIN_ATTR_MAX,
 };

From patchwork Thu Sep 20 16:10:23 2018
X-Patchwork-Submitter: Robin Murphy
X-Patchwork-Id: 147105
From: Robin Murphy
To: joro@8bytes.org, will.deacon@arm.com, thunder.leizhen@huawei.com,
    iommu@lists.linux-foundation.org, linux-arm-kernel@lists.infradead.org,
    linux-kernel@vger.kernel.org
Cc: linuxarm@huawei.com, guohanjun@huawei.com, huawei.libin@huawei.com,
    john.garry@huawei.com
Subject: [PATCH v8 3/7] iommu: Add "iommu.strict" command line option
Date: Thu, 20 Sep 2018 17:10:23 +0100
Message-Id: <799fad801970298385af3abc8ca82620ad62c000.1537458163.git.robin.murphy@arm.com>
X-Mailing-List: linux-kernel@vger.kernel.org

From: Zhen Lei

Add a generic command line option to enable lazy unmapping via IOVA flush
queues, which will initially be supported by iommu-dma. This echoes the
semantics of "intel_iommu=strict" (albeit with the opposite default value),
but in the driver-agnostic fashion of "iommu.passthrough".

Signed-off-by: Zhen Lei
[rm: move handling out of SMMUv3 driver, clean up documentation]
Signed-off-by: Robin Murphy
---
v8:
 - Rename "non-strict" to "strict" to better match existing options
 - Rewrite doc text/commit message
 - Downgrade boot-time message from warn/taint to info

 .../admin-guide/kernel-parameters.txt | 12 ++++++++++
 drivers/iommu/iommu.c                 | 23 +++++++++++++++++++
 2 files changed, 35 insertions(+)

-- 
2.19.0.dirty

diff --git a/Documentation/admin-guide/kernel-parameters.txt b/Documentation/admin-guide/kernel-parameters.txt
index 9871e649ffef..92ae12aeabf4 100644
--- a/Documentation/admin-guide/kernel-parameters.txt
+++ b/Documentation/admin-guide/kernel-parameters.txt
@@ -1749,6 +1749,18 @@
 		nobypass	[PPC/POWERNV]
 			Disable IOMMU bypass, using IOMMU for PCI devices.
 
+	iommu.strict=	[ARM64] Configure TLB invalidation behaviour
+			Format: { "0" | "1" }
+			0 - Lazy mode.
+			  Request that DMA unmap operations use deferred
+			  invalidation of hardware TLBs, for increased
+			  throughput at the cost of reduced device isolation.
+			  Will fall back to strict mode if not supported by
+			  the relevant IOMMU driver.
+			1 - Strict mode (default).
+			  DMA unmap operations invalidate IOMMU hardware TLBs
+			  synchronously.
+
 	iommu.passthrough=
 			[ARM64] Configure DMA to bypass the IOMMU by default.
			Format: { "0" | "1" }

diff --git a/drivers/iommu/iommu.c b/drivers/iommu/iommu.c
index 8c15c5980299..02b6603f0616 100644
--- a/drivers/iommu/iommu.c
+++ b/drivers/iommu/iommu.c
@@ -41,6 +41,7 @@ static unsigned int iommu_def_domain_type = IOMMU_DOMAIN_IDENTITY;
 #else
 static unsigned int iommu_def_domain_type = IOMMU_DOMAIN_DMA;
 #endif
+static bool iommu_dma_strict __read_mostly = true;
 
 struct iommu_callback_data {
 	const struct iommu_ops *ops;
@@ -131,6 +132,21 @@ static int __init iommu_set_def_domain_type(char *str)
 }
 early_param("iommu.passthrough", iommu_set_def_domain_type);
 
+static int __init iommu_dma_setup(char *str)
+{
+	int ret;
+
+	ret = kstrtobool(str, &iommu_dma_strict);
+	if (ret)
+		return ret;
+
+	if (!iommu_dma_strict)
+		pr_info("Enabling deferred TLB invalidation for DMA; protection against malicious/malfunctioning devices may be reduced.\n");
+
+	return 0;
+}
+early_param("iommu.strict", iommu_dma_setup);
+
 static ssize_t iommu_group_attr_show(struct kobject *kobj,
 				     struct attribute *__attr, char *buf)
 {
@@ -1072,6 +1088,13 @@ struct iommu_group *iommu_group_get_for_dev(struct device *dev)
 		group->default_domain = dom;
 		if (!group->domain)
 			group->domain = dom;
+
+		if (dom && !iommu_dma_strict) {
+			int attr = 1;
+			iommu_domain_set_attr(dom,
+					DOMAIN_ATTR_DMA_USE_FLUSH_QUEUE,
+					&attr);
+		}
 	}
 
 	ret = iommu_group_add_device(group, dev);

From patchwork Thu Sep 20 16:10:24 2018
X-Patchwork-Submitter: Robin Murphy
X-Patchwork-Id: 147106
From: Robin Murphy
To: joro@8bytes.org, will.deacon@arm.com, thunder.leizhen@huawei.com,
    iommu@lists.linux-foundation.org, linux-arm-kernel@lists.infradead.org,
    linux-kernel@vger.kernel.org
Cc: linuxarm@huawei.com, guohanjun@huawei.com, huawei.libin@huawei.com,
    john.garry@huawei.com
Subject: [PATCH v8 4/7] iommu/io-pgtable-arm: Add support for non-strict mode
Date: Thu, 20 Sep 2018 17:10:24 +0100
Message-Id: <9a666d63a96ab97dc53df2a64b3a8d22a0986423.1537458163.git.robin.murphy@arm.com>
X-Mailing-List: linux-kernel@vger.kernel.org

From: Zhen Lei

Non-strict mode is simply a case of skipping 'regular' leaf
TLBIs, since the sync is already factored out into ops->iotlb_sync at the
core API level. Non-leaf invalidations where we change the page table
structure itself still have to be issued synchronously in order to
maintain walk caches correctly.

To save having to reason about it too much, make sure the invalidation in
arm_lpae_split_blk_unmap() just performs its own unconditional sync to
minimise the window in which we're technically violating the
break-before-make requirement on a live mapping. This might work out
redundant with an outer-level sync for strict unmaps, but we'll never be
splitting blocks on a DMA fastpath anyway.

Signed-off-by: Zhen Lei
[rm: tweak comment, commit message, split_blk_unmap logic and barriers]
Signed-off-by: Robin Murphy
---
v8: Add barrier for the fiddly cross-cpu flush case

 drivers/iommu/io-pgtable-arm.c | 14 ++++++++++++--
 drivers/iommu/io-pgtable.h     |  5 +++++
 2 files changed, 17 insertions(+), 2 deletions(-)

-- 
2.19.0.dirty

diff --git a/drivers/iommu/io-pgtable-arm.c b/drivers/iommu/io-pgtable-arm.c
index 2f79efd16a05..237cacd4a62b 100644
--- a/drivers/iommu/io-pgtable-arm.c
+++ b/drivers/iommu/io-pgtable-arm.c
@@ -576,6 +576,7 @@ static size_t arm_lpae_split_blk_unmap(struct arm_lpae_io_pgtable *data,
 		tablep = iopte_deref(pte, data);
 	} else if (unmap_idx >= 0) {
 		io_pgtable_tlb_add_flush(&data->iop, iova, size, size, true);
+		io_pgtable_tlb_sync(&data->iop);
 		return size;
 	}
 
@@ -609,6 +610,13 @@ static size_t __arm_lpae_unmap(struct arm_lpae_io_pgtable *data,
 			io_pgtable_tlb_sync(iop);
 			ptep = iopte_deref(pte, data);
 			__arm_lpae_free_pgtable(data, lvl + 1, ptep);
+		} else if (iop->cfg.quirks & IO_PGTABLE_QUIRK_NON_STRICT) {
+			/*
+			 * Order the PTE update against queueing the IOVA, to
+			 * guarantee that a flush callback from a different CPU
+			 * has observed it before the TLBIALL can be issued.
+			 */
+			smp_wmb();
 		} else {
 			io_pgtable_tlb_add_flush(iop, iova, size, size, true);
 		}
@@ -771,7 +779,8 @@ arm_64_lpae_alloc_pgtable_s1(struct io_pgtable_cfg *cfg, void *cookie)
 	u64 reg;
 	struct arm_lpae_io_pgtable *data;
 
-	if (cfg->quirks & ~(IO_PGTABLE_QUIRK_ARM_NS | IO_PGTABLE_QUIRK_NO_DMA))
+	if (cfg->quirks & ~(IO_PGTABLE_QUIRK_ARM_NS | IO_PGTABLE_QUIRK_NO_DMA |
+			    IO_PGTABLE_QUIRK_NON_STRICT))
 		return NULL;
 
 	data = arm_lpae_alloc_pgtable(cfg);
@@ -863,7 +872,8 @@ arm_64_lpae_alloc_pgtable_s2(struct io_pgtable_cfg *cfg, void *cookie)
 	struct arm_lpae_io_pgtable *data;
 
 	/* The NS quirk doesn't apply at stage 2 */
-	if (cfg->quirks & ~IO_PGTABLE_QUIRK_NO_DMA)
+	if (cfg->quirks & ~(IO_PGTABLE_QUIRK_NO_DMA |
+			    IO_PGTABLE_QUIRK_NON_STRICT))
 		return NULL;
 
 	data = arm_lpae_alloc_pgtable(cfg);

diff --git a/drivers/iommu/io-pgtable.h b/drivers/iommu/io-pgtable.h
index 2df79093cad9..47d5ae559329 100644
--- a/drivers/iommu/io-pgtable.h
+++ b/drivers/iommu/io-pgtable.h
@@ -71,12 +71,17 @@ struct io_pgtable_cfg {
 	 *	be accessed by a fully cache-coherent IOMMU or CPU (e.g. for a
 	 *	software-emulated IOMMU), such that pagetable updates need not
 	 *	be treated as explicit DMA data.
+	 *
+	 * IO_PGTABLE_QUIRK_NON_STRICT: Skip issuing synchronous leaf TLBIs
+	 *	on unmap, for DMA domains using the flush queue mechanism for
+	 *	delayed invalidation.
 	 */
 	#define IO_PGTABLE_QUIRK_ARM_NS		BIT(0)
 	#define IO_PGTABLE_QUIRK_NO_PERMS	BIT(1)
 	#define IO_PGTABLE_QUIRK_TLBI_ON_MAP	BIT(2)
 	#define IO_PGTABLE_QUIRK_ARM_MTK_4GB	BIT(3)
 	#define IO_PGTABLE_QUIRK_NO_DMA		BIT(4)
+	#define IO_PGTABLE_QUIRK_NON_STRICT	BIT(5)
 	unsigned long quirks;
 	unsigned long pgsize_bitmap;
 	unsigned int ias;

From patchwork Thu Sep 20 16:10:25 2018
X-Patchwork-Submitter: Robin Murphy
X-Patchwork-Id: 147107
From: Robin Murphy
To: joro@8bytes.org, will.deacon@arm.com, thunder.leizhen@huawei.com,
    iommu@lists.linux-foundation.org, linux-arm-kernel@lists.infradead.org,
    linux-kernel@vger.kernel.org
Cc: linuxarm@huawei.com, guohanjun@huawei.com, huawei.libin@huawei.com,
    john.garry@huawei.com
Subject: [PATCH v8 5/7] iommu/arm-smmu-v3: Add support for non-strict mode
Date: Thu, 20 Sep 2018 17:10:25 +0100
X-Mailing-List: linux-kernel@vger.kernel.org

From: Zhen Lei

Now that io-pgtable knows how to dodge strict TLB maintenance, all that's
left to do is bridge the gap between the IOMMU core requesting
DOMAIN_ATTR_DMA_USE_FLUSH_QUEUE for default domains, and showing the
appropriate IO_PGTABLE_QUIRK_NON_STRICT flag to alloc_io_pgtable_ops().

Signed-off-by: Zhen Lei
[rm: convert to domain attribute, tweak commit message]
Signed-off-by: Robin Murphy
---
v8:
 - Use nested switches for attrs
 - Document barrier semantics

 drivers/iommu/arm-smmu-v3.c | 79 ++++++++++++++++++++++++++-----------
 1 file changed, 56 insertions(+), 23 deletions(-)

-- 
2.19.0.dirty

diff --git a/drivers/iommu/arm-smmu-v3.c b/drivers/iommu/arm-smmu-v3.c
index f10c852479fc..db402e8b068b 100644
--- a/drivers/iommu/arm-smmu-v3.c
+++ b/drivers/iommu/arm-smmu-v3.c
@@ -612,6 +612,7 @@ struct arm_smmu_domain {
 	struct mutex		init_mutex; /* Protects smmu pointer */
 
 	struct io_pgtable_ops	*pgtbl_ops;
+	bool			non_strict;
 
 	enum arm_smmu_domain_stage	stage;
 	union {
@@ -1407,6 +1408,12 @@ static void arm_smmu_tlb_inv_context(void *cookie)
 		cmd.tlbi.vmid	= smmu_domain->s2_cfg.vmid;
 	}
 
+	/*
+	 * NOTE: when io-pgtable is in non-strict mode, we may get here with
+	 * PTEs previously cleared by unmaps on the current CPU not yet visible
+	 * to the SMMU. We are relying on the DSB implicit in queue_inc_prod()
+	 * to guarantee those are observed before the TLBI. Do be careful, 007.
+	 */
 	arm_smmu_cmdq_issue_cmd(smmu, &cmd);
 	__arm_smmu_tlb_sync(smmu);
 }
@@ -1633,6 +1640,9 @@ static int arm_smmu_domain_finalise(struct iommu_domain *domain)
 	if (smmu->features & ARM_SMMU_FEAT_COHERENCY)
 		pgtbl_cfg.quirks = IO_PGTABLE_QUIRK_NO_DMA;
 
+	if (smmu_domain->non_strict)
+		pgtbl_cfg.quirks |= IO_PGTABLE_QUIRK_NON_STRICT;
+
 	pgtbl_ops = alloc_io_pgtable_ops(fmt, &pgtbl_cfg, smmu_domain);
 	if (!pgtbl_ops)
 		return -ENOMEM;
@@ -1934,15 +1944,27 @@ static int arm_smmu_domain_get_attr(struct iommu_domain *domain,
 {
 	struct arm_smmu_domain *smmu_domain = to_smmu_domain(domain);
 
-	if (domain->type != IOMMU_DOMAIN_UNMANAGED)
-		return -EINVAL;
-
-	switch (attr) {
-	case DOMAIN_ATTR_NESTING:
-		*(int *)data = (smmu_domain->stage == ARM_SMMU_DOMAIN_NESTED);
-		return 0;
+	switch (domain->type) {
+	case IOMMU_DOMAIN_UNMANAGED:
+		switch (attr) {
+		case DOMAIN_ATTR_NESTING:
+			*(int *)data = (smmu_domain->stage == ARM_SMMU_DOMAIN_NESTED);
+			return 0;
+		default:
+			return -ENODEV;
+		}
+		break;
+	case IOMMU_DOMAIN_DMA:
+		switch (attr) {
+		case DOMAIN_ATTR_DMA_USE_FLUSH_QUEUE:
+			*(int *)data = smmu_domain->non_strict;
+			return 0;
+		default:
+			return -ENODEV;
+		}
+		break;
 	default:
-		return -ENODEV;
+		return -EINVAL;
 	}
 }
 
@@ -1952,26 +1974,37 @@ static int arm_smmu_domain_set_attr(struct iommu_domain *domain,
 	int ret = 0;
 	struct arm_smmu_domain *smmu_domain = to_smmu_domain(domain);
 
-	if (domain->type != IOMMU_DOMAIN_UNMANAGED)
-		return -EINVAL;
-
 	mutex_lock(&smmu_domain->init_mutex);
 
-	switch (attr) {
-	case DOMAIN_ATTR_NESTING:
-		if (smmu_domain->smmu) {
-			ret = -EPERM;
-			goto out_unlock;
+	switch (domain->type) {
+	case IOMMU_DOMAIN_UNMANAGED:
+		switch (attr) {
+		case DOMAIN_ATTR_NESTING:
+			if (smmu_domain->smmu) {
+				ret = -EPERM;
+				goto out_unlock;
+			}
+
+			if (*(int *)data)
+				smmu_domain->stage = ARM_SMMU_DOMAIN_NESTED;
+			else
+				smmu_domain->stage = ARM_SMMU_DOMAIN_S1;
+			break;
+		default:
+			ret = -ENODEV;
+		}
+		break;
+	case IOMMU_DOMAIN_DMA:
+		switch(attr) {
+		case DOMAIN_ATTR_DMA_USE_FLUSH_QUEUE:
+			smmu_domain->non_strict = *(int *)data;
+			break;
+		default:
+			ret = -ENODEV;
 		}
-
-		if (*(int *)data)
-			smmu_domain->stage = ARM_SMMU_DOMAIN_NESTED;
-		else
-			smmu_domain->stage = ARM_SMMU_DOMAIN_S1;
-		break;
 	default:
-		ret = -ENODEV;
+		ret = -EINVAL;
 	}
 
 out_unlock: