From patchwork Thu Sep 20 16:10:24 2018
X-Patchwork-Submitter: Robin Murphy
X-Patchwork-Id: 147106
From: Robin Murphy
To: joro@8bytes.org, will.deacon@arm.com, thunder.leizhen@huawei.com,
    iommu@lists.linux-foundation.org, linux-arm-kernel@lists.infradead.org,
    linux-kernel@vger.kernel.org
Cc: linuxarm@huawei.com, guohanjun@huawei.com, huawei.libin@huawei.com,
    john.garry@huawei.com
Subject: [PATCH v8 4/7] iommu/io-pgtable-arm: Add support for non-strict mode
Date: Thu, 20 Sep 2018 17:10:24 +0100
Message-Id: <9a666d63a96ab97dc53df2a64b3a8d22a0986423.1537458163.git.robin.murphy@arm.com>
X-Mailer: git-send-email 2.19.0.dirty
X-Mailing-List: linux-kernel@vger.kernel.org

From: Zhen Lei

Non-strict mode is simply a case of skipping 'regular' leaf
TLBIs, since the sync is already factored out into ops->iotlb_sync at
the core API level. Non-leaf invalidations where we change the page
table structure itself still have to be issued synchronously in order
to maintain walk caches correctly.

To save having to reason about it too much, make sure the invalidation
in arm_lpae_split_blk_unmap() just performs its own unconditional sync
to minimise the window in which we're technically violating the
break-before-make requirement on a live mapping. This might work out
redundant with an outer-level sync for strict unmaps, but we'll never
be splitting blocks on a DMA fastpath anyway.

Signed-off-by: Zhen Lei
[rm: tweak comment, commit message, split_blk_unmap logic and barriers]
Signed-off-by: Robin Murphy
---

v8: Add barrier for the fiddly cross-cpu flush case

 drivers/iommu/io-pgtable-arm.c | 14 ++++++++++++--
 drivers/iommu/io-pgtable.h     |  5 +++++
 2 files changed, 17 insertions(+), 2 deletions(-)

-- 
2.19.0.dirty

diff --git a/drivers/iommu/io-pgtable-arm.c b/drivers/iommu/io-pgtable-arm.c
index 2f79efd16a05..237cacd4a62b 100644
--- a/drivers/iommu/io-pgtable-arm.c
+++ b/drivers/iommu/io-pgtable-arm.c
@@ -576,6 +576,7 @@ static size_t arm_lpae_split_blk_unmap(struct arm_lpae_io_pgtable *data,
 		tablep = iopte_deref(pte, data);
 	} else if (unmap_idx >= 0) {
 		io_pgtable_tlb_add_flush(&data->iop, iova, size, size, true);
+		io_pgtable_tlb_sync(&data->iop);
 		return size;
 	}
 
@@ -609,6 +610,13 @@ static size_t __arm_lpae_unmap(struct arm_lpae_io_pgtable *data,
 			io_pgtable_tlb_sync(iop);
 			ptep = iopte_deref(pte, data);
 			__arm_lpae_free_pgtable(data, lvl + 1, ptep);
+		} else if (iop->cfg.quirks & IO_PGTABLE_QUIRK_NON_STRICT) {
+			/*
+			 * Order the PTE update against queueing the IOVA, to
+			 * guarantee that a flush callback from a different CPU
+			 * has observed it before the TLBIALL can be issued.
+			 */
+			smp_wmb();
 		} else {
 			io_pgtable_tlb_add_flush(iop, iova, size, size, true);
 		}
@@ -771,7 +779,8 @@ arm_64_lpae_alloc_pgtable_s1(struct io_pgtable_cfg *cfg, void *cookie)
 	u64 reg;
 	struct arm_lpae_io_pgtable *data;
 
-	if (cfg->quirks & ~(IO_PGTABLE_QUIRK_ARM_NS | IO_PGTABLE_QUIRK_NO_DMA))
+	if (cfg->quirks & ~(IO_PGTABLE_QUIRK_ARM_NS | IO_PGTABLE_QUIRK_NO_DMA |
+			    IO_PGTABLE_QUIRK_NON_STRICT))
 		return NULL;
 
 	data = arm_lpae_alloc_pgtable(cfg);
@@ -863,7 +872,8 @@ arm_64_lpae_alloc_pgtable_s2(struct io_pgtable_cfg *cfg, void *cookie)
 	struct arm_lpae_io_pgtable *data;
 
 	/* The NS quirk doesn't apply at stage 2 */
-	if (cfg->quirks & ~IO_PGTABLE_QUIRK_NO_DMA)
+	if (cfg->quirks & ~(IO_PGTABLE_QUIRK_NO_DMA |
+			    IO_PGTABLE_QUIRK_NON_STRICT))
 		return NULL;
 
 	data = arm_lpae_alloc_pgtable(cfg);
diff --git a/drivers/iommu/io-pgtable.h b/drivers/iommu/io-pgtable.h
index 2df79093cad9..47d5ae559329 100644
--- a/drivers/iommu/io-pgtable.h
+++ b/drivers/iommu/io-pgtable.h
@@ -71,12 +71,17 @@ struct io_pgtable_cfg {
 	 *	be accessed by a fully cache-coherent IOMMU or CPU (e.g. for a
 	 *	software-emulated IOMMU), such that pagetable updates need not
 	 *	be treated as explicit DMA data.
+	 *
+	 * IO_PGTABLE_QUIRK_NON_STRICT: Skip issuing synchronous leaf TLBIs
+	 *	on unmap, for DMA domains using the flush queue mechanism for
+	 *	delayed invalidation.
 	 */
 	#define IO_PGTABLE_QUIRK_ARM_NS		BIT(0)
 	#define IO_PGTABLE_QUIRK_NO_PERMS	BIT(1)
 	#define IO_PGTABLE_QUIRK_TLBI_ON_MAP	BIT(2)
 	#define IO_PGTABLE_QUIRK_ARM_MTK_4GB	BIT(3)
 	#define IO_PGTABLE_QUIRK_NO_DMA		BIT(4)
+	#define IO_PGTABLE_QUIRK_NON_STRICT	BIT(5)
 	unsigned long quirks;
 	unsigned long pgsize_bitmap;
 	unsigned int ias;