From patchwork Tue May 3 16:33:22 2022 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 8bit X-Patchwork-Submitter: Shameerali Kolothum Thodi X-Patchwork-Id: 569302 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by smtp.lore.kernel.org (Postfix) with ESMTP id 4B655C433EF for ; Tue, 3 May 2022 16:34:28 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S239525AbiECQh6 (ORCPT ); Tue, 3 May 2022 12:37:58 -0400 Received: from lindbergh.monkeyblade.net ([23.128.96.19]:53044 "EHLO lindbergh.monkeyblade.net" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S235042AbiECQh5 (ORCPT ); Tue, 3 May 2022 12:37:57 -0400 Received: from frasgout.his.huawei.com (frasgout.his.huawei.com [185.176.79.56]) by lindbergh.monkeyblade.net (Postfix) with ESMTPS id 8A4683CA78 for ; Tue, 3 May 2022 09:34:24 -0700 (PDT) Received: from fraeml704-chm.china.huawei.com (unknown [172.18.147.206]) by frasgout.his.huawei.com (SkyGuard) with ESMTP id 4Kt57x2Dcyz67y8S; Wed, 4 May 2022 00:31:41 +0800 (CST) Received: from lhreml710-chm.china.huawei.com (10.201.108.61) by fraeml704-chm.china.huawei.com (10.206.15.53) with Microsoft SMTP Server (version=TLS1_2, cipher=TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256_P256) id 15.1.2375.24; Tue, 3 May 2022 18:34:22 +0200 Received: from A2006125610.china.huawei.com (10.202.227.178) by lhreml710-chm.china.huawei.com (10.201.108.61) with Microsoft SMTP Server (version=TLS1_2, cipher=TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256) id 15.1.2375.24; Tue, 3 May 2022 17:34:15 +0100 From: Shameer Kolothum To: , , CC: , , , , , , , , , , , , Subject: [PATCH v12 1/9] iommu: Introduce a callback to struct iommu_resv_region Date: Tue, 3 May 2022 17:33:22 +0100 Message-ID: <20220503163330.509-2-shameerali.kolothum.thodi@huawei.com> X-Mailer: git-send-email 2.12.0.windows.1 In-Reply-To: <20220503163330.509-1-shameerali.kolothum.thodi@huawei.com> References: <20220503163330.509-1-shameerali.kolothum.thodi@huawei.com> MIME-Version: 1.0 X-Originating-IP: [10.202.227.178] X-ClientProxiedBy: dggems702-chm.china.huawei.com (10.3.19.179) To lhreml710-chm.china.huawei.com (10.201.108.61) X-CFilter-Loop: Reflected Precedence: bulk List-ID: X-Mailing-List: linux-acpi@vger.kernel.org A callback is introduced to struct iommu_resv_region to free memory allocations associated with the reserved region. This will be useful when we introduce support for IORT RMR based reserved regions. Reviewed-by: Christoph Hellwig Tested-by: Steven Price Signed-off-by: Shameer Kolothum --- drivers/iommu/iommu.c | 16 +++++++++++----- include/linux/iommu.h | 2 ++ 2 files changed, 13 insertions(+), 5 deletions(-) diff --git a/drivers/iommu/iommu.c b/drivers/iommu/iommu.c index f2c45b85b9fc..ffcfa684e80c 100644 --- a/drivers/iommu/iommu.c +++ b/drivers/iommu/iommu.c @@ -2597,16 +2597,22 @@ void iommu_put_resv_regions(struct device *dev, struct list_head *list) * @list: reserved region list for device * * IOMMU drivers can use this to implement their .put_resv_regions() callback - * for simple reservations. Memory allocated for each reserved region will be - * freed. If an IOMMU driver allocates additional resources per region, it is - * going to have to implement a custom callback. + * for simple reservations. 
If a per region callback is provided that will be + * used to free all memory allocations associated with the reserved region or + * else just free up the memory for the regions. If an IOMMU driver allocates + * additional resources per region, it is going to have to implement a custom + * callback. */ void generic_iommu_put_resv_regions(struct device *dev, struct list_head *list) { struct iommu_resv_region *entry, *next; - list_for_each_entry_safe(entry, next, list, list) - kfree(entry); + list_for_each_entry_safe(entry, next, list, list) { + if (entry->free) + entry->free(dev, entry); + else + kfree(entry); + } } EXPORT_SYMBOL(generic_iommu_put_resv_regions); diff --git a/include/linux/iommu.h b/include/linux/iommu.h index 9208eca4b0d1..68bcfb3a06d7 100644 --- a/include/linux/iommu.h +++ b/include/linux/iommu.h @@ -134,6 +134,7 @@ enum iommu_resv_type { * @length: Length of the region in bytes * @prot: IOMMU Protection flags (READ/WRITE/...) * @type: Type of the reserved region + * @free: Callback to free associated memory allocations */ struct iommu_resv_region { struct list_head list; @@ -141,6 +142,7 @@ struct iommu_resv_region { size_t length; int prot; enum iommu_resv_type type; + void (*free)(struct device *dev, struct iommu_resv_region *region); }; /** From patchwork Tue May 3 16:33:23 2022 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Shameerali Kolothum Thodi X-Patchwork-Id: 569052 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by smtp.lore.kernel.org (Postfix) with ESMTP id E2D73C433F5 for ; Tue, 3 May 2022 16:34:36 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S238047AbiECQiH (ORCPT ); Tue, 3 May 2022 12:38:07 -0400 Received: from lindbergh.monkeyblade.net ([23.128.96.19]:53112 "EHLO lindbergh.monkeyblade.net" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S235042AbiECQiG (ORCPT ); Tue, 3 May 2022 12:38:06 -0400 Received: from frasgout.his.huawei.com (frasgout.his.huawei.com [185.176.79.56]) by lindbergh.monkeyblade.net (Postfix) with ESMTPS id 7D5D93CA78 for ; Tue, 3 May 2022 09:34:33 -0700 (PDT) Received: from fraeml703-chm.china.huawei.com (unknown [172.18.147.201]) by frasgout.his.huawei.com (SkyGuard) with ESMTP id 4Kt5693ylWz67bpd; Wed, 4 May 2022 00:30:09 +0800 (CST) Received: from lhreml710-chm.china.huawei.com (10.201.108.61) by fraeml703-chm.china.huawei.com (10.206.15.52) with Microsoft SMTP Server (version=TLS1_2, cipher=TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256_P256) id 15.1.2375.24; Tue, 3 May 2022 18:34:31 +0200 Received: from A2006125610.china.huawei.com (10.202.227.178) by lhreml710-chm.china.huawei.com (10.201.108.61) with Microsoft SMTP Server (version=TLS1_2, cipher=TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256) id 15.1.2375.24; Tue, 3 May 2022 17:34:24 +0100 From: Shameer Kolothum To: , , CC: , , , , , , , , , , , , Subject: [PATCH v12 2/9] ACPI/IORT: Make iort_iommu_msi_get_resv_regions() return void Date: Tue, 3 May 2022 17:33:23 +0100 Message-ID: <20220503163330.509-3-shameerali.kolothum.thodi@huawei.com> X-Mailer: git-send-email 2.12.0.windows.1 In-Reply-To: <20220503163330.509-1-shameerali.kolothum.thodi@huawei.com> References: <20220503163330.509-1-shameerali.kolothum.thodi@huawei.com> MIME-Version: 1.0 X-Originating-IP: [10.202.227.178] X-ClientProxiedBy: dggems702-chm.china.huawei.com (10.3.19.179) To 
lhreml710-chm.china.huawei.com (10.201.108.61) X-CFilter-Loop: Reflected Precedence: bulk List-ID: X-Mailing-List: linux-acpi@vger.kernel.org At present iort_iommu_msi_get_resv_regions() returns the number of MSI reserved regions on success and there are no users for this. The reserved region list will get populated anyway for platforms that require the HW MSI region reservation. Hence, change the function to return void instead. Reviewed-by: Christoph Hellwig Tested-by: Steven Price Signed-off-by: Shameer Kolothum Reviewed-by: Hanjun Guo --- drivers/acpi/arm64/iort.c | 25 +++++++++---------------- include/linux/acpi_iort.h | 6 +++--- 2 files changed, 12 insertions(+), 19 deletions(-) diff --git a/drivers/acpi/arm64/iort.c b/drivers/acpi/arm64/iort.c index f2f8f05662de..213f61cae176 100644 --- a/drivers/acpi/arm64/iort.c +++ b/drivers/acpi/arm64/iort.c @@ -811,22 +811,19 @@ static struct acpi_iort_node *iort_get_msi_resv_iommu(struct device *dev) * @dev: Device from iommu_get_resv_regions() * @head: Reserved region list from iommu_get_resv_regions() * - * Returns: Number of msi reserved regions on success (0 if platform - * doesn't require the reservation or no associated msi regions), - * appropriate error value otherwise. The ITS interrupt translation - * spaces (ITS_base + SZ_64K, SZ_64K) associated with the device - * are the msi reserved regions. + * The ITS interrupt translation spaces (ITS_base + SZ_64K, SZ_64K) + * associated with the device are the HW MSI reserved regions. */ -int iort_iommu_msi_get_resv_regions(struct device *dev, struct list_head *head) +void iort_iommu_msi_get_resv_regions(struct device *dev, struct list_head *head) { struct iommu_fwspec *fwspec = dev_iommu_fwspec_get(dev); struct acpi_iort_its_group *its; struct acpi_iort_node *iommu_node, *its_node = NULL; - int i, resv = 0; + int i; iommu_node = iort_get_msi_resv_iommu(dev); if (!iommu_node) - return 0; + return; /* * Current logic to reserve ITS regions relies on HW topologies @@ -846,7 +843,7 @@ int iort_iommu_msi_get_resv_regions(struct device *dev, struct list_head *head) } if (!its_node) - return 0; + return; /* Move to ITS specific data */ its = (struct acpi_iort_its_group *)its_node->node_data; @@ -860,14 +857,10 @@ int iort_iommu_msi_get_resv_regions(struct device *dev, struct list_head *head) region = iommu_alloc_resv_region(base + SZ_64K, SZ_64K, prot, IOMMU_RESV_MSI); - if (region) { + if (region) list_add_tail(®ion->list, head); - resv++; - } } } - - return (resv == its->its_count) ? 
resv : -ENODEV; } static inline bool iort_iommu_driver_enabled(u8 type) @@ -1034,8 +1027,8 @@ int iort_iommu_configure_id(struct device *dev, const u32 *id_in) } #else -int iort_iommu_msi_get_resv_regions(struct device *dev, struct list_head *head) -{ return 0; } +void iort_iommu_msi_get_resv_regions(struct device *dev, struct list_head *head) +{ } int iort_iommu_configure_id(struct device *dev, const u32 *input_id) { return -ENODEV; } #endif diff --git a/include/linux/acpi_iort.h b/include/linux/acpi_iort.h index f1f0842a2cb2..a8198b83753d 100644 --- a/include/linux/acpi_iort.h +++ b/include/linux/acpi_iort.h @@ -36,7 +36,7 @@ int iort_pmsi_get_dev_id(struct device *dev, u32 *dev_id); /* IOMMU interface */ int iort_dma_get_ranges(struct device *dev, u64 *size); int iort_iommu_configure_id(struct device *dev, const u32 *id_in); -int iort_iommu_msi_get_resv_regions(struct device *dev, struct list_head *head); +void iort_iommu_msi_get_resv_regions(struct device *dev, struct list_head *head); phys_addr_t acpi_iort_dma_get_max_cpu_address(void); #else static inline void acpi_iort_init(void) { } @@ -52,8 +52,8 @@ static inline int iort_dma_get_ranges(struct device *dev, u64 *size) static inline int iort_iommu_configure_id(struct device *dev, const u32 *id_in) { return -ENODEV; } static inline -int iort_iommu_msi_get_resv_regions(struct device *dev, struct list_head *head) -{ return 0; } +void iort_iommu_msi_get_resv_regions(struct device *dev, struct list_head *head) +{ } static inline phys_addr_t acpi_iort_dma_get_max_cpu_address(void) { return PHYS_ADDR_MAX; } From patchwork Tue May 3 16:33:24 2022 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 8bit X-Patchwork-Submitter: Shameerali Kolothum Thodi X-Patchwork-Id: 569301 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by smtp.lore.kernel.org (Postfix) with ESMTP id 8F3E6C433EF for ; Tue, 3 May 2022 16:34:45 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S239535AbiECQiQ (ORCPT ); Tue, 3 May 2022 12:38:16 -0400 Received: from lindbergh.monkeyblade.net ([23.128.96.19]:53162 "EHLO lindbergh.monkeyblade.net" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S235042AbiECQiP (ORCPT ); Tue, 3 May 2022 12:38:15 -0400 Received: from frasgout.his.huawei.com (frasgout.his.huawei.com [185.176.79.56]) by lindbergh.monkeyblade.net (Postfix) with ESMTPS id C1A6C3CA78 for ; Tue, 3 May 2022 09:34:42 -0700 (PDT) Received: from fraeml702-chm.china.huawei.com (unknown [172.18.147.207]) by frasgout.his.huawei.com (SkyGuard) with ESMTP id 4Kt57s0mKgz67vBH; Wed, 4 May 2022 00:31:37 +0800 (CST) Received: from lhreml710-chm.china.huawei.com (10.201.108.61) by fraeml702-chm.china.huawei.com (10.206.15.51) with Microsoft SMTP Server (version=TLS1_2, cipher=TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256_P256) id 15.1.2375.24; Tue, 3 May 2022 18:34:40 +0200 Received: from A2006125610.china.huawei.com (10.202.227.178) by lhreml710-chm.china.huawei.com (10.201.108.61) with Microsoft SMTP Server (version=TLS1_2, cipher=TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256) id 15.1.2375.24; Tue, 3 May 2022 17:34:33 +0100 From: Shameer Kolothum To: , , CC: , , , , , , , , , , , , Subject: [PATCH v12 3/9] ACPI/IORT: Provide a generic helper to retrieve reserve regions Date: Tue, 3 May 2022 17:33:24 +0100 Message-ID: <20220503163330.509-4-shameerali.kolothum.thodi@huawei.com> X-Mailer: 
git-send-email 2.12.0.windows.1 In-Reply-To: <20220503163330.509-1-shameerali.kolothum.thodi@huawei.com> References: <20220503163330.509-1-shameerali.kolothum.thodi@huawei.com> MIME-Version: 1.0 X-Originating-IP: [10.202.227.178] X-ClientProxiedBy: dggems702-chm.china.huawei.com (10.3.19.179) To lhreml710-chm.china.huawei.com (10.201.108.61) X-CFilter-Loop: Reflected Precedence: bulk List-ID: X-Mailing-List: linux-acpi@vger.kernel.org Currently IORT provides a helper to retrieve HW MSI reserve regions. Change this to a generic helper to retrieve any IORT related reserve regions. This will be useful when we add support for RMR nodes in subsequent patches. [Lorenzo: For ACPI IORT] Reviewed-by: Lorenzo Pieralisi Reviewed-by: Christoph Hellwig Tested-by: Steven Price Signed-off-by: Shameer Kolothum Reviewed-by: Hanjun Guo --- drivers/acpi/arm64/iort.c | 22 +++++++++++++++------- drivers/iommu/dma-iommu.c | 2 +- include/linux/acpi_iort.h | 4 ++-- 3 files changed, 18 insertions(+), 10 deletions(-) diff --git a/drivers/acpi/arm64/iort.c b/drivers/acpi/arm64/iort.c index 213f61cae176..cd5d1d7823cb 100644 --- a/drivers/acpi/arm64/iort.c +++ b/drivers/acpi/arm64/iort.c @@ -806,15 +806,13 @@ static struct acpi_iort_node *iort_get_msi_resv_iommu(struct device *dev) return NULL; } -/** - * iort_iommu_msi_get_resv_regions - Reserved region driver helper - * @dev: Device from iommu_get_resv_regions() - * @head: Reserved region list from iommu_get_resv_regions() - * +/* + * Retrieve platform specific HW MSI reserve regions. * The ITS interrupt translation spaces (ITS_base + SZ_64K, SZ_64K) * associated with the device are the HW MSI reserved regions. */ -void iort_iommu_msi_get_resv_regions(struct device *dev, struct list_head *head) +static void iort_iommu_msi_get_resv_regions(struct device *dev, + struct list_head *head) { struct iommu_fwspec *fwspec = dev_iommu_fwspec_get(dev); struct acpi_iort_its_group *its; @@ -863,6 +861,16 @@ void iort_iommu_msi_get_resv_regions(struct device *dev, struct list_head *head) } } +/** + * iort_iommu_get_resv_regions - Generic helper to retrieve reserved regions. 
+ * @dev: Device from iommu_get_resv_regions() + * @head: Reserved region list from iommu_get_resv_regions() + */ +void iort_iommu_get_resv_regions(struct device *dev, struct list_head *head) +{ + iort_iommu_msi_get_resv_regions(dev, head); +} + static inline bool iort_iommu_driver_enabled(u8 type) { switch (type) { @@ -1027,7 +1035,7 @@ int iort_iommu_configure_id(struct device *dev, const u32 *id_in) } #else -void iort_iommu_msi_get_resv_regions(struct device *dev, struct list_head *head) +void iort_iommu_get_resv_regions(struct device *dev, struct list_head *head) { } int iort_iommu_configure_id(struct device *dev, const u32 *input_id) { return -ENODEV; } diff --git a/drivers/iommu/dma-iommu.c b/drivers/iommu/dma-iommu.c index 09f6e1c0f9c0..93d76b666888 100644 --- a/drivers/iommu/dma-iommu.c +++ b/drivers/iommu/dma-iommu.c @@ -384,7 +384,7 @@ void iommu_dma_get_resv_regions(struct device *dev, struct list_head *list) { if (!is_of_node(dev_iommu_fwspec_get(dev)->iommu_fwnode)) - iort_iommu_msi_get_resv_regions(dev, list); + iort_iommu_get_resv_regions(dev, list); } EXPORT_SYMBOL(iommu_dma_get_resv_regions); diff --git a/include/linux/acpi_iort.h b/include/linux/acpi_iort.h index a8198b83753d..e5d2de9caf7f 100644 --- a/include/linux/acpi_iort.h +++ b/include/linux/acpi_iort.h @@ -36,7 +36,7 @@ int iort_pmsi_get_dev_id(struct device *dev, u32 *dev_id); /* IOMMU interface */ int iort_dma_get_ranges(struct device *dev, u64 *size); int iort_iommu_configure_id(struct device *dev, const u32 *id_in); -void iort_iommu_msi_get_resv_regions(struct device *dev, struct list_head *head); +void iort_iommu_get_resv_regions(struct device *dev, struct list_head *head); phys_addr_t acpi_iort_dma_get_max_cpu_address(void); #else static inline void acpi_iort_init(void) { } @@ -52,7 +52,7 @@ static inline int iort_dma_get_ranges(struct device *dev, u64 *size) static inline int iort_iommu_configure_id(struct device *dev, const u32 *id_in) { return -ENODEV; } static inline -void iort_iommu_msi_get_resv_regions(struct device *dev, struct list_head *head) +void iort_iommu_get_resv_regions(struct device *dev, struct list_head *head) { } static inline phys_addr_t acpi_iort_dma_get_max_cpu_address(void) From patchwork Tue May 3 16:33:25 2022 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Shameerali Kolothum Thodi X-Patchwork-Id: 569051 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by smtp.lore.kernel.org (Postfix) with ESMTP id 1A7C3C433EF for ; Tue, 3 May 2022 16:34:55 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S239598AbiECQiZ (ORCPT ); Tue, 3 May 2022 12:38:25 -0400 Received: from lindbergh.monkeyblade.net ([23.128.96.19]:53216 "EHLO lindbergh.monkeyblade.net" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S235042AbiECQiY (ORCPT ); Tue, 3 May 2022 12:38:24 -0400 Received: from frasgout.his.huawei.com (frasgout.his.huawei.com [185.176.79.56]) by lindbergh.monkeyblade.net (Postfix) with ESMTPS id 7D7193CA78 for ; Tue, 3 May 2022 09:34:51 -0700 (PDT) Received: from fraeml701-chm.china.huawei.com (unknown [172.18.147.206]) by frasgout.his.huawei.com (SkyGuard) with ESMTP id 4Kt5815qCRz67bVT; Wed, 4 May 2022 00:31:45 +0800 (CST) Received: from lhreml710-chm.china.huawei.com (10.201.108.61) by fraeml701-chm.china.huawei.com (10.206.15.50) with Microsoft SMTP Server 
(version=TLS1_2, cipher=TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256_P256) id 15.1.2375.24; Tue, 3 May 2022 18:34:49 +0200 Received: from A2006125610.china.huawei.com (10.202.227.178) by lhreml710-chm.china.huawei.com (10.201.108.61) with Microsoft SMTP Server (version=TLS1_2, cipher=TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256) id 15.1.2375.24; Tue, 3 May 2022 17:34:42 +0100 From: Shameer Kolothum To: , , CC: , , , , , , , , , , , , Subject: [PATCH v12 4/9] ACPI/IORT: Add support to retrieve IORT RMR reserved regions Date: Tue, 3 May 2022 17:33:25 +0100 Message-ID: <20220503163330.509-5-shameerali.kolothum.thodi@huawei.com> X-Mailer: git-send-email 2.12.0.windows.1 In-Reply-To: <20220503163330.509-1-shameerali.kolothum.thodi@huawei.com> References: <20220503163330.509-1-shameerali.kolothum.thodi@huawei.com> MIME-Version: 1.0 X-Originating-IP: [10.202.227.178] X-ClientProxiedBy: dggems702-chm.china.huawei.com (10.3.19.179) To lhreml710-chm.china.huawei.com (10.201.108.61) X-CFilter-Loop: Reflected Precedence: bulk List-ID: X-Mailing-List: linux-acpi@vger.kernel.org Parse through the IORT RMR nodes and populate the reserve region list corresponding to a given IOMMU and device(optional). Also, go through the ID mappings of the RMR node and retrieve all the SIDs associated with it. Reviewed-by: Lorenzo Pieralisi Tested-by: Steven Price Signed-off-by: Shameer Kolothum Reviewed-by: Hanjun Guo --- drivers/acpi/arm64/iort.c | 291 ++++++++++++++++++++++++++++++++++++++ include/linux/iommu.h | 8 ++ 2 files changed, 299 insertions(+) diff --git a/drivers/acpi/arm64/iort.c b/drivers/acpi/arm64/iort.c index cd5d1d7823cb..b6273af316c6 100644 --- a/drivers/acpi/arm64/iort.c +++ b/drivers/acpi/arm64/iort.c @@ -788,6 +788,294 @@ void acpi_configure_pmsi_domain(struct device *dev) } #ifdef CONFIG_IOMMU_API +static void iort_rmr_free(struct device *dev, + struct iommu_resv_region *region) +{ + struct iommu_iort_rmr_data *rmr_data; + + rmr_data = container_of(region, struct iommu_iort_rmr_data, rr); + kfree(rmr_data->sids); + kfree(rmr_data); +} + +static struct iommu_iort_rmr_data *iort_rmr_alloc( + struct acpi_iort_rmr_desc *rmr_desc, + int prot, enum iommu_resv_type type, + u32 *sids, u32 num_sids) +{ + struct iommu_iort_rmr_data *rmr_data; + struct iommu_resv_region *region; + u32 *sids_copy; + u64 addr = rmr_desc->base_address, size = rmr_desc->length; + + rmr_data = kmalloc(sizeof(*rmr_data), GFP_KERNEL); + if (!rmr_data) + return NULL; + + /* Create a copy of SIDs array to associate with this rmr_data */ + sids_copy = kmemdup(sids, num_sids * sizeof(*sids), GFP_KERNEL); + if (!sids_copy) { + kfree(rmr_data); + return NULL; + } + rmr_data->sids = sids_copy; + rmr_data->num_sids = num_sids; + + if (!IS_ALIGNED(addr, SZ_64K) || !IS_ALIGNED(size, SZ_64K)) { + /* PAGE align base addr and size */ + addr &= PAGE_MASK; + size = PAGE_ALIGN(size + offset_in_page(rmr_desc->base_address)); + + pr_err(FW_BUG "RMR descriptor[0x%llx - 0x%llx] not aligned to 64K, continue with [0x%llx - 0x%llx]\n", + rmr_desc->base_address, + rmr_desc->base_address + rmr_desc->length - 1, + addr, addr + size - 1); + } + + region = &rmr_data->rr; + INIT_LIST_HEAD(®ion->list); + region->start = addr; + region->length = size; + region->prot = prot; + region->type = type; + region->free = iort_rmr_free; + + return rmr_data; +} + +static void iort_rmr_desc_check_overlap(struct acpi_iort_rmr_desc *desc, + u32 count) +{ + int i, j; + + for (i = 0; i < count; i++) { + u64 end, start = desc[i].base_address, length = desc[i].length; + + if (!length) { + 
pr_err(FW_BUG "RMR descriptor[0x%llx] with zero length, continue anyway\n", + start); + continue; + } + + end = start + length - 1; + + /* Check for address overlap */ + for (j = i + 1; j < count; j++) { + u64 e_start = desc[j].base_address; + u64 e_end = e_start + desc[j].length - 1; + + if (start <= e_end && end >= e_start) + pr_err(FW_BUG "RMR descriptor[0x%llx - 0x%llx] overlaps, continue anyway\n", + start, end); + } + } +} + +/* + * Please note, we will keep the already allocated RMR reserve + * regions in case of a memory allocation failure. + */ +static void iort_get_rmrs(struct acpi_iort_node *node, + struct acpi_iort_node *smmu, + u32 *sids, u32 num_sids, + struct list_head *head) +{ + struct acpi_iort_rmr *rmr = (struct acpi_iort_rmr *)node->node_data; + struct acpi_iort_rmr_desc *rmr_desc; + int i; + + rmr_desc = ACPI_ADD_PTR(struct acpi_iort_rmr_desc, node, + rmr->rmr_offset); + + iort_rmr_desc_check_overlap(rmr_desc, rmr->rmr_count); + + for (i = 0; i < rmr->rmr_count; i++, rmr_desc++) { + struct iommu_iort_rmr_data *rmr_data; + enum iommu_resv_type type; + int prot = IOMMU_READ | IOMMU_WRITE; + + if (rmr->flags & ACPI_IORT_RMR_REMAP_PERMITTED) + type = IOMMU_RESV_DIRECT_RELAXABLE; + else + type = IOMMU_RESV_DIRECT; + + if (rmr->flags & ACPI_IORT_RMR_ACCESS_PRIVILEGE) + prot |= IOMMU_PRIV; + + /* Attributes 0x00 - 0x03 represents device memory */ + if (ACPI_IORT_RMR_ACCESS_ATTRIBUTES(rmr->flags) <= + ACPI_IORT_RMR_ATTR_DEVICE_GRE) + prot |= IOMMU_MMIO; + else if (ACPI_IORT_RMR_ACCESS_ATTRIBUTES(rmr->flags) == + ACPI_IORT_RMR_ATTR_NORMAL_IWB_OWB) + prot |= IOMMU_CACHE; + + rmr_data = iort_rmr_alloc(rmr_desc, prot, type, + sids, num_sids); + if (!rmr_data) + return; + + list_add_tail(&rmr_data->rr.list, head); + } +} + +static u32 *iort_rmr_alloc_sids(u32 *sids, u32 count, u32 id_start, + u32 new_count) +{ + u32 *new_sids; + u32 total_count = count + new_count; + int i; + + new_sids = krealloc_array(sids, count + new_count, + sizeof(*new_sids), GFP_KERNEL); + if (!new_sids) + return NULL; + + for (i = count; i < total_count; i++) + new_sids[i] = id_start++; + + return new_sids; +} + +static bool iort_rmr_has_dev(struct device *dev, u32 id_start, + u32 id_count) +{ + int i; + struct iommu_fwspec *fwspec = dev_iommu_fwspec_get(dev); + + /* + * Make sure the kernel has preserved the boot firmware PCIe + * configuration. This is required to ensure that the RMR PCIe + * StreamIDs are still valid (Refer: ARM DEN 0049E.d Section 3.1.1.5). 
+ */ + if (dev_is_pci(dev)) { + struct pci_dev *pdev = to_pci_dev(dev); + struct pci_host_bridge *host = pci_find_host_bridge(pdev->bus); + + if (!host->preserve_config) + return false; + } + + for (i = 0; i < fwspec->num_ids; i++) { + if (fwspec->ids[i] >= id_start && + fwspec->ids[i] <= id_start + id_count) + return true; + } + + return false; +} + +static void iort_node_get_rmr_info(struct acpi_iort_node *node, + struct acpi_iort_node *iommu, + struct device *dev, struct list_head *head) +{ + struct acpi_iort_node *smmu = NULL; + struct acpi_iort_rmr *rmr; + struct acpi_iort_id_mapping *map; + u32 *sids = NULL; + u32 num_sids = 0; + int i; + + if (!node->mapping_offset || !node->mapping_count) { + pr_err(FW_BUG "Invalid ID mapping, skipping RMR node %p\n", + node); + return; + } + + rmr = (struct acpi_iort_rmr *)node->node_data; + if (!rmr->rmr_offset || !rmr->rmr_count) + return; + + map = ACPI_ADD_PTR(struct acpi_iort_id_mapping, node, + node->mapping_offset); + + /* + * Go through the ID mappings and see if we have a match for SMMU + * and dev(if !NULL). If found, get the sids for the Node. + * Please note, id_count is equal to the number of IDs in the + * range minus one. + */ + for (i = 0; i < node->mapping_count; i++, map++) { + struct acpi_iort_node *parent; + + if (!map->id_count) + continue; + + parent = ACPI_ADD_PTR(struct acpi_iort_node, iort_table, + map->output_reference); + if (parent != iommu) + continue; + + /* If dev is valid, check RMR node corresponds to the dev SID */ + if (dev && !iort_rmr_has_dev(dev, map->output_base, + map->id_count)) + continue; + + /* Retrieve SIDs associated with the Node. */ + sids = iort_rmr_alloc_sids(sids, num_sids, map->output_base, + map->id_count + 1); + if (!sids) + return; + + num_sids += map->id_count + 1; + } + + if (!sids) + return; + + iort_get_rmrs(node, smmu, sids, num_sids, head); + kfree(sids); +} + +static void iort_find_rmrs(struct acpi_iort_node *iommu, struct device *dev, + struct list_head *head) +{ + struct acpi_table_iort *iort; + struct acpi_iort_node *iort_node, *iort_end; + int i; + + /* Only supports ARM DEN 0049E.d onwards */ + if (iort_table->revision < 5) + return; + + iort = (struct acpi_table_iort *)iort_table; + + iort_node = ACPI_ADD_PTR(struct acpi_iort_node, iort, + iort->node_offset); + iort_end = ACPI_ADD_PTR(struct acpi_iort_node, iort, + iort_table->length); + + for (i = 0; i < iort->node_count; i++) { + if (WARN_TAINT(iort_node >= iort_end, TAINT_FIRMWARE_WORKAROUND, + "IORT node pointer overflows, bad table!\n")) + return; + + if (iort_node->type == ACPI_IORT_NODE_RMR) + iort_node_get_rmr_info(iort_node, iommu, dev, head); + + iort_node = ACPI_ADD_PTR(struct acpi_iort_node, iort_node, + iort_node->length); + } +} + +/* + * Populate the RMR list associated with a given IOMMU and dev(if provided). + * If dev is NULL, the function populates all the RMRs associated with the + * given IOMMU. 
+ */ +static void iort_iommu_rmr_get_resv_regions(struct fwnode_handle *iommu_fwnode, + struct device *dev, + struct list_head *head) +{ + struct acpi_iort_node *iommu; + + iommu = iort_get_iort_node(iommu_fwnode); + if (!iommu) + return; + + iort_find_rmrs(iommu, dev, head); +} + static struct acpi_iort_node *iort_get_msi_resv_iommu(struct device *dev) { struct acpi_iort_node *iommu; @@ -868,7 +1156,10 @@ static void iort_iommu_msi_get_resv_regions(struct device *dev, */ void iort_iommu_get_resv_regions(struct device *dev, struct list_head *head) { + struct iommu_fwspec *fwspec = dev_iommu_fwspec_get(dev); + iort_iommu_msi_get_resv_regions(dev, head); + iort_iommu_rmr_get_resv_regions(fwspec->iommu_fwnode, dev, head); } static inline bool iort_iommu_driver_enabled(u8 type) diff --git a/include/linux/iommu.h b/include/linux/iommu.h index 68bcfb3a06d7..0936d7980ce2 100644 --- a/include/linux/iommu.h +++ b/include/linux/iommu.h @@ -145,6 +145,14 @@ struct iommu_resv_region { void (*free)(struct device *dev, struct iommu_resv_region *region); }; +struct iommu_iort_rmr_data { + struct iommu_resv_region rr; + + /* Stream IDs associated with IORT RMR entry */ + const u32 *sids; + u32 num_sids; +}; + /** * enum iommu_dev_features - Per device IOMMU features * @IOMMU_DEV_FEAT_SVA: Shared Virtual Addresses From patchwork Tue May 3 16:33:26 2022 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Shameerali Kolothum Thodi X-Patchwork-Id: 569300 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by smtp.lore.kernel.org (Postfix) with ESMTP id 8BA4DC433EF for ; Tue, 3 May 2022 16:35:03 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S239635AbiECQie (ORCPT ); Tue, 3 May 2022 12:38:34 -0400 Received: from lindbergh.monkeyblade.net ([23.128.96.19]:53262 "EHLO lindbergh.monkeyblade.net" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S235042AbiECQid (ORCPT ); Tue, 3 May 2022 12:38:33 -0400 Received: from frasgout.his.huawei.com (frasgout.his.huawei.com [185.176.79.56]) by lindbergh.monkeyblade.net (Postfix) with ESMTPS id A6D053CA78 for ; Tue, 3 May 2022 09:35:00 -0700 (PDT) Received: from fraeml745-chm.china.huawei.com (unknown [172.18.147.200]) by frasgout.his.huawei.com (SkyGuard) with ESMTP id 4Kt58d3Xn6z67sRl; Wed, 4 May 2022 00:32:17 +0800 (CST) Received: from lhreml710-chm.china.huawei.com (10.201.108.61) by fraeml745-chm.china.huawei.com (10.206.15.226) with Microsoft SMTP Server (version=TLS1_2, cipher=TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256) id 15.1.2375.24; Tue, 3 May 2022 18:34:58 +0200 Received: from A2006125610.china.huawei.com (10.202.227.178) by lhreml710-chm.china.huawei.com (10.201.108.61) with Microsoft SMTP Server (version=TLS1_2, cipher=TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256) id 15.1.2375.24; Tue, 3 May 2022 17:34:51 +0100 From: Shameer Kolothum To: , , CC: , , , , , , , , , , , , Subject: [PATCH v12 5/9] ACPI/IORT: Add a helper to retrieve RMR info directly Date: Tue, 3 May 2022 17:33:26 +0100 Message-ID: <20220503163330.509-6-shameerali.kolothum.thodi@huawei.com> X-Mailer: git-send-email 2.12.0.windows.1 In-Reply-To: <20220503163330.509-1-shameerali.kolothum.thodi@huawei.com> References: <20220503163330.509-1-shameerali.kolothum.thodi@huawei.com> MIME-Version: 1.0 X-Originating-IP: [10.202.227.178] X-ClientProxiedBy: dggems702-chm.china.huawei.com 
(10.3.19.179) To lhreml710-chm.china.huawei.com (10.201.108.61) X-CFilter-Loop: Reflected Precedence: bulk List-ID: X-Mailing-List: linux-acpi@vger.kernel.org This will provide a way for SMMU drivers to retrieve StreamIDs associated with IORT RMR nodes and use that to set bypass settings for those IDs. Tested-by: Steven Price Signed-off-by: Shameer Kolothum Reviewed-by: Hanjun Guo --- drivers/acpi/arm64/iort.c | 28 ++++++++++++++++++++++++++++ include/linux/acpi_iort.h | 8 ++++++++ 2 files changed, 36 insertions(+) diff --git a/drivers/acpi/arm64/iort.c b/drivers/acpi/arm64/iort.c index b6273af316c6..cd1349d3544e 100644 --- a/drivers/acpi/arm64/iort.c +++ b/drivers/acpi/arm64/iort.c @@ -1394,6 +1394,34 @@ int iort_dma_get_ranges(struct device *dev, u64 *size) return nc_dma_get_range(dev, size); } +/** + * iort_get_rmr_sids - Retrieve IORT RMR node reserved regions with + * associated StreamIDs information. + * @iommu_fwnode: fwnode associated with IOMMU + * @head: Resereved region list + */ +void iort_get_rmr_sids(struct fwnode_handle *iommu_fwnode, + struct list_head *head) +{ + iort_iommu_rmr_get_resv_regions(iommu_fwnode, NULL, head); +} +EXPORT_SYMBOL_GPL(iort_get_rmr_sids); + +/** + * iort_put_rmr_sids - Free memory allocated for RMR reserved regions. + * @iommu_fwnode: fwnode associated with IOMMU + * @head: Resereved region list + */ +void iort_put_rmr_sids(struct fwnode_handle *iommu_fwnode, + struct list_head *head) +{ + struct iommu_resv_region *entry, *next; + + list_for_each_entry_safe(entry, next, head, list) + entry->free(NULL, entry); +} +EXPORT_SYMBOL_GPL(iort_put_rmr_sids); + static void __init acpi_iort_register_irq(int hwirq, const char *name, int trigger, struct resource *res) diff --git a/include/linux/acpi_iort.h b/include/linux/acpi_iort.h index e5d2de9caf7f..b43be0987b19 100644 --- a/include/linux/acpi_iort.h +++ b/include/linux/acpi_iort.h @@ -33,6 +33,10 @@ struct irq_domain *iort_get_device_domain(struct device *dev, u32 id, enum irq_domain_bus_token bus_token); void acpi_configure_pmsi_domain(struct device *dev); int iort_pmsi_get_dev_id(struct device *dev, u32 *dev_id); +void iort_get_rmr_sids(struct fwnode_handle *iommu_fwnode, + struct list_head *head); +void iort_put_rmr_sids(struct fwnode_handle *iommu_fwnode, + struct list_head *head); /* IOMMU interface */ int iort_dma_get_ranges(struct device *dev, u64 *size); int iort_iommu_configure_id(struct device *dev, const u32 *id_in); @@ -46,6 +50,10 @@ static inline struct irq_domain *iort_get_device_domain( struct device *dev, u32 id, enum irq_domain_bus_token bus_token) { return NULL; } static inline void acpi_configure_pmsi_domain(struct device *dev) { } +static inline +void iort_get_rmr_sids(struct fwnode_handle *iommu_fwnode, struct list_head *head) { } +static inline +void iort_put_rmr_sids(struct fwnode_handle *iommu_fwnode, struct list_head *head) { } /* IOMMU interface */ static inline int iort_dma_get_ranges(struct device *dev, u64 *size) { return -ENODEV; } From patchwork Tue May 3 16:33:27 2022 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Shameerali Kolothum Thodi X-Patchwork-Id: 569299 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by smtp.lore.kernel.org (Postfix) with ESMTP id 346CAC433FE for ; Tue, 3 May 2022 16:35:36 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via 
listexpand id S239649AbiECQjH (ORCPT ); Tue, 3 May 2022 12:39:07 -0400 Received: from lindbergh.monkeyblade.net ([23.128.96.19]:53324 "EHLO lindbergh.monkeyblade.net" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S239665AbiECQim (ORCPT ); Tue, 3 May 2022 12:38:42 -0400 Received: from frasgout.his.huawei.com (frasgout.his.huawei.com [185.176.79.56]) by lindbergh.monkeyblade.net (Postfix) with ESMTPS id D46FE3CA78 for ; Tue, 3 May 2022 09:35:09 -0700 (PDT) Received: from fraeml744-chm.china.huawei.com (unknown [172.18.147.207]) by frasgout.his.huawei.com (SkyGuard) with ESMTP id 4Kt56s6kV0z67Prq; Wed, 4 May 2022 00:30:45 +0800 (CST) Received: from lhreml710-chm.china.huawei.com (10.201.108.61) by fraeml744-chm.china.huawei.com (10.206.15.225) with Microsoft SMTP Server (version=TLS1_2, cipher=TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256) id 15.1.2375.24; Tue, 3 May 2022 18:35:07 +0200 Received: from A2006125610.china.huawei.com (10.202.227.178) by lhreml710-chm.china.huawei.com (10.201.108.61) with Microsoft SMTP Server (version=TLS1_2, cipher=TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256) id 15.1.2375.24; Tue, 3 May 2022 17:35:00 +0100 From: Shameer Kolothum To: , , CC: , , , , , , , , , , , , Subject: [PATCH v12 6/9] iommu/arm-smmu-v3: Introduce strtab init helper Date: Tue, 3 May 2022 17:33:27 +0100 Message-ID: <20220503163330.509-7-shameerali.kolothum.thodi@huawei.com> X-Mailer: git-send-email 2.12.0.windows.1 In-Reply-To: <20220503163330.509-1-shameerali.kolothum.thodi@huawei.com> References: <20220503163330.509-1-shameerali.kolothum.thodi@huawei.com> MIME-Version: 1.0 X-Originating-IP: [10.202.227.178] X-ClientProxiedBy: dggems702-chm.china.huawei.com (10.3.19.179) To lhreml710-chm.china.huawei.com (10.201.108.61) X-CFilter-Loop: Reflected Precedence: bulk List-ID: X-Mailing-List: linux-acpi@vger.kernel.org Introduce a helper to check the sid range and to init the l2 strtab entries(bypass). This will be useful when we have to initialize the l2 strtab with bypass for RMR SIDs. 
Signed-off-by: Shameer Kolothum Acked-by: Will Deacon --- drivers/iommu/arm/arm-smmu-v3/arm-smmu-v3.c | 28 +++++++++++---------- 1 file changed, 15 insertions(+), 13 deletions(-) diff --git a/drivers/iommu/arm/arm-smmu-v3/arm-smmu-v3.c b/drivers/iommu/arm/arm-smmu-v3/arm-smmu-v3.c index 627a3ed5ee8f..df326d8f02c6 100644 --- a/drivers/iommu/arm/arm-smmu-v3/arm-smmu-v3.c +++ b/drivers/iommu/arm/arm-smmu-v3/arm-smmu-v3.c @@ -2537,6 +2537,19 @@ static bool arm_smmu_sid_in_range(struct arm_smmu_device *smmu, u32 sid) return sid < limit; } +static int arm_smmu_init_sid_strtab(struct arm_smmu_device *smmu, u32 sid) +{ + /* Check the SIDs are in range of the SMMU and our stream table */ + if (!arm_smmu_sid_in_range(smmu, sid)) + return -ERANGE; + + /* Ensure l2 strtab is initialised */ + if (smmu->features & ARM_SMMU_FEAT_2_LVL_STRTAB) + return arm_smmu_init_l2_strtab(smmu, sid); + + return 0; +} + static int arm_smmu_insert_master(struct arm_smmu_device *smmu, struct arm_smmu_master *master) { @@ -2560,20 +2573,9 @@ static int arm_smmu_insert_master(struct arm_smmu_device *smmu, new_stream->id = sid; new_stream->master = master; - /* - * Check the SIDs are in range of the SMMU and our stream table - */ - if (!arm_smmu_sid_in_range(smmu, sid)) { - ret = -ERANGE; + ret = arm_smmu_init_sid_strtab(smmu, sid); + if (ret) break; - } - - /* Ensure l2 strtab is initialised */ - if (smmu->features & ARM_SMMU_FEAT_2_LVL_STRTAB) { - ret = arm_smmu_init_l2_strtab(smmu, sid); - if (ret) - break; - } /* Insert into SID tree */ new_node = &(smmu->streams.rb_node); From patchwork Tue May 3 16:33:28 2022 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Shameerali Kolothum Thodi X-Patchwork-Id: 569050 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by smtp.lore.kernel.org (Postfix) with ESMTP id 4637EC433F5 for ; Tue, 3 May 2022 16:35:35 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S239542AbiECQjC (ORCPT ); Tue, 3 May 2022 12:39:02 -0400 Received: from lindbergh.monkeyblade.net ([23.128.96.19]:53908 "EHLO lindbergh.monkeyblade.net" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S239719AbiECQiy (ORCPT ); Tue, 3 May 2022 12:38:54 -0400 Received: from frasgout.his.huawei.com (frasgout.his.huawei.com [185.176.79.56]) by lindbergh.monkeyblade.net (Postfix) with ESMTPS id CCF8510D1 for ; Tue, 3 May 2022 09:35:18 -0700 (PDT) Received: from fraeml743-chm.china.huawei.com (unknown [172.18.147.201]) by frasgout.his.huawei.com (SkyGuard) with ESMTP id 4Kt5725jbXz67Zdq; Wed, 4 May 2022 00:30:54 +0800 (CST) Received: from lhreml710-chm.china.huawei.com (10.201.108.61) by fraeml743-chm.china.huawei.com (10.206.15.224) with Microsoft SMTP Server (version=TLS1_2, cipher=TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256) id 15.1.2375.24; Tue, 3 May 2022 18:35:16 +0200 Received: from A2006125610.china.huawei.com (10.202.227.178) by lhreml710-chm.china.huawei.com (10.201.108.61) with Microsoft SMTP Server (version=TLS1_2, cipher=TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256) id 15.1.2375.24; Tue, 3 May 2022 17:35:09 +0100 From: Shameer Kolothum To: , , CC: , , , , , , , , , , , , Subject: [PATCH v12 7/9] iommu/arm-smmu-v3: Refactor arm_smmu_init_bypass_stes() to force bypass Date: Tue, 3 May 2022 17:33:28 +0100 Message-ID: <20220503163330.509-8-shameerali.kolothum.thodi@huawei.com> X-Mailer: 
git-send-email 2.12.0.windows.1 In-Reply-To: <20220503163330.509-1-shameerali.kolothum.thodi@huawei.com> References: <20220503163330.509-1-shameerali.kolothum.thodi@huawei.com> MIME-Version: 1.0 X-Originating-IP: [10.202.227.178] X-ClientProxiedBy: dggems702-chm.china.huawei.com (10.3.19.179) To lhreml710-chm.china.huawei.com (10.201.108.61) X-CFilter-Loop: Reflected Precedence: bulk List-ID: X-Mailing-List: linux-acpi@vger.kernel.org By default, disable_bypass flag is set and any dev without an iommu domain installs STE with CFG_ABORT during arm_smmu_init_bypass_stes(). Introduce a "force" flag and move the STE update logic to arm_smmu_init_bypass_stes() so that we can force it to install CFG_BYPASS STE for specific SIDs. This will be useful in a follow-up patch to install bypass for IORT RMR SIDs. Signed-off-by: Shameer Kolothum --- drivers/iommu/arm/arm-smmu-v3/arm-smmu-v3.c | 17 +++++++++++++---- 1 file changed, 13 insertions(+), 4 deletions(-) diff --git a/drivers/iommu/arm/arm-smmu-v3/arm-smmu-v3.c b/drivers/iommu/arm/arm-smmu-v3/arm-smmu-v3.c index df326d8f02c6..a939d9e0f747 100644 --- a/drivers/iommu/arm/arm-smmu-v3/arm-smmu-v3.c +++ b/drivers/iommu/arm/arm-smmu-v3/arm-smmu-v3.c @@ -1380,12 +1380,21 @@ static void arm_smmu_write_strtab_ent(struct arm_smmu_master *master, u32 sid, arm_smmu_cmdq_issue_cmd(smmu, &prefetch_cmd); } -static void arm_smmu_init_bypass_stes(__le64 *strtab, unsigned int nent) +static void arm_smmu_init_bypass_stes(__le64 *strtab, unsigned int nent, bool force) { unsigned int i; + u64 val = STRTAB_STE_0_V; + + if (disable_bypass && !force) + val |= FIELD_PREP(STRTAB_STE_0_CFG, STRTAB_STE_0_CFG_ABORT); + else + val |= FIELD_PREP(STRTAB_STE_0_CFG, STRTAB_STE_0_CFG_BYPASS); for (i = 0; i < nent; ++i) { - arm_smmu_write_strtab_ent(NULL, -1, strtab); + strtab[0] = cpu_to_le64(val); + strtab[1] = cpu_to_le64(FIELD_PREP(STRTAB_STE_1_SHCFG, + STRTAB_STE_1_SHCFG_INCOMING)); + strtab[2] = 0; strtab += STRTAB_STE_DWORDS; } } @@ -1413,7 +1422,7 @@ static int arm_smmu_init_l2_strtab(struct arm_smmu_device *smmu, u32 sid) return -ENOMEM; } - arm_smmu_init_bypass_stes(desc->l2ptr, 1 << STRTAB_SPLIT); + arm_smmu_init_bypass_stes(desc->l2ptr, 1 << STRTAB_SPLIT, false); arm_smmu_write_strtab_l1_desc(strtab, desc); return 0; } @@ -3051,7 +3060,7 @@ static int arm_smmu_init_strtab_linear(struct arm_smmu_device *smmu) reg |= FIELD_PREP(STRTAB_BASE_CFG_LOG2SIZE, smmu->sid_bits); cfg->strtab_base_cfg = reg; - arm_smmu_init_bypass_stes(strtab, cfg->num_l1_ents); + arm_smmu_init_bypass_stes(strtab, cfg->num_l1_ents, false); return 0; } From patchwork Tue May 3 16:33:29 2022 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Shameerali Kolothum Thodi X-Patchwork-Id: 569049 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by smtp.lore.kernel.org (Postfix) with ESMTP id 89129C433EF for ; Tue, 3 May 2022 16:35:36 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S239655AbiECQjH (ORCPT ); Tue, 3 May 2022 12:39:07 -0400 Received: from lindbergh.monkeyblade.net ([23.128.96.19]:54324 "EHLO lindbergh.monkeyblade.net" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S239754AbiECQjB (ORCPT ); Tue, 3 May 2022 12:39:01 -0400 Received: from frasgout.his.huawei.com (frasgout.his.huawei.com [185.176.79.56]) by lindbergh.monkeyblade.net (Postfix) with ESMTPS id 
299D86256 for ; Tue, 3 May 2022 09:35:28 -0700 (PDT) Received: from fraeml742-chm.china.huawei.com (unknown [172.18.147.206]) by frasgout.his.huawei.com (SkyGuard) with ESMTP id 4Kt58k3S6jz67S5S; Wed, 4 May 2022 00:32:22 +0800 (CST) Received: from lhreml710-chm.china.huawei.com (10.201.108.61) by fraeml742-chm.china.huawei.com (10.206.15.223) with Microsoft SMTP Server (version=TLS1_2, cipher=TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256) id 15.1.2375.24; Tue, 3 May 2022 18:35:26 +0200 Received: from A2006125610.china.huawei.com (10.202.227.178) by lhreml710-chm.china.huawei.com (10.201.108.61) with Microsoft SMTP Server (version=TLS1_2, cipher=TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256) id 15.1.2375.24; Tue, 3 May 2022 17:35:18 +0100 From: Shameer Kolothum To: , , CC: , , , , , , , , , , , , Subject: [PATCH v12 8/9] iommu/arm-smmu-v3: Get associated RMR info and install bypass STE Date: Tue, 3 May 2022 17:33:29 +0100 Message-ID: <20220503163330.509-9-shameerali.kolothum.thodi@huawei.com> X-Mailer: git-send-email 2.12.0.windows.1 In-Reply-To: <20220503163330.509-1-shameerali.kolothum.thodi@huawei.com> References: <20220503163330.509-1-shameerali.kolothum.thodi@huawei.com> MIME-Version: 1.0 X-Originating-IP: [10.202.227.178] X-ClientProxiedBy: dggems702-chm.china.huawei.com (10.3.19.179) To lhreml710-chm.china.huawei.com (10.201.108.61) X-CFilter-Loop: Reflected Precedence: bulk List-ID: X-Mailing-List: linux-acpi@vger.kernel.org Check if there is any RMR info associated with the devices behind the SMMUv3 and if any, install bypass STEs for them. This is to keep any ongoing traffic associated with these devices alive when we enable/reset SMMUv3 during probe(). Signed-off-by: Shameer Kolothum --- drivers/iommu/arm/arm-smmu-v3/arm-smmu-v3.c | 33 +++++++++++++++++++++ 1 file changed, 33 insertions(+) diff --git a/drivers/iommu/arm/arm-smmu-v3/arm-smmu-v3.c b/drivers/iommu/arm/arm-smmu-v3/arm-smmu-v3.c index a939d9e0f747..8a5dfd078e95 100644 --- a/drivers/iommu/arm/arm-smmu-v3/arm-smmu-v3.c +++ b/drivers/iommu/arm/arm-smmu-v3/arm-smmu-v3.c @@ -3754,6 +3754,36 @@ static void __iomem *arm_smmu_ioremap(struct device *dev, resource_size_t start, return devm_ioremap_resource(dev, &res); } +static void arm_smmu_rmr_install_bypass_ste(struct arm_smmu_device *smmu) +{ + struct list_head rmr_list; + struct iommu_resv_region *e; + + INIT_LIST_HEAD(&rmr_list); + iort_get_rmr_sids(dev_fwnode(smmu->dev), &rmr_list); + + list_for_each_entry(e, &rmr_list, list) { + __le64 *step; + struct iommu_iort_rmr_data *rmr; + int ret, i; + + rmr = container_of(e, struct iommu_iort_rmr_data, rr); + for (i = 0; i < rmr->num_sids; i++) { + ret = arm_smmu_init_sid_strtab(smmu, rmr->sids[i]); + if (ret) { + dev_err(smmu->dev, "RMR SID(0x%x) bypass failed\n", + rmr->sids[i]); + continue; + } + + step = arm_smmu_get_step_for_sid(smmu, rmr->sids[i]); + arm_smmu_init_bypass_stes(step, 1, true); + } + } + + iort_put_rmr_sids(dev_fwnode(smmu->dev), &rmr_list); +} + static int arm_smmu_device_probe(struct platform_device *pdev) { int irq, ret; @@ -3835,6 +3865,9 @@ static int arm_smmu_device_probe(struct platform_device *pdev) /* Record our private device structure */ platform_set_drvdata(pdev, smmu); + /* Check for RMRs and install bypass STEs if any */ + arm_smmu_rmr_install_bypass_ste(smmu); + /* Reset the device */ ret = arm_smmu_device_reset(smmu, bypass); if (ret) From patchwork Tue May 3 16:33:30 2022 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Shameerali Kolothum 
Thodi X-Patchwork-Id: 569298 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by smtp.lore.kernel.org (Postfix) with ESMTP id 35CC0C433F5 for ; Tue, 3 May 2022 16:35:59 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S237531AbiECQj3 (ORCPT ); Tue, 3 May 2022 12:39:29 -0400 Received: from lindbergh.monkeyblade.net ([23.128.96.19]:56006 "EHLO lindbergh.monkeyblade.net" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S239657AbiECQjV (ORCPT ); Tue, 3 May 2022 12:39:21 -0400 Received: from frasgout.his.huawei.com (frasgout.his.huawei.com [185.176.79.56]) by lindbergh.monkeyblade.net (Postfix) with ESMTPS id 75ED61DA49 for ; Tue, 3 May 2022 09:35:40 -0700 (PDT) Received: from fraeml741-chm.china.huawei.com (unknown [172.18.147.200]) by frasgout.his.huawei.com (SkyGuard) with ESMTP id 4Kt58y5jtkz67kFH; Wed, 4 May 2022 00:32:34 +0800 (CST) Received: from lhreml710-chm.china.huawei.com (10.201.108.61) by fraeml741-chm.china.huawei.com (10.206.15.222) with Microsoft SMTP Server (version=TLS1_2, cipher=TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256) id 15.1.2375.24; Tue, 3 May 2022 18:35:38 +0200 Received: from A2006125610.china.huawei.com (10.202.227.178) by lhreml710-chm.china.huawei.com (10.201.108.61) with Microsoft SMTP Server (version=TLS1_2, cipher=TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256) id 15.1.2375.24; Tue, 3 May 2022 17:35:31 +0100 From: Shameer Kolothum To: , , CC: , , , , , , , , , , , , Subject: [PATCH v12 9/9] iommu/arm-smmu: Get associated RMR info and install bypass SMR Date: Tue, 3 May 2022 17:33:30 +0100 Message-ID: <20220503163330.509-10-shameerali.kolothum.thodi@huawei.com> X-Mailer: git-send-email 2.12.0.windows.1 In-Reply-To: <20220503163330.509-1-shameerali.kolothum.thodi@huawei.com> References: <20220503163330.509-1-shameerali.kolothum.thodi@huawei.com> MIME-Version: 1.0 X-Originating-IP: [10.202.227.178] X-ClientProxiedBy: dggems702-chm.china.huawei.com (10.3.19.179) To lhreml710-chm.china.huawei.com (10.201.108.61) X-CFilter-Loop: Reflected Precedence: bulk List-ID: X-Mailing-List: linux-acpi@vger.kernel.org From: Jon Nettleton Check if there is any RMR info associated with the devices behind the SMMU and if any, install bypass SMRs for them. This is to keep any ongoing traffic associated with these devices alive when we enable/reset SMMU during probe(). Signed-off-by: Jon Nettleton Signed-off-by: Steven Price Tested-by: Steven Price Signed-off-by: Shameer Kolothum --- drivers/iommu/arm/arm-smmu/arm-smmu.c | 52 +++++++++++++++++++++++++++ 1 file changed, 52 insertions(+) diff --git a/drivers/iommu/arm/arm-smmu/arm-smmu.c b/drivers/iommu/arm/arm-smmu/arm-smmu.c index 568cce590ccc..e02cc2d4fb4e 100644 --- a/drivers/iommu/arm/arm-smmu/arm-smmu.c +++ b/drivers/iommu/arm/arm-smmu/arm-smmu.c @@ -2068,6 +2068,54 @@ err_reset_platform_ops: __maybe_unused; return err; } +static void arm_smmu_rmr_install_bypass_smr(struct arm_smmu_device *smmu) +{ + struct list_head rmr_list; + struct iommu_resv_region *e; + int idx, cnt = 0; + u32 reg; + + INIT_LIST_HEAD(&rmr_list); + iort_get_rmr_sids(dev_fwnode(smmu->dev), &rmr_list); + + /* + * Rather than trying to look at existing mappings that + * are setup by the firmware and then invalidate the ones + * that do no have matching RMR entries, just disable the + * SMMU until it gets enabled again in the reset routine. 
+ */ + reg = arm_smmu_gr0_read(smmu, ARM_SMMU_GR0_sCR0); + reg |= ARM_SMMU_sCR0_CLIENTPD; + arm_smmu_gr0_write(smmu, ARM_SMMU_GR0_sCR0, reg); + + list_for_each_entry(e, &rmr_list, list) { + struct iommu_iort_rmr_data *rmr; + int i; + + rmr = container_of(e, struct iommu_iort_rmr_data, rr); + for (i = 0; i < rmr->num_sids; i++) { + idx = arm_smmu_find_sme(smmu, rmr->sids[i], ~0); + if (idx < 0) + continue; + + if (smmu->s2crs[idx].count == 0) { + smmu->smrs[idx].id = rmr->sids[i]; + smmu->smrs[idx].mask = 0; + smmu->smrs[idx].valid = true; + } + smmu->s2crs[idx].count++; + smmu->s2crs[idx].type = S2CR_TYPE_BYPASS; + smmu->s2crs[idx].privcfg = S2CR_PRIVCFG_DEFAULT; + + cnt++; + } + } + + dev_notice(smmu->dev, "\tpreserved %d boot mapping%s\n", cnt, + cnt == 1 ? "" : "s"); + iort_put_rmr_sids(dev_fwnode(smmu->dev), &rmr_list); +} + static int arm_smmu_device_probe(struct platform_device *pdev) { struct resource *res; @@ -2189,6 +2237,10 @@ static int arm_smmu_device_probe(struct platform_device *pdev) } platform_set_drvdata(pdev, smmu); + + /* Check for RMRs and install bypass SMRs if any */ + arm_smmu_rmr_install_bypass_smr(smmu); + arm_smmu_device_reset(smmu); arm_smmu_test_smr_masks(smmu);
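To see how the per-region free() callback introduced in patch 1 is meant to be consumed, here is a minimal driver-side sketch. It is not taken from the series: the my_resv_* names and the wrapper struct are hypothetical, and it only mirrors the pattern the IORT RMR code in patch 4 follows — wrap struct iommu_resv_region in a larger allocation, keep extra per-region data in the wrapper, and let generic_iommu_put_resv_regions() release everything through the callback instead of a plain kfree().

/* Hypothetical sketch, not part of the series: a reserved region that
 * owns an extra allocation and frees it via iommu_resv_region::free.
 */
#include <linux/iommu.h>
#include <linux/slab.h>

struct my_resv_data {			/* hypothetical wrapper */
	struct iommu_resv_region rr;
	u32 *ids;			/* extra allocation owned by this region */
	u32 num_ids;
};

static void my_resv_free(struct device *dev,
			 struct iommu_resv_region *region)
{
	struct my_resv_data *data = container_of(region, struct my_resv_data, rr);

	/* Called by generic_iommu_put_resv_regions() instead of kfree() */
	kfree(data->ids);
	kfree(data);
}

static struct my_resv_data *my_resv_alloc(phys_addr_t start, size_t length,
					  u32 *ids, u32 num_ids)
{
	struct my_resv_data *data = kzalloc(sizeof(*data), GFP_KERNEL);

	if (!data)
		return NULL;

	/* Keep a private copy of the IDs so the region owns its memory */
	data->ids = kmemdup(ids, num_ids * sizeof(*ids), GFP_KERNEL);
	if (!data->ids) {
		kfree(data);
		return NULL;
	}
	data->num_ids = num_ids;

	INIT_LIST_HEAD(&data->rr.list);
	data->rr.start = start;
	data->rr.length = length;
	data->rr.prot = IOMMU_READ | IOMMU_WRITE;
	data->rr.type = IOMMU_RESV_DIRECT;
	data->rr.free = my_resv_free;	/* per-region cleanup hook from patch 1 */

	return data;
}

The design choice in patch 1 keeps kfree() as the default when no callback is set, so regions allocated with iommu_alloc_resv_region() need no change; only regions that own additional memory, such as the RMR StreamID arrays attached in patch 4, need to supply a callback.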