From patchwork Tue Jan 26 23:43:35 2021
X-Patchwork-Submitter: Saeed Mahameed
X-Patchwork-Id: 372254
From: Saeed Mahameed
To: Jakub Kicinski, "David S. Miller"
CC: netdev@vger.kernel.org, Parav Pandit, Eli Cohen, Saeed Mahameed
Subject: [net 02/12] net/mlx5e: E-switch, Fix rate calculation for overflow
Date: Tue, 26 Jan 2021 15:43:35 -0800
Message-ID: <20210126234345.202096-3-saeedm@nvidia.com>
In-Reply-To: <20210126234345.202096-1-saeedm@nvidia.com>
References: <20210126234345.202096-1-saeedm@nvidia.com>

From: Parav Pandit

rate_bytes_ps is a 64-bit field. It is passed as a 32-bit field to
apply_police_params(). Due to this, when the police rate is higher than
4Gbps, the 32-bit calculation ignores the carry. This results in an
incorrect rate configuration in the device.

Fix it by performing the calculation in 64 bits.
Fixes: fcb64c0f5640 ("net/mlx5: E-Switch, add ingress rate support")
Signed-off-by: Parav Pandit
Reviewed-by: Eli Cohen
Signed-off-by: Saeed Mahameed
---
 drivers/net/ethernet/mellanox/mlx5/core/en_tc.c | 12 +++++++++---
 1 file changed, 9 insertions(+), 3 deletions(-)

diff --git a/drivers/net/ethernet/mellanox/mlx5/core/en_tc.c b/drivers/net/ethernet/mellanox/mlx5/core/en_tc.c
index 4cdf834fa74a..661235027b47 100644
--- a/drivers/net/ethernet/mellanox/mlx5/core/en_tc.c
+++ b/drivers/net/ethernet/mellanox/mlx5/core/en_tc.c
@@ -67,6 +67,7 @@
 #include "lib/geneve.h"
 #include "lib/fs_chains.h"
 #include "diag/en_tc_tracepoint.h"
+#include <asm/div64.h>
 
 #define nic_chains(priv) ((priv)->fs.tc.chains)
 #define MLX5_MH_ACT_SZ MLX5_UN_SZ_BYTES(set_add_copy_action_in_auto)
@@ -5007,13 +5008,13 @@ int mlx5e_stats_flower(struct net_device *dev, struct mlx5e_priv *priv,
 	return err;
 }
 
-static int apply_police_params(struct mlx5e_priv *priv, u32 rate,
+static int apply_police_params(struct mlx5e_priv *priv, u64 rate,
 			       struct netlink_ext_ack *extack)
 {
 	struct mlx5e_rep_priv *rpriv = priv->ppriv;
 	struct mlx5_eswitch *esw;
+	u32 rate_mbps = 0;
 	u16 vport_num;
-	u32 rate_mbps;
 	int err;
 
 	vport_num = rpriv->rep->vport;
@@ -5030,7 +5031,12 @@ static int apply_police_params(struct mlx5e_priv *priv, u32 rate,
 	 * Moreover, if rate is non zero we choose to configure to a minimum of
 	 * 1 mbit/sec.
 	 */
-	rate_mbps = rate ? max_t(u32, (rate * 8 + 500000) / 1000000, 1) : 0;
+	if (rate) {
+		rate = (rate * BITS_PER_BYTE) + 500000;
+		do_div(rate, 1000000);
+		rate_mbps = max_t(u32, rate, 1);
+	}
+
 	err = mlx5_esw_modify_vport_rate(esw, vport_num, rate_mbps);
 	if (err)
 		NL_SET_ERR_MSG_MOD(extack, "failed applying action to hardware");
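[Note that do_div() divides the u64 in place and returns the remainder,
so the quotient is read back from "rate" after the call. To see the
overflow concretely, here is a minimal standalone userspace sketch --
hypothetical values, not driver code -- with a 5 GB/s policer rate:]

#include <stdint.h>
#include <stdio.h>

int main(void)
{
        uint64_t rate = 5000000000ULL;  /* hypothetical 5 GB/s policer rate */

        /* Old path: the rate was narrowed to 32 bits, dropping the carry. */
        uint32_t bad = ((uint32_t)rate * 8 + 500000) / 1000000;

        /* Fixed path: keep the arithmetic in 64 bits, then round to mbps. */
        uint32_t good = (uint32_t)((rate * 8 + 500000) / 1000000);

        printf("32-bit math: %u mbps, 64-bit math: %u mbps\n", bad, good);
        /* prints: 32-bit math: 1345 mbps, 64-bit math: 40000 mbps */
        return 0;
}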
Miller" CC: , Pan Bian , Leon Romanovsky , Saeed Mahameed Subject: [net 03/12] net/mlx5e: free page before return Date: Tue, 26 Jan 2021 15:43:36 -0800 Message-ID: <20210126234345.202096-4-saeedm@nvidia.com> X-Mailer: git-send-email 2.29.2 In-Reply-To: <20210126234345.202096-1-saeedm@nvidia.com> References: <20210126234345.202096-1-saeedm@nvidia.com> MIME-Version: 1.0 X-Originating-IP: [172.20.145.6] X-ClientProxiedBy: HQMAIL107.nvidia.com (172.20.187.13) To HQMAIL107.nvidia.com (172.20.187.13) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=nvidia.com; s=n1; t=1611704704; bh=52Wt/haiOVdVZVlHDBBITKgpkaBKfvsltefgrNvEml4=; h=From:To:CC:Subject:Date:Message-ID:X-Mailer:In-Reply-To: References:MIME-Version:Content-Transfer-Encoding:Content-Type: X-Originating-IP:X-ClientProxiedBy; b=Hbg2wiFLTe59xDmpJ8YIa5tJfFUwPK8HjtddCEYfM6kylDHBlX/jTOhIuJb2CEmOK yCLJ8hY3LkMc62XAKU1IO0gRX0xDZAnP+D6WFVGFhpIKAmm4ZBZzM4zVelFKTUMN5q +e4VeQ5HrSl0GDEIqYl5uMJsx0DlY+MgFiNu2+Tdszn5xwnhmmCUN6cGC90ZvrIAsh CTB8LGPDwURB5cab3OluC8+6zwnakeTqvcd6piAeha5EEf5wzHoqMfOoPC3Pwg1lnb 2GfLQFRIn5xhmA2vu1GX5+O626g3yIR3+xH4lZgItELyCcGKu7mUbsNCK0nmrQRcie cfjUQvLcYES0w== Precedence: bulk List-ID: X-Mailing-List: netdev@vger.kernel.org From: Pan Bian Instead of directly return, goto the error handling label to free allocated page. Fixes: 5f29458b77d5 ("net/mlx5e: Support dump callback in TX reporter") Signed-off-by: Pan Bian Reviewed-by: Leon Romanovsky Signed-off-by: Saeed Mahameed --- drivers/net/ethernet/mellanox/mlx5/core/en/health.c | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/drivers/net/ethernet/mellanox/mlx5/core/en/health.c b/drivers/net/ethernet/mellanox/mlx5/core/en/health.c index 718f8c0a4f6b..84e501e057b4 100644 --- a/drivers/net/ethernet/mellanox/mlx5/core/en/health.c +++ b/drivers/net/ethernet/mellanox/mlx5/core/en/health.c @@ -273,7 +273,7 @@ int mlx5e_health_rsc_fmsg_dump(struct mlx5e_priv *priv, struct mlx5_rsc_key *key err = devlink_fmsg_binary_pair_nest_start(fmsg, "data"); if (err) - return err; + goto free_page; cmd = mlx5_rsc_dump_cmd_create(mdev, key); if (IS_ERR(cmd)) { From patchwork Tue Jan 26 23:43:39 2021 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Saeed Mahameed X-Patchwork-Id: 372257 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org X-Spam-Level: X-Spam-Status: No, score=-19.0 required=3.0 tests=BAYES_00,DKIMWL_WL_HIGH, DKIM_SIGNED,DKIM_VALID,DKIM_VALID_AU,HEADER_FROM_DIFFERENT_DOMAINS, INCLUDES_CR_TRAILER, INCLUDES_PATCH, MAILING_LIST_MULTI, SPF_HELO_NONE, SPF_PASS, URIBL_BLOCKED, USER_AGENT_GIT autolearn=ham autolearn_force=no version=3.4.0 Received: from mail.kernel.org (mail.kernel.org [198.145.29.99]) by smtp.lore.kernel.org (Postfix) with ESMTP id C364EC433E6 for ; Wed, 27 Jan 2021 04:32:48 +0000 (UTC) Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by mail.kernel.org (Postfix) with ESMTP id 7DC5620715 for ; Wed, 27 Jan 2021 04:32:48 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S238775AbhA0E26 (ORCPT ); Tue, 26 Jan 2021 23:28:58 -0500 Received: from hqnvemgate24.nvidia.com ([216.228.121.143]:11313 "EHLO hqnvemgate24.nvidia.com" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S2389798AbhA0AJW (ORCPT ); Tue, 26 Jan 2021 19:09:22 -0500 Received: from hqmail.nvidia.com (Not Verified[216.228.121.13]) by hqnvemgate24.nvidia.com (using TLS: TLSv1.2, AES256-SHA) id ; Tue, 26 Jan 
From patchwork Tue Jan 26 23:43:39 2021
X-Patchwork-Submitter: Saeed Mahameed
X-Patchwork-Id: 372257
From: Saeed Mahameed
To: Jakub Kicinski, "David S. Miller"
CC: netdev@vger.kernel.org, Daniel Jurgens, Saeed Mahameed
Subject: [net 06/12] net/mlx5: Maintain separate page trees for ECPF and PF functions
Date: Tue, 26 Jan 2021 15:43:39 -0800
Message-ID: <20210126234345.202096-7-saeedm@nvidia.com>
In-Reply-To: <20210126234345.202096-1-saeedm@nvidia.com>
References: <20210126234345.202096-1-saeedm@nvidia.com>

From: Daniel Jurgens

Pages for the host PF and the ECPF were stored in the same tree, so the
ECPF pages were freed along with the host PF's when the host driver
unloaded. Combine the function ID and the ECPF flag into a single index
into the xarray containing the trees, so the host PF and the ECPF each
get their own tree.
Fixes: c6168161f693 ("net/mlx5: Add support for release all pages event")
Signed-off-by: Daniel Jurgens
Signed-off-by: Saeed Mahameed
---
 .../ethernet/mellanox/mlx5/core/pagealloc.c | 58 +++++++++++--------
 1 file changed, 34 insertions(+), 24 deletions(-)

diff --git a/drivers/net/ethernet/mellanox/mlx5/core/pagealloc.c b/drivers/net/ethernet/mellanox/mlx5/core/pagealloc.c
index eb956ce904bc..eaa8958e24d7 100644
--- a/drivers/net/ethernet/mellanox/mlx5/core/pagealloc.c
+++ b/drivers/net/ethernet/mellanox/mlx5/core/pagealloc.c
@@ -58,7 +58,7 @@ struct fw_page {
 	struct rb_node rb_node;
 	u64 addr;
 	struct page *page;
-	u16 func_id;
+	u32 function;
 	unsigned long bitmask;
 	struct list_head list;
 	unsigned free_count;
@@ -74,12 +74,17 @@ enum {
 	MLX5_NUM_4K_IN_PAGE = PAGE_SIZE / MLX5_ADAPTER_PAGE_SIZE,
 };
 
-static struct rb_root *page_root_per_func_id(struct mlx5_core_dev *dev, u16 func_id)
+static u32 get_function(u16 func_id, bool ec_function)
+{
+	return (u32)func_id | (ec_function << 16);
+}
+
+static struct rb_root *page_root_per_function(struct mlx5_core_dev *dev, u32 function)
 {
 	struct rb_root *root;
 	int err;
 
-	root = xa_load(&dev->priv.page_root_xa, func_id);
+	root = xa_load(&dev->priv.page_root_xa, function);
 	if (root)
 		return root;
 
@@ -87,7 +92,7 @@ static struct rb_root *page_root_per_func_id(struct mlx5_core_dev *dev, u16 func_id)
 	if (!root)
 		return ERR_PTR(-ENOMEM);
 
-	err = xa_insert(&dev->priv.page_root_xa, func_id, root, GFP_KERNEL);
+	err = xa_insert(&dev->priv.page_root_xa, function, root, GFP_KERNEL);
 	if (err) {
 		kfree(root);
 		return ERR_PTR(err);
@@ -98,7 +103,7 @@ static struct rb_root *page_root_per_func_id(struct mlx5_core_dev *dev, u16 func_id)
 	return root;
 }
 
-static int insert_page(struct mlx5_core_dev *dev, u64 addr, struct page *page, u16 func_id)
+static int insert_page(struct mlx5_core_dev *dev, u64 addr, struct page *page, u32 function)
 {
 	struct rb_node *parent = NULL;
 	struct rb_root *root;
@@ -107,7 +112,7 @@ static int insert_page(struct mlx5_core_dev *dev, u64 addr, struct page *page, u16 func_id)
 	struct fw_page *tfp;
 	int i;
 
-	root = page_root_per_func_id(dev, func_id);
+	root = page_root_per_function(dev, function);
 	if (IS_ERR(root))
 		return PTR_ERR(root);
 
@@ -130,7 +135,7 @@ static int insert_page(struct mlx5_core_dev *dev, u64 addr, struct page *page, u16 func_id)
 
 	nfp->addr = addr;
 	nfp->page = page;
-	nfp->func_id = func_id;
+	nfp->function = function;
 	nfp->free_count = MLX5_NUM_4K_IN_PAGE;
 	for (i = 0; i < MLX5_NUM_4K_IN_PAGE; i++)
 		set_bit(i, &nfp->bitmask);
@@ -143,14 +148,14 @@ static int insert_page(struct mlx5_core_dev *dev, u64 addr, struct page *page, u16 func_id)
 }
 
 static struct fw_page *find_fw_page(struct mlx5_core_dev *dev, u64 addr,
-				    u32 func_id)
+				    u32 function)
 {
 	struct fw_page *result = NULL;
 	struct rb_root *root;
 	struct rb_node *tmp;
 	struct fw_page *tfp;
 
-	root = xa_load(&dev->priv.page_root_xa, func_id);
+	root = xa_load(&dev->priv.page_root_xa, function);
 	if (WARN_ON_ONCE(!root))
 		return NULL;
 
@@ -194,14 +199,14 @@ static int mlx5_cmd_query_pages(struct mlx5_core_dev *dev, u16 *func_id,
 	return err;
 }
 
-static int alloc_4k(struct mlx5_core_dev *dev, u64 *addr, u16 func_id)
+static int alloc_4k(struct mlx5_core_dev *dev, u64 *addr, u32 function)
 {
 	struct fw_page *fp = NULL;
 	struct fw_page *iter;
 	unsigned n;
 
 	list_for_each_entry(iter, &dev->priv.free_list, list) {
-		if (iter->func_id != func_id)
+		if (iter->function != function)
 			continue;
 		fp = iter;
 	}
@@ -231,7 +236,7 @@ static void free_fwp(struct mlx5_core_dev *dev, struct fw_page *fwp,
 {
 	struct rb_root *root;
 
-	root = xa_load(&dev->priv.page_root_xa, fwp->func_id);
+	root = xa_load(&dev->priv.page_root_xa, fwp->function);
 	if (WARN_ON_ONCE(!root))
 		return;
 
@@ -244,12 +249,12 @@ static void free_fwp(struct mlx5_core_dev *dev, struct fw_page *fwp,
 	kfree(fwp);
 }
 
-static void free_4k(struct mlx5_core_dev *dev, u64 addr, u32 func_id)
+static void free_4k(struct mlx5_core_dev *dev, u64 addr, u32 function)
 {
 	struct fw_page *fwp;
 	int n;
 
-	fwp = find_fw_page(dev, addr & MLX5_U64_4K_PAGE_MASK, func_id);
+	fwp = find_fw_page(dev, addr & MLX5_U64_4K_PAGE_MASK, function);
 	if (!fwp) {
 		mlx5_core_warn_rl(dev, "page not found\n");
 		return;
@@ -263,7 +268,7 @@ static void free_4k(struct mlx5_core_dev *dev, u64 addr, u32 func_id)
 	list_add(&fwp->list, &dev->priv.free_list);
 }
 
-static int alloc_system_page(struct mlx5_core_dev *dev, u16 func_id)
+static int alloc_system_page(struct mlx5_core_dev *dev, u32 function)
 {
 	struct device *device = mlx5_core_dma_dev(dev);
 	int nid = dev_to_node(device);
@@ -291,7 +296,7 @@ static int alloc_system_page(struct mlx5_core_dev *dev, u16 func_id)
 		goto map;
 	}
 
-	err = insert_page(dev, addr, page, func_id);
+	err = insert_page(dev, addr, page, function);
 	if (err) {
 		mlx5_core_err(dev, "failed to track allocated page\n");
 		dma_unmap_page(device, addr, PAGE_SIZE, DMA_BIDIRECTIONAL);
@@ -328,6 +333,7 @@ static void page_notify_fail(struct mlx5_core_dev *dev, u16 func_id,
 static int give_pages(struct mlx5_core_dev *dev, u16 func_id, int npages,
 		      int notify_fail, bool ec_function)
 {
+	u32 function = get_function(func_id, ec_function);
 	u32 out[MLX5_ST_SZ_DW(manage_pages_out)] = {0};
 	int inlen = MLX5_ST_SZ_BYTES(manage_pages_in);
 	u64 addr;
@@ -345,10 +351,10 @@ static int give_pages(struct mlx5_core_dev *dev, u16 func_id, int npages,
 
 	for (i = 0; i < npages; i++) {
 retry:
-		err = alloc_4k(dev, &addr, func_id);
+		err = alloc_4k(dev, &addr, function);
 		if (err) {
 			if (err == -ENOMEM)
-				err = alloc_system_page(dev, func_id);
+				err = alloc_system_page(dev, function);
 			if (err)
 				goto out_4k;
 
@@ -384,7 +390,7 @@ static int give_pages(struct mlx5_core_dev *dev, u16 func_id, int npages,
 
 out_4k:
 	for (i--; i >= 0; i--)
-		free_4k(dev, MLX5_GET64(manage_pages_in, in, pas[i]), func_id);
+		free_4k(dev, MLX5_GET64(manage_pages_in, in, pas[i]), function);
 out_free:
 	kvfree(in);
 	if (notify_fail)
@@ -392,14 +398,15 @@ static int give_pages(struct mlx5_core_dev *dev, u16 func_id, int npages,
 	return err;
 }
 
-static void release_all_pages(struct mlx5_core_dev *dev, u32 func_id,
+static void release_all_pages(struct mlx5_core_dev *dev, u16 func_id,
 			      bool ec_function)
 {
+	u32 function = get_function(func_id, ec_function);
 	struct rb_root *root;
 	struct rb_node *p;
 	int npages = 0;
 
-	root = xa_load(&dev->priv.page_root_xa, func_id);
+	root = xa_load(&dev->priv.page_root_xa, function);
 	if (WARN_ON_ONCE(!root))
 		return;
 
@@ -446,6 +453,7 @@ static int reclaim_pages_cmd(struct mlx5_core_dev *dev,
 	struct rb_root *root;
 	struct fw_page *fwp;
 	struct rb_node *p;
+	bool ec_function;
 	u32 func_id;
 	u32 npages;
 	u32 i = 0;
@@ -456,8 +464,9 @@ static int reclaim_pages_cmd(struct mlx5_core_dev *dev,
 	/* No hard feelings, we want our pages back! */
 	npages = MLX5_GET(manage_pages_in, in, input_num_entries);
 	func_id = MLX5_GET(manage_pages_in, in, function_id);
+	ec_function = MLX5_GET(manage_pages_in, in, embedded_cpu_function);
 
-	root = xa_load(&dev->priv.page_root_xa, func_id);
+	root = xa_load(&dev->priv.page_root_xa, get_function(func_id, ec_function));
 	if (WARN_ON_ONCE(!root))
 		return -EEXIST;
 
@@ -473,9 +482,10 @@ static int reclaim_pages_cmd(struct mlx5_core_dev *dev,
 	return 0;
 }
 
-static int reclaim_pages(struct mlx5_core_dev *dev, u32 func_id, int npages,
+static int reclaim_pages(struct mlx5_core_dev *dev, u16 func_id, int npages,
 			 int *nclaimed, bool ec_function)
 {
+	u32 function = get_function(func_id, ec_function);
 	int outlen = MLX5_ST_SZ_BYTES(manage_pages_out);
 	u32 in[MLX5_ST_SZ_DW(manage_pages_in)] = {};
 	int num_claimed;
@@ -514,7 +524,7 @@ static int reclaim_pages(struct mlx5_core_dev *dev, u32 func_id, int npages,
 	}
 
 	for (i = 0; i < num_claimed; i++)
-		free_4k(dev, MLX5_GET64(manage_pages_out, out, pas[i]), func_id);
+		free_4k(dev, MLX5_GET64(manage_pages_out, out, pas[i]), function);
 
 	if (nclaimed)
 		*nclaimed = num_claimed;
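[The combined index is simply the 16-bit function ID with the ECPF flag
folded in as bit 16, so host PF and ECPF pages can never collide in the
xarray. A standalone sketch of the encoding, mirroring the get_function()
helper above with standard C types; expected output in comments:]

#include <stdbool.h>
#include <stdint.h>
#include <stdio.h>

/* ECPF flag becomes bit 16 of the tree index, so host PF (func_id 0,
 * ec=false) and ECPF (func_id 0, ec=true) get distinct page trees. */
static uint32_t get_function(uint16_t func_id, bool ec_function)
{
        return (uint32_t)func_id | ((uint32_t)ec_function << 16);
}

int main(void)
{
        printf("host PF index: 0x%x\n", get_function(0, false)); /* 0x0 */
        printf("ECPF index:    0x%x\n", get_function(0, true));  /* 0x10000 */
        return 0;
}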
Miller" CC: , Paul Blakey , Roi Dayan , Vlad Buslov , Saeed Mahameed Subject: [net 08/12] net/mlx5e: Fix CT rule + encap slow path offload and deletion Date: Tue, 26 Jan 2021 15:43:41 -0800 Message-ID: <20210126234345.202096-9-saeedm@nvidia.com> X-Mailer: git-send-email 2.29.2 In-Reply-To: <20210126234345.202096-1-saeedm@nvidia.com> References: <20210126234345.202096-1-saeedm@nvidia.com> MIME-Version: 1.0 X-Originating-IP: [172.20.145.6] X-ClientProxiedBy: HQMAIL107.nvidia.com (172.20.187.13) To HQMAIL107.nvidia.com (172.20.187.13) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=nvidia.com; s=n1; t=1611704708; bh=jddhupY8wRhDvQ2K0iUuabglKY9vmM1T4CAx/Vx9kjg=; h=From:To:CC:Subject:Date:Message-ID:X-Mailer:In-Reply-To: References:MIME-Version:Content-Transfer-Encoding:Content-Type: X-Originating-IP:X-ClientProxiedBy; b=dgW+Wr0ujeNRwUZMFaJoOGzEd4ka3e3CCwBTQcrjihX67kbpVMWuSOHI3Ut7IFbUz 6OzlWl68EzFIX/3fUOLrNVc7vDKm+9dPtD/LDjebtvdgGi0xP7yedvDKkuLp1lYv4g JKHYVzwvX4e0MnoNhmOKXDSoXSY/Jr3jZFHYnkB5Q0JPHn9trT8H92sQwBjJsE5I/E 0ctKF+Arila1SIWFCFnMvRvdeN9WC+LaedL4lHImwNKjnb7//U8/2GROQbLbj38tPX He6gWz1EL13Y3dq3dSLPPy4FH5Y4qXcFADCXrvC7sHqt0hLdxbAeRGB0cp19UzmExt Sjqdskk4A09HA== Precedence: bulk List-ID: X-Mailing-List: netdev@vger.kernel.org From: Paul Blakey Currently, if a neighbour isn't valid when offloading tunnel encap rules, we offload the original match and replace the original action with "goto slow path" action. For this we use a temporary flow attribute based on the original flow attribute and then change the action. Flow flags, which among those is the CT flag, are still shared for the slow path rule offload, so we end up parsing this flow as a CT + goto slow path rule. Besides being unnecessary, CT action offload saves extra information in the passed flow attribute, such as created ct_flow and mod_hdr, which is lost onces the temporary flow attribute is freed. When a neigh is updated and is valid, we offload the original CT rule with original CT action, which again creates a ct_flow and mod_hdr and saves it in the flow's original attribute. Then we delete the slow path rule with a temporary flow attribute based on original updated flow attribute, and we free the relevant ct_flow and mod_hdr. Then when tc deletes this flow, we try to free the ct_flow and mod_hdr on the flow's attribute again. To fix the issue, skip all furture proccesing (CT/Sample/Split rules) in offload/unoffload of slow path rules. 
Call trace:
[  758.850525] Unable to handle kernel NULL pointer dereference at virtual address 0000000000000218
[  758.952987] Internal error: Oops: 96000005 [#1] PREEMPT SMP
[  758.964170] Modules linked in: act_csum(E) act_pedit(E) act_tunnel_key(E) act_ct(E) nf_flow_table(E) xt_nat(E) ip6table_filter(E) ip6table_nat(E) xt_comment(E) ip6_tables(E) xt_conntrack(E) xt_MASQUERADE(E) nf_conntrack_netlink(E) xt_addrtype(E) iptable_filter(E) iptable_nat(E) bpfilter(E) br_netfilter(E) bridge(E) stp(E) llc(E) xfrm_user(E) overlay(E) act_mirred(E) act_skbedit(E) rdma_ucm(OE) rdma_cm(OE) iw_cm(OE) ib_ipoib(OE) ib_cm(OE) ib_umad(OE) esp6_offload(E) esp6(E) esp4_offload(E) esp4(E) xfrm_algo(E) mlx5_ib(OE) ib_uverbs(OE) geneve(E) ip6_udp_tunnel(E) udp_tunnel(E) nfnetlink_cttimeout(E) nfnetlink(E) mlx5_core(OE) act_gact(E) cls_flower(E) sch_ingress(E) openvswitch(E) nsh(E) nf_conncount(E) nf_nat(E) mlxfw(OE) psample(E) nf_conntrack(E) nf_defrag_ipv4(E) vfio_mdev(E) mdev(E) ib_core(OE) mlx_compat(OE) crct10dif_ce(E) uio_pdrv_genirq(E) uio(E) i2c_mlx(E) mlxbf_pmc(E) sbsa_gwdt(E) mlxbf_gige(E) gpio_mlxbf2(E) mlxbf_pka(E) mlx_trio(E) mlx_bootctl(E) bluefield_edac(E) knem(O)
[  758.964225]  ip_tables(E) mlxbf_tmfifo(E) ipv6(E) crc_ccitt(E) nf_defrag_ipv6(E)
[  759.154186] CPU: 5 PID: 122 Comm: kworker/u16:1 Tainted: G OE 5.4.60-mlnx.52.gde81e85 #1
[  759.172870] Hardware name: https://www.mellanox.com BlueField SoC/BlueField SoC, BIOS BlueField:3.5.0-2-gc1b5d64 Jan 4 2021
[  759.195466] Workqueue: mlx5e mlx5e_rep_neigh_update [mlx5_core]
[  759.207344] pstate: a0000005 (NzCv daif -PAN -UAO)
[  759.217003] pc : mlx5_del_flow_rules+0x5c/0x160 [mlx5_core]
[  759.228229] lr : mlx5_del_flow_rules+0x34/0x160 [mlx5_core]
[  759.405858] Call trace:
[  759.410804]  mlx5_del_flow_rules+0x5c/0x160 [mlx5_core]
[  759.421337]  __mlx5_eswitch_del_rule.isra.43+0x5c/0x1c8 [mlx5_core]
[  759.433963]  mlx5_eswitch_del_offloaded_rule_ct+0x34/0x40 [mlx5_core]
[  759.446942]  mlx5_tc_rule_delete_ct+0x68/0x74 [mlx5_core]
[  759.457821]  mlx5_tc_ct_delete_flow+0x160/0x21c [mlx5_core]
[  759.469051]  mlx5e_tc_unoffload_fdb_rules+0x158/0x168 [mlx5_core]
[  759.481325]  mlx5e_tc_encap_flows_del+0x140/0x26c [mlx5_core]
[  759.492901]  mlx5e_rep_update_flows+0x11c/0x1ec [mlx5_core]
[  759.504127]  mlx5e_rep_neigh_update+0x160/0x200 [mlx5_core]
[  759.515314]  process_one_work+0x178/0x400
[  759.523350]  worker_thread+0x58/0x3e8
[  759.530685]  kthread+0x100/0x12c
[  759.537152]  ret_from_fork+0x10/0x18
[  759.544320] Code: 97ffef55 51000673 3100067f 54ffff41 (b9421ab3)
[  759.556548] ---[ end trace fab818bb1085832d ]---

Fixes: 4c3844d9e97e ("net/mlx5e: CT: Introduce connection tracking")
Signed-off-by: Paul Blakey
Reviewed-by: Roi Dayan
Reviewed-by: Vlad Buslov
Signed-off-by: Saeed Mahameed
---
 drivers/net/ethernet/mellanox/mlx5/core/en_tc.c | 7 +++++++
 1 file changed, 7 insertions(+)

diff --git a/drivers/net/ethernet/mellanox/mlx5/core/en_tc.c b/drivers/net/ethernet/mellanox/mlx5/core/en_tc.c
index f4ce5e208e02..dd0bfbacad47 100644
--- a/drivers/net/ethernet/mellanox/mlx5/core/en_tc.c
+++ b/drivers/net/ethernet/mellanox/mlx5/core/en_tc.c
@@ -1163,6 +1163,9 @@ mlx5e_tc_offload_fdb_rules(struct mlx5_eswitch *esw,
 	struct mlx5e_tc_mod_hdr_acts *mod_hdr_acts;
 	struct mlx5_flow_handle *rule;
 
+	if (attr->flags & MLX5_ESW_ATTR_FLAG_SLOW_PATH)
+		return mlx5_eswitch_add_offloaded_rule(esw, spec, attr);
+
 	if (flow_flag_test(flow, CT)) {
 		mod_hdr_acts = &attr->parse_attr->mod_hdr_acts;
 
@@ -1193,6 +1196,9 @@ mlx5e_tc_unoffload_fdb_rules(struct mlx5_eswitch *esw,
 {
 	flow_flag_clear(flow, OFFLOADED);
 
+	if (attr->flags & MLX5_ESW_ATTR_FLAG_SLOW_PATH)
+		goto offload_rule_0;
+
 	if (flow_flag_test(flow, CT)) {
 		mlx5_tc_ct_delete_flow(get_ct_priv(flow->priv), flow, attr);
 		return;
@@ -1201,6 +1207,7 @@ mlx5e_tc_unoffload_fdb_rules(struct mlx5_eswitch *esw,
 	if (attr->esw_attr->split_count)
 		mlx5_eswitch_del_fwd_rule(esw, flow->rule[1], attr);
 
+offload_rule_0:
 	mlx5_eswitch_del_offloaded_rule(esw, flow->rule[0], attr);
 }
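[The shape of the fix is a guard at the top of both paths: when the
attribute carries the slow path flag, only the plain rule is added or
deleted, and the CT/sample/split handling is skipped. A minimal
standalone sketch with hypothetical flag and handler names, not the
driver's actual symbols:]

#include <stdio.h>

#define ATTR_FLAG_SLOW_PATH (1u << 0)  /* hypothetical */
#define FLOW_FLAG_CT        (1u << 1)  /* hypothetical */

static void add_offloaded_rule(void) { puts("add plain rule"); }
static void add_ct_rule(void) { puts("add CT rule (allocates ct_flow/mod_hdr)"); }

static void offload_rule(unsigned int attr_flags, unsigned int flow_flags)
{
        /* Slow path rules reuse the original match but must not run the
         * CT handling, which stores state in the (temporary) attribute
         * and would later be freed twice. */
        if (attr_flags & ATTR_FLAG_SLOW_PATH) {
                add_offloaded_rule();
                return;
        }
        if (flow_flags & FLOW_FLAG_CT) {
                add_ct_rule();
                return;
        }
        add_offloaded_rule();
}

int main(void)
{
        /* CT flag is set, but the slow path guard wins: prints "add plain rule". */
        offload_rule(ATTR_FLAG_SLOW_PATH, FLOW_FLAG_CT);
        return 0;
}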
From patchwork Tue Jan 26 23:43:43 2021
X-Patchwork-Submitter: Saeed Mahameed
X-Patchwork-Id: 372253
From: Saeed Mahameed
To: Jakub Kicinski, "David S. Miller"
CC: netdev@vger.kernel.org, Maxim Mikityanskiy, Tariq Toukan, Saeed Mahameed
Subject: [net 10/12] net/mlx5e: Revert parameters on errors when changing trust state without reset
Date: Tue, 26 Jan 2021 15:43:43 -0800
Message-ID: <20210126234345.202096-11-saeedm@nvidia.com>
In-Reply-To: <20210126234345.202096-1-saeedm@nvidia.com>
References: <20210126234345.202096-1-saeedm@nvidia.com>

From: Maxim Mikityanskiy

The trust state may be changed without recreating the channels: it
happens when the channels are closed, and when the channel parameters
(min inline mode) stay the same after the trust state change. Changing
the trust state is a hardware command that may fail. The current code
didn't restore the channel parameters to their old values if an error
happened while the channels were closed. This commit adds handling for
this case.

Fixes: 6e0504c69811 ("net/mlx5e: Change inline mode correctly when changing trust state")
Signed-off-by: Maxim Mikityanskiy
Reviewed-by: Tariq Toukan
Signed-off-by: Saeed Mahameed
---
 drivers/net/ethernet/mellanox/mlx5/core/en_dcbnl.c | 13 ++++++++-----
 1 file changed, 8 insertions(+), 5 deletions(-)

diff --git a/drivers/net/ethernet/mellanox/mlx5/core/en_dcbnl.c b/drivers/net/ethernet/mellanox/mlx5/core/en_dcbnl.c
index d20243d6a032..f23c67575073 100644
--- a/drivers/net/ethernet/mellanox/mlx5/core/en_dcbnl.c
+++ b/drivers/net/ethernet/mellanox/mlx5/core/en_dcbnl.c
@@ -1151,6 +1151,7 @@ static int mlx5e_set_trust_state(struct mlx5e_priv *priv, u8 trust_state)
 {
 	struct mlx5e_channels new_channels = {};
 	bool reset_channels = true;
+	bool opened;
 	int err = 0;
 
 	mutex_lock(&priv->state_lock);
@@ -1159,22 +1160,24 @@ static int mlx5e_set_trust_state(struct mlx5e_priv *priv, u8 trust_state)
 	mlx5e_params_calc_trust_tx_min_inline_mode(priv->mdev, &new_channels.params,
 						   trust_state);
 
-	if (!test_bit(MLX5E_STATE_OPENED, &priv->state)) {
-		priv->channels.params = new_channels.params;
+	opened = test_bit(MLX5E_STATE_OPENED, &priv->state);
+	if (!opened)
 		reset_channels = false;
-	}
 
 	/* Skip if tx_min_inline is the same */
 	if (new_channels.params.tx_min_inline_mode ==
 	    priv->channels.params.tx_min_inline_mode)
 		reset_channels = false;
 
-	if (reset_channels)
+	if (reset_channels) {
 		err = mlx5e_safe_switch_channels(priv, &new_channels,
 						 mlx5e_update_trust_state_hw,
 						 &trust_state);
-	else
+	} else {
 		err = mlx5e_update_trust_state_hw(priv, &trust_state);
+		if (!err && !opened)
+			priv->channels.params = new_channels.params;
+	}
 
 	mutex_unlock(&priv->state_lock);
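[The pattern the fix enforces: commit the cached software parameters
only after the hardware command succeeds. A contrived standalone sketch
with hypothetical names and a simulated hardware failure:]

#include <stdbool.h>
#include <stdio.h>

struct params { int tx_min_inline_mode; };

static struct params current_params = { .tx_min_inline_mode = 1 };

/* Stand-in for the trust-state hardware command; simulates failure. */
static int update_trust_state_hw(void) { return -1; }

static int set_trust_state(struct params new_params, bool opened)
{
        int err = update_trust_state_hw();

        /* The old code copied new_params into current_params up front
         * when the channels were closed, leaving stale values behind on
         * error. Committing only on success avoids that. */
        if (!err && !opened)
                current_params = new_params;
        return err;
}

int main(void)
{
        struct params np = { .tx_min_inline_mode = 2 };

        set_trust_state(np, false);
        /* HW command failed, so the cached value is unchanged: prints 1. */
        printf("tx_min_inline_mode = %d\n", current_params.tx_min_inline_mode);
        return 0;
}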
From patchwork Tue Jan 26 23:43:45 2021
X-Patchwork-Submitter: Saeed Mahameed
X-Patchwork-Id: 372246
From: Saeed Mahameed
To: Jakub Kicinski, "David S. Miller"
CC: netdev@vger.kernel.org, Paul Blakey, Roi Dayan, Saeed Mahameed
Subject: [net 12/12] net/mlx5: CT: Fix incorrect removal of tuple_nat_node from nat rhashtable
Date: Tue, 26 Jan 2021 15:43:45 -0800
Message-ID: <20210126234345.202096-13-saeedm@nvidia.com>
In-Reply-To: <20210126234345.202096-1-saeedm@nvidia.com>
References: <20210126234345.202096-1-saeedm@nvidia.com>

From: Paul Blakey

A non-NAT tuple entry is inserted only into the regular tuples
rhashtable (ct_tuples_ht), not into the NATed tuples rhashtable
(ct_tuples_nat_ht). Commit bc562be9674b ("net/mlx5e: CT: Save ct
entries tuples in hashtables") mixed up the return labels and names, so
on cleanup or failure we still try to remove the entry from the NATed
tuples rhashtable.

Fix that by correctly checking whether a NATed tuple was inserted
before removing it. While at it, make the check more readable.
Fixes: bc562be9674b ("net/mlx5e: CT: Save ct entries tuples in hashtables")
Reviewed-by: Roi Dayan
Signed-off-by: Paul Blakey
Signed-off-by: Saeed Mahameed
---
 .../ethernet/mellanox/mlx5/core/en/tc_ct.c | 20 ++++++++++++-------
 1 file changed, 13 insertions(+), 7 deletions(-)

diff --git a/drivers/net/ethernet/mellanox/mlx5/core/en/tc_ct.c b/drivers/net/ethernet/mellanox/mlx5/core/en/tc_ct.c
index 072363e73f1c..6bc6b48a56dc 100644
--- a/drivers/net/ethernet/mellanox/mlx5/core/en/tc_ct.c
+++ b/drivers/net/ethernet/mellanox/mlx5/core/en/tc_ct.c
@@ -167,6 +167,12 @@ static const struct rhashtable_params tuples_nat_ht_params = {
 	.min_size = 16 * 1024,
 };
 
+static bool
+mlx5_tc_ct_entry_has_nat(struct mlx5_ct_entry *entry)
+{
+	return !!(entry->tuple_nat_node.next);
+}
+
 static int
 mlx5_tc_ct_rule_to_tuple(struct mlx5_ct_tuple *tuple, struct flow_rule *rule)
 {
@@ -911,13 +917,13 @@ mlx5_tc_ct_block_flow_offload_add(struct mlx5_ct_ft *ft,
 err_insert:
 	mlx5_tc_ct_entry_del_rules(ct_priv, entry);
 err_rules:
-	rhashtable_remove_fast(&ct_priv->ct_tuples_nat_ht,
-			       &entry->tuple_nat_node, tuples_nat_ht_params);
+	if (mlx5_tc_ct_entry_has_nat(entry))
+		rhashtable_remove_fast(&ct_priv->ct_tuples_nat_ht,
+				       &entry->tuple_nat_node, tuples_nat_ht_params);
 err_tuple_nat:
-	if (entry->tuple_node.next)
-		rhashtable_remove_fast(&ct_priv->ct_tuples_ht,
-				       &entry->tuple_node,
-				       tuples_ht_params);
+	rhashtable_remove_fast(&ct_priv->ct_tuples_ht,
+			       &entry->tuple_node,
+			       tuples_ht_params);
 err_tuple:
 err_set:
 	kfree(entry);
@@ -932,7 +938,7 @@ mlx5_tc_ct_del_ft_entry(struct mlx5_tc_ct_priv *ct_priv,
 {
 	mlx5_tc_ct_entry_del_rules(ct_priv, entry);
 	mutex_lock(&ct_priv->shared_counter_lock);
-	if (entry->tuple_node.next)
+	if (mlx5_tc_ct_entry_has_nat(entry))
 		rhashtable_remove_fast(&ct_priv->ct_tuples_nat_ht,
 				       &entry->tuple_nat_node,
 				       tuples_nat_ht_params);
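[The check works because the entry is zero-allocated: an rhashtable
node's .next pointer stays NULL until the node is actually linked into
a table, which is what the new helper keys off. A standalone miniature
with hypothetical types, only to illustrate the membership test:]

#include <stdbool.h>
#include <stddef.h>
#include <stdio.h>

struct hash_node { void *next; };  /* stand-in for struct rhash_head */

struct ct_entry {
        struct hash_node tuple_node;     /* always inserted */
        struct hash_node tuple_nat_node; /* inserted only for NATed tuples */
};

static bool entry_has_nat(const struct ct_entry *e)
{
        /* Zero-initialized .next means the node was never linked in. */
        return e->tuple_nat_node.next != NULL;
}

int main(void)
{
        /* Simulate a non-NAT entry: only tuple_node was ever linked
         * ((void *)1 stands in for a real successor pointer). */
        struct ct_entry e = { .tuple_node = { .next = (void *)1 } };

        printf("remove from nat ht? %s\n", entry_has_nat(&e) ? "yes" : "no");
        return 0;
}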