From patchwork Wed Aug 11 18:16:48 2021
X-Patchwork-Submitter: Saeed Mahameed
X-Patchwork-Id: 495606
From: Saeed Mahameed
To: "David S. Miller", Jakub Kicinski
Cc: netdev@vger.kernel.org, Tariq Toukan, Leon Romanovsky, Maor Gottlieb, Saeed Mahameed
Subject: [net-next 02/12] net/mlx5: Fix inner TTC table creation
Date: Wed, 11 Aug 2021 11:16:48 -0700
Message-Id: <20210811181658.492548-3-saeed@kernel.org>
In-Reply-To: <20210811181658.492548-1-saeed@kernel.org>
References: <20210811181658.492548-1-saeed@kernel.org>

From: Maor Gottlieb

Fix a typo in the cited commit: mlx5e_create_inner_ttc_table() calls
mlx5_create_ttc_table() instead of mlx5_create_inner_ttc_table().
Fixes: f4b45940e9b9 ("net/mlx5: Embed mlx5_ttc_table")
Signed-off-by: Maor Gottlieb
Reviewed-by: Tariq Toukan
Signed-off-by: Saeed Mahameed
---
 drivers/net/ethernet/mellanox/mlx5/core/en_fs.c | 3 ++-
 1 file changed, 2 insertions(+), 1 deletion(-)

diff --git a/drivers/net/ethernet/mellanox/mlx5/core/en_fs.c b/drivers/net/ethernet/mellanox/mlx5/core/en_fs.c
index 5c754e9af669..c06b4b938ae7 100644
--- a/drivers/net/ethernet/mellanox/mlx5/core/en_fs.c
+++ b/drivers/net/ethernet/mellanox/mlx5/core/en_fs.c
@@ -1255,7 +1255,8 @@ static int mlx5e_create_inner_ttc_table(struct mlx5e_priv *priv)
 		return 0;
 
 	mlx5e_set_inner_ttc_params(priv, &ttc_params);
-	priv->fs.inner_ttc = mlx5_create_ttc_table(priv->mdev, &ttc_params);
+	priv->fs.inner_ttc = mlx5_create_inner_ttc_table(priv->mdev,
+							 &ttc_params);
 	if (IS_ERR(priv->fs.inner_ttc))
 		return PTR_ERR(priv->fs.inner_ttc);
 
 	return 0;

From patchwork Wed Aug 11 18:16:50 2021
X-Patchwork-Submitter: Saeed Mahameed
X-Patchwork-Id: 495605
From: Saeed Mahameed
To: "David S. Miller", Jakub Kicinski
Cc: netdev@vger.kernel.org, Tariq Toukan, Leon Romanovsky, Shay Drory, Parav Pandit, Saeed Mahameed
Subject: [net-next 04/12] net/mlx5: Align mlx5_irq structure
Date: Wed, 11 Aug 2021 11:16:50 -0700
Message-Id: <20210811181658.492548-5-saeed@kernel.org>
In-Reply-To: <20210811181658.492548-1-saeed@kernel.org>
References: <20210811181658.492548-1-saeed@kernel.org>

From: Shay Drory

The mlx5_irq structure has holes caused by the current placement of its
fields. Reorder the fields so the structure is naturally aligned.
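For background, a minimal sketch (generic structs on a typical 64-bit target, not the driver code) of how member ordering creates padding holes and how grouping the widest members first removes them; the pahole output below shows the resulting hole-free layout of struct mlx5_irq:

/* Hypothetical example; usual kernel headers assumed. */
struct before {
	unsigned int a;		/* offset  0, size 4 */
				/* 4-byte hole       */
	void *p;		/* offset  8, size 8 */
	unsigned int b;		/* offset 16, size 4 */
				/* 4-byte hole       */
	void *q;		/* offset 24, size 8 */
};				/* sizeof == 32      */

struct after {			/* same members, widest first */
	void *p;		/* offset  0, size 8 */
	void *q;		/* offset  8, size 8 */
	unsigned int a;		/* offset 16, size 4 */
	unsigned int b;		/* offset 20, size 4 */
};				/* sizeof == 24, no holes    */

The patch applies the same idea to struct mlx5_irq: the pointer-sized pool member and the 4-byte kref, index and irqn members are grouped so that no interior holes remain.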
pahole output after alignment:

struct mlx5_irq {
	struct atomic_notifier_head nh;       /*     0    72 */
	/* --- cacheline 1 boundary (64 bytes) was 8 bytes ago --- */
	cpumask_var_t               mask;     /*    72     8 */
	char                        name[32]; /*    80    32 */
	struct mlx5_irq_pool *      pool;     /*   112     8 */
	struct kref                 kref;     /*   120     4 */
	u32                         index;    /*   124     4 */
	/* --- cacheline 2 boundary (128 bytes) --- */
	int                         irqn;     /*   128     4 */

	/* size: 136, cachelines: 3, members: 7 */
	/* padding: 4 */
	/* last cacheline: 8 bytes */
};

Signed-off-by: Shay Drory
Reviewed-by: Parav Pandit
Signed-off-by: Saeed Mahameed
---
 drivers/net/ethernet/mellanox/mlx5/core/pci_irq.c | 4 ++--
 1 file changed, 2 insertions(+), 2 deletions(-)

diff --git a/drivers/net/ethernet/mellanox/mlx5/core/pci_irq.c b/drivers/net/ethernet/mellanox/mlx5/core/pci_irq.c
index 9fb75d79bf08..a4f6ba0c91da 100644
--- a/drivers/net/ethernet/mellanox/mlx5/core/pci_irq.c
+++ b/drivers/net/ethernet/mellanox/mlx5/core/pci_irq.c
@@ -28,13 +28,13 @@
 #define MLX5_EQ_REFS_PER_IRQ (2)
 
 struct mlx5_irq {
-	u32 index;
 	struct atomic_notifier_head nh;
 	cpumask_var_t mask;
 	char name[MLX5_MAX_IRQ_NAME];
+	struct mlx5_irq_pool *pool;
 	struct kref kref;
+	u32 index;
 	int irqn;
-	struct mlx5_irq_pool *pool;
 };
 
 struct mlx5_irq_pool {
From patchwork Wed Aug 11 18:16:52 2021
X-Patchwork-Submitter: Saeed Mahameed
X-Patchwork-Id: 495604
From: Saeed Mahameed
To: "David S. Miller", Jakub Kicinski
Cc: netdev@vger.kernel.org, Tariq Toukan, Leon Romanovsky, Shay Drory, Parav Pandit, Saeed Mahameed
Subject: [net-next 06/12] net/mlx5: Refcount mlx5_irq with integer
Date: Wed, 11 Aug 2021 11:16:52 -0700
Message-Id: <20210811181658.492548-7-saeed@kernel.org>
In-Reply-To: <20210811181658.492548-1-saeed@kernel.org>
References: <20210811181658.492548-1-saeed@kernel.org>

From: Shay Drory

Currently, all accesses to mlx5 IRQs are done under a lock. Hence, there
is no reason to keep a kref in struct mlx5_irq. Switch it to a plain
integer.

Signed-off-by: Shay Drory
Reviewed-by: Parav Pandit
Signed-off-by: Saeed Mahameed
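For context, a minimal sketch (hypothetical obj/obj_pool names, usual kernel headers assumed; not the driver code) of the pattern the diff below applies: when every get and put already runs under the pool mutex, the atomic kref can be replaced with a plain integer guarded by that same mutex:

struct obj_pool {
	struct mutex lock;
};

struct obj {
	struct obj_pool *pool;
	int refcount;			/* protected by pool->lock */
};

static void obj_release(struct obj *o)
{
	/* unlink from the pool and free resources */
}

static int obj_get_locked(struct obj *o)
{
	lockdep_assert_held(&o->pool->lock);
	if (WARN_ON_ONCE(!o->refcount))	/* object already being torn down */
		return 0;
	o->refcount++;
	return 1;
}

static void obj_put(struct obj *o)
{
	mutex_lock(&o->pool->lock);
	if (--o->refcount == 0)
		obj_release(o);
	mutex_unlock(&o->pool->lock);
}

The mutex already serializes every reader and writer of refcount, so no atomic operations are needed, and the WARN_ON_ONCE() preserves the get-after-zero check that kref_get_unless_zero() used to provide.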
---
 .../net/ethernet/mellanox/mlx5/core/pci_irq.c | 65 +++++++++++++------
 1 file changed, 44 insertions(+), 21 deletions(-)

diff --git a/drivers/net/ethernet/mellanox/mlx5/core/pci_irq.c b/drivers/net/ethernet/mellanox/mlx5/core/pci_irq.c
index 717b9f1850ac..60bfcad1873c 100644
--- a/drivers/net/ethernet/mellanox/mlx5/core/pci_irq.c
+++ b/drivers/net/ethernet/mellanox/mlx5/core/pci_irq.c
@@ -32,7 +32,7 @@ struct mlx5_irq {
 	cpumask_var_t mask;
 	char name[MLX5_MAX_IRQ_NAME];
 	struct mlx5_irq_pool *pool;
-	struct kref kref;
+	int refcount;
 	u32 index;
 	int irqn;
 };
@@ -138,9 +138,8 @@ int mlx5_set_msix_vec_count(struct mlx5_core_dev *dev, int function_id,
 	return ret;
 }
 
-static void irq_release(struct kref *kref)
+static void irq_release(struct mlx5_irq *irq)
 {
-	struct mlx5_irq *irq = container_of(kref, struct mlx5_irq, kref);
 	struct mlx5_irq_pool *pool = irq->pool;
 
 	xa_erase(&pool->irqs, irq->index);
@@ -159,10 +158,31 @@ static void irq_put(struct mlx5_irq *irq)
 	struct mlx5_irq_pool *pool = irq->pool;
 
 	mutex_lock(&pool->lock);
-	kref_put(&irq->kref, irq_release);
+	irq->refcount--;
+	if (!irq->refcount)
+		irq_release(irq);
 	mutex_unlock(&pool->lock);
 }
 
+static int irq_get_locked(struct mlx5_irq *irq)
+{
+	lockdep_assert_held(&irq->pool->lock);
+	if (WARN_ON_ONCE(!irq->refcount))
+		return 0;
+	irq->refcount++;
+	return 1;
+}
+
+static int irq_get(struct mlx5_irq *irq)
+{
+	int err;
+
+	mutex_lock(&irq->pool->lock);
+	err = irq_get_locked(irq);
+	mutex_unlock(&irq->pool->lock);
+	return err;
+}
+
 static irqreturn_t irq_int_handler(int irq, void *nh)
 {
 	atomic_notifier_call_chain(nh, 0, NULL);
@@ -214,7 +234,7 @@ static struct mlx5_irq *irq_request(struct mlx5_irq_pool *pool, int i)
 		err = -ENOMEM;
 		goto err_cpumask;
 	}
-	kref_init(&irq->kref);
+	irq->refcount = 1;
 	irq->index = i;
 	err = xa_err(xa_store(&pool->irqs, irq->index, irq, GFP_KERNEL));
 	if (err) {
@@ -235,18 +255,18 @@ static struct mlx5_irq *irq_request(struct mlx5_irq_pool *pool, int i)
 
 int mlx5_irq_attach_nb(struct mlx5_irq *irq, struct notifier_block *nb)
 {
-	int err;
+	int ret;
 
-	err = kref_get_unless_zero(&irq->kref);
-	if (WARN_ON_ONCE(!err))
+	ret = irq_get(irq);
+	if (!ret)
 		/* Something very bad happens here, we are enabling EQ
 		 * on non-existing IRQ.
 		 */
 		return -ENOENT;
-	err = atomic_notifier_chain_register(&irq->nh, nb);
-	if (err)
+	ret = atomic_notifier_chain_register(&irq->nh, nb);
+	if (ret)
 		irq_put(irq);
-	return err;
+	return ret;
 }
 
 int mlx5_irq_detach_nb(struct mlx5_irq *irq, struct notifier_block *nb)
@@ -301,10 +321,9 @@ static struct mlx5_irq *irq_pool_find_least_loaded(struct mlx5_irq_pool *pool,
 	xa_for_each_range(&pool->irqs, index, iter, start, end) {
 		if (!cpumask_equal(iter->mask, affinity))
 			continue;
-		if (kref_read(&iter->kref) < pool->min_threshold)
+		if (iter->refcount < pool->min_threshold)
 			return iter;
-		if (!irq || kref_read(&iter->kref) <
-		    kref_read(&irq->kref))
+		if (!irq || iter->refcount < irq->refcount)
 			irq = iter;
 	}
 	return irq;
@@ -319,7 +338,7 @@ static struct mlx5_irq *irq_pool_request_affinity(struct mlx5_irq_pool *pool,
 	mutex_lock(&pool->lock);
 	least_loaded_irq = irq_pool_find_least_loaded(pool, affinity);
 	if (least_loaded_irq &&
-	    kref_read(&least_loaded_irq->kref) < pool->min_threshold)
+	    least_loaded_irq->refcount < pool->min_threshold)
 		goto out;
 	new_irq = irq_pool_create_irq(pool, affinity);
 	if (IS_ERR(new_irq)) {
@@ -337,11 +356,11 @@ static struct mlx5_irq *irq_pool_request_affinity(struct mlx5_irq_pool *pool,
 	least_loaded_irq = new_irq;
 	goto unlock;
 out:
-	kref_get(&least_loaded_irq->kref);
-	if (kref_read(&least_loaded_irq->kref) > pool->max_threshold)
+	irq_get_locked(least_loaded_irq);
+	if (least_loaded_irq->refcount > pool->max_threshold)
 		mlx5_core_dbg(pool->dev, "IRQ %u overloaded, pool_name: %s, %u EQs on this irq\n",
 			      least_loaded_irq->irqn, pool->name,
-			      kref_read(&least_loaded_irq->kref) / MLX5_EQ_REFS_PER_IRQ);
+			      least_loaded_irq->refcount / MLX5_EQ_REFS_PER_IRQ);
 unlock:
 	mutex_unlock(&pool->lock);
 	return least_loaded_irq;
@@ -357,7 +376,7 @@ irq_pool_request_vector(struct mlx5_irq_pool *pool, int vecidx,
 	mutex_lock(&pool->lock);
 	irq = xa_load(&pool->irqs, vecidx);
 	if (irq) {
-		kref_get(&irq->kref);
+		irq_get_locked(irq);
 		goto unlock;
 	}
 	irq = irq_request(pool, vecidx);
@@ -424,7 +443,7 @@ struct mlx5_irq *mlx5_irq_request(struct mlx5_core_dev *dev, u16 vecidx,
 		return irq;
 	mlx5_core_dbg(dev, "irq %u mapped to cpu %*pbl, %u EQs on this irq\n",
 		      irq->irqn, cpumask_pr_args(affinity),
-		      kref_read(&irq->kref) / MLX5_EQ_REFS_PER_IRQ);
+		      irq->refcount / MLX5_EQ_REFS_PER_IRQ);
 	return irq;
 }
@@ -456,8 +475,12 @@ static void irq_pool_free(struct mlx5_irq_pool *pool)
 	struct mlx5_irq *irq;
 	unsigned long index;
 
+	/* There are cases in which we are destroying the irq_table before
+	 * freeing all the IRQs, fast teardown for example. Hence, free the irqs
+	 * which might not have been freed.
+	 */
 	xa_for_each(&pool->irqs, index, irq)
-		irq_release(&irq->kref);
+		irq_release(irq);
 	xa_destroy(&pool->irqs);
 	kvfree(pool);
 }

From patchwork Wed Aug 11 18:16:54 2021
X-Patchwork-Submitter: Saeed Mahameed
X-Patchwork-Id: 495603
From: Saeed Mahameed
To: "David S. Miller", Jakub Kicinski
Cc: netdev@vger.kernel.org, Tariq Toukan, Leon Romanovsky, Parav Pandit, Shay Drory, Saeed Mahameed
Subject: [net-next 08/12] net/mlx5: Reorganize current and maximal capabilities to be per-type
Date: Wed, 11 Aug 2021 11:16:54 -0700
Message-Id: <20210811181658.492548-9-saeed@kernel.org>
In-Reply-To: <20210811181658.492548-1-saeed@kernel.org>
References: <20210811181658.492548-1-saeed@kernel.org>

From: Parav Pandit

In the current code, the current and maximal capabilities are
maintained in two separate arrays, both indexed by capability type. To
allow the capability data to later become a dynamically allocated
array, move the cur and max fields into a unified per-type structure so
that each capability type can be allocated as one unit.
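To make the refactor concrete, a simplified sketch (hypothetical names and sizes, kernel-style u32 assumed; the real structure is struct mlx5_hca_cap in the diff below):

/* Sizes are made up for illustration. */
#define CAP_NUM   32
#define CAP_WORDS 64

/* Before: cur and max blobs live in two parallel arrays indexed by type. */
struct dev_caps_old {
	u32 hca_cur[CAP_NUM][CAP_WORDS];
	u32 hca_max[CAP_NUM][CAP_WORDS];
};

/* After: one unit per capability type holds both copies... */
struct hca_cap_entry {
	u32 cur[CAP_WORDS];
	u32 max[CAP_WORDS];
};

struct dev_caps_new {
	struct hca_cap_entry hca[CAP_NUM];
};

/* ...which is what later allows each type to become a separately
 * allocated object, e.g. an array of pointers whose entries are
 * kzalloc'ed only for the capability types the device actually reports:
 *
 *	struct hca_cap_entry *hca[CAP_NUM];
 */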
Signed-off-by: Parav Pandit Reviewed-by: Leon Romanovsky Reviewed-by: Shay Drory Signed-off-by: Saeed Mahameed --- .../net/ethernet/mellanox/mlx5/core/fs_core.c | 2 +- .../net/ethernet/mellanox/mlx5/core/main.c | 10 +-- include/linux/mlx5/device.h | 66 +++++++++---------- include/linux/mlx5/driver.h | 8 ++- 4 files changed, 45 insertions(+), 41 deletions(-) diff --git a/drivers/net/ethernet/mellanox/mlx5/core/fs_core.c b/drivers/net/ethernet/mellanox/mlx5/core/fs_core.c index fee51050ed64..813ff8186829 100644 --- a/drivers/net/ethernet/mellanox/mlx5/core/fs_core.c +++ b/drivers/net/ethernet/mellanox/mlx5/core/fs_core.c @@ -2343,7 +2343,7 @@ static int create_leaf_prios(struct mlx5_flow_namespace *ns, int prio, #define FLOW_TABLE_BIT_SZ 1 #define GET_FLOW_TABLE_CAP(dev, offset) \ - ((be32_to_cpu(*((__be32 *)(dev->caps.hca_cur[MLX5_CAP_FLOW_TABLE]) + \ + ((be32_to_cpu(*((__be32 *)(dev->caps.hca[MLX5_CAP_FLOW_TABLE].cur) + \ offset / 32)) >> \ (32 - FLOW_TABLE_BIT_SZ - (offset & 0x1f))) & FLOW_TABLE_BIT_SZ) static bool has_required_caps(struct mlx5_core_dev *dev, struct node_caps *caps) diff --git a/drivers/net/ethernet/mellanox/mlx5/core/main.c b/drivers/net/ethernet/mellanox/mlx5/core/main.c index 1a65e744d2e2..6cefe2a981c7 100644 --- a/drivers/net/ethernet/mellanox/mlx5/core/main.c +++ b/drivers/net/ethernet/mellanox/mlx5/core/main.c @@ -389,11 +389,11 @@ static int mlx5_core_get_caps_mode(struct mlx5_core_dev *dev, switch (cap_mode) { case HCA_CAP_OPMOD_GET_MAX: - memcpy(dev->caps.hca_max[cap_type], hca_caps, + memcpy(dev->caps.hca[cap_type].max, hca_caps, MLX5_UN_SZ_BYTES(hca_cap_union)); break; case HCA_CAP_OPMOD_GET_CUR: - memcpy(dev->caps.hca_cur[cap_type], hca_caps, + memcpy(dev->caps.hca[cap_type].cur, hca_caps, MLX5_UN_SZ_BYTES(hca_cap_union)); break; default: @@ -469,7 +469,7 @@ static int handle_hca_cap_odp(struct mlx5_core_dev *dev, void *set_ctx) return err; set_hca_cap = MLX5_ADDR_OF(set_hca_cap_in, set_ctx, capability); - memcpy(set_hca_cap, dev->caps.hca_cur[MLX5_CAP_ODP], + memcpy(set_hca_cap, dev->caps.hca[MLX5_CAP_ODP].cur, MLX5_ST_SZ_BYTES(odp_cap)); #define ODP_CAP_SET_MAX(dev, field) \ @@ -514,7 +514,7 @@ static int handle_hca_cap(struct mlx5_core_dev *dev, void *set_ctx) set_hca_cap = MLX5_ADDR_OF(set_hca_cap_in, set_ctx, capability); - memcpy(set_hca_cap, dev->caps.hca_cur[MLX5_CAP_GENERAL], + memcpy(set_hca_cap, dev->caps.hca[MLX5_CAP_GENERAL].cur, MLX5_ST_SZ_BYTES(cmd_hca_cap)); mlx5_core_dbg(dev, "Current Pkey table size %d Setting new size %d\n", @@ -596,7 +596,7 @@ static int handle_hca_cap_roce(struct mlx5_core_dev *dev, void *set_ctx) return 0; set_hca_cap = MLX5_ADDR_OF(set_hca_cap_in, set_ctx, capability); - memcpy(set_hca_cap, dev->caps.hca_cur[MLX5_CAP_ROCE], + memcpy(set_hca_cap, dev->caps.hca[MLX5_CAP_ROCE].cur, MLX5_ST_SZ_BYTES(roce_cap)); MLX5_SET(roce_cap, set_hca_cap, sw_r_roce_src_udp_port, 1); diff --git a/include/linux/mlx5/device.h b/include/linux/mlx5/device.h index 1e9d55dc1a9c..2736f12bb57c 100644 --- a/include/linux/mlx5/device.h +++ b/include/linux/mlx5/device.h @@ -1213,55 +1213,55 @@ enum mlx5_qcam_feature_groups { /* GET Dev Caps macros */ #define MLX5_CAP_GEN(mdev, cap) \ - MLX5_GET(cmd_hca_cap, mdev->caps.hca_cur[MLX5_CAP_GENERAL], cap) + MLX5_GET(cmd_hca_cap, mdev->caps.hca[MLX5_CAP_GENERAL].cur, cap) #define MLX5_CAP_GEN_64(mdev, cap) \ - MLX5_GET64(cmd_hca_cap, mdev->caps.hca_cur[MLX5_CAP_GENERAL], cap) + MLX5_GET64(cmd_hca_cap, mdev->caps.hca[MLX5_CAP_GENERAL].cur, cap) #define MLX5_CAP_GEN_MAX(mdev, cap) \ - MLX5_GET(cmd_hca_cap, 
mdev->caps.hca_max[MLX5_CAP_GENERAL], cap) + MLX5_GET(cmd_hca_cap, mdev->caps.hca[MLX5_CAP_GENERAL].max, cap) #define MLX5_CAP_GEN_2(mdev, cap) \ - MLX5_GET(cmd_hca_cap_2, mdev->caps.hca_cur[MLX5_CAP_GENERAL_2], cap) + MLX5_GET(cmd_hca_cap_2, mdev->caps.hca[MLX5_CAP_GENERAL_2].cur, cap) #define MLX5_CAP_GEN_2_64(mdev, cap) \ - MLX5_GET64(cmd_hca_cap_2, mdev->caps.hca_cur[MLX5_CAP_GENERAL_2], cap) + MLX5_GET64(cmd_hca_cap_2, mdev->caps.hca[MLX5_CAP_GENERAL_2].cur, cap) #define MLX5_CAP_GEN_2_MAX(mdev, cap) \ - MLX5_GET(cmd_hca_cap_2, mdev->caps.hca_max[MLX5_CAP_GENERAL_2], cap) + MLX5_GET(cmd_hca_cap_2, mdev->caps.hca[MLX5_CAP_GENERAL_2].max, cap) #define MLX5_CAP_ETH(mdev, cap) \ MLX5_GET(per_protocol_networking_offload_caps,\ - mdev->caps.hca_cur[MLX5_CAP_ETHERNET_OFFLOADS], cap) + mdev->caps.hca[MLX5_CAP_ETHERNET_OFFLOADS].cur, cap) #define MLX5_CAP_ETH_MAX(mdev, cap) \ MLX5_GET(per_protocol_networking_offload_caps,\ - mdev->caps.hca_max[MLX5_CAP_ETHERNET_OFFLOADS], cap) + mdev->caps.hca[MLX5_CAP_ETHERNET_OFFLOADS].max, cap) #define MLX5_CAP_IPOIB_ENHANCED(mdev, cap) \ MLX5_GET(per_protocol_networking_offload_caps,\ - mdev->caps.hca_cur[MLX5_CAP_IPOIB_ENHANCED_OFFLOADS], cap) + mdev->caps.hca[MLX5_CAP_IPOIB_ENHANCED_OFFLOADS].cur, cap) #define MLX5_CAP_ROCE(mdev, cap) \ - MLX5_GET(roce_cap, mdev->caps.hca_cur[MLX5_CAP_ROCE], cap) + MLX5_GET(roce_cap, mdev->caps.hca[MLX5_CAP_ROCE].cur, cap) #define MLX5_CAP_ROCE_MAX(mdev, cap) \ - MLX5_GET(roce_cap, mdev->caps.hca_max[MLX5_CAP_ROCE], cap) + MLX5_GET(roce_cap, mdev->caps.hca[MLX5_CAP_ROCE].max, cap) #define MLX5_CAP_ATOMIC(mdev, cap) \ - MLX5_GET(atomic_caps, mdev->caps.hca_cur[MLX5_CAP_ATOMIC], cap) + MLX5_GET(atomic_caps, mdev->caps.hca[MLX5_CAP_ATOMIC].cur, cap) #define MLX5_CAP_ATOMIC_MAX(mdev, cap) \ - MLX5_GET(atomic_caps, mdev->caps.hca_max[MLX5_CAP_ATOMIC], cap) + MLX5_GET(atomic_caps, mdev->caps.hca[MLX5_CAP_ATOMIC].max, cap) #define MLX5_CAP_FLOWTABLE(mdev, cap) \ - MLX5_GET(flow_table_nic_cap, mdev->caps.hca_cur[MLX5_CAP_FLOW_TABLE], cap) + MLX5_GET(flow_table_nic_cap, mdev->caps.hca[MLX5_CAP_FLOW_TABLE].cur, cap) #define MLX5_CAP64_FLOWTABLE(mdev, cap) \ - MLX5_GET64(flow_table_nic_cap, (mdev)->caps.hca_cur[MLX5_CAP_FLOW_TABLE], cap) + MLX5_GET64(flow_table_nic_cap, (mdev)->caps.hca[MLX5_CAP_FLOW_TABLE].cur, cap) #define MLX5_CAP_FLOWTABLE_MAX(mdev, cap) \ - MLX5_GET(flow_table_nic_cap, mdev->caps.hca_max[MLX5_CAP_FLOW_TABLE], cap) + MLX5_GET(flow_table_nic_cap, mdev->caps.hca[MLX5_CAP_FLOW_TABLE].max, cap) #define MLX5_CAP_FLOWTABLE_NIC_RX(mdev, cap) \ MLX5_CAP_FLOWTABLE(mdev, flow_table_properties_nic_receive.cap) @@ -1301,11 +1301,11 @@ enum mlx5_qcam_feature_groups { #define MLX5_CAP_ESW_FLOWTABLE(mdev, cap) \ MLX5_GET(flow_table_eswitch_cap, \ - mdev->caps.hca_cur[MLX5_CAP_ESWITCH_FLOW_TABLE], cap) + mdev->caps.hca[MLX5_CAP_ESWITCH_FLOW_TABLE].cur, cap) #define MLX5_CAP_ESW_FLOWTABLE_MAX(mdev, cap) \ MLX5_GET(flow_table_eswitch_cap, \ - mdev->caps.hca_max[MLX5_CAP_ESWITCH_FLOW_TABLE], cap) + mdev->caps.hca[MLX5_CAP_ESWITCH_FLOW_TABLE].max, cap) #define MLX5_CAP_ESW_FLOWTABLE_FDB(mdev, cap) \ MLX5_CAP_ESW_FLOWTABLE(mdev, flow_table_properties_nic_esw_fdb.cap) @@ -1327,31 +1327,31 @@ enum mlx5_qcam_feature_groups { #define MLX5_CAP_ESW(mdev, cap) \ MLX5_GET(e_switch_cap, \ - mdev->caps.hca_cur[MLX5_CAP_ESWITCH], cap) + mdev->caps.hca[MLX5_CAP_ESWITCH].cur, cap) #define MLX5_CAP64_ESW_FLOWTABLE(mdev, cap) \ MLX5_GET64(flow_table_eswitch_cap, \ - (mdev)->caps.hca_cur[MLX5_CAP_ESWITCH_FLOW_TABLE], cap) + 
(mdev)->caps.hca[MLX5_CAP_ESWITCH_FLOW_TABLE].cur, cap) #define MLX5_CAP_ESW_MAX(mdev, cap) \ MLX5_GET(e_switch_cap, \ - mdev->caps.hca_max[MLX5_CAP_ESWITCH], cap) + mdev->caps.hca[MLX5_CAP_ESWITCH].max, cap) #define MLX5_CAP_ODP(mdev, cap)\ - MLX5_GET(odp_cap, mdev->caps.hca_cur[MLX5_CAP_ODP], cap) + MLX5_GET(odp_cap, mdev->caps.hca[MLX5_CAP_ODP].cur, cap) #define MLX5_CAP_ODP_MAX(mdev, cap)\ - MLX5_GET(odp_cap, mdev->caps.hca_max[MLX5_CAP_ODP], cap) + MLX5_GET(odp_cap, mdev->caps.hca[MLX5_CAP_ODP].max, cap) #define MLX5_CAP_VECTOR_CALC(mdev, cap) \ MLX5_GET(vector_calc_cap, \ - mdev->caps.hca_cur[MLX5_CAP_VECTOR_CALC], cap) + mdev->caps.hca[MLX5_CAP_VECTOR_CALC].cur, cap) #define MLX5_CAP_QOS(mdev, cap)\ - MLX5_GET(qos_cap, mdev->caps.hca_cur[MLX5_CAP_QOS], cap) + MLX5_GET(qos_cap, mdev->caps.hca[MLX5_CAP_QOS].cur, cap) #define MLX5_CAP_DEBUG(mdev, cap)\ - MLX5_GET(debug_cap, mdev->caps.hca_cur[MLX5_CAP_DEBUG], cap) + MLX5_GET(debug_cap, mdev->caps.hca[MLX5_CAP_DEBUG].cur, cap) #define MLX5_CAP_PCAM_FEATURE(mdev, fld) \ MLX5_GET(pcam_reg, (mdev)->caps.pcam, feature_cap_mask.enhanced_features.fld) @@ -1387,27 +1387,27 @@ enum mlx5_qcam_feature_groups { MLX5_GET64(fpga_cap, (mdev)->caps.fpga, cap) #define MLX5_CAP_DEV_MEM(mdev, cap)\ - MLX5_GET(device_mem_cap, mdev->caps.hca_cur[MLX5_CAP_DEV_MEM], cap) + MLX5_GET(device_mem_cap, mdev->caps.hca[MLX5_CAP_DEV_MEM].cur, cap) #define MLX5_CAP64_DEV_MEM(mdev, cap)\ - MLX5_GET64(device_mem_cap, mdev->caps.hca_cur[MLX5_CAP_DEV_MEM], cap) + MLX5_GET64(device_mem_cap, mdev->caps.hca[MLX5_CAP_DEV_MEM].cur, cap) #define MLX5_CAP_TLS(mdev, cap) \ - MLX5_GET(tls_cap, (mdev)->caps.hca_cur[MLX5_CAP_TLS], cap) + MLX5_GET(tls_cap, (mdev)->caps.hca[MLX5_CAP_TLS].cur, cap) #define MLX5_CAP_DEV_EVENT(mdev, cap)\ - MLX5_ADDR_OF(device_event_cap, (mdev)->caps.hca_cur[MLX5_CAP_DEV_EVENT], cap) + MLX5_ADDR_OF(device_event_cap, (mdev)->caps.hca[MLX5_CAP_DEV_EVENT].cur, cap) #define MLX5_CAP_DEV_VDPA_EMULATION(mdev, cap)\ MLX5_GET(virtio_emulation_cap, \ - (mdev)->caps.hca_cur[MLX5_CAP_VDPA_EMULATION], cap) + (mdev)->caps.hca[MLX5_CAP_VDPA_EMULATION].cur, cap) #define MLX5_CAP64_DEV_VDPA_EMULATION(mdev, cap)\ MLX5_GET64(virtio_emulation_cap, \ - (mdev)->caps.hca_cur[MLX5_CAP_VDPA_EMULATION], cap) + (mdev)->caps.hca[MLX5_CAP_VDPA_EMULATION].cur, cap) #define MLX5_CAP_IPSEC(mdev, cap)\ - MLX5_GET(ipsec_cap, (mdev)->caps.hca_cur[MLX5_CAP_IPSEC], cap) + MLX5_GET(ipsec_cap, (mdev)->caps.hca[MLX5_CAP_IPSEC].cur, cap) enum { MLX5_CMD_STAT_OK = 0x0, diff --git a/include/linux/mlx5/driver.h b/include/linux/mlx5/driver.h index 2b5c5604b091..854443ea812c 100644 --- a/include/linux/mlx5/driver.h +++ b/include/linux/mlx5/driver.h @@ -729,6 +729,11 @@ struct mlx5_profile { } mr_cache[MAX_MR_CACHE_ENTRIES]; }; +struct mlx5_hca_cap { + u32 cur[MLX5_UN_SZ_DW(hca_cap_union)]; + u32 max[MLX5_UN_SZ_DW(hca_cap_union)]; +}; + struct mlx5_core_dev { struct device *device; enum mlx5_coredev_type coredev_type; @@ -740,8 +745,7 @@ struct mlx5_core_dev { char board_id[MLX5_BOARD_ID_LEN]; struct mlx5_cmd cmd; struct { - u32 hca_cur[MLX5_CAP_NUM][MLX5_UN_SZ_DW(hca_cap_union)]; - u32 hca_max[MLX5_CAP_NUM][MLX5_UN_SZ_DW(hca_cap_union)]; + struct mlx5_hca_cap hca[MLX5_CAP_NUM]; u32 pcam[MLX5_ST_SZ_DW(pcam_reg)]; u32 mcam[MLX5_MCAM_REGS_NUM][MLX5_ST_SZ_DW(mcam_reg)]; u32 fpga[MLX5_ST_SZ_DW(fpga_cap)]; From patchwork Wed Aug 11 18:16:56 2021 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Saeed Mahameed X-Patchwork-Id: 495602 
From: Saeed Mahameed
To: "David S. Miller", Jakub Kicinski
Cc: netdev@vger.kernel.org, Tariq Toukan, Leon Romanovsky, Parav Pandit, Saeed Mahameed
Subject: [net-next 10/12] net/mlx5: Initialize numa node for all core devices
Date: Wed, 11 Aug 2021 11:16:56 -0700
Message-Id: <20210811181658.492548-11-saeed@kernel.org>
In-Reply-To: <20210811181658.492548-1-saeed@kernel.org>
References: <20210811181658.492548-1-saeed@kernel.org>

From: Parav Pandit

Subsequent patches make use of NUMA node affinity for memory
allocations. Initialize it for PCI PF, VF and SF devices.
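As a sketch of how the follow-up patches can use this (hypothetical helper, not part of the series), once priv->numa_node is set in the common mlx5_mdev_init() path, allocations can be steered to the memory node closest to the device:

/* Hypothetical NUMA-aware allocation helper; usual kernel headers assumed. */
static void *mlx5_alloc_near_device(struct mlx5_core_dev *dev, size_t size)
{
	/* priv.numa_node is the field this patch initializes for PF/VF/SF */
	return kvzalloc_node(size, GFP_KERNEL, dev->priv.numa_node);
}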
Signed-off-by: Parav Pandit
Signed-off-by: Saeed Mahameed
---
 drivers/net/ethernet/mellanox/mlx5/core/main.c | 3 +--
 1 file changed, 1 insertion(+), 2 deletions(-)

diff --git a/drivers/net/ethernet/mellanox/mlx5/core/main.c b/drivers/net/ethernet/mellanox/mlx5/core/main.c
index 20f693cf58cc..6df4b940473b 100644
--- a/drivers/net/ethernet/mellanox/mlx5/core/main.c
+++ b/drivers/net/ethernet/mellanox/mlx5/core/main.c
@@ -748,14 +748,12 @@ static int mlx5_core_set_issi(struct mlx5_core_dev *dev)
 static int mlx5_pci_init(struct mlx5_core_dev *dev, struct pci_dev *pdev,
 			 const struct pci_device_id *id)
 {
-	struct mlx5_priv *priv = &dev->priv;
 	int err = 0;
 
 	mutex_init(&dev->pci_status_mutex);
 	pci_set_drvdata(dev->pdev, dev);
 
 	dev->bar_addr = pci_resource_start(pdev, 0);
-	priv->numa_node = dev_to_node(mlx5_core_dma_dev(dev));
 
 	err = mlx5_pci_enable_device(dev);
 	if (err) {
@@ -1448,6 +1446,7 @@ int mlx5_mdev_init(struct mlx5_core_dev *dev, int profile_idx)
 	mutex_init(&priv->pgdir_mutex);
 	INIT_LIST_HEAD(&priv->pgdir_list);
 
+	priv->numa_node = dev_to_node(mlx5_core_dma_dev(dev));
 	priv->dbg_root = debugfs_create_dir(dev_name(dev->device), mlx5_debugfs_root);
 	INIT_LIST_HEAD(&priv->traps);
From patchwork Wed Aug 11 18:16:58 2021
X-Patchwork-Submitter: Saeed Mahameed
X-Patchwork-Id: 495601
From: Saeed Mahameed
To: "David S. Miller", Jakub Kicinski
Cc: netdev@vger.kernel.org, Tariq Toukan, Leon Romanovsky, Cai Huoqing, Saeed Mahameed
Subject: [net-next 12/12] net/mlx5e: Make use of netdev_warn()
Date: Wed, 11 Aug 2021 11:16:58 -0700
Message-Id: <20210811181658.492548-13-saeed@kernel.org>
In-Reply-To: <20210811181658.492548-1-saeed@kernel.org>
References: <20210811181658.492548-1-saeed@kernel.org>

From: Cai Huoqing

Replace printk(KERN_WARNING ...) with netdev_warn(), which ties each
warning to the net device that triggered it.

Signed-off-by: Cai Huoqing
Signed-off-by: Saeed Mahameed
---
 drivers/net/ethernet/mellanox/mlx5/core/en_tc.c | 9 ++++++---
 1 file changed, 6 insertions(+), 3 deletions(-)

diff --git a/drivers/net/ethernet/mellanox/mlx5/core/en_tc.c b/drivers/net/ethernet/mellanox/mlx5/core/en_tc.c
index d6ad7328f298..9465a51b6e66 100644
--- a/drivers/net/ethernet/mellanox/mlx5/core/en_tc.c
+++ b/drivers/net/ethernet/mellanox/mlx5/core/en_tc.c
@@ -2702,7 +2702,9 @@ static int offload_pedit_fields(struct mlx5e_priv *priv,
 		if (s_mask && a_mask) {
 			NL_SET_ERR_MSG_MOD(extack,
 					   "can't set and add to the same HW field");
-			printk(KERN_WARNING "mlx5: can't set and add to the same HW field (%x)\n", f->field);
+			netdev_warn(priv->netdev,
+				    "mlx5: can't set and add to the same HW field (%x)\n",
+				    f->field);
 			return -EOPNOTSUPP;
 		}
 
@@ -2741,8 +2743,9 @@ static int offload_pedit_fields(struct mlx5e_priv *priv,
 		if (first < next_z && next_z < last) {
 			NL_SET_ERR_MSG_MOD(extack,
 					   "rewrite of few sub-fields isn't supported");
-			printk(KERN_WARNING "mlx5: rewrite of few sub-fields (mask %lx) isn't offloaded\n",
-			       mask);
+			netdev_warn(priv->netdev,
+				    "mlx5: rewrite of few sub-fields (mask %lx) isn't offloaded\n",
+				    mask);
 			return -EOPNOTSUPP;
 		}
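As a brief illustration of what the change buys (hypothetical device, interface and field values; roughly the format the dev_printk-based helpers produce), netdev_warn() prefixes the message with the owning driver and netdevice, so the warning can be tied to a specific port:

/* Bare printk(KERN_WARNING ...) logs only the raw string:
 *   mlx5: can't set and add to the same HW field (808)
 *
 * netdev_warn(priv->netdev, ...) adds the device context:
 *   mlx5_core 0000:3b:00.0 eth2: mlx5: can't set and add to the same HW field (808)
 */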