From patchwork Fri Jan 8 05:30:43 2021
X-Patchwork-Submitter: Saeed Mahameed
X-Patchwork-Id: 359560
From: Saeed Mahameed
To: "David S. Miller", Jakub Kicinski
Cc: netdev@vger.kernel.org, Jianbo Liu, Oz Shlomo, Roi Dayan,
 Saeed Mahameed
Subject: [net-next 04/15] net/mlx5e: E-Switch, Offload all chain 0 priorities
 when modify header and forward action is not supported
Date: Thu, 7 Jan 2021 21:30:43 -0800
Message-Id: <20210108053054.660499-5-saeed@kernel.org>
In-Reply-To: <20210108053054.660499-1-saeed@kernel.org>
References: <20210108053054.660499-1-saeed@kernel.org>

From: Jianbo Liu

Miss path handling of tc multi-chain filters (i.e. filters that are
defined on chain > 0) requires the hardware to communicate to the
driver the last chain that was processed. This is possible only when
the hardware is capable of performing the combination of modify header
and forward to table actions. Currently, if the hardware is missing
this capability, the driver only offloads rules that are defined on tc
chain 0 prio 1. However, this restriction can be relaxed because
packets that miss from chain 0 are processed through all the priorities
by tc software.

Allow the offload of all the supported priorities for chain 0 even when
the hardware is not capable of performing modify header and goto table
actions.
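To make the relaxed policy concrete, here is a minimal, self-contained
userspace model of the priority-range decision described above. The
capability flag and the 16-priority tc range are assumptions made for
illustration only; the helper name merely mirrors
mlx5_chains_get_prio_range() and this is not driver code.

#include <limits.h>
#include <stdbool.h>
#include <stdio.h>

#define FS_CHAINS_DEFAULT_PRIO_RANGE 16 /* hypothetical tc prio range */

struct fs_chains_caps {
        bool ignore_flow_level; /* can combine modify header + goto table */
};

static unsigned int prio_range(const struct fs_chains_caps *caps)
{
        if (caps->ignore_flow_level)
                return UINT_MAX;
        /* Before this patch the fallback was 1 (only prio 1 offloadable);
         * after it, the full chain-0 tc range is allowed because chain-0
         * misses are re-evaluated by tc software across all priorities.
         */
        return FS_CHAINS_DEFAULT_PRIO_RANGE;
}

int main(void)
{
        struct fs_chains_caps old_fw = { .ignore_flow_level = false };

        printf("offloadable prios without FW support: %u\n",
               prio_range(&old_fw));
        return 0;
}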
Fixes: 0b3a8b6b5340 ("net/mlx5: E-Switch: Fix using fwd and modify when firmware doesn't support it")
Signed-off-by: Jianbo Liu
Reviewed-by: Oz Shlomo
Reviewed-by: Roi Dayan
Signed-off-by: Saeed Mahameed
---
 drivers/net/ethernet/mellanox/mlx5/core/en_tc.c         | 6 ------
 drivers/net/ethernet/mellanox/mlx5/core/lib/fs_chains.c | 3 ---
 2 files changed, 9 deletions(-)

diff --git a/drivers/net/ethernet/mellanox/mlx5/core/en_tc.c b/drivers/net/ethernet/mellanox/mlx5/core/en_tc.c
index 4cdf834fa74a..56aa39ac1a1c 100644
--- a/drivers/net/ethernet/mellanox/mlx5/core/en_tc.c
+++ b/drivers/net/ethernet/mellanox/mlx5/core/en_tc.c
@@ -1317,12 +1317,6 @@ mlx5e_tc_add_fdb_flow(struct mlx5e_priv *priv,
 	int err = 0;
 	int out_index;
 
-	if (!mlx5_chains_prios_supported(esw_chains(esw)) && attr->prio != 1) {
-		NL_SET_ERR_MSG_MOD(extack,
-				   "E-switch priorities unsupported, upgrade FW");
-		return -EOPNOTSUPP;
-	}
-
 	/* We check chain range only for tc flows.
 	 * For ft flows, we checked attr->chain was originally 0 and set it to
 	 * FDB_FT_CHAIN which is outside tc range.
diff --git a/drivers/net/ethernet/mellanox/mlx5/core/lib/fs_chains.c b/drivers/net/ethernet/mellanox/mlx5/core/lib/fs_chains.c
index 947f346bdc2d..fa61d305990c 100644
--- a/drivers/net/ethernet/mellanox/mlx5/core/lib/fs_chains.c
+++ b/drivers/net/ethernet/mellanox/mlx5/core/lib/fs_chains.c
@@ -141,9 +141,6 @@ u32 mlx5_chains_get_nf_ft_chain(struct mlx5_fs_chains *chains)
 
 u32 mlx5_chains_get_prio_range(struct mlx5_fs_chains *chains)
 {
-	if (!mlx5_chains_prios_supported(chains))
-		return 1;
-
 	if (mlx5_chains_ignore_flow_level_supported(chains))
 		return UINT_MAX;

From patchwork Fri Jan 8 05:30:45 2021
X-Patchwork-Submitter: Saeed Mahameed
X-Patchwork-Id: 359553
From: Saeed Mahameed
To: "David S. Miller", Jakub Kicinski
Cc: netdev@vger.kernel.org, Roi Dayan, Oz Shlomo, Paul Blakey,
 Saeed Mahameed
Subject: [net-next 06/15] net/mlx5e: Remove redundant initialization to null
Date: Thu, 7 Jan 2021 21:30:45 -0800
Message-Id: <20210108053054.660499-7-saeed@kernel.org>
In-Reply-To: <20210108053054.660499-1-saeed@kernel.org>
References: <20210108053054.660499-1-saeed@kernel.org>

From: Roi Dayan

The miss_rule and prio_s variables are not referenced before being
assigned, so there is no need to initialize them.

Signed-off-by: Roi Dayan
Reviewed-by: Oz Shlomo
Reviewed-by: Paul Blakey
Signed-off-by: Saeed Mahameed
---
 drivers/net/ethernet/mellanox/mlx5/core/lib/fs_chains.c | 4 ++--
 1 file changed, 2 insertions(+), 2 deletions(-)

diff --git a/drivers/net/ethernet/mellanox/mlx5/core/lib/fs_chains.c b/drivers/net/ethernet/mellanox/mlx5/core/lib/fs_chains.c
index fa61d305990c..381325b4a863 100644
--- a/drivers/net/ethernet/mellanox/mlx5/core/lib/fs_chains.c
+++ b/drivers/net/ethernet/mellanox/mlx5/core/lib/fs_chains.c
@@ -538,13 +538,13 @@ mlx5_chains_create_prio(struct mlx5_fs_chains *chains,
 			u32 chain, u32 prio, u32 level)
 {
 	int inlen = MLX5_ST_SZ_BYTES(create_flow_group_in);
-	struct mlx5_flow_handle *miss_rule = NULL;
+	struct mlx5_flow_handle *miss_rule;
 	struct mlx5_flow_group *miss_group;
 	struct mlx5_flow_table *next_ft;
 	struct mlx5_flow_table *ft;
-	struct prio *prio_s = NULL;
 	struct fs_chain *chain_s;
 	struct list_head *pos;
+	struct prio *prio_s;
 	u32 *flow_group_in;
 	int err;
Miller" , Jakub Kicinski Cc: netdev@vger.kernel.org, Roi Dayan , Paul Blakey , Saeed Mahameed Subject: [net-next 07/15] net/mlx5e: CT: Remove redundant usage of zone mask Date: Thu, 7 Jan 2021 21:30:46 -0800 Message-Id: <20210108053054.660499-8-saeed@kernel.org> X-Mailer: git-send-email 2.26.2 In-Reply-To: <20210108053054.660499-1-saeed@kernel.org> References: <20210108053054.660499-1-saeed@kernel.org> MIME-Version: 1.0 Precedence: bulk List-ID: X-Mailing-List: netdev@vger.kernel.org From: Roi Dayan The zone member is of type u16 so there is no reason to apply the zone mask on it. This is also matching the call to set a match in other places which don't need and don't apply the mask. Signed-off-by: Roi Dayan Reviewed-by: Paul Blakey Signed-off-by: Saeed Mahameed --- drivers/net/ethernet/mellanox/mlx5/core/en/tc_ct.c | 4 +--- 1 file changed, 1 insertion(+), 3 deletions(-) diff --git a/drivers/net/ethernet/mellanox/mlx5/core/en/tc_ct.c b/drivers/net/ethernet/mellanox/mlx5/core/en/tc_ct.c index a0b193181ba5..d7ecd5e5f7c4 100644 --- a/drivers/net/ethernet/mellanox/mlx5/core/en/tc_ct.c +++ b/drivers/net/ethernet/mellanox/mlx5/core/en/tc_ct.c @@ -703,9 +703,7 @@ mlx5_tc_ct_entry_add_rule(struct mlx5_tc_ct_priv *ct_priv, attr->flags |= MLX5_ESW_ATTR_FLAG_NO_IN_PORT; mlx5_tc_ct_set_tuple_match(netdev_priv(ct_priv->netdev), spec, flow_rule); - mlx5e_tc_match_to_reg_match(spec, ZONE_TO_REG, - entry->tuple.zone & MLX5_CT_ZONE_MASK, - MLX5_CT_ZONE_MASK); + mlx5e_tc_match_to_reg_match(spec, ZONE_TO_REG, entry->tuple.zone, MLX5_CT_ZONE_MASK); zone_rule->rule = mlx5_tc_rule_insert(priv, spec, attr); if (IS_ERR(zone_rule->rule)) { From patchwork Fri Jan 8 05:30:47 2021 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Saeed Mahameed X-Patchwork-Id: 359554 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org X-Spam-Level: X-Spam-Status: No, score=-19.2 required=3.0 tests=BAYES_00,DKIMWL_WL_HIGH, DKIM_SIGNED, DKIM_VALID, DKIM_VALID_AU, INCLUDES_CR_TRAILER, INCLUDES_PATCH, MAILING_LIST_MULTI, SPF_HELO_NONE, SPF_PASS, URIBL_BLOCKED, USER_AGENT_GIT autolearn=ham autolearn_force=no version=3.4.0 Received: from mail.kernel.org (mail.kernel.org [198.145.29.99]) by smtp.lore.kernel.org (Postfix) with ESMTP id 5CFAFC433E0 for ; Fri, 8 Jan 2021 05:32:33 +0000 (UTC) Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by mail.kernel.org (Postfix) with ESMTP id 3A4D0233EE for ; Fri, 8 Jan 2021 05:32:33 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S1727834AbhAHFcc (ORCPT ); Fri, 8 Jan 2021 00:32:32 -0500 Received: from mail.kernel.org ([198.145.29.99]:35862 "EHLO mail.kernel.org" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S1727495AbhAHFcS (ORCPT ); Fri, 8 Jan 2021 00:32:18 -0500 Received: by mail.kernel.org (Postfix) with ESMTPSA id 07490235FC; Fri, 8 Jan 2021 05:30:59 +0000 (UTC) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=kernel.org; s=k20201202; t=1610083860; bh=jl1SFtCgX4vu4c2Smrtp/L37oy72Y1Dx5tuHWugVtKc=; h=From:To:Cc:Subject:Date:In-Reply-To:References:From; b=UwHkuhyraX/ZS8dxIjKsbrEpKAQ9DyifryfsafSDh1BtARE7IBdcYCWbyXTEdXVMa q5Fq/jqyQptlddwfUd0baAd+dD/h8uMB5/IV1CIyKHAez05+7kI3qTYQ9iJucQ+KaT xGNh3pJiFUaD/27Ow5SBfrMqzVj3vPnhzK1Vt++sgxEJneJ5TS2Ft5Fm7fcV5Gt1PY pQUV2Ztd73B096l7FaPxIZ3AijCMDpGarBPgusEOM/pz5rtQxQCPFX7CQVJS6AI5Cm i+70xQaMfpR9H6jgtxeBpr+otBz0gDhpaivGcN/83U0gxxfHox9jeTxTKU4JmL6k7a Ip+sO68fb9UVQ== From: 
From patchwork Fri Jan 8 05:30:47 2021
X-Patchwork-Submitter: Saeed Mahameed
X-Patchwork-Id: 359554
From: Saeed Mahameed
To: "David S. Miller", Jakub Kicinski
Cc: netdev@vger.kernel.org, Roi Dayan, Paul Blakey, Saeed Mahameed
Subject: [net-next 08/15] net/mlx5e: CT: Preparation for offloading +trk+new
 ct rules
Date: Thu, 7 Jan 2021 21:30:47 -0800
Message-Id: <20210108053054.660499-9-saeed@kernel.org>
In-Reply-To: <20210108053054.660499-1-saeed@kernel.org>
References: <20210108053054.660499-1-saeed@kernel.org>

From: Roi Dayan

Connection tracking associates a connection state with each packet. The
first packet of a connection is assigned the +trk+new state. The
connection enters the established state once a packet is seen in the
other direction.

Currently we offload only established flows. However, UDP traffic using
source port entropy (e.g. vxlan, RoCE) will never enter the established
state. Such protocols do not require stateful processing, and can
therefore be offloaded.

The change in the model is that a miss on the CT table is forwarded to
a new +trk+new ct table, and a miss there is forwarded to the slow path
table.

Signed-off-by: Roi Dayan
Reviewed-by: Paul Blakey
Signed-off-by: Saeed Mahameed
---
 .../ethernet/mellanox/mlx5/core/en/tc_ct.c    | 104 ++++++++++++++++--
 1 file changed, 96 insertions(+), 8 deletions(-)

diff --git a/drivers/net/ethernet/mellanox/mlx5/core/en/tc_ct.c b/drivers/net/ethernet/mellanox/mlx5/core/en/tc_ct.c
index d7ecd5e5f7c4..6dac2fabb7f5 100644
--- a/drivers/net/ethernet/mellanox/mlx5/core/en/tc_ct.c
+++ b/drivers/net/ethernet/mellanox/mlx5/core/en/tc_ct.c
@@ -21,6 +21,7 @@
 #include "en.h"
 #include "en_tc.h"
 #include "en_rep.h"
+#include "fs_core.h"
 
 #define MLX5_CT_ZONE_BITS (mlx5e_tc_attr_to_reg_mappings[ZONE_TO_REG].mlen * 8)
 #define MLX5_CT_ZONE_MASK GENMASK(MLX5_CT_ZONE_BITS - 1, 0)
@@ -50,6 +51,9 @@ struct mlx5_tc_ct_priv {
 	struct mlx5_flow_table *ct;
 	struct mlx5_flow_table *ct_nat;
 	struct mlx5_flow_table *post_ct;
+	struct mlx5_flow_table *trk_new_ct;
+	struct mlx5_flow_group *miss_grp;
+	struct mlx5_flow_handle *miss_rule;
 	struct mutex control_lock; /* guards parallel adds/dels */
 	struct mutex shared_counter_lock;
 	struct mapping_ctx *zone_mapping;
@@ -1490,14 +1494,14 @@ mlx5_tc_ct_del_ft_cb(struct mlx5_tc_ct_priv *ct_priv, struct mlx5_ct_ft *ft)
  *      | set zone
  *      v
  * +--------------------+
- * + CT (nat or no nat) +
- * + tuple + zone match +
- * +--------------------+
- *      | set mark
- *      | set labels_id
- *      | set established
- *      | set zone_restore
- *      | do nat (if needed)
+ * + CT (nat or no nat) +  miss   +---------------------+  miss
+ * + tuple + zone match +----------------->+ trk_new_ct +-------> SW
+ * +--------------------+         + vxlan||roce match   +
+ *      | set mark                +---------------------+
+ *      | set labels_id             | set ct_state +trk+new
+ *      | set established           | set zone_restore
+ *      | set zone_restore          v
+ *      | do nat (if needed)       post_ct
  *      v
  * +--------------+
 * + post_ct + original filter actions
@@ -1893,6 +1897,72 @@ mlx5_tc_ct_init_check_support(struct mlx5e_priv *priv,
 	return mlx5_tc_ct_init_check_nic_support(priv, err_msg);
 }
 
+static struct mlx5_flow_handle *
+tc_ct_add_miss_rule(struct mlx5_flow_table *ft,
+		    struct mlx5_flow_table *next_ft)
+{
+	struct mlx5_flow_destination dest = {};
+	struct mlx5_flow_act act = {};
+
+	act.flags = FLOW_ACT_IGNORE_FLOW_LEVEL | FLOW_ACT_NO_APPEND;
+	act.action = MLX5_FLOW_CONTEXT_ACTION_FWD_DEST;
+	dest.type = MLX5_FLOW_DESTINATION_TYPE_FLOW_TABLE;
+	dest.ft = next_ft;
+
+	return mlx5_add_flow_rules(ft, NULL, &act, &dest, 1);
+}
+
+static int
+tc_ct_add_ct_table_miss_rule(struct mlx5_tc_ct_priv *ct_priv)
+{
+	int inlen = MLX5_ST_SZ_BYTES(create_flow_group_in);
+	struct mlx5_flow_handle *miss_rule;
+	struct mlx5_flow_group *miss_group;
+	int max_fte = ct_priv->ct->max_fte;
+	u32 *flow_group_in;
+	int err = 0;
+
+	flow_group_in = kvzalloc(inlen, GFP_KERNEL);
+	if (!flow_group_in)
+		return -ENOMEM;
+
+	/* create miss group */
+	MLX5_SET(create_flow_group_in, flow_group_in, start_flow_index,
+		 max_fte - 2);
+	MLX5_SET(create_flow_group_in, flow_group_in, end_flow_index,
+		 max_fte - 1);
+	miss_group = mlx5_create_flow_group(ct_priv->ct, flow_group_in);
+	if (IS_ERR(miss_group)) {
+		err = PTR_ERR(miss_group);
+		goto err_miss_grp;
+	}
+
+	/* add miss rule to next fdb */
+	miss_rule = tc_ct_add_miss_rule(ct_priv->ct, ct_priv->trk_new_ct);
+	if (IS_ERR(miss_rule)) {
+		err = PTR_ERR(miss_rule);
+		goto err_miss_rule;
+	}
+
+	ct_priv->miss_grp = miss_group;
+	ct_priv->miss_rule = miss_rule;
+	kvfree(flow_group_in);
+	return 0;
+
+err_miss_rule:
+	mlx5_destroy_flow_group(miss_group);
+err_miss_grp:
+	kvfree(flow_group_in);
+	return err;
+}
+
+static void
+tc_ct_del_ct_table_miss_rule(struct mlx5_tc_ct_priv *ct_priv)
+{
+	mlx5_del_flow_rules(ct_priv->miss_rule);
+	mlx5_destroy_flow_group(ct_priv->miss_grp);
+}
+
 #define INIT_ERR_PREFIX "tc ct offload init failed"
 
 struct mlx5_tc_ct_priv *
@@ -1962,6 +2032,18 @@ mlx5_tc_ct_init(struct mlx5e_priv *priv, struct mlx5_fs_chains *chains,
 		goto err_post_ct_tbl;
 	}
 
+	ct_priv->trk_new_ct = mlx5_chains_create_global_table(chains);
+	if (IS_ERR(ct_priv->trk_new_ct)) {
+		err = PTR_ERR(ct_priv->trk_new_ct);
+		mlx5_core_warn(dev, "%s, failed to create trk new ct table err: %d",
+			       INIT_ERR_PREFIX, err);
+		goto err_trk_new_ct_tbl;
+	}
+
+	err = tc_ct_add_ct_table_miss_rule(ct_priv);
+	if (err)
+		goto err_init_ct_tbl;
+
 	idr_init(&ct_priv->fte_ids);
 	mutex_init(&ct_priv->control_lock);
 	mutex_init(&ct_priv->shared_counter_lock);
@@ -1971,6 +2053,10 @@ mlx5_tc_ct_init(struct mlx5e_priv *priv, struct mlx5_fs_chains *chains,
 
 	return ct_priv;
 
+err_init_ct_tbl:
+	mlx5_chains_destroy_global_table(chains, ct_priv->trk_new_ct);
+err_trk_new_ct_tbl:
+	mlx5_chains_destroy_global_table(chains, ct_priv->post_ct);
 err_post_ct_tbl:
 	mlx5_chains_destroy_global_table(chains, ct_priv->ct_nat);
 err_ct_nat_tbl:
@@ -1997,6 +2083,8 @@ mlx5_tc_ct_clean(struct mlx5_tc_ct_priv *ct_priv)
 
 	chains = ct_priv->chains;
 
+	tc_ct_del_ct_table_miss_rule(ct_priv);
+	mlx5_chains_destroy_global_table(chains, ct_priv->trk_new_ct);
 	mlx5_chains_destroy_global_table(chains, ct_priv->post_ct);
 	mlx5_chains_destroy_global_table(chains, ct_priv->ct_nat);
 	mlx5_chains_destroy_global_table(chains, ct_priv->ct);
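A userspace sketch of the miss chain this patch introduces, matching
the updated diagram above: a CT-table miss now lands in trk_new_ct, and
only a miss there goes to the slow path. The packet structure is
invented for illustration; the port numbers mirror the defaults used
later in this series.

#include <stdbool.h>
#include <stdio.h>

struct pkt {
        unsigned short zone;
        unsigned short udp_dport;
        bool known_tuple;
};

static const char *classify(const struct pkt *p)
{
        if (p->known_tuple)
                return "CT table hit: set established, goto post_ct";
        /* CT table miss -> trk_new_ct */
        if (p->udp_dport == 4789 || p->udp_dport == 4791 ||
            p->udp_dport == 6081)
                return "trk_new_ct hit: set +trk+new, goto post_ct";
        /* trk_new_ct miss -> slow path (software) */
        return "miss: send to slow path";
}

int main(void)
{
        struct pkt vxlan = { .zone = 0, .udp_dport = 4789, .known_tuple = false };
        struct pkt other = { .zone = 0, .udp_dport = 53, .known_tuple = false };

        printf("%s\n", classify(&vxlan));
        printf("%s\n", classify(&other));
        return 0;
}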
From patchwork Fri Jan 8 05:30:48 2021
X-Patchwork-Submitter: Saeed Mahameed
X-Patchwork-Id: 359556
From: Saeed Mahameed
To: "David S. Miller", Jakub Kicinski
Cc: netdev@vger.kernel.org, Roi Dayan, Paul Blakey, Saeed Mahameed
Subject: [net-next 09/15] net/mlx5e: CT: Support offload of +trk+new ct rules
Date: Thu, 7 Jan 2021 21:30:48 -0800
Message-Id: <20210108053054.660499-10-saeed@kernel.org>
In-Reply-To: <20210108053054.660499-1-saeed@kernel.org>
References: <20210108053054.660499-1-saeed@kernel.org>

From: Roi Dayan

Add support for offloading +trk+new rules for terminating flows of UDP
protocols that use source port entropy. This kind of traffic will never
be considered connected by conntrack and thus never marked established,
so there is no need to track it in software conntrack; instead, offload
it based on the destination port.

In this commit we support only the default registered vxlan, RoCE and
Geneve ports. Using the registered ports assumes the traffic is that of
the registered protocol.
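The following is a minimal standalone model of how a +trk+new match
could be folded into a ctstate/ctstate_mask pair, mirroring the logic
added to mlx5_tc_ct_match_add() in the diff below. The bit positions
follow the patch; everything else is an illustrative sketch, not driver
code.

#include <stdbool.h>
#include <stdio.h>

#define CT_STATE_ESTABLISHED_BIT (1u << 1)
#define CT_STATE_TRK_BIT         (1u << 2)
#define CT_STATE_NEW_BIT         (1u << 4)

static void build_match(bool trk, bool new, bool est,
                        bool untrk, bool unnew, bool unest,
                        unsigned int *state, unsigned int *mask)
{
        *state = 0;
        *mask = 0;
        *state |= trk ? CT_STATE_TRK_BIT : 0;
        *state |= new ? CT_STATE_NEW_BIT : 0;
        *state |= est ? CT_STATE_ESTABLISHED_BIT : 0;
        /* a bit participates in the mask if matched on either + or - */
        *mask |= (untrk || trk) ? CT_STATE_TRK_BIT : 0;
        *mask |= (unnew || new) ? CT_STATE_NEW_BIT : 0;
        *mask |= (unest || est) ? CT_STATE_ESTABLISHED_BIT : 0;
}

int main(void)
{
        unsigned int state, mask;

        /* +trk+new, as in the rules this series starts to offload */
        build_match(true, true, false, false, false, false, &state, &mask);
        printf("+trk+new: state=0x%x mask=0x%x\n", state, mask);
        return 0;
}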
Signed-off-by: Roi Dayan
Reviewed-by: Paul Blakey
Signed-off-by: Saeed Mahameed
---
 .../ethernet/mellanox/mlx5/core/en/tc_ct.c    | 228 +++++++++++++++++-
 .../ethernet/mellanox/mlx5/core/en/tc_ct.h    |   6 +
 .../net/ethernet/mellanox/mlx5/core/en_tc.c   |  16 +-
 3 files changed, 236 insertions(+), 14 deletions(-)

diff --git a/drivers/net/ethernet/mellanox/mlx5/core/en/tc_ct.c b/drivers/net/ethernet/mellanox/mlx5/core/en/tc_ct.c
index 6dac2fabb7f5..b0c357f755d4 100644
--- a/drivers/net/ethernet/mellanox/mlx5/core/en/tc_ct.c
+++ b/drivers/net/ethernet/mellanox/mlx5/core/en/tc_ct.c
@@ -25,9 +25,6 @@
 #define MLX5_CT_ZONE_BITS (mlx5e_tc_attr_to_reg_mappings[ZONE_TO_REG].mlen * 8)
 #define MLX5_CT_ZONE_MASK GENMASK(MLX5_CT_ZONE_BITS - 1, 0)
-#define MLX5_CT_STATE_ESTABLISHED_BIT BIT(1)
-#define MLX5_CT_STATE_TRK_BIT BIT(2)
-#define MLX5_CT_STATE_NAT_BIT BIT(3)
 #define MLX5_FTE_ID_BITS (mlx5e_tc_attr_to_reg_mappings[FTEID_TO_REG].mlen * 8)
 #define MLX5_FTE_ID_MAX GENMASK(MLX5_FTE_ID_BITS - 1, 0)
@@ -39,6 +36,17 @@
 #define ct_dbg(fmt, args...)\
 	netdev_dbg(ct_priv->netdev, "ct_debug: " fmt "\n", ##args)
 
+#define IANA_VXLAN_UDP_PORT 4789
+#define ROCE_V2_UDP_DPORT 4791
+#define GENEVE_UDP_PORT 6081
+#define DEFAULT_UDP_PORTS 3
+
+static int default_udp_ports[] = {
+	IANA_VXLAN_UDP_PORT,
+	ROCE_V2_UDP_DPORT,
+	GENEVE_UDP_PORT,
+};
+
 struct mlx5_tc_ct_priv {
 	struct mlx5_core_dev *dev;
 	const struct net_device *netdev;
@@ -88,6 +96,16 @@ struct mlx5_tc_ct_pre {
 	struct mlx5_modify_hdr *modify_hdr;
 };
 
+struct mlx5_tc_ct_trk_new_rule {
+	struct mlx5_flow_handle *flow_rule;
+	struct list_head list;
+};
+
+struct mlx5_tc_ct_trk_new_rules {
+	struct list_head rules;
+	struct mlx5_modify_hdr *modify_hdr;
+};
+
 struct mlx5_ct_ft {
 	struct rhash_head node;
 	u16 zone;
@@ -98,6 +116,8 @@ struct mlx5_ct_ft {
 	struct rhashtable ct_entries_ht;
 	struct mlx5_tc_ct_pre pre_ct;
 	struct mlx5_tc_ct_pre pre_ct_nat;
+	struct mlx5_tc_ct_trk_new_rules trk_new_rules;
+	struct nf_conn *tmpl;
 };
 
 struct mlx5_ct_tuple {
@@ -1064,7 +1084,7 @@ mlx5_tc_ct_match_add(struct mlx5_tc_ct_priv *priv,
 {
 	struct flow_rule *rule = flow_cls_offload_flow_rule(f);
 	struct flow_dissector_key_ct *mask, *key;
-	bool trk, est, untrk, unest, new;
+	bool trk, est, untrk, unest, new, unnew;
 	u32 ctstate = 0, ctstate_mask = 0;
 	u16 ct_state_on, ct_state_off;
 	u16 ct_state, ct_state_mask;
@@ -1102,19 +1122,16 @@ mlx5_tc_ct_match_add(struct mlx5_tc_ct_priv *priv,
 	new = ct_state_on & TCA_FLOWER_KEY_CT_FLAGS_NEW;
 	est = ct_state_on & TCA_FLOWER_KEY_CT_FLAGS_ESTABLISHED;
 	untrk = ct_state_off & TCA_FLOWER_KEY_CT_FLAGS_TRACKED;
+	unnew = ct_state_off & TCA_FLOWER_KEY_CT_FLAGS_NEW;
 	unest = ct_state_off & TCA_FLOWER_KEY_CT_FLAGS_ESTABLISHED;
 
 	ctstate |= trk ? MLX5_CT_STATE_TRK_BIT : 0;
+	ctstate |= new ? MLX5_CT_STATE_NEW_BIT : 0;
 	ctstate |= est ? MLX5_CT_STATE_ESTABLISHED_BIT : 0;
 	ctstate_mask |= (untrk || trk) ? MLX5_CT_STATE_TRK_BIT : 0;
+	ctstate_mask |= (unnew || new) ? MLX5_CT_STATE_NEW_BIT : 0;
 	ctstate_mask |= (unest || est) ? MLX5_CT_STATE_ESTABLISHED_BIT : 0;
 
-	if (new) {
-		NL_SET_ERR_MSG_MOD(extack,
-				   "matching on ct_state +new isn't supported");
-		return -EOPNOTSUPP;
-	}
-
 	if (mask->ct_zone)
 		mlx5e_tc_match_to_reg_match(spec, ZONE_TO_REG,
 					    key->ct_zone, MLX5_CT_ZONE_MASK);
@@ -1136,6 +1153,8 @@ mlx5_tc_ct_match_add(struct mlx5_tc_ct_priv *priv,
 					    MLX5_CT_LABELS_MASK);
 	}
 
+	ct_attr->ct_state = ctstate;
+
 	return 0;
 }
 
@@ -1390,10 +1409,157 @@ mlx5_tc_ct_free_pre_ct_tables(struct mlx5_ct_ft *ft)
 	mlx5_tc_ct_free_pre_ct(ft, &ft->pre_ct);
 }
 
+static void mlx5_tc_ct_set_match_dst_udp_port(struct mlx5_flow_spec *spec, u16 dst_port)
+{
+	void *headers_c = MLX5_ADDR_OF(fte_match_param, spec->match_criteria,
+				       outer_headers);
+	void *headers_v = MLX5_ADDR_OF(fte_match_param, spec->match_value,
+				       outer_headers);
+
+	MLX5_SET_TO_ONES(fte_match_set_lyr_2_4, headers_c, udp_dport);
+	MLX5_SET(fte_match_set_lyr_2_4, headers_v, udp_dport, dst_port);
+
+	spec->match_criteria_enable |= MLX5_MATCH_OUTER_HEADERS;
+}
+
+static struct mlx5_tc_ct_trk_new_rule *
+tc_ct_add_trk_new_rule(struct mlx5_ct_ft *ft, int port)
+{
+	struct mlx5_tc_ct_priv *ct_priv = ft->ct_priv;
+	struct mlx5_tc_ct_trk_new_rule *trk_new_rule;
+	struct mlx5_flow_destination dest = {};
+	struct mlx5_flow_act flow_act = {};
+	struct mlx5_flow_handle *rule;
+	struct mlx5_flow_spec *spec;
+	int err;
+
+	trk_new_rule = kzalloc(sizeof(*trk_new_rule), GFP_KERNEL);
+	if (!trk_new_rule)
+		return ERR_PTR(-ENOMEM);
+
+	spec = kzalloc(sizeof(*spec), GFP_KERNEL);
+	if (!spec) {
+		kfree(trk_new_rule);
+		return ERR_PTR(-ENOMEM);
+	}
+
+	flow_act.action = MLX5_FLOW_CONTEXT_ACTION_FWD_DEST |
+			  MLX5_FLOW_CONTEXT_ACTION_MOD_HDR;
+	flow_act.flags |= FLOW_ACT_IGNORE_FLOW_LEVEL;
+	flow_act.modify_hdr = ft->trk_new_rules.modify_hdr;
+	dest.type = MLX5_FLOW_DESTINATION_TYPE_FLOW_TABLE;
+	dest.ft = ct_priv->post_ct;
+
+	mlx5e_tc_match_to_reg_match(spec, ZONE_TO_REG, ft->zone, MLX5_CT_ZONE_MASK);
+	mlx5_tc_ct_set_match_dst_udp_port(spec, port);
+
+	rule = mlx5_add_flow_rules(ct_priv->trk_new_ct, spec, &flow_act, &dest, 1);
+	if (IS_ERR(rule)) {
+		err = PTR_ERR(rule);
+		ct_dbg("Failed to add trk_new rule for udp port %d, err %d", port, err);
+		goto err_insert;
+	}
+
+	kfree(spec);
+	trk_new_rule->flow_rule = rule;
+	list_add_tail(&trk_new_rule->list, &ft->trk_new_rules.rules);
+	return trk_new_rule;
+
+err_insert:
+	kfree(spec);
+	kfree(trk_new_rule);
+	return ERR_PTR(err);
+}
+
+static void
+tc_ct_del_trk_new_rule(struct mlx5_tc_ct_trk_new_rule *rule)
+{
+	list_del(&rule->list);
+	mlx5_del_flow_rules(rule->flow_rule);
+	kfree(rule);
+}
+
+static int
+tc_ct_init_trk_new_rules(struct mlx5_ct_ft *ft)
+{
+	struct mlx5_tc_ct_priv *ct_priv = ft->ct_priv;
+	struct mlx5_tc_ct_trk_new_rule *rule, *tmp;
+	struct mlx5e_tc_mod_hdr_acts mod_acts = {};
+	struct mlx5_modify_hdr *mod_hdr;
+	struct mlx5e_priv *priv;
+	u32 ct_state;
+	int i, err;
+
+	priv = netdev_priv(ct_priv->netdev);
+
+	ct_state = MLX5_CT_STATE_TRK_BIT | MLX5_CT_STATE_NEW_BIT;
+	err = mlx5e_tc_match_to_reg_set(priv->mdev, &mod_acts, ct_priv->ns_type,
+					CTSTATE_TO_REG, ct_state);
+	if (err) {
+		ct_dbg("Failed to set register for ct trk_new");
+		goto err_set_registers;
+	}
+
+	err = mlx5e_tc_match_to_reg_set(priv->mdev, &mod_acts, ct_priv->ns_type,
+					ZONE_RESTORE_TO_REG, ft->zone_restore_id);
+	if (err) {
+		ct_dbg("Failed to set register for ct trk_new zone restore");
+		goto err_set_registers;
+	}
+
+	mod_hdr = mlx5_modify_header_alloc(priv->mdev,
+					   ct_priv->ns_type,
+					   mod_acts.num_actions,
+					   mod_acts.actions);
+	if (IS_ERR(mod_hdr)) {
+		err = PTR_ERR(mod_hdr);
+		ct_dbg("Failed to create ct trk_new mod hdr");
+		goto err_set_registers;
+	}
+
+	ft->trk_new_rules.modify_hdr = mod_hdr;
+	dealloc_mod_hdr_actions(&mod_acts);
+
+	for (i = 0; i < DEFAULT_UDP_PORTS; i++) {
+		int port = default_udp_ports[i];
+
+		rule = tc_ct_add_trk_new_rule(ft, port);
+		if (IS_ERR(rule))
+			goto err_insert;
+	}
+
+	return 0;
+
+err_insert:
+	list_for_each_entry_safe(rule, tmp, &ft->trk_new_rules.rules, list)
+		tc_ct_del_trk_new_rule(rule);
+	mlx5_modify_header_dealloc(priv->mdev, mod_hdr);
err_set_registers:
+	dealloc_mod_hdr_actions(&mod_acts);
+	netdev_warn(priv->netdev,
+		    "Failed to offload ct trk_new flow, err %d\n", err);
+	return err;
+}
+
+static void
+tc_ct_cleanup_trk_new_rules(struct mlx5_ct_ft *ft)
+{
+	struct mlx5_tc_ct_priv *ct_priv = ft->ct_priv;
+	struct mlx5_tc_ct_trk_new_rule *rule, *tmp;
+	struct mlx5e_priv *priv;
+
+	list_for_each_entry_safe(rule, tmp, &ft->trk_new_rules.rules, list)
+		tc_ct_del_trk_new_rule(rule);
+
+	priv = netdev_priv(ct_priv->netdev);
+	mlx5_modify_header_dealloc(priv->mdev, ft->trk_new_rules.modify_hdr);
+}
+
 static struct mlx5_ct_ft *
 mlx5_tc_ct_add_ft_cb(struct mlx5_tc_ct_priv *ct_priv, u16 zone,
 		     struct nf_flowtable *nf_ft)
 {
+	struct nf_conntrack_zone ctzone;
 	struct mlx5_ct_ft *ft;
 	int err;
 
@@ -1415,11 +1581,16 @@ mlx5_tc_ct_add_ft_cb(struct mlx5_tc_ct_priv *ct_priv, u16 zone,
 	ft->nf_ft = nf_ft;
 	ft->ct_priv = ct_priv;
 	refcount_set(&ft->refcount, 1);
+	INIT_LIST_HEAD(&ft->trk_new_rules.rules);
 
 	err = mlx5_tc_ct_alloc_pre_ct_tables(ft);
 	if (err)
 		goto err_alloc_pre_ct;
 
+	err = tc_ct_init_trk_new_rules(ft);
+	if (err)
+		goto err_add_trk_new_rules;
+
 	err = rhashtable_init(&ft->ct_entries_ht, &cts_ht_params);
 	if (err)
 		goto err_init;
@@ -1429,6 +1600,14 @@ mlx5_tc_ct_add_ft_cb(struct mlx5_tc_ct_priv *ct_priv, u16 zone,
 	if (err)
 		goto err_insert;
 
+	nf_ct_zone_init(&ctzone, zone, NF_CT_DEFAULT_ZONE_DIR, 0);
+	ft->tmpl = nf_ct_tmpl_alloc(&init_net, &ctzone, GFP_KERNEL);
+	if (!ft->tmpl)
+		goto err_tmpl;
+
+	__set_bit(IPS_CONFIRMED_BIT, &ft->tmpl->status);
+	nf_conntrack_get(&ft->tmpl->ct_general);
+
 	err = nf_flow_table_offload_add_cb(ft->nf_ft,
 					   mlx5_tc_ct_block_flow_offload, ft);
 	if (err)
@@ -1437,10 +1616,14 @@ mlx5_tc_ct_add_ft_cb(struct mlx5_tc_ct_priv *ct_priv, u16 zone,
 	return ft;
 
 err_add_cb:
+	nf_conntrack_put(&ft->tmpl->ct_general);
+err_tmpl:
 	rhashtable_remove_fast(&ct_priv->zone_ht, &ft->node, zone_params);
 err_insert:
 	rhashtable_destroy(&ft->ct_entries_ht);
 err_init:
+	tc_ct_cleanup_trk_new_rules(ft);
+err_add_trk_new_rules:
 	mlx5_tc_ct_free_pre_ct_tables(ft);
 err_alloc_pre_ct:
 	mapping_remove(ct_priv->zone_mapping, ft->zone_restore_id);
@@ -1471,6 +1654,8 @@ mlx5_tc_ct_del_ft_cb(struct mlx5_tc_ct_priv *ct_priv, struct mlx5_ct_ft *ft)
 	rhashtable_free_and_destroy(&ft->ct_entries_ht,
 				    mlx5_tc_ct_flush_ft_entry,
 				    ct_priv);
+	nf_conntrack_put(&ft->tmpl->ct_general);
+	tc_ct_cleanup_trk_new_rules(ft);
 	mlx5_tc_ct_free_pre_ct_tables(ft);
 	mapping_remove(ct_priv->zone_mapping, ft->zone_restore_id);
 	kfree(ft);
@@ -2100,6 +2285,27 @@ mlx5_tc_ct_clean(struct mlx5_tc_ct_priv *ct_priv)
 	kfree(ct_priv);
 }
 
+static bool
+mlx5e_tc_ct_restore_trk_new(struct mlx5_tc_ct_priv *ct_priv,
+			    struct sk_buff *skb,
+			    struct mlx5_ct_tuple *tuple,
+			    u16 zone)
+{
+	struct mlx5_ct_ft *ft;
+
+	if ((ntohs(tuple->port.dst) != IANA_VXLAN_UDP_PORT) &&
+	    (ntohs(tuple->port.dst) != ROCE_V2_UDP_DPORT))
+		return false;
+
+	ft = rhashtable_lookup_fast(&ct_priv->zone_ht, &zone, zone_params);
+	if (!ft)
+		return false;
+
+	nf_conntrack_get(&ft->tmpl->ct_general);
+	nf_ct_set(skb, ft->tmpl, IP_CT_NEW);
+	return true;
+}
+
 bool
 mlx5e_tc_ct_restore_flow(struct mlx5_tc_ct_priv *ct_priv,
 			 struct sk_buff *skb, u8 zone_restore_id)
@@ -2123,7 +2329,7 @@ mlx5e_tc_ct_restore_flow(struct mlx5_tc_ct_priv *ct_priv,
 		entry = rhashtable_lookup_fast(&ct_priv->ct_tuples_nat_ht, &tuple,
 					       tuples_nat_ht_params);
 	if (!entry)
-		return false;
+		return mlx5e_tc_ct_restore_trk_new(ct_priv, skb, &tuple, zone);
 
 	tcf_ct_flow_table_restore_skb(skb, entry->restore_cookie);
 	return true;
diff --git a/drivers/net/ethernet/mellanox/mlx5/core/en/tc_ct.h b/drivers/net/ethernet/mellanox/mlx5/core/en/tc_ct.h
index 6503b614337c..f730dbfbb02c 100644
--- a/drivers/net/ethernet/mellanox/mlx5/core/en/tc_ct.h
+++ b/drivers/net/ethernet/mellanox/mlx5/core/en/tc_ct.h
@@ -10,6 +10,11 @@
 
 #include "en.h"
 
+#define MLX5_CT_STATE_ESTABLISHED_BIT BIT(1)
+#define MLX5_CT_STATE_TRK_BIT BIT(2)
+#define MLX5_CT_STATE_NAT_BIT BIT(3)
+#define MLX5_CT_STATE_NEW_BIT BIT(4)
+
 struct mlx5_flow_attr;
 struct mlx5e_tc_mod_hdr_acts;
 struct mlx5_rep_uplink_priv;
@@ -28,6 +33,7 @@ struct mlx5_ct_attr {
 	struct mlx5_ct_flow *ct_flow;
 	struct nf_flowtable *nf_ft;
 	u32 ct_labels_id;
+	u32 ct_state;
 };
 
 #define zone_to_reg_ct {\
diff --git a/drivers/net/ethernet/mellanox/mlx5/core/en_tc.c b/drivers/net/ethernet/mellanox/mlx5/core/en_tc.c
index 56aa39ac1a1c..5cf7c221404b 100644
--- a/drivers/net/ethernet/mellanox/mlx5/core/en_tc.c
+++ b/drivers/net/ethernet/mellanox/mlx5/core/en_tc.c
@@ -3255,11 +3255,11 @@ static bool actions_match_supported(struct mlx5e_priv *priv,
 				    struct mlx5e_tc_flow *flow,
 				    struct netlink_ext_ack *extack)
 {
-	bool ct_flow = false, ct_clear = false;
+	bool ct_flow = false, ct_clear = false, ct_new = false;
 	u32 actions;
 
-	ct_clear = flow->attr->ct_attr.ct_action &
-		TCA_CT_ACT_CLEAR;
+	ct_clear = flow->attr->ct_attr.ct_action & TCA_CT_ACT_CLEAR;
+	ct_new = flow->attr->ct_attr.ct_state & MLX5_CT_STATE_NEW_BIT;
 	ct_flow = flow_flag_test(flow, CT) && !ct_clear;
 	actions = flow->attr->action;
 
@@ -3274,6 +3274,16 @@ static bool actions_match_supported(struct mlx5e_priv *priv,
 		}
 	}
 
+	if (ct_new && ct_flow) {
+		NL_SET_ERR_MSG_MOD(extack, "Can't offload ct_state new with action ct");
+		return false;
+	}
+
+	if (ct_new && flow->attr->dest_chain) {
+		NL_SET_ERR_MSG_MOD(extack, "Can't offload ct_state new with action goto");
+		return false;
+	}
+
 	if (actions & MLX5_FLOW_CONTEXT_ACTION_MOD_HDR)
 		return modify_header_match_supported(priv, &parse_attr->spec,
 						     flow_action, actions,
From patchwork Fri Jan 8 05:30:49 2021
X-Patchwork-Submitter: Saeed Mahameed
X-Patchwork-Id: 359555
From: Saeed Mahameed
To: "David S. Miller", Jakub Kicinski
Cc: netdev@vger.kernel.org, Paul Blakey, Maor Dickman, Roi Dayan,
 Saeed Mahameed
Subject: [net-next 10/15] net/mlx5e: CT: Add support for mirroring
Date: Thu, 7 Jan 2021 21:30:49 -0800
Message-Id: <20210108053054.660499-11-saeed@kernel.org>
In-Reply-To: <20210108053054.660499-1-saeed@kernel.org>
References: <20210108053054.660499-1-saeed@kernel.org>

From: Paul Blakey

Add support for mirroring before the CT action by splitting the pre-ct
rule. Mirror outputs are performed first, on the tc chain/prio table
rule (the fwd rule), which then forwards to a per-port fwd table. On
this fwd table, we insert the original pre-ct rule that forwards to the
ct/ct-nat table.

Signed-off-by: Paul Blakey
Signed-off-by: Maor Dickman
Reviewed-by: Roi Dayan
Signed-off-by: Saeed Mahameed
---
 .../ethernet/mellanox/mlx5/core/en/tc_ct.c    |  4 +++
 .../net/ethernet/mellanox/mlx5/core/en_tc.c   | 25 +++++++++---------
 2 files changed, 17 insertions(+), 12 deletions(-)

diff --git a/drivers/net/ethernet/mellanox/mlx5/core/en/tc_ct.c b/drivers/net/ethernet/mellanox/mlx5/core/en/tc_ct.c
index b0c357f755d4..9a189c06ab56 100644
--- a/drivers/net/ethernet/mellanox/mlx5/core/en/tc_ct.c
+++ b/drivers/net/ethernet/mellanox/mlx5/core/en/tc_ct.c
@@ -1825,6 +1825,10 @@ __mlx5_tc_ct_flow_offload(struct mlx5_tc_ct_priv *ct_priv,
 	ct_flow->post_ct_attr->prio = 0;
 	ct_flow->post_ct_attr->ft = ct_priv->post_ct;
 
+	/* Splits were handled before CT */
+	if (ct_priv->ns_type == MLX5_FLOW_NAMESPACE_FDB)
+		ct_flow->post_ct_attr->esw_attr->split_count = 0;
+
 	ct_flow->post_ct_attr->inner_match_level = MLX5_MATCH_NONE;
 	ct_flow->post_ct_attr->outer_match_level = MLX5_MATCH_NONE;
 	ct_flow->post_ct_attr->action &= ~(MLX5_FLOW_CONTEXT_ACTION_DECAP);
diff --git a/drivers/net/ethernet/mellanox/mlx5/core/en_tc.c b/drivers/net/ethernet/mellanox/mlx5/core/en_tc.c
index 5cf7c221404b..89bb464850a1 100644
--- a/drivers/net/ethernet/mellanox/mlx5/core/en_tc.c
+++ b/drivers/net/ethernet/mellanox/mlx5/core/en_tc.c
@@ -1165,19 +1165,23 @@ mlx5e_tc_offload_fdb_rules(struct mlx5_eswitch *esw,
 
 	if (flow_flag_test(flow, CT)) {
 		mod_hdr_acts = &attr->parse_attr->mod_hdr_acts;
 
-		return mlx5_tc_ct_flow_offload(get_ct_priv(flow->priv),
+		rule = mlx5_tc_ct_flow_offload(get_ct_priv(flow->priv),
 					       flow, spec, attr,
 					       mod_hdr_acts);
+	} else {
+		rule = mlx5_eswitch_add_offloaded_rule(esw, spec, attr);
 	}
 
-	rule = mlx5_eswitch_add_offloaded_rule(esw, spec, attr);
 	if (IS_ERR(rule))
 		return rule;
 
 	if (attr->esw_attr->split_count) {
 		flow->rule[1] = mlx5_eswitch_add_fwd_rule(esw, spec, attr);
 		if (IS_ERR(flow->rule[1])) {
-			mlx5_eswitch_del_offloaded_rule(esw, rule, attr);
+			if (flow_flag_test(flow, CT))
+				mlx5_tc_ct_delete_flow(get_ct_priv(flow->priv), flow, attr);
+			else
+				mlx5_eswitch_del_offloaded_rule(esw, rule, attr);
 			return flow->rule[1];
 		}
 	}
@@ -1192,14 +1196,14 @@ mlx5e_tc_unoffload_fdb_rules(struct mlx5_eswitch *esw,
 {
 	flow_flag_clear(flow, OFFLOADED);
 
+	if (attr->esw_attr->split_count)
+		mlx5_eswitch_del_fwd_rule(esw, flow->rule[1], attr);
+
 	if (flow_flag_test(flow, CT)) {
 		mlx5_tc_ct_delete_flow(get_ct_priv(flow->priv), flow, attr);
 		return;
 	}
 
-	if (attr->esw_attr->split_count)
-		mlx5_eswitch_del_fwd_rule(esw, flow->rule[1], attr);
-
 	mlx5_eswitch_del_offloaded_rule(esw, flow->rule[0], attr);
 }
 
@@ -3264,7 +3268,8 @@ static bool actions_match_supported(struct mlx5e_priv *priv,
 	actions = flow->attr->action;
 
 	if (mlx5e_is_eswitch_flow(flow)) {
-		if (flow->attr->esw_attr->split_count && ct_flow) {
+		if (flow->attr->esw_attr->split_count && ct_flow &&
+		    !MLX5_CAP_GEN(flow->attr->esw_attr->in_mdev, reg_c_preserve)) {
 			/* All registers used by ct are cleared when using
 			 * split rules.
 			 */
@@ -4373,6 +4378,7 @@ static int parse_tc_fdb_actions(struct mlx5e_priv *priv,
 				return err;
 
 			flow_flag_set(flow, CT);
+			esw_attr->split_count = esw_attr->out_count;
 			break;
 		default:
 			NL_SET_ERR_MSG_MOD(extack, "The offload action is not supported");
@@ -4432,11 +4438,6 @@ static int parse_tc_fdb_actions(struct mlx5e_priv *priv,
 			return -EOPNOTSUPP;
 		}
 
-		if (attr->action & MLX5_FLOW_CONTEXT_ACTION_FWD_DEST) {
-			NL_SET_ERR_MSG_MOD(extack,
-					   "Mirroring goto chain rules isn't supported");
-			return -EOPNOTSUPP;
-		}
 		attr->action |= MLX5_FLOW_CONTEXT_ACTION_FWD_DEST;
 	}
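To visualize the split described above, a toy model: the first-level
rule performs the mirror outputs and forwards to a per-port table,
where the original pre-ct rule then forwards into the CT tables. All
table and field names are invented; this is a sketch of the rule
topology, not driver code.

#include <stdio.h>

struct rule {
        const char *match;
        const char *actions;
        const char *dest;
};

int main(void)
{
        /* chain/prio table: mirror first, then indirection */
        struct rule fwd_rule = {
                .match = "tc chain 0, prio 1 match",
                .actions = "mirror(port2)",
                .dest = "per-port fwd table",
        };
        /* per-port fwd table: the original pre-ct rule */
        struct rule pre_ct_rule = {
                .match = "same match",
                .actions = "set zone",
                .dest = "ct/ct_nat table",
        };

        printf("%s -> %s -> %s\n", fwd_rule.match, fwd_rule.actions,
               fwd_rule.dest);
        printf("then: %s -> %s -> %s\n", pre_ct_rule.match,
               pre_ct_rule.actions, pre_ct_rule.dest);
        return 0;
}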
From patchwork Fri Jan 8 05:30:52 2021
X-Patchwork-Submitter: Saeed Mahameed
X-Patchwork-Id: 359557
From: Saeed Mahameed
To: "David S. Miller", Jakub Kicinski
Cc: netdev@vger.kernel.org, Tariq Toukan, Huy Nguyen, Saeed Mahameed
Subject: [net-next 13/15] net/mlx5e: IPsec, Avoid unreachable return
Date: Thu, 7 Jan 2021 21:30:52 -0800
Message-Id: <20210108053054.660499-14-saeed@kernel.org>
In-Reply-To: <20210108053054.660499-1-saeed@kernel.org>
References: <20210108053054.660499-1-saeed@kernel.org>

From: Tariq Toukan

Simple code improvement: move the default return statement under the
#else block.

Signed-off-by: Tariq Toukan
Reviewed-by: Huy Nguyen
Signed-off-by: Saeed Mahameed
---
 drivers/net/ethernet/mellanox/mlx5/core/en_accel/en_accel.h | 4 ++--
 1 file changed, 2 insertions(+), 2 deletions(-)

diff --git a/drivers/net/ethernet/mellanox/mlx5/core/en_accel/en_accel.h b/drivers/net/ethernet/mellanox/mlx5/core/en_accel/en_accel.h
index 899b98aca0d3..fb89b24deb2b 100644
--- a/drivers/net/ethernet/mellanox/mlx5/core/en_accel/en_accel.h
+++ b/drivers/net/ethernet/mellanox/mlx5/core/en_accel/en_accel.h
@@ -142,9 +142,9 @@ static inline bool mlx5e_accel_tx_is_ipsec_flow(struct mlx5e_accel_tx_state *sta
 {
 #ifdef CONFIG_MLX5_EN_IPSEC
 	return mlx5e_ipsec_is_tx_flow(&state->ipsec);
-#endif
-
+#else
 	return false;
+#endif
 }
 
 static inline unsigned int mlx5e_accel_tx_ids_len(struct mlx5e_txqsq *sq,