From patchwork Thu Dec 3 04:39:44 2020
X-Patchwork-Submitter: Saeed Mahameed
X-Patchwork-Id: 337569
From: Saeed Mahameed
To: Jakub Kicinski
CC: "David S. Miller", "Randy Dunlap", kernel test robot, "Saeed Mahameed"
Subject: [net 2/4] net: mlx5e: fix fs_tcp.c build when IPV6 is not enabled
Date: Wed, 2 Dec 2020 20:39:44 -0800
Message-ID: <20201203043946.235385-3-saeedm@nvidia.com>
X-Mailer: git-send-email 2.26.2
In-Reply-To: <20201203043946.235385-1-saeedm@nvidia.com>
References: <20201203043946.235385-1-saeedm@nvidia.com>
X-Mailing-List: netdev@vger.kernel.org

From: Randy Dunlap

Fix build when CONFIG_IPV6 is not enabled by making a function be built
conditionally.
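For context on the guard used below: IS_ENABLED(CONFIG_IPV6) is the usual kernel
idiom for compiling code only when an option is built in or modular. A minimal
sketch of the pattern (the helper name example_set_ipv6_daddr is hypothetical,
not the driver code):

  #include <linux/kconfig.h>	/* IS_ENABLED() */
  #include <linux/in6.h>	/* struct in6_addr */
  #include <linux/string.h>	/* memcpy() */
  #include <net/sock.h>		/* struct sock, sk_v6_daddr */

  #if IS_ENABLED(CONFIG_IPV6)
  /* Compiled only when CONFIG_IPV6=y or =m, i.e. when sock_common actually
   * carries skc_v6_daddr; on IPv6-less configs this function (and its
   * reference to sk_v6_daddr) never reaches the compiler.
   */
  static void example_set_ipv6_daddr(struct sock *sk, struct in6_addr *dst)
  {
  	memcpy(dst, &sk->sk_v6_daddr, sizeof(*dst));
  }
  #endif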
Fixes these build errors and warnings:

../drivers/net/ethernet/mellanox/mlx5/core/en_accel/fs_tcp.c: In function 'accel_fs_tcp_set_ipv6_flow':
../include/net/sock.h:380:34: error: 'struct sock_common' has no member named 'skc_v6_daddr'; did you mean 'skc_daddr'?
  380 | #define sk_v6_daddr		__sk_common.skc_v6_daddr
      |                                 ^~~~~~~~~~~~
../drivers/net/ethernet/mellanox/mlx5/core/en_accel/fs_tcp.c:55:14: note: in expansion of macro 'sk_v6_daddr'
   55 |                        &sk->sk_v6_daddr, 16);
      |                              ^~~~~~~~~~~
At top level:
../drivers/net/ethernet/mellanox/mlx5/core/en_accel/fs_tcp.c:47:13: warning: 'accel_fs_tcp_set_ipv6_flow' defined but not used [-Wunused-function]
   47 | static void accel_fs_tcp_set_ipv6_flow(struct mlx5_flow_spec *spec, struct sock *sk)

Fixes: 5229a96e59ec ("net/mlx5e: Accel, Expose flow steering API for rules add/del")
Signed-off-by: Randy Dunlap
Reported-by: kernel test robot
Signed-off-by: Saeed Mahameed
---
 drivers/net/ethernet/mellanox/mlx5/core/en_accel/fs_tcp.c | 2 ++
 1 file changed, 2 insertions(+)

diff --git a/drivers/net/ethernet/mellanox/mlx5/core/en_accel/fs_tcp.c b/drivers/net/ethernet/mellanox/mlx5/core/en_accel/fs_tcp.c
index 97f1594cee11..e51f60b55daa 100644
--- a/drivers/net/ethernet/mellanox/mlx5/core/en_accel/fs_tcp.c
+++ b/drivers/net/ethernet/mellanox/mlx5/core/en_accel/fs_tcp.c
@@ -44,6 +44,7 @@ static void accel_fs_tcp_set_ipv4_flow(struct mlx5_flow_spec *spec, struct sock
 			 outer_headers.dst_ipv4_dst_ipv6.ipv4_layout.ipv4);
 }
 
+#if IS_ENABLED(CONFIG_IPV6)
 static void accel_fs_tcp_set_ipv6_flow(struct mlx5_flow_spec *spec, struct sock *sk)
 {
 	MLX5_SET_TO_ONES(fte_match_param, spec->match_criteria, outer_headers.ip_protocol);
@@ -63,6 +64,7 @@ static void accel_fs_tcp_set_ipv6_flow(struct mlx5_flow_spec *spec, struct sock
 		       outer_headers.dst_ipv4_dst_ipv6.ipv6_layout.ipv6),
 	       0xff, 16);
 }
+#endif
 
 void mlx5e_accel_fs_del_sk(struct mlx5_flow_handle *rule)
 {
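To round out the fs_tcp.c change above: a hypothetical caller sketch (the
function example_set_flow is made up, written as if it lived in fs_tcp.c; it is
not the driver's actual call site) showing why the IPv6 helper can be compiled
out safely. Any call into the IPv6 path has to sit behind the same
IS_ENABLED(CONFIG_IPV6) guard, while the IPv4 path stays unconditional:

  #include <linux/socket.h>	/* AF_INET, AF_INET6 */
  #include <linux/mlx5/fs.h>	/* struct mlx5_flow_spec */
  #include <net/sock.h>		/* struct sock */

  static void example_set_flow(struct mlx5_flow_spec *spec, struct sock *sk)
  {
  	if (sk->sk_family == AF_INET) {
  		accel_fs_tcp_set_ipv4_flow(spec, sk);
  		return;
  	}
  #if IS_ENABLED(CONFIG_IPV6)
  	/* Only referenced when the guarded helper above is compiled in. */
  	if (sk->sk_family == AF_INET6)
  		accel_fs_tcp_set_ipv6_flow(spec, sk);
  #endif
  }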
Miller" , , "Tariq Toukan" , Maxim Mikityanskiy , "Boris Pismenny" , Saeed Mahameed Subject: [net 3/4] net/mlx5e: kTLS, Enforce HW TX csum offload with kTLS Date: Wed, 2 Dec 2020 20:39:45 -0800 Message-ID: <20201203043946.235385-4-saeedm@nvidia.com> X-Mailer: git-send-email 2.26.2 In-Reply-To: <20201203043946.235385-1-saeedm@nvidia.com> References: <20201203043946.235385-1-saeedm@nvidia.com> MIME-Version: 1.0 X-Originating-IP: [10.124.1.5] X-ClientProxiedBy: HQMAIL111.nvidia.com (172.20.187.18) To HQMAIL107.nvidia.com (172.20.187.13) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=nvidia.com; s=n1; t=1606970440; bh=kPBOYXriEvmQZGk6qIM+VjxGgCTYl+4PiNUkIxCnawA=; h=From:To:CC:Subject:Date:Message-ID:X-Mailer:In-Reply-To: References:MIME-Version:Content-Transfer-Encoding:Content-Type: X-Originating-IP:X-ClientProxiedBy; b=Z7l24Efc0XGrL3xpERQTKDyAjv7t54zIuHuLUydTHGKmUHpN8iZkohszcIdWyV7eW 0rPUa2e993aiPac5BK8RyJOBd2dJp9DMPZAqXzxD7trhMvOy+SBiZw8OmjPHLDNrC5 6mtFCQA1TzwJccZkQk6Fxgy9Mc3Y1b0VNoUKoxvFb0a8Fakzx+uLx6W6h2KOeUZszd PYMdsa5MrpqU3IGBvzKpXmzocwmUmUeJ4lnSDBVa8HKJAzeu6vl0OoEj+krxLyl0CK AC7v0PJ+dX9YM0V2Oj4EawhkFl3jMSuejVDHcXjnoKl1I7bengAT1vjMRC9ce1jz68 6L3xYDtZmyuLA== Precedence: bulk List-ID: X-Mailing-List: netdev@vger.kernel.org From: Tariq Toukan Checksum calculation cannot be done in SW for TX kTLS HW offloaded packets. Offload it to the device, disregard the declared state of the TX csum offload feature. Fixes: d2ead1f360e8 ("net/mlx5e: Add kTLS TX HW offload support") Signed-off-by: Tariq Toukan Reviewed-by: Maxim Mikityanskiy Reviewed-by: Boris Pismenny Signed-off-by: Saeed Mahameed --- .../net/ethernet/mellanox/mlx5/core/en_tx.c | 22 +++++++++++++------ 1 file changed, 15 insertions(+), 7 deletions(-) diff --git a/drivers/net/ethernet/mellanox/mlx5/core/en_tx.c b/drivers/net/ethernet/mellanox/mlx5/core/en_tx.c index 6dd3ea3cbbed..d97203cf6a00 100644 --- a/drivers/net/ethernet/mellanox/mlx5/core/en_tx.c +++ b/drivers/net/ethernet/mellanox/mlx5/core/en_tx.c @@ -161,7 +161,9 @@ ipsec_txwqe_build_eseg_csum(struct mlx5e_txqsq *sq, struct sk_buff *skb, } static inline void -mlx5e_txwqe_build_eseg_csum(struct mlx5e_txqsq *sq, struct sk_buff *skb, struct mlx5_wqe_eth_seg *eseg) +mlx5e_txwqe_build_eseg_csum(struct mlx5e_txqsq *sq, struct sk_buff *skb, + struct mlx5e_accel_tx_state *accel, + struct mlx5_wqe_eth_seg *eseg) { if (likely(skb->ip_summed == CHECKSUM_PARTIAL)) { eseg->cs_flags = MLX5_ETH_WQE_L3_CSUM; @@ -173,6 +175,11 @@ mlx5e_txwqe_build_eseg_csum(struct mlx5e_txqsq *sq, struct sk_buff *skb, struct eseg->cs_flags |= MLX5_ETH_WQE_L4_CSUM; sq->stats->csum_partial++; } +#ifdef CONFIG_MLX5_EN_TLS + } else if (unlikely(accel && accel->tls.tls_tisn)) { + eseg->cs_flags = MLX5_ETH_WQE_L3_CSUM | MLX5_ETH_WQE_L4_CSUM; + sq->stats->csum_partial++; +#endif } else if (unlikely(eseg->flow_table_metadata & cpu_to_be32(MLX5_ETH_WQE_FT_META_IPSEC))) { ipsec_txwqe_build_eseg_csum(sq, skb, eseg); @@ -607,12 +614,13 @@ void mlx5e_tx_mpwqe_ensure_complete(struct mlx5e_txqsq *sq) } static bool mlx5e_txwqe_build_eseg(struct mlx5e_priv *priv, struct mlx5e_txqsq *sq, - struct sk_buff *skb, struct mlx5_wqe_eth_seg *eseg) + struct sk_buff *skb, struct mlx5e_accel_tx_state *accel, + struct mlx5_wqe_eth_seg *eseg) { if (unlikely(!mlx5e_accel_tx_eseg(priv, skb, eseg))) return false; - mlx5e_txwqe_build_eseg_csum(sq, skb, eseg); + mlx5e_txwqe_build_eseg_csum(sq, skb, accel, eseg); return true; } @@ -639,7 +647,7 @@ netdev_tx_t mlx5e_xmit(struct sk_buff *skb, struct net_device *dev) if 
Fixes: d2ead1f360e8 ("net/mlx5e: Add kTLS TX HW offload support")
Signed-off-by: Tariq Toukan
Reviewed-by: Maxim Mikityanskiy
Reviewed-by: Boris Pismenny
Signed-off-by: Saeed Mahameed
---
 .../net/ethernet/mellanox/mlx5/core/en_tx.c | 22 +++++++++++++------
 1 file changed, 15 insertions(+), 7 deletions(-)

diff --git a/drivers/net/ethernet/mellanox/mlx5/core/en_tx.c b/drivers/net/ethernet/mellanox/mlx5/core/en_tx.c
index 6dd3ea3cbbed..d97203cf6a00 100644
--- a/drivers/net/ethernet/mellanox/mlx5/core/en_tx.c
+++ b/drivers/net/ethernet/mellanox/mlx5/core/en_tx.c
@@ -161,7 +161,9 @@ ipsec_txwqe_build_eseg_csum(struct mlx5e_txqsq *sq, struct sk_buff *skb,
 }
 
 static inline void
-mlx5e_txwqe_build_eseg_csum(struct mlx5e_txqsq *sq, struct sk_buff *skb, struct mlx5_wqe_eth_seg *eseg)
+mlx5e_txwqe_build_eseg_csum(struct mlx5e_txqsq *sq, struct sk_buff *skb,
+			    struct mlx5e_accel_tx_state *accel,
+			    struct mlx5_wqe_eth_seg *eseg)
 {
 	if (likely(skb->ip_summed == CHECKSUM_PARTIAL)) {
 		eseg->cs_flags = MLX5_ETH_WQE_L3_CSUM;
@@ -173,6 +175,11 @@ mlx5e_txwqe_build_eseg_csum(struct mlx5e_txqsq *sq, struct sk_buff *skb, struct
 			eseg->cs_flags |= MLX5_ETH_WQE_L4_CSUM;
 			sq->stats->csum_partial++;
 		}
+#ifdef CONFIG_MLX5_EN_TLS
+	} else if (unlikely(accel && accel->tls.tls_tisn)) {
+		eseg->cs_flags = MLX5_ETH_WQE_L3_CSUM | MLX5_ETH_WQE_L4_CSUM;
+		sq->stats->csum_partial++;
+#endif
 	} else if (unlikely(eseg->flow_table_metadata &
 			    cpu_to_be32(MLX5_ETH_WQE_FT_META_IPSEC))) {
 		ipsec_txwqe_build_eseg_csum(sq, skb, eseg);
@@ -607,12 +614,13 @@ void mlx5e_tx_mpwqe_ensure_complete(struct mlx5e_txqsq *sq)
 }
 
 static bool mlx5e_txwqe_build_eseg(struct mlx5e_priv *priv, struct mlx5e_txqsq *sq,
-				   struct sk_buff *skb, struct mlx5_wqe_eth_seg *eseg)
+				   struct sk_buff *skb, struct mlx5e_accel_tx_state *accel,
+				   struct mlx5_wqe_eth_seg *eseg)
 {
 	if (unlikely(!mlx5e_accel_tx_eseg(priv, skb, eseg)))
 		return false;
 
-	mlx5e_txwqe_build_eseg_csum(sq, skb, eseg);
+	mlx5e_txwqe_build_eseg_csum(sq, skb, accel, eseg);
 
 	return true;
 }
@@ -639,7 +647,7 @@ netdev_tx_t mlx5e_xmit(struct sk_buff *skb, struct net_device *dev)
 	if (mlx5e_tx_skb_supports_mpwqe(skb, &attr)) {
 		struct mlx5_wqe_eth_seg eseg = {};
 
-		if (unlikely(!mlx5e_txwqe_build_eseg(priv, sq, skb, &eseg)))
+		if (unlikely(!mlx5e_txwqe_build_eseg(priv, sq, skb, &accel, &eseg)))
 			return NETDEV_TX_OK;
 
 		mlx5e_sq_xmit_mpwqe(sq, skb, &eseg, netdev_xmit_more());
@@ -656,7 +664,7 @@ netdev_tx_t mlx5e_xmit(struct sk_buff *skb, struct net_device *dev)
 	/* May update the WQE, but may not post other WQEs. */
 	mlx5e_accel_tx_finish(sq, wqe, &accel,
 			      (struct mlx5_wqe_inline_seg *)(wqe->data + wqe_attr.ds_cnt_inl));
-	if (unlikely(!mlx5e_txwqe_build_eseg(priv, sq, skb, &wqe->eth)))
+	if (unlikely(!mlx5e_txwqe_build_eseg(priv, sq, skb, &accel, &wqe->eth)))
 		return NETDEV_TX_OK;
 
 	mlx5e_sq_xmit_wqe(sq, skb, &attr, &wqe_attr, wqe, pi, netdev_xmit_more());
@@ -675,7 +683,7 @@ void mlx5e_sq_xmit_simple(struct mlx5e_txqsq *sq, struct sk_buff *skb, bool xmit
 	mlx5e_sq_calc_wqe_attr(skb, &attr, &wqe_attr);
 	pi = mlx5e_txqsq_get_next_pi(sq, wqe_attr.num_wqebbs);
 	wqe = MLX5E_TX_FETCH_WQE(sq, pi);
-	mlx5e_txwqe_build_eseg_csum(sq, skb, &wqe->eth);
+	mlx5e_txwqe_build_eseg_csum(sq, skb, NULL, &wqe->eth);
 	mlx5e_sq_xmit_wqe(sq, skb, &attr, &wqe_attr, wqe, pi, xmit_more);
 }
 
@@ -944,7 +952,7 @@ void mlx5i_sq_xmit(struct mlx5e_txqsq *sq, struct sk_buff *skb,
 	mlx5i_txwqe_build_datagram(av, dqpn, dqkey, datagram);
 
-	mlx5e_txwqe_build_eseg_csum(sq, skb, eseg);
+	mlx5e_txwqe_build_eseg_csum(sq, skb, NULL, eseg);
 
 	eseg->mss = attr.mss;