From patchwork Wed Mar 25 12:52:30 2020
X-Patchwork-Submitter: Igor Russkikh
X-Patchwork-Id: 221884
From: Igor Russkikh
CC: Mark Starovoytov, Sabrina Dubroca, Antoine Tenart, Igor Russkikh
Subject: [PATCH v2 net-next 01/17] net: introduce the MACSEC netdev feature
Date: Wed, 25 Mar 2020 15:52:30 +0300
Message-ID: <20200325125246.987-2-irusskikh@marvell.com>
In-Reply-To: <20200325125246.987-1-irusskikh@marvell.com>
References: <20200325125246.987-1-irusskikh@marvell.com>
X-Mailing-List: netdev@vger.kernel.org
From: Antoine Tenart

This patch introduces a new netdev feature, which will be used by
drivers to state they can perform MACsec transformations in hardware.

The patchset was gathered by Mark; the MACsec functionality itself was
implemented by Dmitry, Mark and Pavel Belous.

Signed-off-by: Antoine Tenart
Signed-off-by: Mark Starovoytov
Signed-off-by: Igor Russkikh
---
 include/linux/netdev_features.h | 3 +++
 net/ethtool/common.c            | 1 +
 2 files changed, 4 insertions(+)

diff --git a/include/linux/netdev_features.h b/include/linux/netdev_features.h
index 34d050bb1ae6..9d53c5ad272c 100644
--- a/include/linux/netdev_features.h
+++ b/include/linux/netdev_features.h
@@ -83,6 +83,8 @@ enum {
     NETIF_F_HW_TLS_RECORD_BIT,  /* Offload TLS record */
     NETIF_F_GRO_FRAGLIST_BIT,   /* Fraglist GRO */
 
+    NETIF_F_HW_MACSEC_BIT,      /* Offload MACsec operations */
+
     /*
      * Add your fresh new feature above and remember to update
      * netdev_features_strings[] in net/core/ethtool.c and maybe
@@ -154,6 +156,7 @@ enum {
 #define NETIF_F_HW_TLS_RX   __NETIF_F(HW_TLS_RX)
 #define NETIF_F_GRO_FRAGLIST    __NETIF_F(GRO_FRAGLIST)
 #define NETIF_F_GSO_FRAGLIST    __NETIF_F(GSO_FRAGLIST)
+#define NETIF_F_HW_MACSEC   __NETIF_F(HW_MACSEC)
 
 /* Finds the next feature with the highest number of the range of start till 0.
  */

diff --git a/net/ethtool/common.c b/net/ethtool/common.c
index dab047eec943..51a0941fc62f 100644
--- a/net/ethtool/common.c
+++ b/net/ethtool/common.c
@@ -60,6 +60,7 @@ const char netdev_features_strings[NETDEV_FEATURE_COUNT][ETH_GSTRING_LEN] = {
     [NETIF_F_HW_TLS_TX_BIT] =    "tls-hw-tx-offload",
     [NETIF_F_HW_TLS_RX_BIT] =    "tls-hw-rx-offload",
     [NETIF_F_GRO_FRAGLIST_BIT] = "rx-gro-list",
+    [NETIF_F_HW_MACSEC_BIT] =    "macsec-hw-offload",
 };
 
 const char
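[Editorial note: on the driver side, the new bit is advertised like any
other feature flag. A minimal sketch of probe-time wiring, assuming a
driver that implements the MACsec offload callbacks; the helper name is
hypothetical, and the net_device macsec_ops pointer is only introduced
by a later patch in this series.]

    #include <linux/netdevice.h>
    #include <net/macsec.h>

    /* Hypothetical probe fragment: advertise MACsec offload support so
     * that "ethtool -k <iface>" reports "macsec-hw-offload: on".
     */
    static void example_advertise_macsec(struct net_device *ndev,
                                         const struct macsec_ops *ops)
    {
        ndev->hw_features |= NETIF_F_HW_MACSEC;
        ndev->features |= NETIF_F_HW_MACSEC;
        ndev->macsec_ops = ops; /* assumed field, added later in the series */
    }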
From patchwork Wed Mar 25 12:52:32 2020
X-Patchwork-Submitter: Igor Russkikh
X-Patchwork-Id: 221883
From: Igor Russkikh
CC: Mark Starovoytov, Sabrina Dubroca, Antoine Tenart, Igor Russkikh
Subject: [PATCH v2 net-next 03/17] net: macsec: allow to reference a netdev from a MACsec context
Date: Wed, 25 Mar 2020 15:52:32 +0300
Message-ID: <20200325125246.987-4-irusskikh@marvell.com>
In-Reply-To: <20200325125246.987-1-irusskikh@marvell.com>
References: <20200325125246.987-1-irusskikh@marvell.com>
X-Mailing-List: netdev@vger.kernel.org

From: Antoine Tenart

This patch allows referencing a net_device from a MACsec context. This
is needed to allow implementing MACsec operations in net device
drivers.
Signed-off-by: Antoine Tenart
Signed-off-by: Mark Starovoytov
Signed-off-by: Igor Russkikh
---
 include/net/macsec.h | 5 ++++-
 1 file changed, 4 insertions(+), 1 deletion(-)

diff --git a/include/net/macsec.h b/include/net/macsec.h
index 2e4780dbf5c6..71de2c863df7 100644
--- a/include/net/macsec.h
+++ b/include/net/macsec.h
@@ -220,7 +220,10 @@ struct macsec_secy {
  * struct macsec_context - MACsec context for hardware offloading
  */
 struct macsec_context {
-    struct phy_device *phydev;
+    union {
+        struct net_device *netdev;
+        struct phy_device *phydev;
+    };
     enum macsec_offload offload;
 
     struct macsec_secy *secy;
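[Editorial note: since the union is anonymous, an offload callback can
dereference whichever member matches the offload type recorded next to
it. A sketch of such a dispatch, assuming the MACSEC_OFFLOAD_MAC
variant added later in this series; the helper itself is hypothetical.]

    #include <linux/netdevice.h>
    #include <linux/phy.h>
    #include <net/macsec.h>

    /* Illustrative dispatch: which union member is valid depends on
     * whether the offload sits in the PHY or in the MAC.
     */
    static const char *example_ctx_dev_name(const struct macsec_context *ctx)
    {
        if (ctx->offload == MACSEC_OFFLOAD_PHY)
            return phydev_name(ctx->phydev);

        return netdev_name(ctx->netdev);
    }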
From patchwork Wed Mar 25 12:52:34 2020
X-Patchwork-Submitter: Igor Russkikh
X-Patchwork-Id: 221882
From: Igor Russkikh
CC: Mark Starovoytov, Sabrina Dubroca, Antoine Tenart, Dmitry Bogdanov, Igor Russkikh
Subject: [PATCH v2 net-next 05/17] net: macsec: init secy pointer in macsec_context
Date: Wed, 25 Mar 2020 15:52:34 +0300
Message-ID: <20200325125246.987-6-irusskikh@marvell.com>
In-Reply-To: <20200325125246.987-1-irusskikh@marvell.com>
References: <20200325125246.987-1-irusskikh@marvell.com>
X-Mailing-List: netdev@vger.kernel.org

From: Dmitry Bogdanov

This patch initializes the secy pointer in the macsec_context. It will
be used by MAC drivers in offloading operations.

Signed-off-by: Dmitry Bogdanov
Signed-off-by: Mark Starovoytov
Signed-off-by: Igor Russkikh
---
 drivers/net/macsec.c | 16 +++++++++++++++-
 1 file changed, 15 insertions(+), 1 deletion(-)

diff --git a/drivers/net/macsec.c b/drivers/net/macsec.c
index c4d5f609871e..0f6808f3ff91 100644
--- a/drivers/net/macsec.c
+++ b/drivers/net/macsec.c
@@ -1793,6 +1793,7 @@ static int macsec_add_rxsa(struct sk_buff *skb, struct genl_info *info)
 
         ctx.sa.assoc_num = assoc_num;
         ctx.sa.rx_sa = rx_sa;
+        ctx.secy = secy;
         memcpy(ctx.sa.key, nla_data(tb_sa[MACSEC_SA_ATTR_KEY]),
                MACSEC_KEYID_LEN);
 
@@ -1840,6 +1841,7 @@ static int macsec_add_rxsc(struct sk_buff *skb, struct genl_info *info)
     struct nlattr **attrs = info->attrs;
     struct macsec_rx_sc *rx_sc;
     struct nlattr *tb_rxsc[MACSEC_RXSC_ATTR_MAX + 1];
+    struct macsec_secy *secy;
     bool was_active;
     int ret;
 
@@ -1859,6 +1861,7 @@ static int macsec_add_rxsc(struct sk_buff *skb, struct genl_info *info)
         return PTR_ERR(dev);
     }
 
+    secy = &macsec_priv(dev)->secy;
     sci = nla_get_sci(tb_rxsc[MACSEC_RXSC_ATTR_SCI]);
 
     rx_sc = create_rx_sc(dev, sci);
@@ -1882,6 +1885,7 @@ static int macsec_add_rxsc(struct sk_buff *skb, struct genl_info *info)
         }
 
         ctx.rx_sc = rx_sc;
+        ctx.secy = secy;
 
         ret = macsec_offload(ops->mdo_add_rxsc, &ctx);
         if (ret)
@@ -2031,6 +2035,7 @@ static int macsec_add_txsa(struct sk_buff *skb, struct genl_info *info)
 
         ctx.sa.assoc_num = assoc_num;
         ctx.sa.tx_sa = tx_sa;
+        ctx.secy = secy;
         memcpy(ctx.sa.key, nla_data(tb_sa[MACSEC_SA_ATTR_KEY]),
                MACSEC_KEYID_LEN);
 
@@ -2106,6 +2111,7 @@ static int macsec_del_rxsa(struct sk_buff *skb, struct genl_info *info)
 
         ctx.sa.assoc_num = assoc_num;
         ctx.sa.rx_sa = rx_sa;
+        ctx.secy = secy;
 
         ret = macsec_offload(ops->mdo_del_rxsa, &ctx);
         if (ret)
@@ -2171,6 +2177,7 @@ static int macsec_del_rxsc(struct sk_buff *skb, struct genl_info *info)
         }
 
         ctx.rx_sc = rx_sc;
+        ctx.secy = secy;
 
         ret = macsec_offload(ops->mdo_del_rxsc, &ctx);
         if (ret)
             goto cleanup;
@@ -2229,6 +2236,7 @@ static int macsec_del_txsa(struct sk_buff *skb, struct genl_info *info)
 
         ctx.sa.assoc_num = assoc_num;
         ctx.sa.tx_sa = tx_sa;
+        ctx.secy = secy;
 
         ret = macsec_offload(ops->mdo_del_txsa, &ctx);
         if (ret)
@@ -2340,6 +2348,7 @@ static int macsec_upd_txsa(struct sk_buff *skb, struct genl_info *info)
 
         ctx.sa.assoc_num = assoc_num;
         ctx.sa.tx_sa = tx_sa;
+        ctx.secy = secy;
 
         ret = macsec_offload(ops->mdo_upd_txsa, &ctx);
         if (ret)
@@ -2432,6 +2441,7 @@ static int macsec_upd_rxsa(struct sk_buff *skb, struct genl_info *info)
 
         ctx.sa.assoc_num = assoc_num;
         ctx.sa.rx_sa = rx_sa;
+        ctx.secy = secy;
 
         ret = macsec_offload(ops->mdo_upd_rxsa, &ctx);
         if (ret)
@@ -2502,6 +2512,7 @@ static int macsec_upd_rxsc(struct sk_buff *skb, struct genl_info *info)
         }
 
         ctx.rx_sc = rx_sc;
+        ctx.secy = secy;
 
         ret = macsec_offload(ops->mdo_upd_rxsc, &ctx);
         if (ret)
@@ -3369,6 +3380,7 @@ static int macsec_dev_open(struct net_device *dev)
             goto clear_allmulti;
         }
 
+        ctx.secy = &macsec->secy;
         err = macsec_offload(ops->mdo_dev_open, &ctx);
         if (err)
             goto clear_allmulti;
@@ -3400,8 +3412,10 @@ static int macsec_dev_stop(struct net_device *dev)
         struct macsec_context ctx;
 
         ops = macsec_get_ops(macsec, &ctx);
-        if (ops)
+        if (ops) {
+            ctx.secy = &macsec->secy;
             macsec_offload(ops->mdo_dev_stop, &ctx);
+        }
     }
 
     dev_mc_unsync(real_dev, dev);
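[Editorial note: with ctx->secy set on every callback invocation, a MAC
driver can map the SecY to its own per-context hardware state. A
hypothetical sketch of that lookup; the Atlantic driver does
essentially this later in the series with aq_get_txsc_idx_from_secy().]

    #include <linux/errno.h>
    #include <net/macsec.h>

    struct example_cfg {
        const struct macsec_secy *sw_secy[8]; /* per-slot SecY owners */
    };

    /* Map the SecY carried in the offload context to a hardware SC slot. */
    static int example_find_slot(const struct example_cfg *cfg,
                                 const struct macsec_context *ctx)
    {
        int i;

        for (i = 0; i < 8; i++)
            if (cfg->sw_secy[i] == ctx->secy)
                return i;

        return -ENOENT;
    }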
From patchwork Wed Mar 25 12:52:36 2020
X-Patchwork-Submitter: Igor Russkikh
X-Patchwork-Id: 221881
From: Igor Russkikh
CC: Mark Starovoytov, Sabrina Dubroca, Antoine Tenart, Igor Russkikh
Subject: [PATCH v2 net-next 07/17] net: macsec: support multicast/broadcast when offloading
Date: Wed, 25 Mar 2020 15:52:36 +0300
Message-ID: <20200325125246.987-8-irusskikh@marvell.com>
In-Reply-To: <20200325125246.987-1-irusskikh@marvell.com>
References: <20200325125246.987-1-irusskikh@marvell.com>
X-Mailing-List: netdev@vger.kernel.org

From: Mark Starovoytov

The idea is simple. If the frame is an exact match for the controlled
port (based on DA comparison), then we simply divert this skb to the
matching port. Multicast/broadcast messages are delivered to all ports.

Signed-off-by: Mark Starovoytov
Signed-off-by: Igor Russkikh
---
 drivers/net/macsec.c | 51 +++++++++++++++++++++++++++++++++-----------
 1 file changed, 38 insertions(+), 13 deletions(-)

diff --git a/drivers/net/macsec.c b/drivers/net/macsec.c
index 5d1564cda7fe..884407d92f93 100644
--- a/drivers/net/macsec.c
+++ b/drivers/net/macsec.c
@@ -1005,22 +1005,53 @@ static enum rx_handler_result handle_not_macsec(struct sk_buff *skb)
 {
     /* Deliver to the uncontrolled port by default */
     enum rx_handler_result ret = RX_HANDLER_PASS;
+    struct ethhdr *hdr = eth_hdr(skb);
     struct macsec_rxh_data *rxd;
     struct macsec_dev *macsec;
 
     rcu_read_lock();
     rxd = macsec_data_rcu(skb->dev);
 
-    /* 10.6 If the management control validateFrames is not
-     * Strict, frames without a SecTAG are received, counted, and
-     * delivered to the Controlled Port
-     */
     list_for_each_entry_rcu(macsec, &rxd->secys, secys) {
         struct sk_buff *nskb;
         struct pcpu_secy_stats *secy_stats = this_cpu_ptr(macsec->stats);
+        struct net_device *ndev = macsec->secy.netdev;
 
-        if (!macsec_is_offloaded(macsec) &&
-            macsec->secy.validate_frames == MACSEC_VALIDATE_STRICT) {
+        /* If h/w offloading is enabled, HW decodes frames and strips
+         * the SecTAG, so we have to deduce which port to deliver to.
+         */
+        if (macsec_is_offloaded(macsec) && netif_running(ndev)) {
+            if (ether_addr_equal_64bits(hdr->h_dest,
+                                        ndev->dev_addr)) {
+                /* exact match, divert skb to this port */
+                skb->dev = ndev;
+                skb->pkt_type = PACKET_HOST;
+                ret = RX_HANDLER_ANOTHER;
+                goto out;
+            } else if (is_multicast_ether_addr_64bits(
+                           hdr->h_dest)) {
+                /* multicast frame, deliver on this port too */
+                nskb = skb_clone(skb, GFP_ATOMIC);
+                if (!nskb)
+                    break;
+
+                nskb->dev = ndev;
+                if (ether_addr_equal_64bits(hdr->h_dest,
+                                            ndev->broadcast))
+                    nskb->pkt_type = PACKET_BROADCAST;
+                else
+                    nskb->pkt_type = PACKET_MULTICAST;
+
+                netif_rx(nskb);
+            }
+            continue;
+        }
+
+        /* 10.6 If the management control validateFrames is not
+         * Strict, frames without a SecTAG are received, counted, and
+         * delivered to the Controlled Port
+         */
+        if (macsec->secy.validate_frames == MACSEC_VALIDATE_STRICT) {
             u64_stats_update_begin(&secy_stats->syncp);
             secy_stats->stats.InPktsNoTag++;
             u64_stats_update_end(&secy_stats->syncp);
@@ -1032,19 +1063,13 @@ static enum rx_handler_result handle_not_macsec(struct sk_buff *skb)
         if (!nskb)
             break;
 
-        nskb->dev = macsec->secy.netdev;
+        nskb->dev = ndev;
 
         if (netif_rx(nskb) == NET_RX_SUCCESS) {
             u64_stats_update_begin(&secy_stats->syncp);
             secy_stats->stats.InPktsUntagged++;
             u64_stats_update_end(&secy_stats->syncp);
         }
-
-        if (netif_running(macsec->secy.netdev) &&
-            macsec_is_offloaded(macsec)) {
-            ret = RX_HANDLER_EXACT;
-            goto out;
-        }
     }
 
 out:
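[Editorial note: the delivery rule reduces to a three-way classification
on the destination MAC address. A condensed restatement of the logic
above, as a standalone sketch; the function is hypothetical and not
part of the patch.]

    #include <linux/etherdevice.h>
    #include <linux/if_packet.h>

    /* PACKET_HOST: exact DA match, divert to this controlled port.
     * PACKET_BROADCAST/PACKET_MULTICAST: clone to every controlled port.
     * Anything else stays on the uncontrolled port.
     */
    static unsigned char example_classify_da(const struct ethhdr *hdr,
                                             const struct net_device *ndev)
    {
        if (ether_addr_equal_64bits(hdr->h_dest, ndev->dev_addr))
            return PACKET_HOST;
        if (ether_addr_equal_64bits(hdr->h_dest, ndev->broadcast))
            return PACKET_BROADCAST;
        if (is_multicast_ether_addr_64bits(hdr->h_dest))
            return PACKET_MULTICAST;

        return PACKET_OTHERHOST;
    }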
From patchwork Wed Mar 25 12:52:38 2020
X-Patchwork-Submitter: Igor Russkikh
X-Patchwork-Id: 221880
From: Igor Russkikh
CC: Mark Starovoytov, Sabrina Dubroca, Antoine Tenart, Igor Russkikh
Subject: [PATCH v2 net-next 09/17] net: macsec: report real_dev features when HW offloading is enabled
Date: Wed, 25 Mar 2020 15:52:38 +0300
Message-ID: <20200325125246.987-10-irusskikh@marvell.com>
In-Reply-To: <20200325125246.987-1-irusskikh@marvell.com>
References: <20200325125246.987-1-irusskikh@marvell.com>
X-Mailing-List: netdev@vger.kernel.org

From: Mark Starovoytov

This patch makes the MACsec offloaded device propagate real_dev
features.

Issue description: real_dev features are disabled upon macsec creation.

Root cause: a feature limitation specific to the SW MACsec
implementation is being applied to the HW offloaded case as well. This
causes a 'set_features' request on the real_dev with a reduced feature
set due to chain propagation.

Proposed solution: report real_dev features when HW offloading is
enabled.

NB! The MACsec offloaded device does not propagate VLAN offload
features at the moment. This can potentially be added later on as a
separate patch.

Note: this patch requires HW offloading to be enabled by default in
order to function properly.

Signed-off-by: Mark Starovoytov
Signed-off-by: Igor Russkikh
---
 drivers/net/macsec.c | 26 ++++++++++++++++++++++----
 1 file changed, 22 insertions(+), 4 deletions(-)

diff --git a/drivers/net/macsec.c b/drivers/net/macsec.c
index 59bf7d5f39ff..da2e28e6c15d 100644
--- a/drivers/net/macsec.c
+++ b/drivers/net/macsec.c
@@ -2632,6 +2632,10 @@ static int macsec_upd_offload(struct sk_buff *skb, struct genl_info *info)
         goto rollback;
 
     rtnl_unlock();
+    /* Force features update, since they are different for SW MACSec and
+     * HW offloading cases.
+     */
+    netdev_update_features(dev);
     return 0;
 
 rollback:
@@ -3398,9 +3402,16 @@ static netdev_tx_t macsec_start_xmit(struct sk_buff *skb,
     return ret;
 }
 
-#define MACSEC_FEATURES \
+#define SW_MACSEC_FEATURES \
     (NETIF_F_SG | NETIF_F_HIGHDMA | NETIF_F_FRAGLIST)
 
+/* If h/w offloading is enabled, use real device features save for
+ *   VLAN_FEATURES - they require additional ops
+ *   HW_MACSEC - no reason to report it
+ */
+#define REAL_DEV_FEATURES(dev) \
+    ((dev)->features & ~(NETIF_F_VLAN_FEATURES | NETIF_F_HW_MACSEC))
+
 static int macsec_dev_init(struct net_device *dev)
 {
     struct macsec_dev *macsec = macsec_priv(dev);
@@ -3417,8 +3428,12 @@ static int macsec_dev_init(struct net_device *dev)
         return err;
     }
 
-    dev->features = real_dev->features & MACSEC_FEATURES;
-    dev->features |= NETIF_F_LLTX | NETIF_F_GSO_SOFTWARE;
+    if (macsec_is_offloaded(macsec)) {
+        dev->features = REAL_DEV_FEATURES(real_dev);
+    } else {
+        dev->features = real_dev->features & SW_MACSEC_FEATURES;
+        dev->features |= NETIF_F_LLTX | NETIF_F_GSO_SOFTWARE;
+    }
 
     dev->needed_headroom = real_dev->needed_headroom +
                    MACSEC_NEEDED_HEADROOM;
@@ -3447,7 +3462,10 @@ static netdev_features_t macsec_fix_features(struct net_device *dev,
     struct macsec_dev *macsec = macsec_priv(dev);
     struct net_device *real_dev = macsec->real_dev;
 
-    features &= (real_dev->features & MACSEC_FEATURES) |
+    if (macsec_is_offloaded(macsec))
+        return REAL_DEV_FEATURES(real_dev);
+
+    features &= (real_dev->features & SW_MACSEC_FEATURES) |
             NETIF_F_GSO_SOFTWARE | NETIF_F_SOFT_FEATURES;
     features |= NETIF_F_LLTX;
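[Editorial note: in the offloaded case the feature set is a plain mask
over the lower device's features. A sketch of the net effect of
REAL_DEV_FEATURES(), illustrative only.]

    #include <linux/netdevice.h>

    /* Offloaded macsec devices mirror real_dev, minus VLAN offloads
     * (they would need extra ops) and the HW_MACSEC bit itself (no
     * reason to re-report it on the virtual device).
     */
    static netdev_features_t example_offloaded_features(const struct net_device *real_dev)
    {
        return real_dev->features &
               ~(NETIF_F_VLAN_FEATURES | NETIF_F_HW_MACSEC);
    }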
From patchwork Wed Mar 25 12:52:41 2020
X-Patchwork-Submitter: Igor Russkikh
X-Patchwork-Id: 221879
From: Igor Russkikh
CC: Mark Starovoytov, Sabrina Dubroca, Antoine Tenart, Dmitry Bogdanov, Igor Russkikh
Subject: [PATCH v2 net-next 12/17] net: atlantic: MACSec egress offload implementation
Date: Wed, 25 Mar 2020 15:52:41 +0300
Message-ID: <20200325125246.987-13-irusskikh@marvell.com>
In-Reply-To: <20200325125246.987-1-irusskikh@marvell.com>
References: <20200325125246.987-1-irusskikh@marvell.com>
X-Mailing-List: netdev@vger.kernel.org

From: Dmitry Bogdanov

This patch adds support for MACSec egress HW offloading on Atlantic
network cards.

Signed-off-by: Dmitry Bogdanov
Signed-off-by: Mark Starovoytov
Signed-off-by: Igor Russkikh
---
 .../ethernet/aquantia/atlantic/aq_macsec.c | 660 +++++++++++++++++-
 .../ethernet/aquantia/atlantic/aq_macsec.h |   4 +
 2 files changed, 656 insertions(+), 8 deletions(-)

diff --git a/drivers/net/ethernet/aquantia/atlantic/aq_macsec.c b/drivers/net/ethernet/aquantia/atlantic/aq_macsec.c
index e9d8852bfbe0..6ca5361387e6 100644
--- a/drivers/net/ethernet/aquantia/atlantic/aq_macsec.c
+++ b/drivers/net/ethernet/aquantia/atlantic/aq_macsec.c
@@ -7,44 +7,503 @@
 #include "aq_nic.h"
 #include 
+#include "macsec/macsec_api.h"
+#define AQ_MACSEC_KEY_LEN_128_BIT 16
+#define AQ_MACSEC_KEY_LEN_192_BIT 24
+#define AQ_MACSEC_KEY_LEN_256_BIT 32
+
+enum aq_clear_type {
+    /* update HW configuration */
+    AQ_CLEAR_HW = BIT(0),
+    /* update SW configuration (busy bits, pointers) */
+    AQ_CLEAR_SW = BIT(1),
+    /* update both HW and SW configuration */
+    AQ_CLEAR_ALL = AQ_CLEAR_HW | AQ_CLEAR_SW,
+};
+
+static int aq_clear_txsc(struct aq_nic_s *nic, const int txsc_idx,
+                         enum aq_clear_type clear_type);
+static int aq_clear_txsa(struct aq_nic_s *nic, struct aq_macsec_txsc *aq_txsc,
+                         const int sa_num, enum aq_clear_type clear_type);
+static int aq_clear_secy(struct aq_nic_s *nic, const struct macsec_secy *secy,
+                         enum aq_clear_type clear_type);
+static int aq_apply_macsec_cfg(struct aq_nic_s *nic);
+static int aq_apply_secy_cfg(struct aq_nic_s *nic,
+                             const struct macsec_secy *secy);
+
+static void aq_ether_addr_to_mac(u32 mac[2], unsigned char *emac)
+{
+    u32 tmp[2] = { 0 };
+
+    memcpy(((u8 *)tmp) + 2, emac, ETH_ALEN);
+
+    mac[0] = swab32(tmp[1]);
+    mac[1] = swab32(tmp[0]);
+}
+
+/* There's a 1:1 mapping between SecY and TX SC */
+static int aq_get_txsc_idx_from_secy(struct aq_macsec_cfg *macsec_cfg,
const struct macsec_secy *secy) +{ + int i; + + if (unlikely(!secy)) + return -1; + + for (i = 0; i < AQ_MACSEC_MAX_SC; i++) { + if (macsec_cfg->aq_txsc[i].sw_secy == secy) + return i; + } + return -1; +} + +static int aq_get_txsc_idx_from_sc_idx(const enum aq_macsec_sc_sa sc_sa, + const int sc_idx) +{ + switch (sc_sa) { + case aq_macsec_sa_sc_4sa_8sc: + return sc_idx >> 2; + case aq_macsec_sa_sc_2sa_16sc: + return sc_idx >> 1; + case aq_macsec_sa_sc_1sa_32sc: + return sc_idx; + default: + WARN_ONCE(true, "Invalid sc_sa"); + } + return -1; +} + +/* Rotate keys u32[8] */ +static void aq_rotate_keys(u32 (*key)[8], const int key_len) +{ + u32 tmp[8] = { 0 }; + + memcpy(&tmp, key, sizeof(tmp)); + memset(*key, 0, sizeof(*key)); + + if (key_len == AQ_MACSEC_KEY_LEN_128_BIT) { + (*key)[0] = swab32(tmp[3]); + (*key)[1] = swab32(tmp[2]); + (*key)[2] = swab32(tmp[1]); + (*key)[3] = swab32(tmp[0]); + } else if (key_len == AQ_MACSEC_KEY_LEN_192_BIT) { + (*key)[0] = swab32(tmp[5]); + (*key)[1] = swab32(tmp[4]); + (*key)[2] = swab32(tmp[3]); + (*key)[3] = swab32(tmp[2]); + (*key)[4] = swab32(tmp[1]); + (*key)[5] = swab32(tmp[0]); + } else if (key_len == AQ_MACSEC_KEY_LEN_256_BIT) { + (*key)[0] = swab32(tmp[7]); + (*key)[1] = swab32(tmp[6]); + (*key)[2] = swab32(tmp[5]); + (*key)[3] = swab32(tmp[4]); + (*key)[4] = swab32(tmp[3]); + (*key)[5] = swab32(tmp[2]); + (*key)[6] = swab32(tmp[1]); + (*key)[7] = swab32(tmp[0]); + } else { + pr_warn("Rotate_keys: invalid key_len\n"); + } +} + static int aq_mdo_dev_open(struct macsec_context *ctx) { - return 0; + struct aq_nic_s *nic = netdev_priv(ctx->netdev); + int ret = 0; + + if (ctx->prepare) + return 0; + + if (netif_carrier_ok(nic->ndev)) + ret = aq_apply_secy_cfg(nic, ctx->secy); + + return ret; } static int aq_mdo_dev_stop(struct macsec_context *ctx) { + struct aq_nic_s *nic = netdev_priv(ctx->netdev); + int i; + + if (ctx->prepare) + return 0; + + for (i = 0; i < AQ_MACSEC_MAX_SC; i++) { + if (nic->macsec_cfg->txsc_idx_busy & BIT(i)) + aq_clear_secy(nic, nic->macsec_cfg->aq_txsc[i].sw_secy, + AQ_CLEAR_HW); + } + return 0; } +static int aq_set_txsc(struct aq_nic_s *nic, const int txsc_idx) +{ + struct aq_macsec_txsc *aq_txsc = &nic->macsec_cfg->aq_txsc[txsc_idx]; + struct aq_mss_egress_class_record tx_class_rec = { 0 }; + const struct macsec_secy *secy = aq_txsc->sw_secy; + struct aq_mss_egress_sc_record sc_rec = { 0 }; + unsigned int sc_idx = aq_txsc->hw_sc_idx; + struct aq_hw_s *hw = nic->aq_hw; + int ret = 0; + + aq_ether_addr_to_mac(tx_class_rec.mac_sa, secy->netdev->dev_addr); + + put_unaligned_be64((__force u64)secy->sci, tx_class_rec.sci); + tx_class_rec.sci_mask = 0; + + tx_class_rec.sa_mask = 0x3f; + + tx_class_rec.action = 0; /* forward to SA/SC table */ + tx_class_rec.valid = 1; + + tx_class_rec.sc_idx = sc_idx; + + tx_class_rec.sc_sa = nic->macsec_cfg->sc_sa; + + ret = aq_mss_set_egress_class_record(hw, &tx_class_rec, txsc_idx); + if (ret) + return ret; + + sc_rec.protect = secy->protect_frames; + if (secy->tx_sc.encrypt) + sc_rec.tci |= BIT(1); + if (secy->tx_sc.scb) + sc_rec.tci |= BIT(2); + if (secy->tx_sc.send_sci) + sc_rec.tci |= BIT(3); + if (secy->tx_sc.end_station) + sc_rec.tci |= BIT(4); + /* The C bit is clear if and only if the Secure Data is + * exactly the same as the User Data and the ICV is 16 octets long. 
+ */ + if (!(secy->icv_len == 16 && !secy->tx_sc.encrypt)) + sc_rec.tci |= BIT(0); + + sc_rec.an_roll = 0; + + switch (secy->key_len) { + case AQ_MACSEC_KEY_LEN_128_BIT: + sc_rec.sak_len = 0; + break; + case AQ_MACSEC_KEY_LEN_192_BIT: + sc_rec.sak_len = 1; + break; + case AQ_MACSEC_KEY_LEN_256_BIT: + sc_rec.sak_len = 2; + break; + default: + WARN_ONCE(true, "Invalid sc_sa"); + return -EINVAL; + } + + sc_rec.curr_an = secy->tx_sc.encoding_sa; + sc_rec.valid = 1; + sc_rec.fresh = 1; + + return aq_mss_set_egress_sc_record(hw, &sc_rec, sc_idx); +} + +static u32 aq_sc_idx_max(const enum aq_macsec_sc_sa sc_sa) +{ + u32 result = 0; + + switch (sc_sa) { + case aq_macsec_sa_sc_4sa_8sc: + result = 8; + break; + case aq_macsec_sa_sc_2sa_16sc: + result = 16; + break; + case aq_macsec_sa_sc_1sa_32sc: + result = 32; + break; + default: + break; + }; + + return result; +} + +static u32 aq_to_hw_sc_idx(const u32 sc_idx, const enum aq_macsec_sc_sa sc_sa) +{ + switch (sc_sa) { + case aq_macsec_sa_sc_4sa_8sc: + return sc_idx << 2; + case aq_macsec_sa_sc_2sa_16sc: + return sc_idx << 1; + case aq_macsec_sa_sc_1sa_32sc: + return sc_idx; + default: + WARN_ONCE(true, "Invalid sc_sa"); + }; + + return sc_idx; +} + +static enum aq_macsec_sc_sa sc_sa_from_num_an(const int num_an) +{ + enum aq_macsec_sc_sa sc_sa = aq_macsec_sa_sc_not_used; + + switch (num_an) { + case 4: + sc_sa = aq_macsec_sa_sc_4sa_8sc; + break; + case 2: + sc_sa = aq_macsec_sa_sc_2sa_16sc; + break; + case 1: + sc_sa = aq_macsec_sa_sc_1sa_32sc; + break; + default: + break; + } + + return sc_sa; +} + static int aq_mdo_add_secy(struct macsec_context *ctx) { - return -EOPNOTSUPP; + struct aq_nic_s *nic = netdev_priv(ctx->netdev); + struct aq_macsec_cfg *cfg = nic->macsec_cfg; + const struct macsec_secy *secy = ctx->secy; + enum aq_macsec_sc_sa sc_sa; + u32 txsc_idx; + int ret = 0; + + sc_sa = sc_sa_from_num_an(MACSEC_NUM_AN); + if (sc_sa == aq_macsec_sa_sc_not_used) + return -EINVAL; + + if (hweight32(cfg->txsc_idx_busy) >= aq_sc_idx_max(sc_sa)) + return -ENOSPC; + + txsc_idx = ffz(cfg->txsc_idx_busy); + if (txsc_idx == AQ_MACSEC_MAX_SC) + return -ENOSPC; + + if (ctx->prepare) + return 0; + + cfg->sc_sa = sc_sa; + cfg->aq_txsc[txsc_idx].hw_sc_idx = aq_to_hw_sc_idx(txsc_idx, sc_sa); + cfg->aq_txsc[txsc_idx].sw_secy = secy; + + if (netif_carrier_ok(nic->ndev) && netif_running(secy->netdev)) + ret = aq_set_txsc(nic, txsc_idx); + + set_bit(txsc_idx, &cfg->txsc_idx_busy); + + return 0; } static int aq_mdo_upd_secy(struct macsec_context *ctx) { - return -EOPNOTSUPP; + struct aq_nic_s *nic = netdev_priv(ctx->netdev); + const struct macsec_secy *secy = ctx->secy; + int txsc_idx; + int ret = 0; + + txsc_idx = aq_get_txsc_idx_from_secy(nic->macsec_cfg, secy); + if (txsc_idx < 0) + return -ENOENT; + + if (ctx->prepare) + return 0; + + if (netif_carrier_ok(nic->ndev) && netif_running(secy->netdev)) + ret = aq_set_txsc(nic, txsc_idx); + + return ret; +} + +static int aq_clear_txsc(struct aq_nic_s *nic, const int txsc_idx, + enum aq_clear_type clear_type) +{ + struct aq_macsec_txsc *tx_sc = &nic->macsec_cfg->aq_txsc[txsc_idx]; + struct aq_mss_egress_class_record tx_class_rec = { 0 }; + struct aq_mss_egress_sc_record sc_rec = { 0 }; + struct aq_hw_s *hw = nic->aq_hw; + int ret = 0; + int sa_num; + + for_each_set_bit (sa_num, &tx_sc->tx_sa_idx_busy, AQ_MACSEC_MAX_SA) { + ret = aq_clear_txsa(nic, tx_sc, sa_num, clear_type); + if (ret) + return ret; + } + + if (clear_type & AQ_CLEAR_HW) { + ret = aq_mss_set_egress_class_record(hw, &tx_class_rec, + txsc_idx); + if (ret) 
+ return ret; + + sc_rec.fresh = 1; + ret = aq_mss_set_egress_sc_record(hw, &sc_rec, + tx_sc->hw_sc_idx); + if (ret) + return ret; + } + + if (clear_type & AQ_CLEAR_SW) { + clear_bit(txsc_idx, &nic->macsec_cfg->txsc_idx_busy); + nic->macsec_cfg->aq_txsc[txsc_idx].sw_secy = NULL; + } + + return ret; } static int aq_mdo_del_secy(struct macsec_context *ctx) { - return -EOPNOTSUPP; + struct aq_nic_s *nic = netdev_priv(ctx->netdev); + int ret = 0; + + if (ctx->prepare) + return 0; + + if (!nic->macsec_cfg) + return 0; + + ret = aq_clear_secy(nic, ctx->secy, AQ_CLEAR_ALL); + + return ret; +} + +static int aq_update_txsa(struct aq_nic_s *nic, const unsigned int sc_idx, + const struct macsec_secy *secy, + const struct macsec_tx_sa *tx_sa, + const unsigned char *key, const unsigned char an) +{ + struct aq_mss_egress_sakey_record key_rec; + const unsigned int sa_idx = sc_idx | an; + struct aq_mss_egress_sa_record sa_rec; + struct aq_hw_s *hw = nic->aq_hw; + int ret = 0; + + memset(&sa_rec, 0, sizeof(sa_rec)); + sa_rec.valid = tx_sa->active; + sa_rec.fresh = 1; + sa_rec.next_pn = tx_sa->next_pn; + + ret = aq_mss_set_egress_sa_record(hw, &sa_rec, sa_idx); + if (ret) + return ret; + + if (!key) + return ret; + + memset(&key_rec, 0, sizeof(key_rec)); + memcpy(&key_rec.key, key, secy->key_len); + + aq_rotate_keys(&key_rec.key, secy->key_len); + + ret = aq_mss_set_egress_sakey_record(hw, &key_rec, sa_idx); + + return ret; } static int aq_mdo_add_txsa(struct macsec_context *ctx) { - return -EOPNOTSUPP; + struct aq_nic_s *nic = netdev_priv(ctx->netdev); + struct aq_macsec_cfg *cfg = nic->macsec_cfg; + const struct macsec_secy *secy = ctx->secy; + struct aq_macsec_txsc *aq_txsc; + int txsc_idx; + int ret = 0; + + txsc_idx = aq_get_txsc_idx_from_secy(cfg, secy); + if (txsc_idx < 0) + return -EINVAL; + + if (ctx->prepare) + return 0; + + aq_txsc = &cfg->aq_txsc[txsc_idx]; + set_bit(ctx->sa.assoc_num, &aq_txsc->tx_sa_idx_busy); + + memcpy(aq_txsc->tx_sa_key[ctx->sa.assoc_num], ctx->sa.key, + secy->key_len); + + if (netif_carrier_ok(nic->ndev) && netif_running(secy->netdev)) + ret = aq_update_txsa(nic, aq_txsc->hw_sc_idx, secy, + ctx->sa.tx_sa, ctx->sa.key, + ctx->sa.assoc_num); + + return ret; } static int aq_mdo_upd_txsa(struct macsec_context *ctx) { - return -EOPNOTSUPP; + struct aq_nic_s *nic = netdev_priv(ctx->netdev); + struct aq_macsec_cfg *cfg = nic->macsec_cfg; + const struct macsec_secy *secy = ctx->secy; + struct aq_macsec_txsc *aq_txsc; + int txsc_idx; + int ret = 0; + + txsc_idx = aq_get_txsc_idx_from_secy(cfg, secy); + if (txsc_idx < 0) + return -EINVAL; + + if (ctx->prepare) + return 0; + + aq_txsc = &cfg->aq_txsc[txsc_idx]; + if (netif_carrier_ok(nic->ndev) && netif_running(secy->netdev)) + ret = aq_update_txsa(nic, aq_txsc->hw_sc_idx, secy, + ctx->sa.tx_sa, NULL, ctx->sa.assoc_num); + + return ret; +} + +static int aq_clear_txsa(struct aq_nic_s *nic, struct aq_macsec_txsc *aq_txsc, + const int sa_num, enum aq_clear_type clear_type) +{ + const int sa_idx = aq_txsc->hw_sc_idx | sa_num; + struct aq_hw_s *hw = nic->aq_hw; + int ret = 0; + + if (clear_type & AQ_CLEAR_SW) + clear_bit(sa_num, &aq_txsc->tx_sa_idx_busy); + + if ((clear_type & AQ_CLEAR_HW) && netif_carrier_ok(nic->ndev)) { + struct aq_mss_egress_sakey_record key_rec; + struct aq_mss_egress_sa_record sa_rec; + + memset(&sa_rec, 0, sizeof(sa_rec)); + sa_rec.fresh = 1; + + ret = aq_mss_set_egress_sa_record(hw, &sa_rec, sa_idx); + if (ret) + return ret; + + memset(&key_rec, 0, sizeof(key_rec)); + return aq_mss_set_egress_sakey_record(hw, 
&key_rec, sa_idx); + } + + return 0; } static int aq_mdo_del_txsa(struct macsec_context *ctx) { - return -EOPNOTSUPP; + struct aq_nic_s *nic = netdev_priv(ctx->netdev); + struct aq_macsec_cfg *cfg = nic->macsec_cfg; + int txsc_idx; + int ret = 0; + + txsc_idx = aq_get_txsc_idx_from_secy(cfg, ctx->secy); + if (txsc_idx < 0) + return -EINVAL; + + if (ctx->prepare) + return 0; + + ret = aq_clear_txsa(nic, &cfg->aq_txsc[txsc_idx], ctx->sa.assoc_num, + AQ_CLEAR_ALL); + + return ret; } static int aq_mdo_add_rxsc(struct macsec_context *ctx) @@ -77,8 +536,170 @@ static int aq_mdo_del_rxsa(struct macsec_context *ctx) return -EOPNOTSUPP; } +static int apply_txsc_cfg(struct aq_nic_s *nic, const int txsc_idx) +{ + struct aq_macsec_txsc *aq_txsc = &nic->macsec_cfg->aq_txsc[txsc_idx]; + const struct macsec_secy *secy = aq_txsc->sw_secy; + struct macsec_tx_sa *tx_sa; + int ret = 0; + int i; + + if (!netif_running(secy->netdev)) + return ret; + + ret = aq_set_txsc(nic, txsc_idx); + if (ret) + return ret; + + for (i = 0; i < MACSEC_NUM_AN; i++) { + tx_sa = rcu_dereference_bh(secy->tx_sc.sa[i]); + if (tx_sa) { + ret = aq_update_txsa(nic, aq_txsc->hw_sc_idx, secy, + tx_sa, aq_txsc->tx_sa_key[i], i); + if (ret) + return ret; + } + } + + return ret; +} + +static int aq_clear_secy(struct aq_nic_s *nic, const struct macsec_secy *secy, + enum aq_clear_type clear_type) +{ + int txsc_idx; + int ret = 0; + + txsc_idx = aq_get_txsc_idx_from_secy(nic->macsec_cfg, secy); + if (txsc_idx >= 0) { + ret = aq_clear_txsc(nic, txsc_idx, clear_type); + if (ret) + return ret; + } + + return ret; +} + +static int aq_apply_secy_cfg(struct aq_nic_s *nic, + const struct macsec_secy *secy) +{ + int txsc_idx; + int ret = 0; + + txsc_idx = aq_get_txsc_idx_from_secy(nic->macsec_cfg, secy); + if (txsc_idx >= 0) + apply_txsc_cfg(nic, txsc_idx); + + return ret; +} + +static int aq_apply_macsec_cfg(struct aq_nic_s *nic) +{ + int ret = 0; + int i; + + for (i = 0; i < AQ_MACSEC_MAX_SC; i++) { + if (nic->macsec_cfg->txsc_idx_busy & BIT(i)) { + ret = apply_txsc_cfg(nic, i); + if (ret) + return ret; + } + } + + return ret; +} + +static int aq_sa_from_sa_idx(const enum aq_macsec_sc_sa sc_sa, const int sa_idx) +{ + switch (sc_sa) { + case aq_macsec_sa_sc_4sa_8sc: + return sa_idx & 3; + case aq_macsec_sa_sc_2sa_16sc: + return sa_idx & 1; + case aq_macsec_sa_sc_1sa_32sc: + return 0; + default: + WARN_ONCE(true, "Invalid sc_sa"); + } + return -EINVAL; +} + +static int aq_sc_idx_from_sa_idx(const enum aq_macsec_sc_sa sc_sa, + const int sa_idx) +{ + switch (sc_sa) { + case aq_macsec_sa_sc_4sa_8sc: + return sa_idx & ~3; + case aq_macsec_sa_sc_2sa_16sc: + return sa_idx & ~1; + case aq_macsec_sa_sc_1sa_32sc: + return sa_idx; + default: + WARN_ONCE(true, "Invalid sc_sa"); + } + return -EINVAL; +} + static void aq_check_txsa_expiration(struct aq_nic_s *nic) { + u32 egress_sa_expired, egress_sa_threshold_expired; + struct aq_macsec_cfg *cfg = nic->macsec_cfg; + struct aq_hw_s *hw = nic->aq_hw; + struct aq_macsec_txsc *aq_txsc; + const struct macsec_secy *secy; + int sc_idx = 0, txsc_idx = 0; + enum aq_macsec_sc_sa sc_sa; + struct macsec_tx_sa *tx_sa; + unsigned char an = 0; + int ret; + int i; + + sc_sa = cfg->sc_sa; + + ret = aq_mss_get_egress_sa_expired(hw, &egress_sa_expired); + if (unlikely(ret)) + return; + + ret = aq_mss_get_egress_sa_threshold_expired(hw, + &egress_sa_threshold_expired); + + for (i = 0; i < AQ_MACSEC_MAX_SA; i++) { + if (egress_sa_expired & BIT(i)) { + an = aq_sa_from_sa_idx(sc_sa, i); + sc_idx = aq_sc_idx_from_sa_idx(sc_sa, i); + 
txsc_idx = aq_get_txsc_idx_from_sc_idx(sc_sa, sc_idx);
+            if (txsc_idx < 0)
+                continue;
+
+            aq_txsc = &cfg->aq_txsc[txsc_idx];
+            if (!(cfg->txsc_idx_busy & BIT(txsc_idx))) {
+                netdev_warn(nic->ndev,
+                            "PN threshold expired on invalid TX SC");
+                continue;
+            }
+
+            secy = aq_txsc->sw_secy;
+            if (!netif_running(secy->netdev)) {
+                netdev_warn(nic->ndev,
+                            "PN threshold expired on down TX SC");
+                continue;
+            }
+
+            if (unlikely(!(aq_txsc->tx_sa_idx_busy & BIT(an)))) {
+                netdev_warn(nic->ndev,
+                            "PN threshold expired on invalid TX SA");
+                continue;
+            }
+
+            tx_sa = rcu_dereference_bh(secy->tx_sc.sa[an]);
+            macsec_pn_wrapped((struct macsec_secy *)secy, tx_sa);
+        }
+    }
+
+    aq_mss_set_egress_sa_expired(hw, egress_sa_expired);
+    if (likely(!ret))
+        aq_mss_set_egress_sa_threshold_expired(hw,
+            egress_sa_threshold_expired);
 }
 
 const struct macsec_ops aq_macsec_ops = {
@@ -129,10 +750,13 @@ void aq_macsec_free(struct aq_nic_s *nic)
 
 int aq_macsec_enable(struct aq_nic_s *nic)
 {
+    u32 ctl_ether_types[1] = { ETH_P_PAE };
     struct macsec_msg_fw_response resp = { 0 };
     struct macsec_msg_fw_request msg = { 0 };
     struct aq_hw_s *hw = nic->aq_hw;
-    int ret = 0;
+    int num_ctl_ether_types = 0;
+    int index = 0, tbl_idx;
+    int ret;
 
     if (!nic->macsec_cfg)
         return 0;
@@ -155,6 +779,26 @@ int aq_macsec_enable(struct aq_nic_s *nic)
         goto unlock;
     }
 
+    /* Init Ethertype bypass filters */
+    for (index = 0; index < ARRAY_SIZE(ctl_ether_types); index++) {
+        struct aq_mss_egress_ctlf_record tx_ctlf_rec;
+
+        if (ctl_ether_types[index] == 0)
+            continue;
+
+        memset(&tx_ctlf_rec, 0, sizeof(tx_ctlf_rec));
+        tx_ctlf_rec.eth_type = ctl_ether_types[index];
+        tx_ctlf_rec.match_type = 4; /* Match eth_type only */
+        tx_ctlf_rec.match_mask = 0xf; /* match for eth_type */
+        tx_ctlf_rec.action = 0; /* Bypass MACSEC modules */
+        tbl_idx = NUMROWS_EGRESSCTLFRECORD - num_ctl_ether_types - 1;
+        aq_mss_set_egress_ctlf_record(hw, &tx_ctlf_rec, tbl_idx);
+
+        num_ctl_ether_types++;
+    }
+
+    ret = aq_apply_macsec_cfg(nic);
+
 unlock:
     rtnl_unlock();
     return ret;

diff --git a/drivers/net/ethernet/aquantia/atlantic/aq_macsec.h b/drivers/net/ethernet/aquantia/atlantic/aq_macsec.h
index e4c4cf3bea38..5ab0ee4bea73 100644
--- a/drivers/net/ethernet/aquantia/atlantic/aq_macsec.h
+++ b/drivers/net/ethernet/aquantia/atlantic/aq_macsec.h
@@ -24,6 +24,10 @@ enum aq_macsec_sc_sa {
 };
 
 struct aq_macsec_txsc {
+    u32 hw_sc_idx;
+    unsigned long tx_sa_idx_busy;
+    const struct macsec_secy *sw_secy;
+    u8 tx_sa_key[MACSEC_NUM_AN][MACSEC_KEYID_LEN];
 };
 
 struct aq_macsec_rxsc {
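[Editorial note: the egress path's index arithmetic packs the
association number into the low bits of the hardware SC index, with the
block size set by the sc_sa layout. A condensed sketch for the
4-SA/8-SC case, mirroring aq_to_hw_sc_idx() and the sa_idx computation
in aq_update_txsa(); the function name is hypothetical.]

    /* 4sa_8sc layout: 8 SCs, each owning a block of 4 SA slots, so
     * hw_sc_idx = txsc_idx << 2 and sa_idx = hw_sc_idx | an.
     */
    static unsigned int example_txsa_idx(unsigned int txsc_idx,
                                         unsigned char an)
    {
        unsigned int hw_sc_idx = txsc_idx << 2;

        return hw_sc_idx | an; /* an is 0..3 (MACSEC_NUM_AN - 1) */
    }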
From patchwork Wed Mar 25 12:52:43 2020
X-Patchwork-Submitter: Igor Russkikh
X-Patchwork-Id: 221878
From: Igor Russkikh
CC: Mark Starovoytov, Sabrina Dubroca, Antoine Tenart, Igor Russkikh
Subject: [PATCH v2 net-next 14/17] net: atlantic: MACSec ingress offload implementation
Date: Wed, 25 Mar 2020 15:52:43 +0300
Message-ID: <20200325125246.987-15-irusskikh@marvell.com>
In-Reply-To: <20200325125246.987-1-irusskikh@marvell.com>
References: <20200325125246.987-1-irusskikh@marvell.com>
X-Mailing-List: netdev@vger.kernel.org

From: Mark Starovoytov

This patch adds support for MACSec ingress HW offloading on Atlantic
network cards.
Signed-off-by: Mark Starovoytov Signed-off-by: Igor Russkikh --- .../ethernet/aquantia/atlantic/aq_macsec.c | 421 +++++++++++++++++- .../ethernet/aquantia/atlantic/aq_macsec.h | 5 + 2 files changed, 420 insertions(+), 6 deletions(-) diff --git a/drivers/net/ethernet/aquantia/atlantic/aq_macsec.c b/drivers/net/ethernet/aquantia/atlantic/aq_macsec.c index 6ca5361387e6..53e533f8a53d 100644 --- a/drivers/net/ethernet/aquantia/atlantic/aq_macsec.c +++ b/drivers/net/ethernet/aquantia/atlantic/aq_macsec.c @@ -25,6 +25,10 @@ static int aq_clear_txsc(struct aq_nic_s *nic, const int txsc_idx, enum aq_clear_type clear_type); static int aq_clear_txsa(struct aq_nic_s *nic, struct aq_macsec_txsc *aq_txsc, const int sa_num, enum aq_clear_type clear_type); +static int aq_clear_rxsc(struct aq_nic_s *nic, const int rxsc_idx, + enum aq_clear_type clear_type); +static int aq_clear_rxsa(struct aq_nic_s *nic, struct aq_macsec_rxsc *aq_rxsc, + const int sa_num, enum aq_clear_type clear_type); static int aq_clear_secy(struct aq_nic_s *nic, const struct macsec_secy *secy, enum aq_clear_type clear_type); static int aq_apply_macsec_cfg(struct aq_nic_s *nic); @@ -57,6 +61,22 @@ static int aq_get_txsc_idx_from_secy(struct aq_macsec_cfg *macsec_cfg, return -1; } +static int aq_get_rxsc_idx_from_rxsc(struct aq_macsec_cfg *macsec_cfg, + const struct macsec_rx_sc *rxsc) +{ + int i; + + if (unlikely(!rxsc)) + return -1; + + for (i = 0; i < AQ_MACSEC_MAX_SC; i++) { + if (macsec_cfg->aq_rxsc[i].sw_rxsc == rxsc) + return i; + } + + return -1; +} + static int aq_get_txsc_idx_from_sc_idx(const enum aq_macsec_sc_sa sc_sa, const int sc_idx) { @@ -506,34 +526,351 @@ static int aq_mdo_del_txsa(struct macsec_context *ctx) return ret; } +static int aq_rxsc_validate_frames(const enum macsec_validation_type validate) +{ + switch (validate) { + case MACSEC_VALIDATE_DISABLED: + return 2; + case MACSEC_VALIDATE_CHECK: + return 1; + case MACSEC_VALIDATE_STRICT: + return 0; + default: + WARN_ONCE(true, "Invalid validation type"); + } + + return 0; +} + +static int aq_set_rxsc(struct aq_nic_s *nic, const u32 rxsc_idx) +{ + const struct aq_macsec_rxsc *aq_rxsc = + &nic->macsec_cfg->aq_rxsc[rxsc_idx]; + struct aq_mss_ingress_preclass_record pre_class_record; + const struct macsec_rx_sc *rx_sc = aq_rxsc->sw_rxsc; + const struct macsec_secy *secy = aq_rxsc->sw_secy; + const u32 hw_sc_idx = aq_rxsc->hw_sc_idx; + struct aq_mss_ingress_sc_record sc_record; + struct aq_hw_s *hw = nic->aq_hw; + int ret = 0; + + memset(&pre_class_record, 0, sizeof(pre_class_record)); + put_unaligned_be64((__force u64)rx_sc->sci, pre_class_record.sci); + pre_class_record.sci_mask = 0xff; + /* match all MACSEC ethertype packets */ + pre_class_record.eth_type = ETH_P_MACSEC; + pre_class_record.eth_type_mask = 0x3; + + aq_ether_addr_to_mac(pre_class_record.mac_sa, (char *)&rx_sc->sci); + pre_class_record.sa_mask = 0x3f; + + pre_class_record.an_mask = nic->macsec_cfg->sc_sa; + pre_class_record.sc_idx = hw_sc_idx; + /* strip SecTAG & forward for decryption */ + pre_class_record.action = 0x0; + pre_class_record.valid = 1; + + ret = aq_mss_set_ingress_preclass_record(hw, &pre_class_record, + 2 * rxsc_idx + 1); + if (ret) + return ret; + + /* If SCI is absent, then match by SA alone */ + pre_class_record.sci_mask = 0; + pre_class_record.sci_from_table = 1; + + ret = aq_mss_set_ingress_preclass_record(hw, &pre_class_record, + 2 * rxsc_idx); + if (ret) + return ret; + + memset(&sc_record, 0, sizeof(sc_record)); + sc_record.validate_frames = + 
aq_rxsc_validate_frames(secy->validate_frames); + if (secy->replay_protect) { + sc_record.replay_protect = 1; + sc_record.anti_replay_window = secy->replay_window; + } + sc_record.valid = 1; + sc_record.fresh = 1; + + ret = aq_mss_set_ingress_sc_record(hw, &sc_record, hw_sc_idx); + if (ret) + return ret; + + return ret; +} + static int aq_mdo_add_rxsc(struct macsec_context *ctx) { - return -EOPNOTSUPP; + struct aq_nic_s *nic = netdev_priv(ctx->netdev); + struct aq_macsec_cfg *cfg = nic->macsec_cfg; + const u32 rxsc_idx_max = aq_sc_idx_max(cfg->sc_sa); + u32 rxsc_idx; + int ret = 0; + + if (hweight32(cfg->rxsc_idx_busy) >= rxsc_idx_max) + return -ENOSPC; + + rxsc_idx = ffz(cfg->rxsc_idx_busy); + if (rxsc_idx >= rxsc_idx_max) + return -ENOSPC; + + if (ctx->prepare) + return 0; + + cfg->aq_rxsc[rxsc_idx].hw_sc_idx = aq_to_hw_sc_idx(rxsc_idx, + cfg->sc_sa); + cfg->aq_rxsc[rxsc_idx].sw_secy = ctx->secy; + cfg->aq_rxsc[rxsc_idx].sw_rxsc = ctx->rx_sc; + + if (netif_carrier_ok(nic->ndev) && netif_running(ctx->secy->netdev)) + ret = aq_set_rxsc(nic, rxsc_idx); + + if (ret < 0) + return ret; + + set_bit(rxsc_idx, &cfg->rxsc_idx_busy); + + return 0; } static int aq_mdo_upd_rxsc(struct macsec_context *ctx) { - return -EOPNOTSUPP; + struct aq_nic_s *nic = netdev_priv(ctx->netdev); + int rxsc_idx; + int ret = 0; + + rxsc_idx = aq_get_rxsc_idx_from_rxsc(nic->macsec_cfg, ctx->rx_sc); + if (rxsc_idx < 0) + return -ENOENT; + + if (ctx->prepare) + return 0; + + if (netif_carrier_ok(nic->ndev) && netif_running(ctx->secy->netdev)) + ret = aq_set_rxsc(nic, rxsc_idx); + + return ret; +} + +static int aq_clear_rxsc(struct aq_nic_s *nic, const int rxsc_idx, + enum aq_clear_type clear_type) +{ + struct aq_macsec_rxsc *rx_sc = &nic->macsec_cfg->aq_rxsc[rxsc_idx]; + struct aq_hw_s *hw = nic->aq_hw; + int ret = 0; + int sa_num; + + for_each_set_bit (sa_num, &rx_sc->rx_sa_idx_busy, AQ_MACSEC_MAX_SA) { + ret = aq_clear_rxsa(nic, rx_sc, sa_num, clear_type); + if (ret) + return ret; + } + + if (clear_type & AQ_CLEAR_HW) { + struct aq_mss_ingress_preclass_record pre_class_record; + struct aq_mss_ingress_sc_record sc_record; + + memset(&pre_class_record, 0, sizeof(pre_class_record)); + memset(&sc_record, 0, sizeof(sc_record)); + + ret = aq_mss_set_ingress_preclass_record(hw, &pre_class_record, + 2 * rxsc_idx); + if (ret) + return ret; + + ret = aq_mss_set_ingress_preclass_record(hw, &pre_class_record, + 2 * rxsc_idx + 1); + if (ret) + return ret; + + sc_record.fresh = 1; + ret = aq_mss_set_ingress_sc_record(hw, &sc_record, + rx_sc->hw_sc_idx); + if (ret) + return ret; + } + + if (clear_type & AQ_CLEAR_SW) { + clear_bit(rxsc_idx, &nic->macsec_cfg->rxsc_idx_busy); + rx_sc->sw_secy = NULL; + rx_sc->sw_rxsc = NULL; + } + + return ret; } static int aq_mdo_del_rxsc(struct macsec_context *ctx) { - return -EOPNOTSUPP; + struct aq_nic_s *nic = netdev_priv(ctx->netdev); + enum aq_clear_type clear_type = AQ_CLEAR_SW; + int rxsc_idx; + int ret = 0; + + rxsc_idx = aq_get_rxsc_idx_from_rxsc(nic->macsec_cfg, ctx->rx_sc); + if (rxsc_idx < 0) + return -ENOENT; + + if (ctx->prepare) + return 0; + + if (netif_carrier_ok(nic->ndev)) + clear_type = AQ_CLEAR_ALL; + + ret = aq_clear_rxsc(nic, rxsc_idx, clear_type); + + return ret; +} + +static int aq_update_rxsa(struct aq_nic_s *nic, const unsigned int sc_idx, + const struct macsec_secy *secy, + const struct macsec_rx_sa *rx_sa, + const unsigned char *key, const unsigned char an) +{ + struct aq_mss_ingress_sakey_record sa_key_record; + struct aq_mss_ingress_sa_record sa_record; + struct aq_hw_s 
*hw = nic->aq_hw; + const int sa_idx = sc_idx | an; + int ret = 0; + + memset(&sa_record, 0, sizeof(sa_record)); + sa_record.valid = rx_sa->active; + sa_record.fresh = 1; + sa_record.next_pn = rx_sa->next_pn; + + ret = aq_mss_set_ingress_sa_record(hw, &sa_record, sa_idx); + if (ret) + return ret; + + if (!key) + return ret; + + memset(&sa_key_record, 0, sizeof(sa_key_record)); + memcpy(&sa_key_record.key, key, secy->key_len); + + switch (secy->key_len) { + case AQ_MACSEC_KEY_LEN_128_BIT: + sa_key_record.key_len = 0; + break; + case AQ_MACSEC_KEY_LEN_192_BIT: + sa_key_record.key_len = 1; + break; + case AQ_MACSEC_KEY_LEN_256_BIT: + sa_key_record.key_len = 2; + break; + default: + return -1; + } + + aq_rotate_keys(&sa_key_record.key, secy->key_len); + + ret = aq_mss_set_ingress_sakey_record(hw, &sa_key_record, sa_idx); + + return ret; } static int aq_mdo_add_rxsa(struct macsec_context *ctx) { - return -EOPNOTSUPP; + const struct macsec_rx_sc *rx_sc = ctx->sa.rx_sa->sc; + struct aq_nic_s *nic = netdev_priv(ctx->netdev); + const struct macsec_secy *secy = ctx->secy; + struct aq_macsec_rxsc *aq_rxsc; + int rxsc_idx; + int ret = 0; + + rxsc_idx = aq_get_rxsc_idx_from_rxsc(nic->macsec_cfg, rx_sc); + if (rxsc_idx < 0) + return -EINVAL; + + if (ctx->prepare) + return 0; + + aq_rxsc = &nic->macsec_cfg->aq_rxsc[rxsc_idx]; + set_bit(ctx->sa.assoc_num, &aq_rxsc->rx_sa_idx_busy); + + memcpy(aq_rxsc->rx_sa_key[ctx->sa.assoc_num], ctx->sa.key, + secy->key_len); + + if (netif_carrier_ok(nic->ndev) && netif_running(secy->netdev)) + ret = aq_update_rxsa(nic, aq_rxsc->hw_sc_idx, secy, + ctx->sa.rx_sa, ctx->sa.key, + ctx->sa.assoc_num); + + return ret; } static int aq_mdo_upd_rxsa(struct macsec_context *ctx) { - return -EOPNOTSUPP; + const struct macsec_rx_sc *rx_sc = ctx->sa.rx_sa->sc; + struct aq_nic_s *nic = netdev_priv(ctx->netdev); + struct aq_macsec_cfg *cfg = nic->macsec_cfg; + const struct macsec_secy *secy = ctx->secy; + int rxsc_idx; + int ret = 0; + + rxsc_idx = aq_get_rxsc_idx_from_rxsc(cfg, rx_sc); + if (rxsc_idx < 0) + return -EINVAL; + + if (ctx->prepare) + return 0; + + if (netif_carrier_ok(nic->ndev) && netif_running(secy->netdev)) + ret = aq_update_rxsa(nic, cfg->aq_rxsc[rxsc_idx].hw_sc_idx, + secy, ctx->sa.rx_sa, NULL, + ctx->sa.assoc_num); + + return ret; +} + +static int aq_clear_rxsa(struct aq_nic_s *nic, struct aq_macsec_rxsc *aq_rxsc, + const int sa_num, enum aq_clear_type clear_type) +{ + int sa_idx = aq_rxsc->hw_sc_idx | sa_num; + struct aq_hw_s *hw = nic->aq_hw; + int ret = 0; + + if (clear_type & AQ_CLEAR_SW) + clear_bit(sa_num, &aq_rxsc->rx_sa_idx_busy); + + if ((clear_type & AQ_CLEAR_HW) && netif_carrier_ok(nic->ndev)) { + struct aq_mss_ingress_sakey_record sa_key_record; + struct aq_mss_ingress_sa_record sa_record; + + memset(&sa_key_record, 0, sizeof(sa_key_record)); + memset(&sa_record, 0, sizeof(sa_record)); + sa_record.fresh = 1; + ret = aq_mss_set_ingress_sa_record(hw, &sa_record, sa_idx); + if (ret) + return ret; + + return aq_mss_set_ingress_sakey_record(hw, &sa_key_record, + sa_idx); + } + + return ret; } static int aq_mdo_del_rxsa(struct macsec_context *ctx) { - return -EOPNOTSUPP; + const struct macsec_rx_sc *rx_sc = ctx->sa.rx_sa->sc; + struct aq_nic_s *nic = netdev_priv(ctx->netdev); + struct aq_macsec_cfg *cfg = nic->macsec_cfg; + int rxsc_idx; + int ret = 0; + + rxsc_idx = aq_get_rxsc_idx_from_rxsc(cfg, rx_sc); + if (rxsc_idx < 0) + return -EINVAL; + + if (ctx->prepare) + return 0; + + ret = aq_clear_rxsa(nic, &cfg->aq_rxsc[rxsc_idx], ctx->sa.assoc_num, + 
AQ_CLEAR_ALL); + + return ret; } static int apply_txsc_cfg(struct aq_nic_s *nic, const int txsc_idx) @@ -564,10 +901,40 @@ static int apply_txsc_cfg(struct aq_nic_s *nic, const int txsc_idx) return ret; } +static int apply_rxsc_cfg(struct aq_nic_s *nic, const int rxsc_idx) +{ + struct aq_macsec_rxsc *aq_rxsc = &nic->macsec_cfg->aq_rxsc[rxsc_idx]; + const struct macsec_secy *secy = aq_rxsc->sw_secy; + struct macsec_rx_sa *rx_sa; + int ret = 0; + int i; + + if (!netif_running(secy->netdev)) + return ret; + + ret = aq_set_rxsc(nic, rxsc_idx); + if (ret) + return ret; + + for (i = 0; i < MACSEC_NUM_AN; i++) { + rx_sa = rcu_dereference_bh(aq_rxsc->sw_rxsc->sa[i]); + if (rx_sa) { + ret = aq_update_rxsa(nic, aq_rxsc->hw_sc_idx, secy, + rx_sa, aq_rxsc->rx_sa_key[i], i); + if (ret) + return ret; + } + } + + return ret; +} + static int aq_clear_secy(struct aq_nic_s *nic, const struct macsec_secy *secy, enum aq_clear_type clear_type) { + struct macsec_rx_sc *rx_sc; int txsc_idx; + int rxsc_idx; int ret = 0; txsc_idx = aq_get_txsc_idx_from_secy(nic->macsec_cfg, secy); @@ -577,19 +944,43 @@ static int aq_clear_secy(struct aq_nic_s *nic, const struct macsec_secy *secy, return ret; } + for (rx_sc = rcu_dereference_bh(secy->rx_sc); rx_sc; + rx_sc = rcu_dereference_bh(rx_sc->next)) { + rxsc_idx = aq_get_rxsc_idx_from_rxsc(nic->macsec_cfg, rx_sc); + if (rxsc_idx < 0) + continue; + + ret = aq_clear_rxsc(nic, rxsc_idx, clear_type); + if (ret) + return ret; + } + return ret; } static int aq_apply_secy_cfg(struct aq_nic_s *nic, const struct macsec_secy *secy) { + struct macsec_rx_sc *rx_sc; int txsc_idx; + int rxsc_idx; int ret = 0; txsc_idx = aq_get_txsc_idx_from_secy(nic->macsec_cfg, secy); if (txsc_idx >= 0) apply_txsc_cfg(nic, txsc_idx); + for (rx_sc = rcu_dereference_bh(secy->rx_sc); rx_sc && rx_sc->active; + rx_sc = rcu_dereference_bh(rx_sc->next)) { + rxsc_idx = aq_get_rxsc_idx_from_rxsc(nic->macsec_cfg, rx_sc); + if (unlikely(rxsc_idx < 0)) + continue; + + ret = apply_rxsc_cfg(nic, rxsc_idx); + if (ret) + return ret; + } + return ret; } @@ -606,6 +997,14 @@ static int aq_apply_macsec_cfg(struct aq_nic_s *nic) } } + for (i = 0; i < AQ_MACSEC_MAX_SC; i++) { + if (nic->macsec_cfg->rxsc_idx_busy & BIT(i)) { + ret = apply_rxsc_cfg(nic, i); + if (ret) + return ret; + } + } + return ret; } @@ -781,6 +1180,7 @@ int aq_macsec_enable(struct aq_nic_s *nic) /* Init Ethertype bypass filters */ for (index = 0; index < ARRAY_SIZE(ctl_ether_types); index++) { + struct aq_mss_ingress_prectlf_record rx_prectlf_rec; struct aq_mss_egress_ctlf_record tx_ctlf_rec; if (ctl_ether_types[index] == 0) @@ -794,6 +1194,15 @@ int aq_macsec_enable(struct aq_nic_s *nic) tbl_idx = NUMROWS_EGRESSCTLFRECORD - num_ctl_ether_types - 1; aq_mss_set_egress_ctlf_record(hw, &tx_ctlf_rec, tbl_idx); + memset(&rx_prectlf_rec, 0, sizeof(rx_prectlf_rec)); + rx_prectlf_rec.eth_type = ctl_ether_types[index]; + rx_prectlf_rec.match_type = 4; /* Match eth_type only */ + rx_prectlf_rec.match_mask = 0xf; /* match for eth_type */ + rx_prectlf_rec.action = 0; /* Bypass MACSEC modules */ + tbl_idx = + NUMROWS_INGRESSPRECTLFRECORD - num_ctl_ether_types - 1; + aq_mss_set_ingress_prectlf_record(hw, &rx_prectlf_rec, tbl_idx); + num_ctl_ether_types++; } diff --git a/drivers/net/ethernet/aquantia/atlantic/aq_macsec.h b/drivers/net/ethernet/aquantia/atlantic/aq_macsec.h index 5ab0ee4bea73..b8485c1cb667 100644 --- a/drivers/net/ethernet/aquantia/atlantic/aq_macsec.h +++ b/drivers/net/ethernet/aquantia/atlantic/aq_macsec.h @@ -31,6 +31,11 @@ struct aq_macsec_txsc { 
}; struct aq_macsec_rxsc { + u32 hw_sc_idx; + unsigned long rx_sa_idx_busy; + const struct macsec_secy *sw_secy; + const struct macsec_rx_sc *sw_rxsc; + u8 rx_sa_key[MACSEC_NUM_AN][MACSEC_KEYID_LEN]; }; struct aq_macsec_cfg {
From patchwork Wed Mar 25 12:52:44 2020
From: Igor Russkikh
CC: Mark Starovoytov, Sabrina Dubroca, Antoine Tenart, Dmitry Bogdanov, Igor Russkikh
Subject: [PATCH v2 net-next 15/17] net: atlantic: MACSec offload statistics HW bindings
Date: Wed, 25 Mar 2020 15:52:44 +0300
Message-ID: <20200325125246.987-16-irusskikh@marvell.com>
In-Reply-To: <20200325125246.987-1-irusskikh@marvell.com>
References: <20200325125246.987-1-irusskikh@marvell.com>
X-Mailing-List: netdev@vger.kernel.org

From: Dmitry Bogdanov

This patch adds the Atlantic HW-specific bindings for MACSec statistics, e.g. register addresses, structs, helper functions, etc., which will be used by the actual callback implementations.

Signed-off-by: Dmitry Bogdanov Signed-off-by: Mark Starovoytov Signed-off-by: Igor Russkikh --- .../aquantia/atlantic/macsec/macsec_api.c | 545 ++++++++++++++++++ .../aquantia/atlantic/macsec/macsec_api.h | 47 ++ .../aquantia/atlantic/macsec/macsec_struct.h | 214 +++++++ 3 files changed, 806 insertions(+) diff --git a/drivers/net/ethernet/aquantia/atlantic/macsec/macsec_api.c b/drivers/net/ethernet/aquantia/atlantic/macsec/macsec_api.c index 44d9296dfa63..97901c114bfa 100644 --- a/drivers/net/ethernet/aquantia/atlantic/macsec/macsec_api.c +++ b/drivers/net/ethernet/aquantia/atlantic/macsec/macsec_api.c @@ -1817,6 +1817,551 @@ int aq_mss_get_egress_sakey_record(struct aq_hw_s *hw, return AQ_API_CALL_SAFE(get_egress_sakey_record, hw, rec, table_index); } +static int get_egress_sc_counters(struct aq_hw_s *hw, + struct aq_mss_egress_sc_counters *counters, + u16 sc_index) +{ + u16 packed_record[4]; + int ret; + + if (sc_index >= NUMROWS_EGRESSSCRECORD) + return -EINVAL; + + ret = get_raw_egress_record(hw, packed_record, 4, 3, sc_index * 8 + 4); + if (unlikely(ret)) + return ret; + counters->sc_protected_pkts[0] = + packed_record[0] | (packed_record[1] << 16); + counters->sc_protected_pkts[1] = + packed_record[2] | (packed_record[3] << 16); + + ret = get_raw_egress_record(hw, packed_record, 4, 3, sc_index * 8 + 5); + if (unlikely(ret)) + return ret; + counters->sc_encrypted_pkts[0] = + packed_record[0] | (packed_record[1] << 16); + counters->sc_encrypted_pkts[1] = + packed_record[2] | (packed_record[3] << 16); + + ret = get_raw_egress_record(hw, packed_record, 4, 3, sc_index * 8 + 6); + if (unlikely(ret)) + return ret; + counters->sc_protected_octets[0] = + packed_record[0] | (packed_record[1] << 16); + counters->sc_protected_octets[1] = + packed_record[2] | (packed_record[3] << 16); + + ret = get_raw_egress_record(hw, packed_record, 4, 3, sc_index * 8 + 7); + if (unlikely(ret)) + return ret; + counters->sc_encrypted_octets[0] = + packed_record[0] | (packed_record[1] << 16); + counters->sc_encrypted_octets[1] = + packed_record[2] | (packed_record[3] << 16); + + return 0; +} + +int aq_mss_get_egress_sc_counters(struct aq_hw_s *hw, + struct aq_mss_egress_sc_counters *counters, + u16 sc_index) +{ + memset(counters, 0, sizeof(*counters)); + + return AQ_API_CALL_SAFE(get_egress_sc_counters, hw, counters, sc_index); +} + +static int get_egress_sa_counters(struct aq_hw_s *hw, + struct aq_mss_egress_sa_counters *counters, + u16 sa_index) +{ + u16 packed_record[4]; + int ret; + + if (sa_index >= NUMROWS_EGRESSSARECORD) + return -EINVAL; + + ret = get_raw_egress_record(hw, packed_record, 4, 3, sa_index * 8 + 0); + if (unlikely(ret)) + return ret; + counters->sa_hit_drop_redirect[0] = + packed_record[0] | (packed_record[1] << 16); + counters->sa_hit_drop_redirect[1] = + packed_record[2] | (packed_record[3] << 16); + + ret = get_raw_egress_record(hw, packed_record, 4, 3, sa_index * 8 + 1); + if (unlikely(ret)) + return ret; + counters->sa_protected2_pkts[0] = + packed_record[0] | (packed_record[1] << 16); + counters->sa_protected2_pkts[1] =
packed_record[2] | (packed_record[3] << 16); + + ret = get_raw_egress_record(hw, packed_record, 4, 3, sa_index * 8 + 2); + if (unlikely(ret)) + return ret; + counters->sa_protected_pkts[0] = + packed_record[0] | (packed_record[1] << 16); + counters->sa_protected_pkts[1] = + packed_record[2] | (packed_record[3] << 16); + + ret = get_raw_egress_record(hw, packed_record, 4, 3, sa_index * 8 + 3); + if (unlikely(ret)) + return ret; + counters->sa_encrypted_pkts[0] = + packed_record[0] | (packed_record[1] << 16); + counters->sa_encrypted_pkts[1] = + packed_record[2] | (packed_record[3] << 16); + + return 0; +} + +int aq_mss_get_egress_sa_counters(struct aq_hw_s *hw, + struct aq_mss_egress_sa_counters *counters, + u16 sa_index) +{ + memset(counters, 0, sizeof(*counters)); + + return AQ_API_CALL_SAFE(get_egress_sa_counters, hw, counters, sa_index); +} + +static int +get_egress_common_counters(struct aq_hw_s *hw, + struct aq_mss_egress_common_counters *counters) +{ + u16 packed_record[4]; + int ret; + + ret = get_raw_egress_record(hw, packed_record, 4, 3, 256 + 0); + if (unlikely(ret)) + return ret; + counters->ctl_pkt[0] = packed_record[0] | (packed_record[1] << 16); + counters->ctl_pkt[1] = packed_record[2] | (packed_record[3] << 16); + + ret = get_raw_egress_record(hw, packed_record, 4, 3, 256 + 1); + if (unlikely(ret)) + return ret; + counters->unknown_sa_pkts[0] = + packed_record[0] | (packed_record[1] << 16); + counters->unknown_sa_pkts[1] = + packed_record[2] | (packed_record[3] << 16); + + ret = get_raw_egress_record(hw, packed_record, 4, 3, 256 + 2); + if (unlikely(ret)) + return ret; + counters->untagged_pkts[0] = + packed_record[0] | (packed_record[1] << 16); + counters->untagged_pkts[1] = + packed_record[2] | (packed_record[3] << 16); + + ret = get_raw_egress_record(hw, packed_record, 4, 3, 256 + 3); + if (unlikely(ret)) + return ret; + counters->too_long[0] = packed_record[0] | (packed_record[1] << 16); + counters->too_long[1] = packed_record[2] | (packed_record[3] << 16); + + ret = get_raw_egress_record(hw, packed_record, 4, 3, 256 + 4); + if (unlikely(ret)) + return ret; + counters->ecc_error_pkts[0] = + packed_record[0] | (packed_record[1] << 16); + counters->ecc_error_pkts[1] = + packed_record[2] | (packed_record[3] << 16); + + ret = get_raw_egress_record(hw, packed_record, 4, 3, 256 + 5); + if (unlikely(ret)) + return ret; + counters->unctrl_hit_drop_redir[0] = + packed_record[0] | (packed_record[1] << 16); + counters->unctrl_hit_drop_redir[1] = + packed_record[2] | (packed_record[3] << 16); + + return 0; +} + +int aq_mss_get_egress_common_counters(struct aq_hw_s *hw, + struct aq_mss_egress_common_counters *counters) +{ + memset(counters, 0, sizeof(*counters)); + + return AQ_API_CALL_SAFE(get_egress_common_counters, hw, counters); +} + +static int clear_egress_counters(struct aq_hw_s *hw) +{ + struct mss_egress_ctl_register ctl_reg; + int ret; + + memset(&ctl_reg, 0, sizeof(ctl_reg)); + + ret = aq_mss_mdio_read(hw, MDIO_MMD_VEND1, MSS_EGRESS_CTL_REGISTER_ADDR, + &ctl_reg.word_0); + if (unlikely(ret)) + return ret; + ret = aq_mss_mdio_read(hw, MDIO_MMD_VEND1, + MSS_EGRESS_CTL_REGISTER_ADDR + 4, + &ctl_reg.word_1); + if (unlikely(ret)) + return ret; + + /* Toggle the Egress MIB clear bit 0->1->0 */ + ctl_reg.bits_0.clear_counter = 0; + ret = aq_mss_mdio_write(hw, MDIO_MMD_VEND1, + MSS_EGRESS_CTL_REGISTER_ADDR, ctl_reg.word_0); + if (unlikely(ret)) + return ret; + ret = aq_mss_mdio_write(hw, MDIO_MMD_VEND1, + MSS_EGRESS_CTL_REGISTER_ADDR + 4, + ctl_reg.word_1); + if (unlikely(ret)) + 
return ret; + + ctl_reg.bits_0.clear_counter = 1; + ret = aq_mss_mdio_write(hw, MDIO_MMD_VEND1, + MSS_EGRESS_CTL_REGISTER_ADDR, ctl_reg.word_0); + if (unlikely(ret)) + return ret; + ret = aq_mss_mdio_write(hw, MDIO_MMD_VEND1, + MSS_EGRESS_CTL_REGISTER_ADDR + 4, + ctl_reg.word_1); + if (unlikely(ret)) + return ret; + + ctl_reg.bits_0.clear_counter = 0; + ret = aq_mss_mdio_write(hw, MDIO_MMD_VEND1, + MSS_EGRESS_CTL_REGISTER_ADDR, ctl_reg.word_0); + if (unlikely(ret)) + return ret; + ret = aq_mss_mdio_write(hw, MDIO_MMD_VEND1, + MSS_EGRESS_CTL_REGISTER_ADDR + 4, + ctl_reg.word_1); + if (unlikely(ret)) + return ret; + + return 0; +} + +int aq_mss_clear_egress_counters(struct aq_hw_s *hw) +{ + return AQ_API_CALL_SAFE(clear_egress_counters, hw); +} + +static int get_ingress_sa_counters(struct aq_hw_s *hw, + struct aq_mss_ingress_sa_counters *counters, + u16 sa_index) +{ + u16 packed_record[4]; + int ret; + + if (sa_index >= NUMROWS_INGRESSSARECORD) + return -EINVAL; + + ret = get_raw_ingress_record(hw, packed_record, 4, 6, + sa_index * 12 + 0); + if (unlikely(ret)) + return ret; + counters->untagged_hit_pkts[0] = + packed_record[0] | (packed_record[1] << 16); + counters->untagged_hit_pkts[1] = + packed_record[2] | (packed_record[3] << 16); + + ret = get_raw_ingress_record(hw, packed_record, 4, 6, + sa_index * 12 + 1); + if (unlikely(ret)) + return ret; + counters->ctrl_hit_drop_redir_pkts[0] = + packed_record[0] | (packed_record[1] << 16); + counters->ctrl_hit_drop_redir_pkts[1] = + packed_record[2] | (packed_record[3] << 16); + + ret = get_raw_ingress_record(hw, packed_record, 4, 6, + sa_index * 12 + 2); + if (unlikely(ret)) + return ret; + counters->not_using_sa[0] = packed_record[0] | (packed_record[1] << 16); + counters->not_using_sa[1] = packed_record[2] | (packed_record[3] << 16); + + ret = get_raw_ingress_record(hw, packed_record, 4, 6, + sa_index * 12 + 3); + if (unlikely(ret)) + return ret; + counters->unused_sa[0] = packed_record[0] | (packed_record[1] << 16); + counters->unused_sa[1] = packed_record[2] | (packed_record[3] << 16); + + ret = get_raw_ingress_record(hw, packed_record, 4, 6, + sa_index * 12 + 4); + if (unlikely(ret)) + return ret; + counters->not_valid_pkts[0] = + packed_record[0] | (packed_record[1] << 16); + counters->not_valid_pkts[1] = + packed_record[2] | (packed_record[3] << 16); + + ret = get_raw_ingress_record(hw, packed_record, 4, 6, + sa_index * 12 + 5); + if (unlikely(ret)) + return ret; + counters->invalid_pkts[0] = packed_record[0] | (packed_record[1] << 16); + counters->invalid_pkts[1] = packed_record[2] | (packed_record[3] << 16); + + ret = get_raw_ingress_record(hw, packed_record, 4, 6, + sa_index * 12 + 6); + if (unlikely(ret)) + return ret; + counters->ok_pkts[0] = packed_record[0] | (packed_record[1] << 16); + counters->ok_pkts[1] = packed_record[2] | (packed_record[3] << 16); + + ret = get_raw_ingress_record(hw, packed_record, 4, 6, + sa_index * 12 + 7); + if (unlikely(ret)) + return ret; + counters->late_pkts[0] = packed_record[0] | (packed_record[1] << 16); + counters->late_pkts[1] = packed_record[2] | (packed_record[3] << 16); + + ret = get_raw_ingress_record(hw, packed_record, 4, 6, + sa_index * 12 + 8); + if (unlikely(ret)) + return ret; + counters->delayed_pkts[0] = packed_record[0] | (packed_record[1] << 16); + counters->delayed_pkts[1] = packed_record[2] | (packed_record[3] << 16); + + ret = get_raw_ingress_record(hw, packed_record, 4, 6, + sa_index * 12 + 9); + if (unlikely(ret)) + return ret; + counters->unchecked_pkts[0] = + packed_record[0] 
| (packed_record[1] << 16); + counters->unchecked_pkts[1] = + packed_record[2] | (packed_record[3] << 16); + + ret = get_raw_ingress_record(hw, packed_record, 4, 6, + sa_index * 12 + 10); + if (unlikely(ret)) + return ret; + counters->validated_octets[0] = + packed_record[0] | (packed_record[1] << 16); + counters->validated_octets[1] = + packed_record[2] | (packed_record[3] << 16); + + ret = get_raw_ingress_record(hw, packed_record, 4, 6, + sa_index * 12 + 11); + if (unlikely(ret)) + return ret; + counters->decrypted_octets[0] = + packed_record[0] | (packed_record[1] << 16); + counters->decrypted_octets[1] = + packed_record[2] | (packed_record[3] << 16); + + return 0; +} + +int aq_mss_get_ingress_sa_counters(struct aq_hw_s *hw, + struct aq_mss_ingress_sa_counters *counters, + u16 sa_index) +{ + memset(counters, 0, sizeof(*counters)); + + return AQ_API_CALL_SAFE(get_ingress_sa_counters, hw, counters, + sa_index); +} + +static int +get_ingress_common_counters(struct aq_hw_s *hw, + struct aq_mss_ingress_common_counters *counters) +{ + u16 packed_record[4]; + int ret; + + ret = get_raw_ingress_record(hw, packed_record, 4, 6, 385 + 0); + if (unlikely(ret)) + return ret; + counters->ctl_pkts[0] = packed_record[0] | (packed_record[1] << 16); + counters->ctl_pkts[1] = packed_record[2] | (packed_record[3] << 16); + + ret = get_raw_ingress_record(hw, packed_record, 4, 6, 385 + 1); + if (unlikely(ret)) + return ret; + counters->tagged_miss_pkts[0] = + packed_record[0] | (packed_record[1] << 16); + counters->tagged_miss_pkts[1] = + packed_record[2] | (packed_record[3] << 16); + + ret = get_raw_ingress_record(hw, packed_record, 4, 6, 385 + 2); + if (unlikely(ret)) + return ret; + counters->untagged_miss_pkts[0] = + packed_record[0] | (packed_record[1] << 16); + counters->untagged_miss_pkts[1] = + packed_record[2] | (packed_record[3] << 16); + + ret = get_raw_ingress_record(hw, packed_record, 4, 6, 385 + 3); + if (unlikely(ret)) + return ret; + counters->notag_pkts[0] = packed_record[0] | (packed_record[1] << 16); + counters->notag_pkts[1] = packed_record[2] | (packed_record[3] << 16); + + ret = get_raw_ingress_record(hw, packed_record, 4, 6, 385 + 4); + if (unlikely(ret)) + return ret; + counters->untagged_pkts[0] = + packed_record[0] | (packed_record[1] << 16); + counters->untagged_pkts[1] = + packed_record[2] | (packed_record[3] << 16); + + ret = get_raw_ingress_record(hw, packed_record, 4, 6, 385 + 5); + if (unlikely(ret)) + return ret; + counters->bad_tag_pkts[0] = packed_record[0] | (packed_record[1] << 16); + counters->bad_tag_pkts[1] = packed_record[2] | (packed_record[3] << 16); + + ret = get_raw_ingress_record(hw, packed_record, 4, 6, 385 + 6); + if (unlikely(ret)) + return ret; + counters->no_sci_pkts[0] = packed_record[0] | (packed_record[1] << 16); + counters->no_sci_pkts[1] = packed_record[2] | (packed_record[3] << 16); + + ret = get_raw_ingress_record(hw, packed_record, 4, 6, 385 + 7); + if (unlikely(ret)) + return ret; + counters->unknown_sci_pkts[0] = + packed_record[0] | (packed_record[1] << 16); + counters->unknown_sci_pkts[1] = + packed_record[2] | (packed_record[3] << 16); + + ret = get_raw_ingress_record(hw, packed_record, 4, 6, 385 + 8); + if (unlikely(ret)) + return ret; + counters->ctrl_prt_pass_pkts[0] = + packed_record[0] | (packed_record[1] << 16); + counters->ctrl_prt_pass_pkts[1] = + packed_record[2] | (packed_record[3] << 16); + + ret = get_raw_ingress_record(hw, packed_record, 4, 6, 385 + 9); + if (unlikely(ret)) + return ret; + counters->unctrl_prt_pass_pkts[0] = + 
packed_record[0] | (packed_record[1] << 16); + counters->unctrl_prt_pass_pkts[1] = + packed_record[2] | (packed_record[3] << 16); + + ret = get_raw_ingress_record(hw, packed_record, 4, 6, 385 + 10); + if (unlikely(ret)) + return ret; + counters->ctrl_prt_fail_pkts[0] = + packed_record[0] | (packed_record[1] << 16); + counters->ctrl_prt_fail_pkts[1] = + packed_record[2] | (packed_record[3] << 16); + + ret = get_raw_ingress_record(hw, packed_record, 4, 6, 385 + 11); + if (unlikely(ret)) + return ret; + counters->unctrl_prt_fail_pkts[0] = + packed_record[0] | (packed_record[1] << 16); + counters->unctrl_prt_fail_pkts[1] = + packed_record[2] | (packed_record[3] << 16); + + ret = get_raw_ingress_record(hw, packed_record, 4, 6, 385 + 12); + if (unlikely(ret)) + return ret; + counters->too_long_pkts[0] = + packed_record[0] | (packed_record[1] << 16); + counters->too_long_pkts[1] = + packed_record[2] | (packed_record[3] << 16); + + ret = get_raw_ingress_record(hw, packed_record, 4, 6, 385 + 13); + if (unlikely(ret)) + return ret; + counters->igpoc_ctl_pkts[0] = + packed_record[0] | (packed_record[1] << 16); + counters->igpoc_ctl_pkts[1] = + packed_record[2] | (packed_record[3] << 16); + + ret = get_raw_ingress_record(hw, packed_record, 4, 6, 385 + 14); + if (unlikely(ret)) + return ret; + counters->ecc_error_pkts[0] = + packed_record[0] | (packed_record[1] << 16); + counters->ecc_error_pkts[1] = + packed_record[2] | (packed_record[3] << 16); + + ret = get_raw_ingress_record(hw, packed_record, 4, 6, 385 + 15); + if (unlikely(ret)) + return ret; + counters->unctrl_hit_drop_redir[0] = + packed_record[0] | (packed_record[1] << 16); + counters->unctrl_hit_drop_redir[1] = + packed_record[2] | (packed_record[3] << 16); + + return 0; +} + +int aq_mss_get_ingress_common_counters(struct aq_hw_s *hw, + struct aq_mss_ingress_common_counters *counters) +{ + memset(counters, 0, sizeof(*counters)); + + return AQ_API_CALL_SAFE(get_ingress_common_counters, hw, counters); +} + +static int clear_ingress_counters(struct aq_hw_s *hw) +{ + struct mss_ingress_ctl_register ctl_reg; + int ret; + + memset(&ctl_reg, 0, sizeof(ctl_reg)); + + ret = aq_mss_mdio_read(hw, MDIO_MMD_VEND1, + MSS_INGRESS_CTL_REGISTER_ADDR, &ctl_reg.word_0); + if (unlikely(ret)) + return ret; + ret = aq_mss_mdio_read(hw, MDIO_MMD_VEND1, + MSS_INGRESS_CTL_REGISTER_ADDR + 4, + &ctl_reg.word_1); + if (unlikely(ret)) + return ret; + + /* Toggle the Ingress MIB clear bit 0->1->0 */ + ctl_reg.bits_0.clear_count = 0; + ret = aq_mss_mdio_write(hw, MDIO_MMD_VEND1, + MSS_INGRESS_CTL_REGISTER_ADDR, ctl_reg.word_0); + if (unlikely(ret)) + return ret; + ret = aq_mss_mdio_write(hw, MDIO_MMD_VEND1, + MSS_INGRESS_CTL_REGISTER_ADDR + 4, + ctl_reg.word_1); + if (unlikely(ret)) + return ret; + + ctl_reg.bits_0.clear_count = 1; + ret = aq_mss_mdio_write(hw, MDIO_MMD_VEND1, + MSS_INGRESS_CTL_REGISTER_ADDR, ctl_reg.word_0); + if (unlikely(ret)) + return ret; + ret = aq_mss_mdio_write(hw, MDIO_MMD_VEND1, + MSS_INGRESS_CTL_REGISTER_ADDR + 4, + ctl_reg.word_1); + if (unlikely(ret)) + return ret; + + ctl_reg.bits_0.clear_count = 0; + ret = aq_mss_mdio_write(hw, MDIO_MMD_VEND1, + MSS_INGRESS_CTL_REGISTER_ADDR, ctl_reg.word_0); + if (unlikely(ret)) + return ret; + ret = aq_mss_mdio_write(hw, MDIO_MMD_VEND1, + MSS_INGRESS_CTL_REGISTER_ADDR + 4, + ctl_reg.word_1); + if (unlikely(ret)) + return ret; + + return 0; +} + +int aq_mss_clear_ingress_counters(struct aq_hw_s *hw) +{ + return AQ_API_CALL_SAFE(clear_ingress_counters, hw); +} + static int get_egress_sa_expired(struct 
aq_hw_s *hw, u32 *expired) { u16 val; diff --git a/drivers/net/ethernet/aquantia/atlantic/macsec/macsec_api.h b/drivers/net/ethernet/aquantia/atlantic/macsec/macsec_api.h index ab5415f99a32..ff03cc462a37 100644 --- a/drivers/net/ethernet/aquantia/atlantic/macsec/macsec_api.h +++ b/drivers/net/ethernet/aquantia/atlantic/macsec/macsec_api.h @@ -262,6 +262,53 @@ int aq_mss_set_ingress_postctlf_record(struct aq_hw_s *hw, const struct aq_mss_ingress_postctlf_record *rec, u16 table_index); +/*! Read the counters for the specified SC, and unpack them into the + * fields of counters. + * counters - [OUT] The raw table row data will be unpacked here. + * sc_index - The table row to read (max 31). + */ +int aq_mss_get_egress_sc_counters(struct aq_hw_s *hw, + struct aq_mss_egress_sc_counters *counters, + u16 sc_index); + +/*! Read the counters for the specified SA, and unpack them into the + * fields of counters. + * counters - [OUT] The raw table row data will be unpacked here. + * sa_index - The table row to read (max 31). + */ +int aq_mss_get_egress_sa_counters(struct aq_hw_s *hw, + struct aq_mss_egress_sa_counters *counters, + u16 sa_index); + +/*! Read the counters for the common egress counters, and unpack them + * into the fields of counters. + * counters - [OUT] The raw table row data will be unpacked here. + */ +int aq_mss_get_egress_common_counters(struct aq_hw_s *hw, + struct aq_mss_egress_common_counters *counters); + +/*! Clear all Egress counters to 0.*/ +int aq_mss_clear_egress_counters(struct aq_hw_s *hw); + +/*! Read the counters for the specified SA, and unpack them into the + * fields of counters. + * counters - [OUT] The raw table row data will be unpacked here. + * sa_index - The table row to read (max 31). + */ +int aq_mss_get_ingress_sa_counters(struct aq_hw_s *hw, + struct aq_mss_ingress_sa_counters *counters, + u16 sa_index); + +/*! Read the counters for the common ingress counters, and unpack them + * into the fields of counters. + * counters - [OUT] The raw table row data will be unpacked here. + */ +int aq_mss_get_ingress_common_counters(struct aq_hw_s *hw, + struct aq_mss_ingress_common_counters *counters); + +/*! Clear all Ingress counters to 0. */ +int aq_mss_clear_ingress_counters(struct aq_hw_s *hw); + /*! Get Egress SA expired. */ int aq_mss_get_egress_sa_expired(struct aq_hw_s *hw, u32 *expired); /*! Get Egress SA threshold expired. */ diff --git a/drivers/net/ethernet/aquantia/atlantic/macsec/macsec_struct.h b/drivers/net/ethernet/aquantia/atlantic/macsec/macsec_struct.h index 8c38a3470518..b6119dcc3bb9 100644 --- a/drivers/net/ethernet/aquantia/atlantic/macsec/macsec_struct.h +++ b/drivers/net/ethernet/aquantia/atlantic/macsec/macsec_struct.h @@ -697,4 +697,218 @@ struct aq_mss_ingress_postctlf_record { u32 action; }; +/*! Represents the Egress MIB counters for a single SC. Counters are + * 64 bits, lower 32 bits in field[0]. + */ +struct aq_mss_egress_sc_counters { + /*! The number of integrity protected but not encrypted packets + * for this transmitting SC. + */ + u32 sc_protected_pkts[2]; + /*! The number of integrity protected and encrypted packets for + * this transmitting SC. + */ + u32 sc_encrypted_pkts[2]; + /*! The number of plain text octets that are integrity protected + * but not encrypted on the transmitting SC. + */ + u32 sc_protected_octets[2]; + /*! The number of plain text octets that are integrity protected + * and encrypted on the transmitting SC. + */ + u32 sc_encrypted_octets[2]; +}; + +/*! Represents the Egress MIB counters for a single SA. 
Counters are + * 64 bits, lower 32 bits in field[0]. + */ +struct aq_mss_egress_sa_counters { + /*! The number of dropped packets for this transmitting SA. */ + u32 sa_hit_drop_redirect[2]; + /*! TODO */ + u32 sa_protected2_pkts[2]; + /*! The number of integrity protected but not encrypted packets + * for this transmitting SA. + */ + u32 sa_protected_pkts[2]; + /*! The number of integrity protected and encrypted packets for + * this transmitting SA. + */ + u32 sa_encrypted_pkts[2]; +}; + +/*! Represents the common Egress MIB counters; the counter not + * associated with a particular SC/SA. Counters are 64 bits, lower 32 + * bits in field[0]. + */ +struct aq_mss_egress_common_counters { + /*! The number of transmitted packets classified as MAC_CTL packets. */ + u32 ctl_pkt[2]; + /*! The number of transmitted packets that did not match any rows + * in the Egress Packet Classifier table. + */ + u32 unknown_sa_pkts[2]; + /*! The number of transmitted packets where the SC table entry has + * protect=0 (so packets are forwarded unchanged). + */ + u32 untagged_pkts[2]; + /*! The number of transmitted packets discarded because the packet + * length is greater than the ifMtu of the Common Port interface. + */ + u32 too_long[2]; + /*! The number of transmitted packets for which table memory was + * affected by an ECC error during processing. + */ + u32 ecc_error_pkts[2]; + /*! The number of transmitted packets for where the matched row in + * the Egress Packet Classifier table has action=drop. + */ + u32 unctrl_hit_drop_redir[2]; +}; + +/*! Represents the Ingress MIB counters for a single SA. Counters are + * 64 bits, lower 32 bits in field[0]. + */ +struct aq_mss_ingress_sa_counters { + /*! For this SA, the number of received packets without a SecTAG. */ + u32 untagged_hit_pkts[2]; + /*! For this SA, the number of received packets that were dropped. */ + u32 ctrl_hit_drop_redir_pkts[2]; + /*! For this SA which is not currently in use, the number of + * received packets that have been discarded, and have either the + * packets encrypted or the matched row in the Ingress SC Lookup + * table has validate_frames=Strict. + */ + u32 not_using_sa[2]; + /*! For this SA which is not currently in use, the number of + * received, unencrypted, packets with the matched row in the + * Ingress SC Lookup table has validate_frames!=Strict. + */ + u32 unused_sa[2]; + /*! For this SA, the number discarded packets with the condition + * that the packets are not valid and one of the following + * conditions are true: either the matched row in the Ingress SC + * Lookup table has validate_frames=Strict or the packets + * encrypted. + */ + u32 not_valid_pkts[2]; + /*! For this SA, the number of packets with the condition that the + * packets are not valid and the matched row in the Ingress SC + * Lookup table has validate_frames=Check. + */ + u32 invalid_pkts[2]; + /*! For this SA, the number of validated packets. */ + u32 ok_pkts[2]; + /*! For this SC, the number of received packets that have been + * discarded with the condition: the matched row in the Ingress + * SC Lookup table has replay_protect=1 and the PN of the packet + * is lower than the lower bound replay check PN. + */ + u32 late_pkts[2]; + /*! For this SA, the number of packets with the condition that the + * PN of the packets is lower than the lower bound replay + * protection PN. + */ + u32 delayed_pkts[2]; + /*! 
For this SC, the number of packets with the following condition: + * - the matched row in the Ingress SC Lookup table has + * replay_protect=0 or + * - the matched row in the Ingress SC Lookup table has + * replay_protect=1 and the packet is not encrypted and the + * integrity check has failed or + * - the matched row in the Ingress SC Lookup table has + * replay_protect=1 and the packet is encrypted and integrity + * check has failed. + */ + u32 unchecked_pkts[2]; + /*! The number of octets of plaintext recovered from received + * packets that were integrity protected but not encrypted. + */ + u32 validated_octets[2]; + /*! The number of octets of plaintext recovered from received + * packets that were integrity protected and encrypted. + */ + u32 decrypted_octets[2]; +}; + +/*! Represents the common Ingress MIB counters; the counter not + * associated with a particular SA. Counters are 64 bits, lower 32 + * bits in field[0]. + */ +struct aq_mss_ingress_common_counters { + /*! The number of received packets classified as MAC_CTL packets. */ + u32 ctl_pkts[2]; + /*! The number of received packets with the MAC security tag + * (SecTAG), not matching any rows in the Ingress Pre-MACSec + * Packet Classifier table. + */ + u32 tagged_miss_pkts[2]; + /*! The number of received packets without the MAC security tag + * (SecTAG), not matching any rows in the Ingress Pre-MACSec + * Packet Classifier table. + */ + u32 untagged_miss_pkts[2]; + /*! The number of received packets discarded without the MAC + * security tag (SecTAG) and with the matched row in the Ingress + * SC Lookup table having validate_frames=Strict. + */ + u32 notag_pkts[2]; + /*! The number of received packets without the MAC security tag + * (SecTAG) and with the matched row in the Ingress SC Lookup + * table having validate_frames!=Strict. + */ + u32 untagged_pkts[2]; + /*! The number of received packets discarded with an invalid + * SecTAG or a zero value PN or an invalid ICV. + */ + u32 bad_tag_pkts[2]; + /*! The number of received packets discarded with unknown SCI + * information with the condition: + * the matched row in the Ingress SC Lookup table has + * validate_frames=Strict or the C bit in the SecTAG is set. + */ + u32 no_sci_pkts[2]; + /*! The number of received packets with unknown SCI with the condition: + * The matched row in the Ingress SC Lookup table has + * validate_frames!=Strict and the C bit in the SecTAG is not set. + */ + u32 unknown_sci_pkts[2]; + /*! The number of received packets by the controlled port service + * that passed the Ingress Post-MACSec Packet Classifier table + * check. + */ + u32 ctrl_prt_pass_pkts[2]; + /*! The number of received packets by the uncontrolled port + * service that passed the Ingress Post-MACSec Packet Classifier + * table check. + */ + u32 unctrl_prt_pass_pkts[2]; + /*! The number of received packets by the controlled port service + * that failed the Ingress Post-MACSec Packet Classifier table + * check. + */ + u32 ctrl_prt_fail_pkts[2]; + /*! The number of received packets by the uncontrolled port + * service that failed the Ingress Post-MACSec Packet Classifier + * table check. + */ + u32 unctrl_prt_fail_pkts[2]; + /*! The number of received packets discarded because the packet + * length is greater than the ifMtu of the Common Port interface. + */ + u32 too_long_pkts[2]; + /*! The number of received packets classified as MAC_CTL by the + * Ingress Post-MACSec CTL Filter table. + */ + u32 igpoc_ctl_pkts[2]; + /*! 
The number of received packets for which table memory was + * affected by an ECC error during processing. + */ + u32 ecc_error_pkts[2]; + /*! The number of received packets by the uncontrolled port + * service that were dropped. + */ + u32 unctrl_hit_drop_redir[2]; +}; + #endif
From patchwork Wed Mar 25 12:52:46 2020
From: Igor Russkikh
CC: Mark Starovoytov, Sabrina Dubroca, Antoine Tenart, Igor Russkikh
Subject: [PATCH v2 net-next 17/17] net: atlantic: add XPN handling
Date: Wed, 25 Mar 2020 15:52:46 +0300
Message-ID: <20200325125246.987-18-irusskikh@marvell.com>
In-Reply-To: <20200325125246.987-1-irusskikh@marvell.com>
References:
<20200325125246.987-1-irusskikh@marvell.com>
X-Mailing-List: netdev@vger.kernel.org

From: Mark Starovoytov

This patch adds XPN handling. Our driver doesn't support XPN, but a couple of places in the code still need updating because the size of the 'next_pn' field has changed.

Signed-off-by: Mark Starovoytov Signed-off-by: Igor Russkikh --- drivers/net/ethernet/aquantia/atlantic/aq_macsec.c | 9 +++++++-- 1 file changed, 7 insertions(+), 2 deletions(-) diff --git a/drivers/net/ethernet/aquantia/atlantic/aq_macsec.c b/drivers/net/ethernet/aquantia/atlantic/aq_macsec.c index 4bd283ba0d56..0b3e234a54aa 100644 --- a/drivers/net/ethernet/aquantia/atlantic/aq_macsec.c +++ b/drivers/net/ethernet/aquantia/atlantic/aq_macsec.c @@ -452,6 +452,9 @@ static int aq_mdo_add_secy(struct macsec_context *ctx) u32 txsc_idx; int ret = 0; + if (secy->xpn) + return -EOPNOTSUPP; + sc_sa = sc_sa_from_num_an(MACSEC_NUM_AN); if (sc_sa == aq_macsec_sa_sc_not_used) return -EINVAL; @@ -556,6 +559,7 @@ static int aq_update_txsa(struct aq_nic_s *nic, const unsigned int sc_idx, const struct macsec_tx_sa *tx_sa, const unsigned char *key, const unsigned char an) { + const u32 next_pn = tx_sa->next_pn_halves.lower; struct aq_mss_egress_sakey_record key_rec; const unsigned int sa_idx = sc_idx | an; struct aq_mss_egress_sa_record sa_rec; @@ -565,7 +569,7 @@ static int aq_update_txsa(struct aq_nic_s *nic, const unsigned int sc_idx, memset(&sa_rec, 0, sizeof(sa_rec)); sa_rec.valid = tx_sa->active; sa_rec.fresh = 1; - sa_rec.next_pn = tx_sa->next_pn; + sa_rec.next_pn = next_pn; ret = aq_mss_set_egress_sa_record(hw, &sa_rec, sa_idx); if (ret) @@ -889,6 +893,7 @@ static int aq_update_rxsa(struct aq_nic_s *nic, const unsigned int sc_idx, const unsigned char *key, const unsigned char an) { struct aq_mss_ingress_sakey_record sa_key_record; + const u32 next_pn = rx_sa->next_pn_halves.lower; struct aq_mss_ingress_sa_record sa_record; struct aq_hw_s *hw = nic->aq_hw; + const int sa_idx = sc_idx | an; @@ -897,7 +902,7 @@ static int aq_update_rxsa(struct aq_nic_s *nic, const unsigned int sc_idx, memset(&sa_record, 0, sizeof(sa_record)); sa_record.valid = rx_sa->active; sa_record.fresh = 1; - sa_record.next_pn = rx_sa->next_pn; + sa_record.next_pn = next_pn; ret = aq_mss_set_ingress_sa_record(hw, &sa_record, sa_idx); if (ret)