From patchwork Tue Mar 10 15:03:27 2020
X-Patchwork-Submitter: Igor Russkikh
X-Patchwork-Id: 222765
From: Igor Russkikh
CC: Mark Starovoytov, Sabrina Dubroca, Igor Russkikh, Antoine Tenart
Subject: [RFC v2 01/16] net: introduce the MACSEC netdev feature
Date: Tue, 10 Mar 2020 18:03:27 +0300
Message-ID: <20200310150342.1701-2-irusskikh@marvell.com>
In-Reply-To: <20200310150342.1701-1-irusskikh@marvell.com>
References: <20200310150342.1701-1-irusskikh@marvell.com>
X-Mailing-List: netdev@vger.kernel.org

From: Antoine Tenart

This patch introduces a new netdev feature, which drivers will use to state they can perform MACsec transformations in hardware.
Signed-off-by: Antoine Tenart
Signed-off-by: Mark Starovoytov
Signed-off-by: Igor Russkikh
---
 include/linux/netdev_features.h | 3 +++
 net/ethtool/common.c            | 1 +
 2 files changed, 4 insertions(+)

diff --git a/include/linux/netdev_features.h b/include/linux/netdev_features.h
index 34d050bb1ae6..9d53c5ad272c 100644
--- a/include/linux/netdev_features.h
+++ b/include/linux/netdev_features.h
@@ -83,6 +83,8 @@ enum {
 	NETIF_F_HW_TLS_RECORD_BIT,	/* Offload TLS record */
 	NETIF_F_GRO_FRAGLIST_BIT,	/* Fraglist GRO */
 
+	NETIF_F_HW_MACSEC_BIT,		/* Offload MACsec operations */
+
 	/*
 	 * Add your fresh new feature above and remember to update
 	 * netdev_features_strings[] in net/core/ethtool.c and maybe
@@ -154,6 +156,7 @@ enum {
 #define NETIF_F_HW_TLS_RX	__NETIF_F(HW_TLS_RX)
 #define NETIF_F_GRO_FRAGLIST	__NETIF_F(GRO_FRAGLIST)
 #define NETIF_F_GSO_FRAGLIST	__NETIF_F(GSO_FRAGLIST)
+#define NETIF_F_HW_MACSEC	__NETIF_F(HW_MACSEC)
 
 /* Finds the next feature with the highest number of the range of start till 0.
  */
diff --git a/net/ethtool/common.c b/net/ethtool/common.c
index 7b6969af5ae7..686d3c7dce83 100644
--- a/net/ethtool/common.c
+++ b/net/ethtool/common.c
@@ -60,6 +60,7 @@ const char netdev_features_strings[NETDEV_FEATURE_COUNT][ETH_GSTRING_LEN] = {
 	[NETIF_F_HW_TLS_TX_BIT] =	 "tls-hw-tx-offload",
 	[NETIF_F_HW_TLS_RX_BIT] =	 "tls-hw-rx-offload",
 	[NETIF_F_GRO_FRAGLIST_BIT] =	 "rx-gro-list",
+	[NETIF_F_HW_MACSEC_BIT] =	 "macsec-hw-offload",
 };
 
 const char

From patchwork Tue Mar 10 15:03:29 2020
X-Patchwork-Submitter: Igor Russkikh
X-Patchwork-Id: 222764
From: Igor Russkikh
CC: Mark Starovoytov, Sabrina Dubroca, Igor Russkikh, Antoine Tenart
Subject: [RFC v2 03/16] net: macsec: allow to reference a netdev from a MACsec context
Date: Tue, 10 Mar 2020 18:03:29 +0300
Message-ID: <20200310150342.1701-4-irusskikh@marvell.com>
In-Reply-To: <20200310150342.1701-1-irusskikh@marvell.com>
References: <20200310150342.1701-1-irusskikh@marvell.com>
X-Mailing-List: netdev@vger.kernel.org

From: Antoine Tenart

This patch allows a net_device to be referenced from a MACsec context. This is needed to allow implementing MACsec operations in net device drivers.
Signed-off-by: Antoine Tenart
Signed-off-by: Mark Starovoytov
Signed-off-by: Igor Russkikh
---
 include/net/macsec.h | 5 ++++-
 1 file changed, 4 insertions(+), 1 deletion(-)

diff --git a/include/net/macsec.h b/include/net/macsec.h
index 92e43db8b566..5ccdb2bc84df 100644
--- a/include/net/macsec.h
+++ b/include/net/macsec.h
@@ -178,7 +178,10 @@ struct macsec_secy {
  * struct macsec_context - MACsec context for hardware offloading
  */
 struct macsec_context {
-	struct phy_device *phydev;
+	union {
+		struct net_device *netdev;
+		struct phy_device *phydev;
+	};
 	enum macsec_offload offload;
 
 	struct macsec_secy *secy;

From patchwork Tue Mar 10 15:03:31 2020
X-Patchwork-Submitter: Igor Russkikh
X-Patchwork-Id: 222763
From: Igor Russkikh
CC: Mark Starovoytov, Sabrina Dubroca, Igor Russkikh, Dmitry Bogdanov
Subject: [RFC v2 05/16] net: macsec: init secy pointer in macsec_context
Date: Tue, 10 Mar 2020 18:03:31 +0300
Message-ID: <20200310150342.1701-6-irusskikh@marvell.com>
In-Reply-To: <20200310150342.1701-1-irusskikh@marvell.com>
References: <20200310150342.1701-1-irusskikh@marvell.com>
X-Mailing-List: netdev@vger.kernel.org

From: Dmitry Bogdanov

This patch initializes the secy pointer in the macsec_context. MAC drivers will use it in offloading operations.

Signed-off-by: Dmitry Bogdanov
Signed-off-by: Mark Starovoytov
Signed-off-by: Igor Russkikh
---
 drivers/net/macsec.c | 16 +++++++++++++++-
 1 file changed, 15 insertions(+), 1 deletion(-)

diff --git a/drivers/net/macsec.c b/drivers/net/macsec.c
index a88b41a79103..af41887d9a1e 100644
--- a/drivers/net/macsec.c
+++ b/drivers/net/macsec.c
@@ -1692,6 +1692,7 @@ static int macsec_add_rxsa(struct sk_buff *skb, struct genl_info *info)
 		ctx.sa.assoc_num = assoc_num;
 		ctx.sa.rx_sa = rx_sa;
+		ctx.secy = secy;
 		memcpy(ctx.sa.key, nla_data(tb_sa[MACSEC_SA_ATTR_KEY]),
 		       MACSEC_KEYID_LEN);
@@ -1733,6 +1734,7 @@ static int macsec_add_rxsc(struct sk_buff *skb, struct genl_info *info)
 	struct nlattr **attrs = info->attrs;
 	struct macsec_rx_sc *rx_sc;
 	struct nlattr *tb_rxsc[MACSEC_RXSC_ATTR_MAX + 1];
+	struct macsec_secy *secy;
 	bool was_active;
 	int ret;
@@ -1752,6 +1754,7 @@ static int macsec_add_rxsc(struct sk_buff *skb, struct genl_info *info)
 		return PTR_ERR(dev);
 	}
 
+	secy = &macsec_priv(dev)->secy;
 	sci = nla_get_sci(tb_rxsc[MACSEC_RXSC_ATTR_SCI]);
 	rx_sc = create_rx_sc(dev, sci);
@@ -1775,6 +1778,7 @@ static int macsec_add_rxsc(struct sk_buff *skb, struct genl_info *info)
 		}
 
 		ctx.rx_sc = rx_sc;
+		ctx.secy = secy;
 
 		ret = macsec_offload(ops->mdo_add_rxsc, &ctx);
 		if (ret)
@@ -1900,6 +1904,7 @@ static int macsec_add_txsa(struct sk_buff *skb, struct genl_info *info)
 		ctx.sa.assoc_num = assoc_num;
 		ctx.sa.tx_sa = tx_sa;
+		ctx.secy = secy;
 		memcpy(ctx.sa.key, nla_data(tb_sa[MACSEC_SA_ATTR_KEY]),
 		       MACSEC_KEYID_LEN);
@@ -1969,6 +1974,7 @@ static int macsec_del_rxsa(struct sk_buff *skb, struct genl_info *info)
 		ctx.sa.assoc_num = assoc_num;
 		ctx.sa.rx_sa = rx_sa;
+		ctx.secy = secy;
 
 		ret = macsec_offload(ops->mdo_del_rxsa, &ctx);
 		if (ret)
@@ -2034,6 +2040,7 @@ static int macsec_del_rxsc(struct sk_buff *skb, struct genl_info *info)
 		}
 
 		ctx.rx_sc = rx_sc;
+		ctx.secy = secy;
 
 		ret = macsec_offload(ops->mdo_del_rxsc, &ctx);
 		if (ret)
 			goto cleanup;
@@ -2092,6 +2099,7 @@ static int macsec_del_txsa(struct sk_buff *skb, struct genl_info *info)
 		ctx.sa.assoc_num = assoc_num;
 		ctx.sa.tx_sa = tx_sa;
+		ctx.secy = secy;
 
 		ret = macsec_offload(ops->mdo_del_txsa, &ctx);
 		if (ret)
@@ -2189,6 +2197,7 @@ static int macsec_upd_txsa(struct sk_buff *skb, struct genl_info *info)
 		ctx.sa.assoc_num = assoc_num;
 		ctx.sa.tx_sa = tx_sa;
+		ctx.secy = secy;
 
 		ret = macsec_offload(ops->mdo_upd_txsa, &ctx);
 		if (ret)
@@ -2269,6 +2278,7 @@ static int macsec_upd_rxsa(struct sk_buff *skb, struct genl_info *info)
 		ctx.sa.assoc_num = assoc_num;
 		ctx.sa.rx_sa = rx_sa;
+		ctx.secy = secy;
 
 		ret = macsec_offload(ops->mdo_upd_rxsa, &ctx);
 		if (ret)
@@ -2339,6 +2349,7 @@ static int macsec_upd_rxsc(struct sk_buff *skb, struct genl_info *info)
 		}
 
 		ctx.rx_sc = rx_sc;
+		ctx.secy = secy;
 
 		ret = macsec_offload(ops->mdo_upd_rxsc, &ctx);
 		if (ret)
@@ -3184,6 +3195,7 @@ static int macsec_dev_open(struct net_device *dev)
 			goto clear_allmulti;
 		}
 
+		ctx.secy = &macsec->secy;
 		err = macsec_offload(ops->mdo_dev_open, &ctx);
 		if (err)
 			goto clear_allmulti;
@@ -3215,8 +3227,10 @@ static int macsec_dev_stop(struct net_device *dev)
 		struct macsec_context ctx;
 
 		ops = macsec_get_ops(macsec, &ctx);
-		if (ops)
+		if (ops) {
+			ctx.secy = &macsec->secy;
 			macsec_offload(ops->mdo_dev_stop, &ctx);
+		}
 	}
 
 	dev_mc_unsync(real_dev, dev);

From patchwork Tue Mar 10 15:03:32 2020
X-Patchwork-Submitter: Igor Russkikh
X-Patchwork-Id: 222762
From: Igor Russkikh
CC: Mark Starovoytov, Sabrina Dubroca, Igor Russkikh, Dmitry Bogdanov
Subject: [RFC v2 06/16] net: macsec: allow multiple macsec devices with offload
Date: Tue, 10 Mar 2020 18:03:32 +0300
Message-ID: <20200310150342.1701-7-irusskikh@marvell.com>
In-Reply-To: <20200310150342.1701-1-irusskikh@marvell.com>
References: <20200310150342.1701-1-irusskikh@marvell.com>
X-Mailing-List: netdev@vger.kernel.org

From: Dmitry Bogdanov

The offload engine can set up several SecYs. Each MACsec interface must have its own MAC address, and the engine filters traffic by destination MAC address.
Signed-off-by: Dmitry Bogdanov
Signed-off-by: Mark Starovoytov
Signed-off-by: Igor Russkikh
---
 drivers/net/macsec.c | 25 +------------------------
 1 file changed, 1 insertion(+), 24 deletions(-)

diff --git a/drivers/net/macsec.c b/drivers/net/macsec.c
index af41887d9a1e..45ede40692f8 100644
--- a/drivers/net/macsec.c
+++ b/drivers/net/macsec.c
@@ -2389,11 +2389,10 @@ static int macsec_upd_offload(struct sk_buff *skb, struct genl_info *info)
 	enum macsec_offload offload, prev_offload;
 	int (*func)(struct macsec_context *ctx);
 	struct nlattr **attrs = info->attrs;
-	struct net_device *dev, *loop_dev;
+	struct net_device *dev;
 	const struct macsec_ops *ops;
 	struct macsec_context ctx;
 	struct macsec_dev *macsec;
-	struct net *loop_net;
 	int ret;
 
 	if (!attrs[MACSEC_ATTR_IFINDEX])
@@ -2421,28 +2420,6 @@ static int macsec_upd_offload(struct sk_buff *skb, struct genl_info *info)
 	    !macsec_check_offload(offload, macsec))
 		return -EOPNOTSUPP;
 
-	if (offload == MACSEC_OFFLOAD_OFF)
-		goto skip_limitation;
-
-	/* Check the physical interface isn't offloading another interface
-	 * first.
-	 */
-	for_each_net(loop_net) {
-		for_each_netdev(loop_net, loop_dev) {
-			struct macsec_dev *priv;
-
-			if (!netif_is_macsec(loop_dev))
-				continue;
-
-			priv = macsec_priv(loop_dev);
-
-			if (priv->real_dev == macsec->real_dev &&
-			    priv->offload != MACSEC_OFFLOAD_OFF)
-				return -EBUSY;
-		}
-	}
-
-skip_limitation:
 	/* Check if the net device is busy. */
 	if (netif_running(dev))
 		return -EBUSY;

From patchwork Tue Mar 10 15:03:34 2020
X-Patchwork-Submitter: Igor Russkikh
X-Patchwork-Id: 222761
From: Igor Russkikh
CC: Mark Starovoytov, Sabrina Dubroca, Igor Russkikh, Dmitry Bogdanov
Subject: [RFC v2 08/16] net: macsec: add support for getting offloaded stats
Date: Tue, 10 Mar 2020 18:03:34 +0300
Message-ID: <20200310150342.1701-9-irusskikh@marvell.com>
In-Reply-To: <20200310150342.1701-1-irusskikh@marvell.com>
References: <20200310150342.1701-1-irusskikh@marvell.com>
X-Mailing-List: netdev@vger.kernel.org

From: Dmitry Bogdanov

When HW offloading is enabled, the offloaded stats should be used, because the software stats are wrong and out of sync with the HW in that case.
Signed-off-by: Dmitry Bogdanov
Signed-off-by: Mark Starovoytov
Signed-off-by: Igor Russkikh
---
 drivers/net/macsec.c | 316 +++++++++++++++++++++++++++++--------------
 include/net/macsec.h |  24 ++++
 2 files changed, 235 insertions(+), 105 deletions(-)

diff --git a/drivers/net/macsec.c b/drivers/net/macsec.c
index 20a5b4c414d0..58396b5193fd 100644
--- a/drivers/net/macsec.c
+++ b/drivers/net/macsec.c
@@ -76,17 +76,6 @@ struct gcm_iv {
 	__be32 pn;
 };
 
-struct macsec_dev_stats {
-	__u64 OutPktsUntagged;
-	__u64 InPktsUntagged;
-	__u64 OutPktsTooLong;
-	__u64 InPktsNoTag;
-	__u64 InPktsBadTag;
-	__u64 InPktsUnknownSCI;
-	__u64 InPktsNoSCI;
-	__u64 InPktsOverrun;
-};
-
 #define MACSEC_VALIDATE_DEFAULT MACSEC_VALIDATE_STRICT
 
 struct pcpu_secy_stats {
@@ -2489,207 +2478,309 @@ static int macsec_upd_offload(struct sk_buff *skb, struct genl_info *info)
 	return ret;
 }
 
-static int copy_tx_sa_stats(struct sk_buff *skb,
-			    struct macsec_tx_sa_stats __percpu *pstats)
+static void get_tx_sa_stats(struct net_device *dev, int an,
+			    struct macsec_tx_sa *tx_sa,
+			    struct macsec_tx_sa_stats *sum)
 {
-	struct macsec_tx_sa_stats sum = {0, };
+	struct macsec_dev *macsec = macsec_priv(dev);
 	int cpu;
 
+	/* If h/w offloading is available, propagate to the device */
+	if (macsec_is_offloaded(macsec)) {
+		const struct macsec_ops *ops;
+		struct macsec_context ctx;
+
+		ops = macsec_get_ops(macsec, &ctx);
+		if (ops) {
+			ctx.sa.assoc_num = an;
+			ctx.sa.tx_sa = tx_sa;
+			ctx.stats.tx_sa_stats = sum;
+			ctx.secy = &macsec_priv(dev)->secy;
+			macsec_offload(ops->mdo_get_tx_sa_stats, &ctx);
+		}
+		return;
+	}
+
 	for_each_possible_cpu(cpu) {
-		const struct macsec_tx_sa_stats *stats = per_cpu_ptr(pstats, cpu);
+		const struct macsec_tx_sa_stats *stats =
+			per_cpu_ptr(tx_sa->stats, cpu);
 
-		sum.OutPktsProtected += stats->OutPktsProtected;
-		sum.OutPktsEncrypted += stats->OutPktsEncrypted;
+		sum->OutPktsProtected += stats->OutPktsProtected;
+		sum->OutPktsEncrypted += stats->OutPktsEncrypted;
 	}
+}
 
-	if (nla_put_u32(skb, MACSEC_SA_STATS_ATTR_OUT_PKTS_PROTECTED, sum.OutPktsProtected) ||
-	    nla_put_u32(skb, MACSEC_SA_STATS_ATTR_OUT_PKTS_ENCRYPTED, sum.OutPktsEncrypted))
+static int copy_tx_sa_stats(struct sk_buff *skb, struct macsec_tx_sa_stats *sum)
+{
+	if (nla_put_u32(skb, MACSEC_SA_STATS_ATTR_OUT_PKTS_PROTECTED,
+			sum->OutPktsProtected) ||
+	    nla_put_u32(skb, MACSEC_SA_STATS_ATTR_OUT_PKTS_ENCRYPTED,
+			sum->OutPktsEncrypted))
 		return -EMSGSIZE;
 
 	return 0;
 }
 
-static noinline_for_stack int
-copy_rx_sa_stats(struct sk_buff *skb,
-		 struct macsec_rx_sa_stats __percpu *pstats)
+static void get_rx_sa_stats(struct net_device *dev,
+			    struct macsec_rx_sc *rx_sc, int an,
+			    struct macsec_rx_sa *rx_sa,
+			    struct macsec_rx_sa_stats *sum)
 {
-	struct macsec_rx_sa_stats sum = {0, };
+	struct macsec_dev *macsec = macsec_priv(dev);
 	int cpu;
 
-	for_each_possible_cpu(cpu) {
-		const struct macsec_rx_sa_stats *stats = per_cpu_ptr(pstats, cpu);
+	/* If h/w offloading is available, propagate to the device */
+	if (macsec_is_offloaded(macsec)) {
+		const struct macsec_ops *ops;
+		struct macsec_context ctx;
 
-		sum.InPktsOK += stats->InPktsOK;
-		sum.InPktsInvalid += stats->InPktsInvalid;
-		sum.InPktsNotValid += stats->InPktsNotValid;
-		sum.InPktsNotUsingSA += stats->InPktsNotUsingSA;
-		sum.InPktsUnusedSA += stats->InPktsUnusedSA;
+		ops = macsec_get_ops(macsec, &ctx);
+		if (ops) {
+			ctx.sa.assoc_num = an;
+			ctx.sa.rx_sa = rx_sa;
+			ctx.stats.rx_sa_stats = sum;
+			ctx.secy = &macsec_priv(dev)->secy;
+			ctx.rx_sc = rx_sc;
+			macsec_offload(ops->mdo_get_rx_sa_stats, &ctx);
+		}
+		return;
+	}
+
+	for_each_possible_cpu(cpu) {
+		const struct macsec_rx_sa_stats *stats =
+			per_cpu_ptr(rx_sa->stats, cpu);
+
+		sum->InPktsOK += stats->InPktsOK;
+		sum->InPktsInvalid += stats->InPktsInvalid;
+		sum->InPktsNotValid += stats->InPktsNotValid;
+		sum->InPktsNotUsingSA += stats->InPktsNotUsingSA;
+		sum->InPktsUnusedSA += stats->InPktsUnusedSA;
 	}
+}
 
-	if (nla_put_u32(skb, MACSEC_SA_STATS_ATTR_IN_PKTS_OK, sum.InPktsOK) ||
-	    nla_put_u32(skb, MACSEC_SA_STATS_ATTR_IN_PKTS_INVALID, sum.InPktsInvalid) ||
-	    nla_put_u32(skb, MACSEC_SA_STATS_ATTR_IN_PKTS_NOT_VALID, sum.InPktsNotValid) ||
-	    nla_put_u32(skb, MACSEC_SA_STATS_ATTR_IN_PKTS_NOT_USING_SA, sum.InPktsNotUsingSA) ||
-	    nla_put_u32(skb, MACSEC_SA_STATS_ATTR_IN_PKTS_UNUSED_SA, sum.InPktsUnusedSA))
+static int copy_rx_sa_stats(struct sk_buff *skb,
+			    struct macsec_rx_sa_stats *sum)
+{
+	if (nla_put_u32(skb, MACSEC_SA_STATS_ATTR_IN_PKTS_OK, sum->InPktsOK) ||
+	    nla_put_u32(skb, MACSEC_SA_STATS_ATTR_IN_PKTS_INVALID,
+			sum->InPktsInvalid) ||
+	    nla_put_u32(skb, MACSEC_SA_STATS_ATTR_IN_PKTS_NOT_VALID,
+			sum->InPktsNotValid) ||
+	    nla_put_u32(skb, MACSEC_SA_STATS_ATTR_IN_PKTS_NOT_USING_SA,
+			sum->InPktsNotUsingSA) ||
+	    nla_put_u32(skb, MACSEC_SA_STATS_ATTR_IN_PKTS_UNUSED_SA,
+			sum->InPktsUnusedSA))
 		return -EMSGSIZE;
 
 	return 0;
 }
 
-static noinline_for_stack int
-copy_rx_sc_stats(struct sk_buff *skb, struct pcpu_rx_sc_stats __percpu *pstats)
+static void get_rx_sc_stats(struct net_device *dev,
+			    struct macsec_rx_sc *rx_sc,
+			    struct macsec_rx_sc_stats *sum)
 {
-	struct macsec_rx_sc_stats sum = {0, };
+	struct macsec_dev *macsec = macsec_priv(dev);
 	int cpu;
 
+	/* If h/w offloading is available, propagate to the device */
+	if (macsec_is_offloaded(macsec)) {
+		const struct macsec_ops *ops;
+		struct macsec_context ctx;
+
+		ops = macsec_get_ops(macsec, &ctx);
+		if (ops) {
+			ctx.stats.rx_sc_stats = sum;
+			ctx.secy = &macsec_priv(dev)->secy;
+			ctx.rx_sc = rx_sc;
+			macsec_offload(ops->mdo_get_rx_sc_stats, &ctx);
+		}
+		return;
+	}
+
 	for_each_possible_cpu(cpu) {
 		const struct pcpu_rx_sc_stats *stats;
 		struct macsec_rx_sc_stats tmp;
 		unsigned int start;
 
-		stats = per_cpu_ptr(pstats, cpu);
+		stats = per_cpu_ptr(rx_sc->stats, cpu);
 		do {
 			start = u64_stats_fetch_begin_irq(&stats->syncp);
 			memcpy(&tmp, &stats->stats, sizeof(tmp));
 		} while (u64_stats_fetch_retry_irq(&stats->syncp, start));
 
-		sum.InOctetsValidated += tmp.InOctetsValidated;
-		sum.InOctetsDecrypted += tmp.InOctetsDecrypted;
-		sum.InPktsUnchecked += tmp.InPktsUnchecked;
-		sum.InPktsDelayed += tmp.InPktsDelayed;
-		sum.InPktsOK += tmp.InPktsOK;
-		sum.InPktsInvalid += tmp.InPktsInvalid;
-		sum.InPktsLate += tmp.InPktsLate;
-		sum.InPktsNotValid += tmp.InPktsNotValid;
-		sum.InPktsNotUsingSA += tmp.InPktsNotUsingSA;
-		sum.InPktsUnusedSA += tmp.InPktsUnusedSA;
+		sum->InOctetsValidated += tmp.InOctetsValidated;
+		sum->InOctetsDecrypted += tmp.InOctetsDecrypted;
+		sum->InPktsUnchecked += tmp.InPktsUnchecked;
+		sum->InPktsDelayed += tmp.InPktsDelayed;
+		sum->InPktsOK += tmp.InPktsOK;
+		sum->InPktsInvalid += tmp.InPktsInvalid;
+		sum->InPktsLate += tmp.InPktsLate;
+		sum->InPktsNotValid += tmp.InPktsNotValid;
+		sum->InPktsNotUsingSA += tmp.InPktsNotUsingSA;
+		sum->InPktsUnusedSA += tmp.InPktsUnusedSA;
 	}
+}
 
+static int copy_rx_sc_stats(struct sk_buff *skb, struct macsec_rx_sc_stats *sum)
+{
 	if (nla_put_u64_64bit(skb, MACSEC_RXSC_STATS_ATTR_IN_OCTETS_VALIDATED,
-			      sum.InOctetsValidated,
+			      sum->InOctetsValidated,
 			      MACSEC_RXSC_STATS_ATTR_PAD) ||
 	    nla_put_u64_64bit(skb, MACSEC_RXSC_STATS_ATTR_IN_OCTETS_DECRYPTED,
-			      sum.InOctetsDecrypted,
+			      sum->InOctetsDecrypted,
 			      MACSEC_RXSC_STATS_ATTR_PAD) ||
 	    nla_put_u64_64bit(skb, MACSEC_RXSC_STATS_ATTR_IN_PKTS_UNCHECKED,
-			      sum.InPktsUnchecked,
+			      sum->InPktsUnchecked,
 			      MACSEC_RXSC_STATS_ATTR_PAD) ||
 	    nla_put_u64_64bit(skb, MACSEC_RXSC_STATS_ATTR_IN_PKTS_DELAYED,
-			      sum.InPktsDelayed,
+			      sum->InPktsDelayed,
 			      MACSEC_RXSC_STATS_ATTR_PAD) ||
 	    nla_put_u64_64bit(skb, MACSEC_RXSC_STATS_ATTR_IN_PKTS_OK,
-			      sum.InPktsOK,
+			      sum->InPktsOK,
 			      MACSEC_RXSC_STATS_ATTR_PAD) ||
 	    nla_put_u64_64bit(skb, MACSEC_RXSC_STATS_ATTR_IN_PKTS_INVALID,
-			      sum.InPktsInvalid,
+			      sum->InPktsInvalid,
 			      MACSEC_RXSC_STATS_ATTR_PAD) ||
 	    nla_put_u64_64bit(skb, MACSEC_RXSC_STATS_ATTR_IN_PKTS_LATE,
-			      sum.InPktsLate,
+			      sum->InPktsLate,
 			      MACSEC_RXSC_STATS_ATTR_PAD) ||
 	    nla_put_u64_64bit(skb, MACSEC_RXSC_STATS_ATTR_IN_PKTS_NOT_VALID,
-			      sum.InPktsNotValid,
+			      sum->InPktsNotValid,
 			      MACSEC_RXSC_STATS_ATTR_PAD) ||
 	    nla_put_u64_64bit(skb, MACSEC_RXSC_STATS_ATTR_IN_PKTS_NOT_USING_SA,
-			      sum.InPktsNotUsingSA,
+			      sum->InPktsNotUsingSA,
 			      MACSEC_RXSC_STATS_ATTR_PAD) ||
 	    nla_put_u64_64bit(skb, MACSEC_RXSC_STATS_ATTR_IN_PKTS_UNUSED_SA,
-			      sum.InPktsUnusedSA,
+			      sum->InPktsUnusedSA,
 			      MACSEC_RXSC_STATS_ATTR_PAD))
 		return -EMSGSIZE;
 
 	return 0;
 }
 
-static noinline_for_stack int
-copy_tx_sc_stats(struct sk_buff *skb, struct pcpu_tx_sc_stats __percpu *pstats)
+static void get_tx_sc_stats(struct net_device *dev,
+			    struct macsec_tx_sc_stats *sum)
 {
-	struct macsec_tx_sc_stats sum = {0, };
+	struct macsec_dev *macsec = macsec_priv(dev);
 	int cpu;
 
+	/* If h/w offloading is available, propagate to the device */
+	if (macsec_is_offloaded(macsec)) {
+		const struct macsec_ops *ops;
+		struct macsec_context ctx;
+
+		ops = macsec_get_ops(macsec, &ctx);
+		if (ops) {
+			ctx.stats.tx_sc_stats = sum;
+			ctx.secy = &macsec_priv(dev)->secy;
+			macsec_offload(ops->mdo_get_tx_sc_stats, &ctx);
+		}
+		return;
+	}
+
 	for_each_possible_cpu(cpu) {
 		const struct pcpu_tx_sc_stats *stats;
 		struct macsec_tx_sc_stats tmp;
 		unsigned int start;
 
-		stats = per_cpu_ptr(pstats, cpu);
+		stats = per_cpu_ptr(macsec_priv(dev)->secy.tx_sc.stats, cpu);
 		do {
 			start = u64_stats_fetch_begin_irq(&stats->syncp);
 			memcpy(&tmp, &stats->stats, sizeof(tmp));
 		} while (u64_stats_fetch_retry_irq(&stats->syncp, start));
 
-		sum.OutPktsProtected += tmp.OutPktsProtected;
-		sum.OutPktsEncrypted += tmp.OutPktsEncrypted;
-		sum.OutOctetsProtected += tmp.OutOctetsProtected;
-		sum.OutOctetsEncrypted += tmp.OutOctetsEncrypted;
+		sum->OutPktsProtected += tmp.OutPktsProtected;
+		sum->OutPktsEncrypted += tmp.OutPktsEncrypted;
+		sum->OutOctetsProtected += tmp.OutOctetsProtected;
+		sum->OutOctetsEncrypted += tmp.OutOctetsEncrypted;
 	}
+}
 
+static int copy_tx_sc_stats(struct sk_buff *skb, struct macsec_tx_sc_stats *sum)
+{
 	if (nla_put_u64_64bit(skb, MACSEC_TXSC_STATS_ATTR_OUT_PKTS_PROTECTED,
-			      sum.OutPktsProtected,
+			      sum->OutPktsProtected,
 			      MACSEC_TXSC_STATS_ATTR_PAD) ||
 	    nla_put_u64_64bit(skb, MACSEC_TXSC_STATS_ATTR_OUT_PKTS_ENCRYPTED,
-			      sum.OutPktsEncrypted,
+			      sum->OutPktsEncrypted,
 			      MACSEC_TXSC_STATS_ATTR_PAD) ||
 	    nla_put_u64_64bit(skb, MACSEC_TXSC_STATS_ATTR_OUT_OCTETS_PROTECTED,
-			      sum.OutOctetsProtected,
+			      sum->OutOctetsProtected,
 			      MACSEC_TXSC_STATS_ATTR_PAD) ||
 	    nla_put_u64_64bit(skb, MACSEC_TXSC_STATS_ATTR_OUT_OCTETS_ENCRYPTED,
-			      sum.OutOctetsEncrypted,
+			      sum->OutOctetsEncrypted,
 			      MACSEC_TXSC_STATS_ATTR_PAD))
 		return -EMSGSIZE;
 
 	return 0;
 }
 
-static noinline_for_stack int
-copy_secy_stats(struct sk_buff *skb, struct pcpu_secy_stats __percpu *pstats)
+static void get_secy_stats(struct net_device *dev, struct macsec_dev_stats *sum)
 {
-	struct macsec_dev_stats sum = {0, };
+	struct macsec_dev *macsec = macsec_priv(dev);
 	int cpu;
 
+	/* If h/w offloading is available, propagate to the device */
+	if (macsec_is_offloaded(macsec)) {
+		const struct macsec_ops *ops;
+		struct macsec_context ctx;
+
+		ops = macsec_get_ops(macsec, &ctx);
+		if (ops) {
+			ctx.stats.dev_stats = sum;
+			ctx.secy = &macsec_priv(dev)->secy;
+			macsec_offload(ops->mdo_get_dev_stats, &ctx);
+		}
+		return;
+	}
+
 	for_each_possible_cpu(cpu) {
 		const struct pcpu_secy_stats *stats;
 		struct macsec_dev_stats tmp;
 		unsigned int start;
 
-		stats = per_cpu_ptr(pstats, cpu);
+		stats = per_cpu_ptr(macsec_priv(dev)->stats, cpu);
 		do {
 			start = u64_stats_fetch_begin_irq(&stats->syncp);
 			memcpy(&tmp, &stats->stats, sizeof(tmp));
 		} while (u64_stats_fetch_retry_irq(&stats->syncp, start));
 
-		sum.OutPktsUntagged += tmp.OutPktsUntagged;
-		sum.InPktsUntagged += tmp.InPktsUntagged;
-		sum.OutPktsTooLong += tmp.OutPktsTooLong;
-		sum.InPktsNoTag += tmp.InPktsNoTag;
-		sum.InPktsBadTag += tmp.InPktsBadTag;
-		sum.InPktsUnknownSCI += tmp.InPktsUnknownSCI;
-		sum.InPktsNoSCI += tmp.InPktsNoSCI;
-		sum.InPktsOverrun += tmp.InPktsOverrun;
+		sum->OutPktsUntagged += tmp.OutPktsUntagged;
+		sum->InPktsUntagged += tmp.InPktsUntagged;
+		sum->OutPktsTooLong += tmp.OutPktsTooLong;
+		sum->InPktsNoTag += tmp.InPktsNoTag;
+		sum->InPktsBadTag += 
tmp.InPktsBadTag; + sum->InPktsUnknownSCI += tmp.InPktsUnknownSCI; + sum->InPktsNoSCI += tmp.InPktsNoSCI; + sum->InPktsOverrun += tmp.InPktsOverrun; } +} +static int copy_secy_stats(struct sk_buff *skb, struct macsec_dev_stats *sum) +{ if (nla_put_u64_64bit(skb, MACSEC_SECY_STATS_ATTR_OUT_PKTS_UNTAGGED, - sum.OutPktsUntagged, + sum->OutPktsUntagged, MACSEC_SECY_STATS_ATTR_PAD) || nla_put_u64_64bit(skb, MACSEC_SECY_STATS_ATTR_IN_PKTS_UNTAGGED, - sum.InPktsUntagged, + sum->InPktsUntagged, MACSEC_SECY_STATS_ATTR_PAD) || nla_put_u64_64bit(skb, MACSEC_SECY_STATS_ATTR_OUT_PKTS_TOO_LONG, - sum.OutPktsTooLong, + sum->OutPktsTooLong, MACSEC_SECY_STATS_ATTR_PAD) || nla_put_u64_64bit(skb, MACSEC_SECY_STATS_ATTR_IN_PKTS_NO_TAG, - sum.InPktsNoTag, + sum->InPktsNoTag, MACSEC_SECY_STATS_ATTR_PAD) || nla_put_u64_64bit(skb, MACSEC_SECY_STATS_ATTR_IN_PKTS_BAD_TAG, - sum.InPktsBadTag, + sum->InPktsBadTag, MACSEC_SECY_STATS_ATTR_PAD) || nla_put_u64_64bit(skb, MACSEC_SECY_STATS_ATTR_IN_PKTS_UNKNOWN_SCI, - sum.InPktsUnknownSCI, + sum->InPktsUnknownSCI, MACSEC_SECY_STATS_ATTR_PAD) || nla_put_u64_64bit(skb, MACSEC_SECY_STATS_ATTR_IN_PKTS_NO_SCI, - sum.InPktsNoSCI, + sum->InPktsNoSCI, MACSEC_SECY_STATS_ATTR_PAD) || nla_put_u64_64bit(skb, MACSEC_SECY_STATS_ATTR_IN_PKTS_OVERRUN, - sum.InPktsOverrun, + sum->InPktsOverrun, MACSEC_SECY_STATS_ATTR_PAD)) return -EMSGSIZE; @@ -2750,7 +2841,12 @@ static noinline_for_stack int dump_secy(struct macsec_secy *secy, struct net_device *dev, struct sk_buff *skb, struct netlink_callback *cb) { + struct macsec_tx_sc_stats tx_sc_stats = {0, }; + struct macsec_tx_sa_stats tx_sa_stats = {0, }; + struct macsec_rx_sc_stats rx_sc_stats = {0, }; + struct macsec_rx_sa_stats rx_sa_stats = {0, }; struct macsec_dev *macsec = netdev_priv(dev); + struct macsec_dev_stats dev_stats = {0, }; struct macsec_tx_sc *tx_sc = &secy->tx_sc; struct nlattr *txsa_list, *rxsc_list; struct macsec_rx_sc *rx_sc; @@ -2781,7 +2877,9 @@ dump_secy(struct macsec_secy *secy, struct net_device 
*dev, attr = nla_nest_start_noflag(skb, MACSEC_ATTR_TXSC_STATS); if (!attr) goto nla_put_failure; - if (copy_tx_sc_stats(skb, tx_sc->stats)) { + + get_tx_sc_stats(dev, &tx_sc_stats); + if (copy_tx_sc_stats(skb, &tx_sc_stats)) { nla_nest_cancel(skb, attr); goto nla_put_failure; } @@ -2790,7 +2888,8 @@ dump_secy(struct macsec_secy *secy, struct net_device *dev, attr = nla_nest_start_noflag(skb, MACSEC_ATTR_SECY_STATS); if (!attr) goto nla_put_failure; - if (copy_secy_stats(skb, macsec_priv(dev)->stats)) { + get_secy_stats(dev, &dev_stats); + if (copy_secy_stats(skb, &dev_stats)) { nla_nest_cancel(skb, attr); goto nla_put_failure; } @@ -2812,22 +2911,15 @@ dump_secy(struct macsec_secy *secy, struct net_device *dev, goto nla_put_failure; } - if (nla_put_u8(skb, MACSEC_SA_ATTR_AN, i) || - nla_put_u32(skb, MACSEC_SA_ATTR_PN, tx_sa->next_pn) || - nla_put(skb, MACSEC_SA_ATTR_KEYID, MACSEC_KEYID_LEN, tx_sa->key.id) || - nla_put_u8(skb, MACSEC_SA_ATTR_ACTIVE, tx_sa->active)) { - nla_nest_cancel(skb, txsa_nest); - nla_nest_cancel(skb, txsa_list); - goto nla_put_failure; - } - attr = nla_nest_start_noflag(skb, MACSEC_SA_ATTR_STATS); if (!attr) { nla_nest_cancel(skb, txsa_nest); nla_nest_cancel(skb, txsa_list); goto nla_put_failure; } - if (copy_tx_sa_stats(skb, tx_sa->stats)) { + memset(&tx_sa_stats, 0, sizeof(tx_sa_stats)); + get_tx_sa_stats(dev, i, tx_sa, &tx_sa_stats); + if (copy_tx_sa_stats(skb, &tx_sa_stats)) { nla_nest_cancel(skb, attr); nla_nest_cancel(skb, txsa_nest); nla_nest_cancel(skb, txsa_list); @@ -2835,6 +2927,16 @@ dump_secy(struct macsec_secy *secy, struct net_device *dev, } nla_nest_end(skb, attr); + if (nla_put_u8(skb, MACSEC_SA_ATTR_AN, i) || + nla_put_u32(skb, MACSEC_SA_ATTR_PN, tx_sa->next_pn) || + nla_put(skb, MACSEC_SA_ATTR_KEYID, MACSEC_KEYID_LEN, + tx_sa->key.id) || + nla_put_u8(skb, MACSEC_SA_ATTR_ACTIVE, tx_sa->active)) { + nla_nest_cancel(skb, txsa_nest); + nla_nest_cancel(skb, txsa_list); + goto nla_put_failure; + } + nla_nest_end(skb, txsa_nest); 
} nla_nest_end(skb, txsa_list); @@ -2868,7 +2970,9 @@ dump_secy(struct macsec_secy *secy, struct net_device *dev, nla_nest_cancel(skb, rxsc_list); goto nla_put_failure; } - if (copy_rx_sc_stats(skb, rx_sc->stats)) { + memset(&rx_sc_stats, 0, sizeof(rx_sc_stats)); + get_rx_sc_stats(dev, rx_sc, &rx_sc_stats); + if (copy_rx_sc_stats(skb, &rx_sc_stats)) { nla_nest_cancel(skb, attr); nla_nest_cancel(skb, rxsc_nest); nla_nest_cancel(skb, rxsc_list); @@ -2907,7 +3011,9 @@ dump_secy(struct macsec_secy *secy, struct net_device *dev, nla_nest_cancel(skb, rxsc_list); goto nla_put_failure; } - if (copy_rx_sa_stats(skb, rx_sa->stats)) { + memset(&rx_sa_stats, 0, sizeof(rx_sa_stats)); + get_rx_sa_stats(dev, rx_sc, i, rx_sa, &rx_sa_stats); + if (copy_rx_sa_stats(skb, &rx_sa_stats)) { nla_nest_cancel(skb, attr); nla_nest_cancel(skb, rxsa_list); nla_nest_cancel(skb, rxsc_nest); diff --git a/include/net/macsec.h b/include/net/macsec.h index 5ccdb2bc84df..bfcd9a23cb71 100644 --- a/include/net/macsec.h +++ b/include/net/macsec.h @@ -58,6 +58,17 @@ struct macsec_tx_sc_stats { __u64 OutOctetsEncrypted; }; +struct macsec_dev_stats { + __u64 OutPktsUntagged; + __u64 InPktsUntagged; + __u64 OutPktsTooLong; + __u64 InPktsNoTag; + __u64 InPktsBadTag; + __u64 InPktsUnknownSCI; + __u64 InPktsNoSCI; + __u64 InPktsOverrun; +}; + /** * struct macsec_rx_sa - receive secure association * @active: @@ -194,6 +205,13 @@ struct macsec_context { struct macsec_tx_sa *tx_sa; }; } sa; + union { + struct macsec_tx_sc_stats *tx_sc_stats; + struct macsec_tx_sa_stats *tx_sa_stats; + struct macsec_rx_sc_stats *rx_sc_stats; + struct macsec_rx_sa_stats *rx_sa_stats; + struct macsec_dev_stats *dev_stats; + } stats; u8 prepare:1; }; @@ -220,6 +238,12 @@ struct macsec_ops { int (*mdo_add_txsa)(struct macsec_context *ctx); int (*mdo_upd_txsa)(struct macsec_context *ctx); int (*mdo_del_txsa)(struct macsec_context *ctx); + /* Statistics */ + int (*mdo_get_dev_stats)(struct macsec_context *ctx); + int 
(*mdo_get_tx_sc_stats)(struct macsec_context *ctx);
+	int (*mdo_get_tx_sa_stats)(struct macsec_context *ctx);
+	int (*mdo_get_rx_sc_stats)(struct macsec_context *ctx);
+	int (*mdo_get_rx_sa_stats)(struct macsec_context *ctx);
 };
 
 void macsec_pn_wrapped(struct macsec_secy *secy, struct macsec_tx_sa *tx_sa);

From patchwork Tue Mar 10 15:03:37 2020
X-Patchwork-Submitter: Igor Russkikh
X-Patchwork-Id: 222759
From: Igor Russkikh
To:
CC: Mark Starovoytov , Sabrina Dubroca , Igor Russkikh , Dmitry Bogdanov
Subject: [RFC v2 11/16] net: atlantic: MACSec egress offload HW bindings
Date: Tue, 10 Mar 2020 18:03:37 +0300
Message-ID: <20200310150342.1701-12-irusskikh@marvell.com>
In-Reply-To: <20200310150342.1701-1-irusskikh@marvell.com>
References: <20200310150342.1701-1-irusskikh@marvell.com>
X-Mailing-List: netdev@vger.kernel.org

From: Dmitry Bogdanov

This patch adds the Atlantic HW-specific bindings for MACSec egress, e.g.
register addresses / structs, helper functions, etc., which will be used by the actual callback implementations.

Signed-off-by: Dmitry Bogdanov
Signed-off-by: Mark Starovoytov
Signed-off-by: Igor Russkikh
---
 .../net/ethernet/aquantia/atlantic/Makefile   |    3 +-
 .../atlantic/macsec/MSS_Egress_registers.h    |   78 ++
 .../aquantia/atlantic/macsec/macsec_api.c     | 1154 +++++++++++++++++
 .../aquantia/atlantic/macsec/macsec_api.h     |  133 ++
 .../aquantia/atlantic/macsec/macsec_struct.h  |  322 +++++
 5 files changed, 1689 insertions(+), 1 deletion(-)
 create mode 100644 drivers/net/ethernet/aquantia/atlantic/macsec/MSS_Egress_registers.h
 create mode 100644 drivers/net/ethernet/aquantia/atlantic/macsec/macsec_api.c
 create mode 100644 drivers/net/ethernet/aquantia/atlantic/macsec/macsec_api.h
 create mode 100644 drivers/net/ethernet/aquantia/atlantic/macsec/macsec_struct.h

diff --git a/drivers/net/ethernet/aquantia/atlantic/Makefile b/drivers/net/ethernet/aquantia/atlantic/Makefile
index 2db5569d05cb..8b555665a33a 100644
--- a/drivers/net/ethernet/aquantia/atlantic/Makefile
+++ b/drivers/net/ethernet/aquantia/atlantic/Makefile
@@ -24,7 +24,8 @@ atlantic-objs := aq_main.o \
 	hw_atl/hw_atl_b0.o \
 	hw_atl/hw_atl_utils.o \
 	hw_atl/hw_atl_utils_fw2x.o \
-	hw_atl/hw_atl_llh.o
+	hw_atl/hw_atl_llh.o \
+	macsec/macsec_api.o
 
 atlantic-$(CONFIG_MACSEC) += aq_macsec.o

diff --git a/drivers/net/ethernet/aquantia/atlantic/macsec/MSS_Egress_registers.h b/drivers/net/ethernet/aquantia/atlantic/macsec/MSS_Egress_registers.h
new file mode 100644
index 000000000000..9d7b33e614c3
--- /dev/null
+++ b/drivers/net/ethernet/aquantia/atlantic/macsec/MSS_Egress_registers.h
@@ -0,0 +1,78 @@
+/* SPDX-License-Identifier: GPL-2.0-only */
+/* Atlantic Network Driver
+ *
+ * Copyright (C) 2020 Marvell International Ltd.
+ *
+ * This program is free software; you can redistribute it and/or modify
+ * it under the terms of the GNU General Public License version 2 as
+ * published by the Free Software Foundation.
+ */
+
+#ifndef MSS_EGRESS_REGS_HEADER
+#define MSS_EGRESS_REGS_HEADER
+
+#define MSS_EGRESS_CTL_REGISTER_ADDR 0x00005002
+#define MSS_EGRESS_SA_EXPIRED_STATUS_REGISTER_ADDR 0x00005060
+#define MSS_EGRESS_SA_THRESHOLD_EXPIRED_STATUS_REGISTER_ADDR 0x00005062
+#define MSS_EGRESS_LUT_ADDR_CTL_REGISTER_ADDR 0x00005080
+#define MSS_EGRESS_LUT_CTL_REGISTER_ADDR 0x00005081
+#define MSS_EGRESS_LUT_DATA_CTL_REGISTER_ADDR 0x000050A0
+
+struct mss_egress_ctl_register {
+	union {
+		struct {
+			unsigned int soft_reset : 1;
+			unsigned int drop_kay_packet : 1;
+			unsigned int drop_egprc_lut_miss : 1;
+			unsigned int gcm_start : 1;
+			unsigned int gcm_test_mode : 1;
+			unsigned int unmatched_use_sc_0 : 1;
+			unsigned int drop_invalid_sa_sc_packets : 1;
+			unsigned int reserved0 : 1;
+			/* Should always be set to 0. */
+			unsigned int external_classification_enable : 1;
+			unsigned int icv_lsb_8bytes_enable : 1;
+			unsigned int high_prio : 1;
+			unsigned int clear_counter : 1;
+			unsigned int clear_global_time : 1;
+			unsigned int ethertype_explicit_sectag_lsb : 3;
+		} bits_0;
+		unsigned short word_0;
+	};
+	union {
+		struct {
+			unsigned int ethertype_explicit_sectag_msb : 13;
+			unsigned int reserved0 : 3;
+		} bits_1;
+		unsigned short word_1;
+	};
+};
+
+struct mss_egress_lut_addr_ctl_register {
+	union {
+		struct {
+			unsigned int lut_addr : 9;
+			unsigned int reserved0 : 3;
+			/* 0x0 : Egress MAC Control Filter (CTLF) LUT
+			 * 0x1 : Egress Classification LUT
+			 * 0x2 : Egress SC/SA LUT
+			 * 0x3 : Egress SMIB
+			 */
+			unsigned int lut_select : 4;
+		} bits_0;
+		unsigned short word_0;
+	};
+};
+
+struct mss_egress_lut_ctl_register {
+	union {
+		struct {
+			unsigned int reserved0 : 14;
+			unsigned int lut_read : 1;
+			unsigned int lut_write : 1;
+		} bits_0;
+		unsigned short word_0;
+	};
+};
+
+#endif /* MSS_EGRESS_REGS_HEADER */

diff --git a/drivers/net/ethernet/aquantia/atlantic/macsec/macsec_api.c b/drivers/net/ethernet/aquantia/atlantic/macsec/macsec_api.c
new file mode 100644
index
000000000000..8bb275185277 --- /dev/null +++ b/drivers/net/ethernet/aquantia/atlantic/macsec/macsec_api.c @@ -0,0 +1,1154 @@ +// SPDX-License-Identifier: GPL-2.0-only +/* Atlantic Network Driver + * + * Copyright (C) 2020 Marvell International Ltd. + * + * This program is free software; you can redistribute it and/or modify + * it under the terms of the GNU General Public License version 2 as + * published by the Free Software Foundation. + */ + +#include "macsec_api.h" +#include +#include "MSS_Egress_registers.h" +#include "aq_phy.h" + +#define AQ_API_CALL_SAFE(func, ...) \ +({ \ + int ret; \ + do { \ + ret = aq_mss_mdio_sem_get(hw); \ + if (unlikely(ret)) \ + break; \ + \ + ret = func(__VA_ARGS__); \ + \ + aq_mss_mdio_sem_put(hw); \ + } while (0); \ + ret; \ +}) + +/******************************************************************************* + * MDIO wrappers + ******************************************************************************/ +static int aq_mss_mdio_sem_get(struct aq_hw_s *hw) +{ + u32 val; + + return readx_poll_timeout_atomic(hw_atl_sem_mdio_get, hw, val, + val == 1U, 10U, 100000U); +} + +static void aq_mss_mdio_sem_put(struct aq_hw_s *hw) +{ + hw_atl_reg_glb_cpu_sem_set(hw, 1U, HW_ATL_FW_SM_MDIO); +} + +static int aq_mss_mdio_read(struct aq_hw_s *hw, u16 mmd, u16 addr, u16 *data) +{ + *data = aq_mdio_read_word(hw, mmd, addr); + return (*data != 0xffff) ? 0 : -ETIME; +} + +static int aq_mss_mdio_write(struct aq_hw_s *hw, u16 mmd, u16 addr, u16 data) +{ + aq_mdio_write_word(hw, mmd, addr, data); + return 0; +} + +/******************************************************************************* + * MACSEC config and status + ******************************************************************************/ + +/*! Write packed_record to the specified Egress LUT table row. 
*/ +static int set_raw_egress_record(struct aq_hw_s *hw, u16 *packed_record, + u8 num_words, u8 table_id, + u16 table_index) +{ + struct mss_egress_lut_addr_ctl_register lut_sel_reg; + struct mss_egress_lut_ctl_register lut_op_reg; + + unsigned int i; + + /* Write the packed record words to the data buffer registers. */ + for (i = 0; i < num_words; i += 2) { + aq_mss_mdio_write(hw, MDIO_MMD_VEND1, + MSS_EGRESS_LUT_DATA_CTL_REGISTER_ADDR + i, + packed_record[i]); + aq_mss_mdio_write(hw, MDIO_MMD_VEND1, + MSS_EGRESS_LUT_DATA_CTL_REGISTER_ADDR + i + 1, + packed_record[i + 1]); + } + + /* Clear out the unused data buffer registers. */ + for (i = num_words; i < 28; i += 2) { + aq_mss_mdio_write(hw, MDIO_MMD_VEND1, + MSS_EGRESS_LUT_DATA_CTL_REGISTER_ADDR + i, 0); + aq_mss_mdio_write(hw, MDIO_MMD_VEND1, + MSS_EGRESS_LUT_DATA_CTL_REGISTER_ADDR + i + 1, + 0); + } + + /* Select the table and row index to write to */ + lut_sel_reg.bits_0.lut_select = table_id; + lut_sel_reg.bits_0.lut_addr = table_index; + + lut_op_reg.bits_0.lut_read = 0; + lut_op_reg.bits_0.lut_write = 1; + + aq_mss_mdio_write(hw, MDIO_MMD_VEND1, + MSS_EGRESS_LUT_ADDR_CTL_REGISTER_ADDR, + lut_sel_reg.word_0); + aq_mss_mdio_write(hw, MDIO_MMD_VEND1, MSS_EGRESS_LUT_CTL_REGISTER_ADDR, + lut_op_reg.word_0); + + return 0; +} + +static int get_raw_egress_record(struct aq_hw_s *hw, u16 *packed_record, + u8 num_words, u8 table_id, + u16 table_index) +{ + struct mss_egress_lut_addr_ctl_register lut_sel_reg; + struct mss_egress_lut_ctl_register lut_op_reg; + int ret; + + unsigned int i; + + /* Select the table and row index to read */ + lut_sel_reg.bits_0.lut_select = table_id; + lut_sel_reg.bits_0.lut_addr = table_index; + + lut_op_reg.bits_0.lut_read = 1; + lut_op_reg.bits_0.lut_write = 0; + + ret = aq_mss_mdio_write(hw, MDIO_MMD_VEND1, + MSS_EGRESS_LUT_ADDR_CTL_REGISTER_ADDR, + lut_sel_reg.word_0); + if (unlikely(ret)) + return ret; + ret = aq_mss_mdio_write(hw, MDIO_MMD_VEND1, + MSS_EGRESS_LUT_CTL_REGISTER_ADDR, 
+ lut_op_reg.word_0); + if (unlikely(ret)) + return ret; + + memset(packed_record, 0, sizeof(u16) * num_words); + + for (i = 0; i < num_words; i += 2) { + ret = aq_mss_mdio_read(hw, MDIO_MMD_VEND1, + MSS_EGRESS_LUT_DATA_CTL_REGISTER_ADDR + + i, + &packed_record[i]); + if (unlikely(ret)) + return ret; + ret = aq_mss_mdio_read(hw, MDIO_MMD_VEND1, + MSS_EGRESS_LUT_DATA_CTL_REGISTER_ADDR + + i + 1, + &packed_record[i + 1]); + if (unlikely(ret)) + return ret; + } + + return 0; +} + +static int set_egress_ctlf_record(struct aq_hw_s *hw, + const struct aq_mss_egress_ctlf_record *rec, + u16 table_index) +{ + u16 packed_record[6]; + + if (table_index >= NUMROWS_EGRESSCTLFRECORD) + return -EINVAL; + + memset(packed_record, 0, sizeof(u16) * 6); + + packed_record[0] = (packed_record[0] & 0x0000) | + (((rec->sa_da[0] >> 0) & 0xFFFF) << 0); + packed_record[1] = (packed_record[1] & 0x0000) | + (((rec->sa_da[0] >> 16) & 0xFFFF) << 0); + + packed_record[2] = (packed_record[2] & 0x0000) | + (((rec->sa_da[1] >> 0) & 0xFFFF) << 0); + + packed_record[3] = (packed_record[3] & 0x0000) | + (((rec->eth_type >> 0) & 0xFFFF) << 0); + + packed_record[4] = (packed_record[4] & 0x0000) | + (((rec->match_mask >> 0) & 0xFFFF) << 0); + + packed_record[5] = (packed_record[5] & 0xFFF0) | + (((rec->match_type >> 0) & 0xF) << 0); + + packed_record[5] = + (packed_record[5] & 0xFFEF) | (((rec->action >> 0) & 0x1) << 4); + + return set_raw_egress_record(hw, packed_record, 6, 0, + ROWOFFSET_EGRESSCTLFRECORD + table_index); +} + +int aq_mss_set_egress_ctlf_record(struct aq_hw_s *hw, + const struct aq_mss_egress_ctlf_record *rec, + u16 table_index) +{ + return AQ_API_CALL_SAFE(set_egress_ctlf_record, hw, rec, table_index); +} + +static int get_egress_ctlf_record(struct aq_hw_s *hw, + struct aq_mss_egress_ctlf_record *rec, + u16 table_index) +{ + u16 packed_record[6]; + int ret; + + if (table_index >= NUMROWS_EGRESSCTLFRECORD) + return -EINVAL; + + /* If the row that we want to read is odd, first read the 
previous even + * row, throw that value away, and finally read the desired row. + */ + if ((table_index % 2) > 0) { + ret = get_raw_egress_record(hw, packed_record, 6, 0, + ROWOFFSET_EGRESSCTLFRECORD + + table_index - 1); + if (unlikely(ret)) + return ret; + } + + ret = get_raw_egress_record(hw, packed_record, 6, 0, + ROWOFFSET_EGRESSCTLFRECORD + table_index); + if (unlikely(ret)) + return ret; + + rec->sa_da[0] = (rec->sa_da[0] & 0xFFFF0000) | + (((packed_record[0] >> 0) & 0xFFFF) << 0); + rec->sa_da[0] = (rec->sa_da[0] & 0x0000FFFF) | + (((packed_record[1] >> 0) & 0xFFFF) << 16); + + rec->sa_da[1] = (rec->sa_da[1] & 0xFFFF0000) | + (((packed_record[2] >> 0) & 0xFFFF) << 0); + + rec->eth_type = (rec->eth_type & 0xFFFF0000) | + (((packed_record[3] >> 0) & 0xFFFF) << 0); + + rec->match_mask = (rec->match_mask & 0xFFFF0000) | + (((packed_record[4] >> 0) & 0xFFFF) << 0); + + rec->match_type = (rec->match_type & 0xFFFFFFF0) | + (((packed_record[5] >> 0) & 0xF) << 0); + + rec->action = (rec->action & 0xFFFFFFFE) | + (((packed_record[5] >> 4) & 0x1) << 0); + + return 0; +} + +int aq_mss_get_egress_ctlf_record(struct aq_hw_s *hw, + struct aq_mss_egress_ctlf_record *rec, + u16 table_index) +{ + memset(rec, 0, sizeof(*rec)); + + return AQ_API_CALL_SAFE(get_egress_ctlf_record, hw, rec, table_index); +} + +static int set_egress_class_record(struct aq_hw_s *hw, + const struct aq_mss_egress_class_record *rec, + u16 table_index) +{ + u16 packed_record[28]; + + if (table_index >= NUMROWS_EGRESSCLASSRECORD) + return -EINVAL; + + memset(packed_record, 0, sizeof(u16) * 28); + + packed_record[0] = (packed_record[0] & 0xF000) | + (((rec->vlan_id >> 0) & 0xFFF) << 0); + + packed_record[0] = (packed_record[0] & 0x8FFF) | + (((rec->vlan_up >> 0) & 0x7) << 12); + + packed_record[0] = (packed_record[0] & 0x7FFF) | + (((rec->vlan_valid >> 0) & 0x1) << 15); + + packed_record[1] = + (packed_record[1] & 0xFF00) | (((rec->byte3 >> 0) & 0xFF) << 0); + + packed_record[1] = + (packed_record[1] & 
0x00FF) | (((rec->byte2 >> 0) & 0xFF) << 8); + + packed_record[2] = + (packed_record[2] & 0xFF00) | (((rec->byte1 >> 0) & 0xFF) << 0); + + packed_record[2] = + (packed_record[2] & 0x00FF) | (((rec->byte0 >> 0) & 0xFF) << 8); + + packed_record[3] = + (packed_record[3] & 0xFF00) | (((rec->tci >> 0) & 0xFF) << 0); + + packed_record[3] = (packed_record[3] & 0x00FF) | + (((rec->sci[0] >> 0) & 0xFF) << 8); + packed_record[4] = (packed_record[4] & 0x0000) | + (((rec->sci[0] >> 8) & 0xFFFF) << 0); + packed_record[5] = (packed_record[5] & 0xFF00) | + (((rec->sci[0] >> 24) & 0xFF) << 0); + + packed_record[5] = (packed_record[5] & 0x00FF) | + (((rec->sci[1] >> 0) & 0xFF) << 8); + packed_record[6] = (packed_record[6] & 0x0000) | + (((rec->sci[1] >> 8) & 0xFFFF) << 0); + packed_record[7] = (packed_record[7] & 0xFF00) | + (((rec->sci[1] >> 24) & 0xFF) << 0); + + packed_record[7] = (packed_record[7] & 0x00FF) | + (((rec->eth_type >> 0) & 0xFF) << 8); + packed_record[8] = (packed_record[8] & 0xFF00) | + (((rec->eth_type >> 8) & 0xFF) << 0); + + packed_record[8] = (packed_record[8] & 0x00FF) | + (((rec->snap[0] >> 0) & 0xFF) << 8); + packed_record[9] = (packed_record[9] & 0x0000) | + (((rec->snap[0] >> 8) & 0xFFFF) << 0); + packed_record[10] = (packed_record[10] & 0xFF00) | + (((rec->snap[0] >> 24) & 0xFF) << 0); + + packed_record[10] = (packed_record[10] & 0x00FF) | + (((rec->snap[1] >> 0) & 0xFF) << 8); + + packed_record[11] = (packed_record[11] & 0x0000) | + (((rec->llc >> 0) & 0xFFFF) << 0); + packed_record[12] = + (packed_record[12] & 0xFF00) | (((rec->llc >> 16) & 0xFF) << 0); + + packed_record[12] = (packed_record[12] & 0x00FF) | + (((rec->mac_sa[0] >> 0) & 0xFF) << 8); + packed_record[13] = (packed_record[13] & 0x0000) | + (((rec->mac_sa[0] >> 8) & 0xFFFF) << 0); + packed_record[14] = (packed_record[14] & 0xFF00) | + (((rec->mac_sa[0] >> 24) & 0xFF) << 0); + + packed_record[14] = (packed_record[14] & 0x00FF) | + (((rec->mac_sa[1] >> 0) & 0xFF) << 8); + packed_record[15] = 
(packed_record[15] & 0xFF00) | + (((rec->mac_sa[1] >> 8) & 0xFF) << 0); + + packed_record[15] = (packed_record[15] & 0x00FF) | + (((rec->mac_da[0] >> 0) & 0xFF) << 8); + packed_record[16] = (packed_record[16] & 0x0000) | + (((rec->mac_da[0] >> 8) & 0xFFFF) << 0); + packed_record[17] = (packed_record[17] & 0xFF00) | + (((rec->mac_da[0] >> 24) & 0xFF) << 0); + + packed_record[17] = (packed_record[17] & 0x00FF) | + (((rec->mac_da[1] >> 0) & 0xFF) << 8); + packed_record[18] = (packed_record[18] & 0xFF00) | + (((rec->mac_da[1] >> 8) & 0xFF) << 0); + + packed_record[18] = + (packed_record[18] & 0x00FF) | (((rec->pn >> 0) & 0xFF) << 8); + packed_record[19] = + (packed_record[19] & 0x0000) | (((rec->pn >> 8) & 0xFFFF) << 0); + packed_record[20] = + (packed_record[20] & 0xFF00) | (((rec->pn >> 24) & 0xFF) << 0); + + packed_record[20] = (packed_record[20] & 0xC0FF) | + (((rec->byte3_location >> 0) & 0x3F) << 8); + + packed_record[20] = (packed_record[20] & 0xBFFF) | + (((rec->byte3_mask >> 0) & 0x1) << 14); + + packed_record[20] = (packed_record[20] & 0x7FFF) | + (((rec->byte2_location >> 0) & 0x1) << 15); + packed_record[21] = (packed_record[21] & 0xFFE0) | + (((rec->byte2_location >> 1) & 0x1F) << 0); + + packed_record[21] = (packed_record[21] & 0xFFDF) | + (((rec->byte2_mask >> 0) & 0x1) << 5); + + packed_record[21] = (packed_record[21] & 0xF03F) | + (((rec->byte1_location >> 0) & 0x3F) << 6); + + packed_record[21] = (packed_record[21] & 0xEFFF) | + (((rec->byte1_mask >> 0) & 0x1) << 12); + + packed_record[21] = (packed_record[21] & 0x1FFF) | + (((rec->byte0_location >> 0) & 0x7) << 13); + packed_record[22] = (packed_record[22] & 0xFFF8) | + (((rec->byte0_location >> 3) & 0x7) << 0); + + packed_record[22] = (packed_record[22] & 0xFFF7) | + (((rec->byte0_mask >> 0) & 0x1) << 3); + + packed_record[22] = (packed_record[22] & 0xFFCF) | + (((rec->vlan_id_mask >> 0) & 0x3) << 4); + + packed_record[22] = (packed_record[22] & 0xFFBF) | + (((rec->vlan_up_mask >> 0) & 0x1) << 6); + 
+ packed_record[22] = (packed_record[22] & 0xFF7F) | + (((rec->vlan_valid_mask >> 0) & 0x1) << 7); + + packed_record[22] = (packed_record[22] & 0x00FF) | + (((rec->tci_mask >> 0) & 0xFF) << 8); + + packed_record[23] = (packed_record[23] & 0xFF00) | + (((rec->sci_mask >> 0) & 0xFF) << 0); + + packed_record[23] = (packed_record[23] & 0xFCFF) | + (((rec->eth_type_mask >> 0) & 0x3) << 8); + + packed_record[23] = (packed_record[23] & 0x83FF) | + (((rec->snap_mask >> 0) & 0x1F) << 10); + + packed_record[23] = (packed_record[23] & 0x7FFF) | + (((rec->llc_mask >> 0) & 0x1) << 15); + packed_record[24] = (packed_record[24] & 0xFFFC) | + (((rec->llc_mask >> 1) & 0x3) << 0); + + packed_record[24] = (packed_record[24] & 0xFF03) | + (((rec->sa_mask >> 0) & 0x3F) << 2); + + packed_record[24] = (packed_record[24] & 0xC0FF) | + (((rec->da_mask >> 0) & 0x3F) << 8); + + packed_record[24] = (packed_record[24] & 0x3FFF) | + (((rec->pn_mask >> 0) & 0x3) << 14); + packed_record[25] = (packed_record[25] & 0xFFFC) | + (((rec->pn_mask >> 2) & 0x3) << 0); + + packed_record[25] = (packed_record[25] & 0xFFFB) | + (((rec->eight02dot2 >> 0) & 0x1) << 2); + + packed_record[25] = (packed_record[25] & 0xFFF7) | + (((rec->tci_sc >> 0) & 0x1) << 3); + + packed_record[25] = (packed_record[25] & 0xFFEF) | + (((rec->tci_87543 >> 0) & 0x1) << 4); + + packed_record[25] = (packed_record[25] & 0xFFDF) | + (((rec->exp_sectag_en >> 0) & 0x1) << 5); + + packed_record[25] = (packed_record[25] & 0xF83F) | + (((rec->sc_idx >> 0) & 0x1F) << 6); + + packed_record[25] = (packed_record[25] & 0xE7FF) | + (((rec->sc_sa >> 0) & 0x3) << 11); + + packed_record[25] = (packed_record[25] & 0xDFFF) | + (((rec->debug >> 0) & 0x1) << 13); + + packed_record[25] = (packed_record[25] & 0x3FFF) | + (((rec->action >> 0) & 0x3) << 14); + + packed_record[26] = + (packed_record[26] & 0xFFF7) | (((rec->valid >> 0) & 0x1) << 3); + + return set_raw_egress_record(hw, packed_record, 28, 1, + ROWOFFSET_EGRESSCLASSRECORD + table_index); +} + 
+int aq_mss_set_egress_class_record(struct aq_hw_s *hw, + const struct aq_mss_egress_class_record *rec, + u16 table_index) +{ + return AQ_API_CALL_SAFE(set_egress_class_record, hw, rec, table_index); +} + +static int get_egress_class_record(struct aq_hw_s *hw, + struct aq_mss_egress_class_record *rec, + u16 table_index) +{ + u16 packed_record[28]; + int ret; + + if (table_index >= NUMROWS_EGRESSCLASSRECORD) + return -EINVAL; + + /* If the row that we want to read is odd, first read the previous even + * row, throw that value away, and finally read the desired row. + */ + if ((table_index % 2) > 0) { + ret = get_raw_egress_record(hw, packed_record, 28, 1, + ROWOFFSET_EGRESSCLASSRECORD + + table_index - 1); + if (unlikely(ret)) + return ret; + } + + ret = get_raw_egress_record(hw, packed_record, 28, 1, + ROWOFFSET_EGRESSCLASSRECORD + table_index); + if (unlikely(ret)) + return ret; + + rec->vlan_id = (rec->vlan_id & 0xFFFFF000) | + (((packed_record[0] >> 0) & 0xFFF) << 0); + + rec->vlan_up = (rec->vlan_up & 0xFFFFFFF8) | + (((packed_record[0] >> 12) & 0x7) << 0); + + rec->vlan_valid = (rec->vlan_valid & 0xFFFFFFFE) | + (((packed_record[0] >> 15) & 0x1) << 0); + + rec->byte3 = (rec->byte3 & 0xFFFFFF00) | + (((packed_record[1] >> 0) & 0xFF) << 0); + + rec->byte2 = (rec->byte2 & 0xFFFFFF00) | + (((packed_record[1] >> 8) & 0xFF) << 0); + + rec->byte1 = (rec->byte1 & 0xFFFFFF00) | + (((packed_record[2] >> 0) & 0xFF) << 0); + + rec->byte0 = (rec->byte0 & 0xFFFFFF00) | + (((packed_record[2] >> 8) & 0xFF) << 0); + + rec->tci = (rec->tci & 0xFFFFFF00) | + (((packed_record[3] >> 0) & 0xFF) << 0); + + rec->sci[0] = (rec->sci[0] & 0xFFFFFF00) | + (((packed_record[3] >> 8) & 0xFF) << 0); + rec->sci[0] = (rec->sci[0] & 0xFF0000FF) | + (((packed_record[4] >> 0) & 0xFFFF) << 8); + rec->sci[0] = (rec->sci[0] & 0x00FFFFFF) | + (((packed_record[5] >> 0) & 0xFF) << 24); + + rec->sci[1] = (rec->sci[1] & 0xFFFFFF00) | + (((packed_record[5] >> 8) & 0xFF) << 0); + rec->sci[1] = (rec->sci[1] 
& 0xFF0000FF) | + (((packed_record[6] >> 0) & 0xFFFF) << 8); + rec->sci[1] = (rec->sci[1] & 0x00FFFFFF) | + (((packed_record[7] >> 0) & 0xFF) << 24); + + rec->eth_type = (rec->eth_type & 0xFFFFFF00) | + (((packed_record[7] >> 8) & 0xFF) << 0); + rec->eth_type = (rec->eth_type & 0xFFFF00FF) | + (((packed_record[8] >> 0) & 0xFF) << 8); + + rec->snap[0] = (rec->snap[0] & 0xFFFFFF00) | + (((packed_record[8] >> 8) & 0xFF) << 0); + rec->snap[0] = (rec->snap[0] & 0xFF0000FF) | + (((packed_record[9] >> 0) & 0xFFFF) << 8); + rec->snap[0] = (rec->snap[0] & 0x00FFFFFF) | + (((packed_record[10] >> 0) & 0xFF) << 24); + + rec->snap[1] = (rec->snap[1] & 0xFFFFFF00) | + (((packed_record[10] >> 8) & 0xFF) << 0); + + rec->llc = (rec->llc & 0xFFFF0000) | + (((packed_record[11] >> 0) & 0xFFFF) << 0); + rec->llc = (rec->llc & 0xFF00FFFF) | + (((packed_record[12] >> 0) & 0xFF) << 16); + + rec->mac_sa[0] = (rec->mac_sa[0] & 0xFFFFFF00) | + (((packed_record[12] >> 8) & 0xFF) << 0); + rec->mac_sa[0] = (rec->mac_sa[0] & 0xFF0000FF) | + (((packed_record[13] >> 0) & 0xFFFF) << 8); + rec->mac_sa[0] = (rec->mac_sa[0] & 0x00FFFFFF) | + (((packed_record[14] >> 0) & 0xFF) << 24); + + rec->mac_sa[1] = (rec->mac_sa[1] & 0xFFFFFF00) | + (((packed_record[14] >> 8) & 0xFF) << 0); + rec->mac_sa[1] = (rec->mac_sa[1] & 0xFFFF00FF) | + (((packed_record[15] >> 0) & 0xFF) << 8); + + rec->mac_da[0] = (rec->mac_da[0] & 0xFFFFFF00) | + (((packed_record[15] >> 8) & 0xFF) << 0); + rec->mac_da[0] = (rec->mac_da[0] & 0xFF0000FF) | + (((packed_record[16] >> 0) & 0xFFFF) << 8); + rec->mac_da[0] = (rec->mac_da[0] & 0x00FFFFFF) | + (((packed_record[17] >> 0) & 0xFF) << 24); + + rec->mac_da[1] = (rec->mac_da[1] & 0xFFFFFF00) | + (((packed_record[17] >> 8) & 0xFF) << 0); + rec->mac_da[1] = (rec->mac_da[1] & 0xFFFF00FF) | + (((packed_record[18] >> 0) & 0xFF) << 8); + + rec->pn = (rec->pn & 0xFFFFFF00) | + (((packed_record[18] >> 8) & 0xFF) << 0); + rec->pn = (rec->pn & 0xFF0000FF) | + (((packed_record[19] >> 0) & 0xFFFF) 
<< 8); + rec->pn = (rec->pn & 0x00FFFFFF) | + (((packed_record[20] >> 0) & 0xFF) << 24); + + rec->byte3_location = (rec->byte3_location & 0xFFFFFFC0) | + (((packed_record[20] >> 8) & 0x3F) << 0); + + rec->byte3_mask = (rec->byte3_mask & 0xFFFFFFFE) | + (((packed_record[20] >> 14) & 0x1) << 0); + + rec->byte2_location = (rec->byte2_location & 0xFFFFFFFE) | + (((packed_record[20] >> 15) & 0x1) << 0); + rec->byte2_location = (rec->byte2_location & 0xFFFFFFC1) | + (((packed_record[21] >> 0) & 0x1F) << 1); + + rec->byte2_mask = (rec->byte2_mask & 0xFFFFFFFE) | + (((packed_record[21] >> 5) & 0x1) << 0); + + rec->byte1_location = (rec->byte1_location & 0xFFFFFFC0) | + (((packed_record[21] >> 6) & 0x3F) << 0); + + rec->byte1_mask = (rec->byte1_mask & 0xFFFFFFFE) | + (((packed_record[21] >> 12) & 0x1) << 0); + + rec->byte0_location = (rec->byte0_location & 0xFFFFFFF8) | + (((packed_record[21] >> 13) & 0x7) << 0); + rec->byte0_location = (rec->byte0_location & 0xFFFFFFC7) | + (((packed_record[22] >> 0) & 0x7) << 3); + + rec->byte0_mask = (rec->byte0_mask & 0xFFFFFFFE) | + (((packed_record[22] >> 3) & 0x1) << 0); + + rec->vlan_id_mask = (rec->vlan_id_mask & 0xFFFFFFFC) | + (((packed_record[22] >> 4) & 0x3) << 0); + + rec->vlan_up_mask = (rec->vlan_up_mask & 0xFFFFFFFE) | + (((packed_record[22] >> 6) & 0x1) << 0); + + rec->vlan_valid_mask = (rec->vlan_valid_mask & 0xFFFFFFFE) | + (((packed_record[22] >> 7) & 0x1) << 0); + + rec->tci_mask = (rec->tci_mask & 0xFFFFFF00) | + (((packed_record[22] >> 8) & 0xFF) << 0); + + rec->sci_mask = (rec->sci_mask & 0xFFFFFF00) | + (((packed_record[23] >> 0) & 0xFF) << 0); + + rec->eth_type_mask = (rec->eth_type_mask & 0xFFFFFFFC) | + (((packed_record[23] >> 8) & 0x3) << 0); + + rec->snap_mask = (rec->snap_mask & 0xFFFFFFE0) | + (((packed_record[23] >> 10) & 0x1F) << 0); + + rec->llc_mask = (rec->llc_mask & 0xFFFFFFFE) | + (((packed_record[23] >> 15) & 0x1) << 0); + rec->llc_mask = (rec->llc_mask & 0xFFFFFFF9) | + (((packed_record[24] >> 0) & 
0x3) << 1); + + rec->sa_mask = (rec->sa_mask & 0xFFFFFFC0) | + (((packed_record[24] >> 2) & 0x3F) << 0); + + rec->da_mask = (rec->da_mask & 0xFFFFFFC0) | + (((packed_record[24] >> 8) & 0x3F) << 0); + + rec->pn_mask = (rec->pn_mask & 0xFFFFFFFC) | + (((packed_record[24] >> 14) & 0x3) << 0); + rec->pn_mask = (rec->pn_mask & 0xFFFFFFF3) | + (((packed_record[25] >> 0) & 0x3) << 2); + + rec->eight02dot2 = (rec->eight02dot2 & 0xFFFFFFFE) | + (((packed_record[25] >> 2) & 0x1) << 0); + + rec->tci_sc = (rec->tci_sc & 0xFFFFFFFE) | + (((packed_record[25] >> 3) & 0x1) << 0); + + rec->tci_87543 = (rec->tci_87543 & 0xFFFFFFFE) | + (((packed_record[25] >> 4) & 0x1) << 0); + + rec->exp_sectag_en = (rec->exp_sectag_en & 0xFFFFFFFE) | + (((packed_record[25] >> 5) & 0x1) << 0); + + rec->sc_idx = (rec->sc_idx & 0xFFFFFFE0) | + (((packed_record[25] >> 6) & 0x1F) << 0); + + rec->sc_sa = (rec->sc_sa & 0xFFFFFFFC) | + (((packed_record[25] >> 11) & 0x3) << 0); + + rec->debug = (rec->debug & 0xFFFFFFFE) | + (((packed_record[25] >> 13) & 0x1) << 0); + + rec->action = (rec->action & 0xFFFFFFFC) | + (((packed_record[25] >> 14) & 0x3) << 0); + + rec->valid = (rec->valid & 0xFFFFFFFE) | + (((packed_record[26] >> 3) & 0x1) << 0); + + return 0; +} + +int aq_mss_get_egress_class_record(struct aq_hw_s *hw, + struct aq_mss_egress_class_record *rec, + u16 table_index) +{ + memset(rec, 0, sizeof(*rec)); + + return AQ_API_CALL_SAFE(get_egress_class_record, hw, rec, table_index); +} + +static int set_egress_sc_record(struct aq_hw_s *hw, + const struct aq_mss_egress_sc_record *rec, + u16 table_index) +{ + u16 packed_record[8]; + + if (table_index >= NUMROWS_EGRESSSCRECORD) + return -EINVAL; + + memset(packed_record, 0, sizeof(u16) * 8); + + packed_record[0] = (packed_record[0] & 0x0000) | + (((rec->start_time >> 0) & 0xFFFF) << 0); + packed_record[1] = (packed_record[1] & 0x0000) | + (((rec->start_time >> 16) & 0xFFFF) << 0); + + packed_record[2] = (packed_record[2] & 0x0000) | + (((rec->stop_time >> 0) 
& 0xFFFF) << 0); + packed_record[3] = (packed_record[3] & 0x0000) | + (((rec->stop_time >> 16) & 0xFFFF) << 0); + + packed_record[4] = (packed_record[4] & 0xFFFC) | + (((rec->curr_an >> 0) & 0x3) << 0); + + packed_record[4] = (packed_record[4] & 0xFFFB) | + (((rec->an_roll >> 0) & 0x1) << 2); + + packed_record[4] = + (packed_record[4] & 0xFE07) | (((rec->tci >> 0) & 0x3F) << 3); + + packed_record[4] = (packed_record[4] & 0x01FF) | + (((rec->enc_off >> 0) & 0x7F) << 9); + packed_record[5] = (packed_record[5] & 0xFFFE) | + (((rec->enc_off >> 7) & 0x1) << 0); + + packed_record[5] = (packed_record[5] & 0xFFFD) | + (((rec->protect >> 0) & 0x1) << 1); + + packed_record[5] = + (packed_record[5] & 0xFFFB) | (((rec->recv >> 0) & 0x1) << 2); + + packed_record[5] = + (packed_record[5] & 0xFFF7) | (((rec->fresh >> 0) & 0x1) << 3); + + packed_record[5] = (packed_record[5] & 0xFFCF) | + (((rec->sak_len >> 0) & 0x3) << 4); + + packed_record[7] = + (packed_record[7] & 0x7FFF) | (((rec->valid >> 0) & 0x1) << 15); + + return set_raw_egress_record(hw, packed_record, 8, 2, + ROWOFFSET_EGRESSSCRECORD + table_index); +} + +int aq_mss_set_egress_sc_record(struct aq_hw_s *hw, + const struct aq_mss_egress_sc_record *rec, + u16 table_index) +{ + return AQ_API_CALL_SAFE(set_egress_sc_record, hw, rec, table_index); +} + +static int get_egress_sc_record(struct aq_hw_s *hw, + struct aq_mss_egress_sc_record *rec, + u16 table_index) +{ + u16 packed_record[8]; + int ret; + + if (table_index >= NUMROWS_EGRESSSCRECORD) + return -EINVAL; + + ret = get_raw_egress_record(hw, packed_record, 8, 2, + ROWOFFSET_EGRESSSCRECORD + table_index); + if (unlikely(ret)) + return ret; + + rec->start_time = (rec->start_time & 0xFFFF0000) | + (((packed_record[0] >> 0) & 0xFFFF) << 0); + rec->start_time = (rec->start_time & 0x0000FFFF) | + (((packed_record[1] >> 0) & 0xFFFF) << 16); + + rec->stop_time = (rec->stop_time & 0xFFFF0000) | + (((packed_record[2] >> 0) & 0xFFFF) << 0); + rec->stop_time = (rec->stop_time & 
0x0000FFFF) | + (((packed_record[3] >> 0) & 0xFFFF) << 16); + + rec->curr_an = (rec->curr_an & 0xFFFFFFFC) | + (((packed_record[4] >> 0) & 0x3) << 0); + + rec->an_roll = (rec->an_roll & 0xFFFFFFFE) | + (((packed_record[4] >> 2) & 0x1) << 0); + + rec->tci = (rec->tci & 0xFFFFFFC0) | + (((packed_record[4] >> 3) & 0x3F) << 0); + + rec->enc_off = (rec->enc_off & 0xFFFFFF80) | + (((packed_record[4] >> 9) & 0x7F) << 0); + rec->enc_off = (rec->enc_off & 0xFFFFFF7F) | + (((packed_record[5] >> 0) & 0x1) << 7); + + rec->protect = (rec->protect & 0xFFFFFFFE) | + (((packed_record[5] >> 1) & 0x1) << 0); + + rec->recv = (rec->recv & 0xFFFFFFFE) | + (((packed_record[5] >> 2) & 0x1) << 0); + + rec->fresh = (rec->fresh & 0xFFFFFFFE) | + (((packed_record[5] >> 3) & 0x1) << 0); + + rec->sak_len = (rec->sak_len & 0xFFFFFFFC) | + (((packed_record[5] >> 4) & 0x3) << 0); + + rec->valid = (rec->valid & 0xFFFFFFFE) | + (((packed_record[7] >> 15) & 0x1) << 0); + + return 0; +} + +int aq_mss_get_egress_sc_record(struct aq_hw_s *hw, + struct aq_mss_egress_sc_record *rec, + u16 table_index) +{ + memset(rec, 0, sizeof(*rec)); + + return AQ_API_CALL_SAFE(get_egress_sc_record, hw, rec, table_index); +} + +static int set_egress_sa_record(struct aq_hw_s *hw, + const struct aq_mss_egress_sa_record *rec, + u16 table_index) +{ + u16 packed_record[8]; + + if (table_index >= NUMROWS_EGRESSSARECORD) + return -EINVAL; + + memset(packed_record, 0, sizeof(u16) * 8); + + packed_record[0] = (packed_record[0] & 0x0000) | + (((rec->start_time >> 0) & 0xFFFF) << 0); + packed_record[1] = (packed_record[1] & 0x0000) | + (((rec->start_time >> 16) & 0xFFFF) << 0); + + packed_record[2] = (packed_record[2] & 0x0000) | + (((rec->stop_time >> 0) & 0xFFFF) << 0); + packed_record[3] = (packed_record[3] & 0x0000) | + (((rec->stop_time >> 16) & 0xFFFF) << 0); + + packed_record[4] = (packed_record[4] & 0x0000) | + (((rec->next_pn >> 0) & 0xFFFF) << 0); + packed_record[5] = (packed_record[5] & 0x0000) | + (((rec->next_pn >> 
16) & 0xFFFF) << 0); + + packed_record[6] = + (packed_record[6] & 0xFFFE) | (((rec->sat_pn >> 0) & 0x1) << 0); + + packed_record[6] = + (packed_record[6] & 0xFFFD) | (((rec->fresh >> 0) & 0x1) << 1); + + packed_record[7] = + (packed_record[7] & 0x7FFF) | (((rec->valid >> 0) & 0x1) << 15); + + return set_raw_egress_record(hw, packed_record, 8, 2, + ROWOFFSET_EGRESSSARECORD + table_index); +} + +int aq_mss_set_egress_sa_record(struct aq_hw_s *hw, + const struct aq_mss_egress_sa_record *rec, + u16 table_index) +{ + return AQ_API_CALL_SAFE(set_egress_sa_record, hw, rec, table_index); +} + +static int get_egress_sa_record(struct aq_hw_s *hw, + struct aq_mss_egress_sa_record *rec, + u16 table_index) +{ + u16 packed_record[8]; + int ret; + + if (table_index >= NUMROWS_EGRESSSARECORD) + return -EINVAL; + + ret = get_raw_egress_record(hw, packed_record, 8, 2, + ROWOFFSET_EGRESSSARECORD + table_index); + if (unlikely(ret)) + return ret; + + rec->start_time = (rec->start_time & 0xFFFF0000) | + (((packed_record[0] >> 0) & 0xFFFF) << 0); + rec->start_time = (rec->start_time & 0x0000FFFF) | + (((packed_record[1] >> 0) & 0xFFFF) << 16); + + rec->stop_time = (rec->stop_time & 0xFFFF0000) | + (((packed_record[2] >> 0) & 0xFFFF) << 0); + rec->stop_time = (rec->stop_time & 0x0000FFFF) | + (((packed_record[3] >> 0) & 0xFFFF) << 16); + + rec->next_pn = (rec->next_pn & 0xFFFF0000) | + (((packed_record[4] >> 0) & 0xFFFF) << 0); + rec->next_pn = (rec->next_pn & 0x0000FFFF) | + (((packed_record[5] >> 0) & 0xFFFF) << 16); + + rec->sat_pn = (rec->sat_pn & 0xFFFFFFFE) | + (((packed_record[6] >> 0) & 0x1) << 0); + + rec->fresh = (rec->fresh & 0xFFFFFFFE) | + (((packed_record[6] >> 1) & 0x1) << 0); + + rec->valid = (rec->valid & 0xFFFFFFFE) | + (((packed_record[7] >> 15) & 0x1) << 0); + + return 0; +} + +int aq_mss_get_egress_sa_record(struct aq_hw_s *hw, + struct aq_mss_egress_sa_record *rec, + u16 table_index) +{ + memset(rec, 0, sizeof(*rec)); + + return 
AQ_API_CALL_SAFE(get_egress_sa_record, hw, rec, table_index); +} + +static int set_egress_sakey_record(struct aq_hw_s *hw, + const struct aq_mss_egress_sakey_record *rec, + u16 table_index) +{ + u16 packed_record[16]; + int ret; + + if (table_index >= NUMROWS_EGRESSSAKEYRECORD) + return -EINVAL; + + memset(packed_record, 0, sizeof(u16) * 16); + + packed_record[0] = (packed_record[0] & 0x0000) | + (((rec->key[0] >> 0) & 0xFFFF) << 0); + packed_record[1] = (packed_record[1] & 0x0000) | + (((rec->key[0] >> 16) & 0xFFFF) << 0); + + packed_record[2] = (packed_record[2] & 0x0000) | + (((rec->key[1] >> 0) & 0xFFFF) << 0); + packed_record[3] = (packed_record[3] & 0x0000) | + (((rec->key[1] >> 16) & 0xFFFF) << 0); + + packed_record[4] = (packed_record[4] & 0x0000) | + (((rec->key[2] >> 0) & 0xFFFF) << 0); + packed_record[5] = (packed_record[5] & 0x0000) | + (((rec->key[2] >> 16) & 0xFFFF) << 0); + + packed_record[6] = (packed_record[6] & 0x0000) | + (((rec->key[3] >> 0) & 0xFFFF) << 0); + packed_record[7] = (packed_record[7] & 0x0000) | + (((rec->key[3] >> 16) & 0xFFFF) << 0); + + packed_record[8] = (packed_record[8] & 0x0000) | + (((rec->key[4] >> 0) & 0xFFFF) << 0); + packed_record[9] = (packed_record[9] & 0x0000) | + (((rec->key[4] >> 16) & 0xFFFF) << 0); + + packed_record[10] = (packed_record[10] & 0x0000) | + (((rec->key[5] >> 0) & 0xFFFF) << 0); + packed_record[11] = (packed_record[11] & 0x0000) | + (((rec->key[5] >> 16) & 0xFFFF) << 0); + + packed_record[12] = (packed_record[12] & 0x0000) | + (((rec->key[6] >> 0) & 0xFFFF) << 0); + packed_record[13] = (packed_record[13] & 0x0000) | + (((rec->key[6] >> 16) & 0xFFFF) << 0); + + packed_record[14] = (packed_record[14] & 0x0000) | + (((rec->key[7] >> 0) & 0xFFFF) << 0); + packed_record[15] = (packed_record[15] & 0x0000) | + (((rec->key[7] >> 16) & 0xFFFF) << 0); + + ret = set_raw_egress_record(hw, packed_record, 8, 2, + ROWOFFSET_EGRESSSAKEYRECORD + table_index); + if (unlikely(ret)) + return ret; + ret = 
set_raw_egress_record(hw, packed_record + 8, 8, 2, + ROWOFFSET_EGRESSSAKEYRECORD + table_index - + 32); + if (unlikely(ret)) + return ret; + + return 0; +} + +int aq_mss_set_egress_sakey_record(struct aq_hw_s *hw, + const struct aq_mss_egress_sakey_record *rec, + u16 table_index) +{ + return AQ_API_CALL_SAFE(set_egress_sakey_record, hw, rec, table_index); +} + +static int get_egress_sakey_record(struct aq_hw_s *hw, + struct aq_mss_egress_sakey_record *rec, + u16 table_index) +{ + u16 packed_record[16]; + int ret; + + if (table_index >= NUMROWS_EGRESSSAKEYRECORD) + return -EINVAL; + + ret = get_raw_egress_record(hw, packed_record, 8, 2, + ROWOFFSET_EGRESSSAKEYRECORD + table_index); + if (unlikely(ret)) + return ret; + ret = get_raw_egress_record(hw, packed_record + 8, 8, 2, + ROWOFFSET_EGRESSSAKEYRECORD + table_index - + 32); + if (unlikely(ret)) + return ret; + + rec->key[0] = (rec->key[0] & 0xFFFF0000) | + (((packed_record[0] >> 0) & 0xFFFF) << 0); + rec->key[0] = (rec->key[0] & 0x0000FFFF) | + (((packed_record[1] >> 0) & 0xFFFF) << 16); + + rec->key[1] = (rec->key[1] & 0xFFFF0000) | + (((packed_record[2] >> 0) & 0xFFFF) << 0); + rec->key[1] = (rec->key[1] & 0x0000FFFF) | + (((packed_record[3] >> 0) & 0xFFFF) << 16); + + rec->key[2] = (rec->key[2] & 0xFFFF0000) | + (((packed_record[4] >> 0) & 0xFFFF) << 0); + rec->key[2] = (rec->key[2] & 0x0000FFFF) | + (((packed_record[5] >> 0) & 0xFFFF) << 16); + + rec->key[3] = (rec->key[3] & 0xFFFF0000) | + (((packed_record[6] >> 0) & 0xFFFF) << 0); + rec->key[3] = (rec->key[3] & 0x0000FFFF) | + (((packed_record[7] >> 0) & 0xFFFF) << 16); + + rec->key[4] = (rec->key[4] & 0xFFFF0000) | + (((packed_record[8] >> 0) & 0xFFFF) << 0); + rec->key[4] = (rec->key[4] & 0x0000FFFF) | + (((packed_record[9] >> 0) & 0xFFFF) << 16); + + rec->key[5] = (rec->key[5] & 0xFFFF0000) | + (((packed_record[10] >> 0) & 0xFFFF) << 0); + rec->key[5] = (rec->key[5] & 0x0000FFFF) | + (((packed_record[11] >> 0) & 0xFFFF) << 16); + + rec->key[6] = 
(rec->key[6] & 0xFFFF0000) | + (((packed_record[12] >> 0) & 0xFFFF) << 0); + rec->key[6] = (rec->key[6] & 0x0000FFFF) | + (((packed_record[13] >> 0) & 0xFFFF) << 16); + + rec->key[7] = (rec->key[7] & 0xFFFF0000) | + (((packed_record[14] >> 0) & 0xFFFF) << 0); + rec->key[7] = (rec->key[7] & 0x0000FFFF) | + (((packed_record[15] >> 0) & 0xFFFF) << 16); + + return 0; +} + +int aq_mss_get_egress_sakey_record(struct aq_hw_s *hw, + struct aq_mss_egress_sakey_record *rec, + u16 table_index) +{ + memset(rec, 0, sizeof(*rec)); + + return AQ_API_CALL_SAFE(get_egress_sakey_record, hw, rec, table_index); +} + +static int get_egress_sa_expired(struct aq_hw_s *hw, u32 *expired) +{ + u16 val; + int ret; + + ret = aq_mss_mdio_read(hw, MDIO_MMD_VEND1, + MSS_EGRESS_SA_EXPIRED_STATUS_REGISTER_ADDR, + &val); + if (unlikely(ret)) + return ret; + + *expired = val; + + ret = aq_mss_mdio_read(hw, MDIO_MMD_VEND1, + MSS_EGRESS_SA_EXPIRED_STATUS_REGISTER_ADDR + 1, + &val); + if (unlikely(ret)) + return ret; + + *expired |= val << 16; + + return 0; +} + +int aq_mss_get_egress_sa_expired(struct aq_hw_s *hw, u32 *expired) +{ + *expired = 0; + + return AQ_API_CALL_SAFE(get_egress_sa_expired, hw, expired); +} + +static int get_egress_sa_threshold_expired(struct aq_hw_s *hw, + u32 *expired) +{ + u16 val; + int ret; + + ret = aq_mss_mdio_read(hw, MDIO_MMD_VEND1, + MSS_EGRESS_SA_THRESHOLD_EXPIRED_STATUS_REGISTER_ADDR, &val); + if (unlikely(ret)) + return ret; + + *expired = val; + + ret = aq_mss_mdio_read(hw, MDIO_MMD_VEND1, + MSS_EGRESS_SA_THRESHOLD_EXPIRED_STATUS_REGISTER_ADDR + 1, &val); + if (unlikely(ret)) + return ret; + + *expired |= val << 16; + + return 0; +} + +int aq_mss_get_egress_sa_threshold_expired(struct aq_hw_s *hw, + u32 *expired) +{ + *expired = 0; + + return AQ_API_CALL_SAFE(get_egress_sa_threshold_expired, hw, expired); +} + +static int set_egress_sa_expired(struct aq_hw_s *hw, u32 expired) +{ + int ret; + + ret = aq_mss_mdio_write(hw, MDIO_MMD_VEND1, + 
MSS_EGRESS_SA_EXPIRED_STATUS_REGISTER_ADDR, + expired & 0xFFFF); + if (unlikely(ret)) + return ret; + + ret = aq_mss_mdio_write(hw, MDIO_MMD_VEND1, + MSS_EGRESS_SA_EXPIRED_STATUS_REGISTER_ADDR + 1, + expired >> 16); + if (unlikely(ret)) + return ret; + + return 0; +} + +int aq_mss_set_egress_sa_expired(struct aq_hw_s *hw, u32 expired) +{ + return AQ_API_CALL_SAFE(set_egress_sa_expired, hw, expired); +} + +static int set_egress_sa_threshold_expired(struct aq_hw_s *hw, u32 expired) +{ + int ret; + + ret = aq_mss_mdio_write(hw, MDIO_MMD_VEND1, + MSS_EGRESS_SA_THRESHOLD_EXPIRED_STATUS_REGISTER_ADDR, + expired & 0xFFFF); + if (unlikely(ret)) + return ret; + + ret = aq_mss_mdio_write(hw, MDIO_MMD_VEND1, + MSS_EGRESS_SA_THRESHOLD_EXPIRED_STATUS_REGISTER_ADDR + 1, + expired >> 16); + if (unlikely(ret)) + return ret; + + return 0; +} + +int aq_mss_set_egress_sa_threshold_expired(struct aq_hw_s *hw, u32 expired) +{ + return AQ_API_CALL_SAFE(set_egress_sa_threshold_expired, hw, expired); +} diff --git a/drivers/net/ethernet/aquantia/atlantic/macsec/macsec_api.h b/drivers/net/ethernet/aquantia/atlantic/macsec/macsec_api.h new file mode 100644 index 000000000000..87f9f112e876 --- /dev/null +++ b/drivers/net/ethernet/aquantia/atlantic/macsec/macsec_api.h @@ -0,0 +1,133 @@ +/* SPDX-License-Identifier: GPL-2.0-only */ +/* Atlantic Network Driver + * + * Copyright (C) 2020 Marvell International Ltd. + * + * This program is free software; you can redistribute it and/or modify + * it under the terms of the GNU General Public License version 2 as + * published by the Free Software Foundation. 
+ */ + +#ifndef __MACSEC_API_H__ +#define __MACSEC_API_H__ + +#include "aq_hw.h" +#include "macsec_struct.h" + +#define NUMROWS_EGRESSCTLFRECORD 24 +#define ROWOFFSET_EGRESSCTLFRECORD 0 + +#define NUMROWS_EGRESSCLASSRECORD 48 +#define ROWOFFSET_EGRESSCLASSRECORD 0 + +#define NUMROWS_EGRESSSCRECORD 32 +#define ROWOFFSET_EGRESSSCRECORD 0 + +#define NUMROWS_EGRESSSARECORD 32 +#define ROWOFFSET_EGRESSSARECORD 32 + +#define NUMROWS_EGRESSSAKEYRECORD 32 +#define ROWOFFSET_EGRESSSAKEYRECORD 96 + +/*! Read the raw table data from the specified row of the Egress CTL + * Filter table, and unpack it into the fields of rec. + * rec - [OUT] The raw table row data will be unpacked into the fields of rec. + * table_index - The table row to read (max 23). + */ +int aq_mss_get_egress_ctlf_record(struct aq_hw_s *hw, + struct aq_mss_egress_ctlf_record *rec, + u16 table_index); + +/*! Pack the fields of rec, and write the packed data into the + * specified row of the Egress CTL Filter table. + * rec - [IN] The bitfield values to write to the table row. + * table_index - The table row to write(max 23). + */ +int aq_mss_set_egress_ctlf_record(struct aq_hw_s *hw, + const struct aq_mss_egress_ctlf_record *rec, + u16 table_index); + +/*! Read the raw table data from the specified row of the Egress + * Packet Classifier table, and unpack it into the fields of rec. + * rec - [OUT] The raw table row data will be unpacked into the fields of rec. + * table_index - The table row to read (max 47). + */ +int aq_mss_get_egress_class_record(struct aq_hw_s *hw, + struct aq_mss_egress_class_record *rec, + u16 table_index); + +/*! Pack the fields of rec, and write the packed data into the + * specified row of the Egress Packet Classifier table. + * rec - [IN] The bitfield values to write to the table row. + * table_index - The table row to write (max 47). + */ +int aq_mss_set_egress_class_record(struct aq_hw_s *hw, + const struct aq_mss_egress_class_record *rec, + u16 table_index); + +/*! 
Read the raw table data from the specified row of the Egress SC + * Lookup table, and unpack it into the fields of rec. + * rec - [OUT] The raw table row data will be unpacked into the fields of rec. + * table_index - The table row to read (max 31). + */ +int aq_mss_get_egress_sc_record(struct aq_hw_s *hw, + struct aq_mss_egress_sc_record *rec, + u16 table_index); + +/*! Pack the fields of rec, and write the packed data into the + * specified row of the Egress SC Lookup table. + * rec - [IN] The bitfield values to write to the table row. + * table_index - The table row to write (max 31). + */ +int aq_mss_set_egress_sc_record(struct aq_hw_s *hw, + const struct aq_mss_egress_sc_record *rec, + u16 table_index); + +/*! Read the raw table data from the specified row of the Egress SA + * Lookup table, and unpack it into the fields of rec. + * rec - [OUT] The raw table row data will be unpacked into the fields of rec. + * table_index - The table row to read (max 31). + */ +int aq_mss_get_egress_sa_record(struct aq_hw_s *hw, + struct aq_mss_egress_sa_record *rec, + u16 table_index); + +/*! Pack the fields of rec, and write the packed data into the + * specified row of the Egress SA Lookup table. + * rec - [IN] The bitfield values to write to the table row. + * table_index - The table row to write (max 31). + */ +int aq_mss_set_egress_sa_record(struct aq_hw_s *hw, + const struct aq_mss_egress_sa_record *rec, + u16 table_index); + +/*! Read the raw table data from the specified row of the Egress SA + * Key Lookup table, and unpack it into the fields of rec. + * rec - [OUT] The raw table row data will be unpacked into the fields of rec. + * table_index - The table row to read (max 31). + */ +int aq_mss_get_egress_sakey_record(struct aq_hw_s *hw, + struct aq_mss_egress_sakey_record *rec, + u16 table_index); + +/*! Pack the fields of rec, and write the packed data into the + * specified row of the Egress SA Key Lookup table. 
+ * rec - [IN] The bitfield values to write to the table row. + * table_index - The table row to write (max 31). + */ +int aq_mss_set_egress_sakey_record(struct aq_hw_s *hw, + const struct aq_mss_egress_sakey_record *rec, + u16 table_index); + +/*! Get Egress SA expired. */ +int aq_mss_get_egress_sa_expired(struct aq_hw_s *hw, u32 *expired); +/*! Get Egress SA threshold expired. */ +int aq_mss_get_egress_sa_threshold_expired(struct aq_hw_s *hw, + u32 *expired); +/*! Set Egress SA expired. */ +int aq_mss_set_egress_sa_expired(struct aq_hw_s *hw, u32 expired); +/*! Set Egress SA threshold expired. */ +int aq_mss_set_egress_sa_threshold_expired(struct aq_hw_s *hw, + u32 expired); + +#endif /* __MACSEC_API_H__ */ diff --git a/drivers/net/ethernet/aquantia/atlantic/macsec/macsec_struct.h b/drivers/net/ethernet/aquantia/atlantic/macsec/macsec_struct.h new file mode 100644 index 000000000000..f1814b7f2f7b --- /dev/null +++ b/drivers/net/ethernet/aquantia/atlantic/macsec/macsec_struct.h @@ -0,0 +1,322 @@ +/* SPDX-License-Identifier: GPL-2.0-only */ +/* Atlantic Network Driver + * + * Copyright (C) 2020 Marvell International Ltd. + * + * This program is free software; you can redistribute it and/or modify + * it under the terms of the GNU General Public License version 2 as + * published by the Free Software Foundation. + */ + +#ifndef _MACSEC_STRUCT_H_ +#define _MACSEC_STRUCT_H_ + +/*! Represents the bitfields of a single row in the Egress CTL Filter + * table. + */ +struct aq_mss_egress_ctlf_record { + /*! This is used to store the 48 bit value used to compare SA, DA or + * halfDA+half SA value. + */ + u32 sa_da[2]; + /*! This is used to store the 16 bit ethertype value used for + * comparison. + */ + u32 eth_type; + /*! The match mask is per-nibble. 0 means don't care, i.e. every value + * will match successfully. The total data is 64 bit, i.e. 16 nibbles + * masks. + */ + u32 match_mask; + /*! 0: No compare, i.e. 
This entry is not used. + * 1: compare DA only + * 2: compare SA only + * 3: compare half DA + half SA + * 4: compare ether type only + * 5: compare DA + ethertype + * 6: compare SA + ethertype + * 7: compare DA + range. + */ + u32 match_type; + /*! 0: Bypass the remaining modules if matched. + * 1: Forward to the next module for more classifications. + */ + u32 action; +}; + +/*! Represents the bitfields of a single row in the Egress Packet + * Classifier table. + */ +struct aq_mss_egress_class_record { + /*! VLAN ID field. */ + u32 vlan_id; + /*! VLAN UP field. */ + u32 vlan_up; + /*! VLAN Present in the Packet. */ + u32 vlan_valid; + /*! The 8 bit value used to compare with the extracted value for byte 3. */ + u32 byte3; + /*! The 8 bit value used to compare with the extracted value for byte 2. */ + u32 byte2; + /*! The 8 bit value used to compare with the extracted value for byte 1. */ + u32 byte1; + /*! The 8 bit value used to compare with the extracted value for byte 0. */ + u32 byte0; + /*! The 8 bit TCI field used to compare with the extracted value. */ + u32 tci; + /*! The 64 bit SCI field in the SecTAG. */ + u32 sci[2]; + /*! The 16 bit Ethertype (in the clear) field used to compare with + * the extracted value. + */ + u32 eth_type; + /*! This is to specify the 40 bit SNAP header if the SNAP header's mask + * is enabled. + */ + u32 snap[2]; + /*! This is to specify the 24 bit LLC header if the LLC header's mask is + * enabled. + */ + u32 llc; + /*! The 48 bit MAC_SA field used to compare with the extracted value. */ + u32 mac_sa[2]; + /*! The 48 bit MAC_DA field used to compare with the extracted value. */ + u32 mac_da[2]; + /*! The 32 bit packet number used to compare with the extracted value. */ + u32 pn; + /*! 0~63: byte location of the value extracted by the packet comparator, + * which can be anywhere in the first 64 bytes of the MAC packet. + * The byte location is counted from the MAC DA address, i.e. 0 + * points to byte 0 of the DA address. + */ + u32 byte3_location; + /*!
0: don't care + * 1: enable comparison of the extracted byte pointed to by byte 3 location. + */ + u32 byte3_mask; + /*! 0~63: byte location of the value extracted by the packet comparator, + * which can be anywhere in the first 64 bytes of the MAC packet. + * The byte location is counted from the MAC DA address, i.e. 0 + * points to byte 0 of the DA address. + */ + u32 byte2_location; + /*! 0: don't care + * 1: enable comparison of the extracted byte pointed to by byte 2 location. + */ + u32 byte2_mask; + /*! 0~63: byte location of the value extracted by the packet comparator, + * which can be anywhere in the first 64 bytes of the MAC packet. + * The byte location is counted from the MAC DA address, i.e. 0 + * points to byte 0 of the DA address. + */ + u32 byte1_location; + /*! 0: don't care + * 1: enable comparison of the extracted byte pointed to by byte 1 location. + */ + u32 byte1_mask; + /*! 0~63: byte location of the value extracted by the packet comparator, + * which can be anywhere in the first 64 bytes of the MAC packet. + * The byte location is counted from the MAC DA address, i.e. 0 + * points to byte 0 of the DA address. + */ + u32 byte0_location; + /*! 0: don't care + * 1: enable comparison of the extracted byte pointed to by byte 0 location. + */ + u32 byte0_mask; + /*! Mask is per-byte. + * 0: don't care + * 1: enable comparison of the extracted VLAN ID field. + */ + u32 vlan_id_mask; + /*! 0: don't care + * 1: enable comparison of the extracted VLAN UP field. + */ + u32 vlan_up_mask; + /*! 0: don't care + * 1: enable comparison of the extracted VLAN Valid field. + */ + u32 vlan_valid_mask; + /*! This is a bit mask to enable comparison of the 8 bit TCI field, + * including the AN field. + * For explicit SECTAG, AN is hardware controlled. When sending a + * packet w/ explicit SECTAG, the rest of the TCI fields are taken + * directly from the SECTAG. + */ + u32 tci_mask; + /*! Mask is per-byte.
+ * 0: don't care + * 1: enable comparison of SCI + * Note: If this field is not 0, this means the input packet's + * SECTAG is explicitly tagged and the MACSEC module will only update + * the MSDU. + * The PN number is hardware controlled. + */ + u32 sci_mask; + /*! Mask is per-byte. + * 0: don't care + * 1: enable comparison of Ethertype. + */ + u32 eth_type_mask; + /*! Mask is per-byte. + * 0: don't care and no SNAP header exists. + * 1: compare the SNAP header. + * If this bit is set to 1, the extracted field will assume the + * SNAP header exists as encapsulated in 802.3 (RFC 1042), i.e. the + * next 5 bytes after the LLC header are the SNAP header. + */ + u32 snap_mask; + /*! 0: don't care and no LLC header exists. + * 1: compare the LLC header. + * If this bit is set to 1, the extracted field will assume the + * LLC header exists as encapsulated in 802.3 (RFC 1042), i.e. the + * next three bytes after the 802.3 MAC header are the LLC header. + */ + u32 llc_mask; + /*! Mask is per-byte. + * 0: don't care + * 1: enable comparison of MAC_SA. + */ + u32 sa_mask; + /*! Mask is per-byte. + * 0: don't care + * 1: enable comparison of MAC_DA. + */ + u32 da_mask; + /*! Mask is per-byte. */ + u32 pn_mask; + /*! Reserved. This bit should always be 0. */ + u32 eight02dot2; + /*! 1: For the explicit sectag case, use TCI_SC from the table + * 0: use TCI_SC from the explicit sectag. + */ + u32 tci_sc; + /*! 1: For the explicit sectag case, use TCI_V,ES,SCB,E,C from the table + * 0: use TCI_V,ES,SCB,E,C from the explicit sectag. + */ + u32 tci_87543; + /*! 1: indicates that the incoming packet has an explicit sectag. */ + u32 exp_sectag_en; + /*! If the packet matches and is tagged as a controlled packet, this SC/SA + * index is used for the later SC and SA table lookup. + */ + u32 sc_idx; + /*! This field is used to specify how many SA entries are + * associated with 1 SC entry. + * 2'b00: 1 SC has 4 SA. + * SC index is equivalent to {SC_Index[4:2], 1'b0}.
+ * SA index is equivalent to {SC_Index[4:2], SC entry's current AN[1:0]} + * 2'b10: 1 SC has 2 SA. + * SC index is equivalent to SC_Index[4:1] + * SA index is equivalent to {SC_Index[4:1], SC entry's current AN[0]} + * 2'b11: 1 SC has 1 SA. No SC entry exists for the specific SA. + * SA index is equivalent to SC_Index[4:0] + * Note: if specified as 2'b11, hardware AN roll over is not + * supported. + */ + u32 sc_sa; + /*! 0: the packets will be sent to the MAC FIFO + * 1: the packets will be sent to the Debug/Loopback FIFO. + * If the above action is drop, this bit has no meaning. + */ + u32 debug; + /*! 0: forward to the remaining modules + * 1: bypass the next encryption modules. This packet is considered + * an uncontrolled packet. + * 2: drop + * 3: Reserved. + */ + u32 action; + /*! 0: Not a valid entry. This entry is not used. + * 1: valid entry. + */ + u32 valid; +}; + +/*! Represents the bitfields of a single row in the Egress SC Lookup table. */ +struct aq_mss_egress_sc_record { + /*! This is to specify when the SC was first used. Set by HW. */ + u32 start_time; + /*! This is to specify when the SC was last used. Set by HW. */ + u32 stop_time; + /*! This is to specify which of the SA entries is used by the current HW. + * Note: This value needs to be set by SW after reset. It will be + * automatically updated by HW, if AN roll over is enabled. + */ + u32 curr_an; + /*! 0: Clear the SA Valid Bit after PN expiry. + * 1: Do not clear the SA Valid bit after PN expiry of the current SA. + * When Enable AN roll over is set, S/W does not need to + * program the new SA's and the H/W will automatically roll over + * between the SA's without session expiry. + * For normal operation, Enable AN Roll over will be set to '0' + * and in that case, the SW needs to program the new SA values + * after the current PN expires. + */ + u32 an_roll; + /*! This is the TCI field used if the packet is not explicitly tagged. */ + u32 tci; + /*!
This value indicates the offset where the decryption will start. + * [Values of 0, 4, 8-50]. + */ + u32 enc_off; + /*! 0: Do not protect frames, all the packets will be forwarded + * unchanged. MIB counter (OutPktsUntagged) will be updated. + * 1: Protect. + */ + u32 protect; + /*! 0: when none of the SAs related to this SC have inUse set. + * 1: when any of the SAs related to this SC has inUse set. + * This bit is set by HW. + */ + u32 recv; + /*! 0: HW clears this bit on first use. + * 1: SW sets this bit when programming the SC table. + */ + u32 fresh; + /*! AES key size + * 00 - 128 bits + * 01 - 192 bits + * 10 - 256 bits + * 11 - Reserved. + */ + u32 sak_len; + /*! 0: Invalid SC + * 1: Valid SC. + */ + u32 valid; +}; + +/*! Represents the bitfields of a single row in the Egress SA Lookup table. */ +struct aq_mss_egress_sa_record { + /*! This is to specify when the SA was first used. Set by HW. */ + u32 start_time; + /*! This is to specify when the SA was last used. Set by HW. */ + u32 stop_time; + /*! This is set by SW and updated by HW to store the next PN number + * used for encryption. + */ + u32 next_pn; + /*! Indicates that the next_pn value is about to wrap around from + * 0xFFFF_FFFF to 0. Set by HW. + */ + u32 sat_pn; + /*! 0: This SA is in use. + * 1: This SA is fresh, set by SW. + */ + u32 fresh; + /*! 0: Invalid SA + * 1: Valid SA. + */ + u32 valid; +}; + +/*! Represents the bitfields of a single row in the Egress SA Key + * Lookup table. + */ +struct aq_mss_egress_sakey_record { + /*! Key for AES-GCM processing. 
*/ + u32 key[8]; +}; + +#endif From patchwork Tue Mar 10 15:03:40 2020 X-Patchwork-Submitter: Igor Russkikh X-Patchwork-Id: 222760 From: Igor Russkikh To: CC: Mark Starovoytov , Sabrina Dubroca , Igor Russkikh Subject: [RFC v2 14/16] net: atlantic: MACSec ingress offload implementation Date: Tue, 10 Mar 2020 18:03:40 +0300 Message-ID: <20200310150342.1701-15-irusskikh@marvell.com> In-Reply-To: <20200310150342.1701-1-irusskikh@marvell.com> References: <20200310150342.1701-1-irusskikh@marvell.com> From: Mark Starovoytov This patch adds support for MACSec ingress HW offloading on Atlantic network cards. 
Signed-off-by: Mark Starovoytov Signed-off-by: Igor Russkikh --- .../ethernet/aquantia/atlantic/aq_macsec.c | 463 +++++++++++++++++- .../ethernet/aquantia/atlantic/aq_macsec.h | 5 + 2 files changed, 462 insertions(+), 6 deletions(-) diff --git a/drivers/net/ethernet/aquantia/atlantic/aq_macsec.c b/drivers/net/ethernet/aquantia/atlantic/aq_macsec.c index 5eb66649250a..4401202f9c3f 100644 --- a/drivers/net/ethernet/aquantia/atlantic/aq_macsec.c +++ b/drivers/net/ethernet/aquantia/atlantic/aq_macsec.c @@ -30,6 +30,10 @@ static int aq_clear_txsc(struct aq_nic_s *nic, const int txsc_idx, enum aq_clear_type clear_type); static int aq_clear_txsa(struct aq_nic_s *nic, struct aq_macsec_txsc *aq_txsc, const int sa_num, enum aq_clear_type clear_type); +static int aq_clear_rxsc(struct aq_nic_s *nic, const int rxsc_idx, + enum aq_clear_type clear_type); +static int aq_clear_rxsa(struct aq_nic_s *nic, struct aq_macsec_rxsc *aq_rxsc, + const int sa_num, enum aq_clear_type clear_type); static int aq_clear_secy(struct aq_nic_s *nic, const struct macsec_secy *secy, enum aq_clear_type clear_type); static int aq_apply_macsec_cfg(struct aq_nic_s *nic); @@ -62,6 +66,22 @@ static int aq_get_txsc_idx_from_secy(struct aq_macsec_cfg *macsec_cfg, return -1; } +static int aq_get_rxsc_idx_from_rxsc(struct aq_macsec_cfg *macsec_cfg, + const struct macsec_rx_sc *rxsc) +{ + int i; + + if (unlikely(!rxsc)) + return -1; + + for (i = 0; i < AQ_MACSEC_MAX_SC; i++) { + if (macsec_cfg->aq_rxsc[i].sw_rxsc == rxsc) + return i; + } + + return -1; +} + static int aq_get_txsc_idx_from_sc_idx(const enum aq_macsec_sc_sa sc_sa, const unsigned int sc_idx) { @@ -532,34 +552,393 @@ static int aq_mdo_del_txsa(struct macsec_context *ctx) return ret; } +static int aq_rxsc_validate_frames(const enum macsec_validation_type validate) +{ + switch (validate) { + case MACSEC_VALIDATE_DISABLED: + return 2; + case MACSEC_VALIDATE_CHECK: + return 1; + case MACSEC_VALIDATE_STRICT: + return 0; + default: + break; + } + + /* 
should never be here */ + WARN_ON(true); + return 0; +} + +static int aq_set_rxsc(struct aq_nic_s *nic, const u32 rxsc_idx) +{ + const struct aq_macsec_rxsc *aq_rxsc = + &nic->macsec_cfg->aq_rxsc[rxsc_idx]; + struct aq_mss_ingress_preclass_record pre_class_record; + const struct macsec_rx_sc *rx_sc = aq_rxsc->sw_rxsc; + const struct macsec_secy *secy = aq_rxsc->sw_secy; + const u32 hw_sc_idx = aq_rxsc->hw_sc_idx; + struct aq_mss_ingress_sc_record sc_record; + struct aq_hw_s *hw = nic->aq_hw; + __be64 nsci; + int ret = 0; + + netdev_dbg(nic->ndev, + "set rx_sc: rxsc_idx=%d, sci %#llx, hw_sc_idx=%d\n", + rxsc_idx, rx_sc->sci, hw_sc_idx); + + memset(&pre_class_record, 0, sizeof(pre_class_record)); + nsci = cpu_to_be64((__force u64)rx_sc->sci); + memcpy(pre_class_record.sci, &nsci, sizeof(nsci)); + pre_class_record.sci_mask = 0xff; + /* match all MACSEC ethertype packets */ + pre_class_record.eth_type = ETH_P_MACSEC; + pre_class_record.eth_type_mask = 0x3; + + aq_ether_addr_to_mac(pre_class_record.mac_sa, (char *)&rx_sc->sci); + pre_class_record.sa_mask = 0x3f; + + pre_class_record.an_mask = nic->macsec_cfg->sc_sa; + pre_class_record.sc_idx = hw_sc_idx; + /* strip SecTAG & forward for decryption */ + pre_class_record.action = 0x0; + pre_class_record.valid = 1; + + ret = aq_mss_set_ingress_preclass_record(hw, &pre_class_record, + 2 * rxsc_idx + 1); + if (ret) { + netdev_err(nic->ndev, + "aq_mss_set_ingress_preclass_record failed with %d\n", + ret); + return ret; + } + + /* If SCI is absent, then match by SA alone */ + pre_class_record.sci_mask = 0; + pre_class_record.sci_from_table = 1; + + ret = aq_mss_set_ingress_preclass_record(hw, &pre_class_record, + 2 * rxsc_idx); + if (ret) { + netdev_err(nic->ndev, + "aq_mss_set_ingress_preclass_record failed with %d\n", + ret); + return ret; + } + + memset(&sc_record, 0, sizeof(sc_record)); + sc_record.validate_frames = + aq_rxsc_validate_frames(secy->validate_frames); + if (secy->replay_protect) { + sc_record.replay_protect = 
1; + sc_record.anti_replay_window = secy->replay_window; + } + sc_record.valid = 1; + sc_record.fresh = 1; + + ret = aq_mss_set_ingress_sc_record(hw, &sc_record, hw_sc_idx); + if (ret) { + netdev_err(nic->ndev, + "aq_mss_set_ingress_sc_record failed with %d\n", + ret); + return ret; + } + + return ret; +} + static int aq_mdo_add_rxsc(struct macsec_context *ctx) { - return -EOPNOTSUPP; + struct aq_nic_s *nic = netdev_priv(ctx->netdev); + struct aq_macsec_cfg *cfg = nic->macsec_cfg; + const u32 rxsc_idx_max = aq_sc_idx_max(cfg->sc_sa); + u32 rxsc_idx; + int ret = 0; + + if (hweight32(cfg->rxsc_idx_busy) >= rxsc_idx_max) + return -ENOSPC; + + rxsc_idx = ffz(cfg->rxsc_idx_busy); + if (rxsc_idx >= rxsc_idx_max) + return -ENOSPC; + + if (ctx->prepare) + return 0; + + cfg->aq_rxsc[rxsc_idx].hw_sc_idx = aq_to_hw_sc_idx(rxsc_idx, + cfg->sc_sa); + cfg->aq_rxsc[rxsc_idx].sw_secy = ctx->secy; + cfg->aq_rxsc[rxsc_idx].sw_rxsc = ctx->rx_sc; + netdev_dbg(nic->ndev, "add rxsc: rxsc_idx=%u, hw_sc_idx=%u, rxsc=%p\n", + rxsc_idx, cfg->aq_rxsc[rxsc_idx].hw_sc_idx, + cfg->aq_rxsc[rxsc_idx].sw_rxsc); + + if (netif_carrier_ok(nic->ndev) && netif_running(ctx->secy->netdev)) + ret = aq_set_rxsc(nic, rxsc_idx); + + if (ret < 0) + return ret; + + set_bit(rxsc_idx, &cfg->rxsc_idx_busy); + + return 0; } static int aq_mdo_upd_rxsc(struct macsec_context *ctx) { - return -EOPNOTSUPP; + struct aq_nic_s *nic = netdev_priv(ctx->netdev); + int rxsc_idx; + int ret = 0; + + rxsc_idx = aq_get_rxsc_idx_from_rxsc(nic->macsec_cfg, ctx->rx_sc); + if (rxsc_idx < 0) + return -ENOENT; + + if (ctx->prepare) + return 0; + + if (netif_carrier_ok(nic->ndev) && netif_running(ctx->secy->netdev)) + ret = aq_set_rxsc(nic, rxsc_idx); + + return ret; +} + +static int aq_clear_rxsc(struct aq_nic_s *nic, const int rxsc_idx, + enum aq_clear_type clear_type) +{ + struct aq_macsec_rxsc *rx_sc = &nic->macsec_cfg->aq_rxsc[rxsc_idx]; + struct aq_hw_s *hw = nic->aq_hw; + int ret = 0; + int sa_num; + + for_each_set_bit (sa_num, 
&rx_sc->rx_sa_idx_busy, AQ_MACSEC_MAX_SA) { + ret = aq_clear_rxsa(nic, rx_sc, sa_num, clear_type); + if (ret) + return ret; + } + + if (clear_type & AQ_CLEAR_HW) { + struct aq_mss_ingress_preclass_record pre_class_record; + struct aq_mss_ingress_sc_record sc_record; + + memset(&pre_class_record, 0, sizeof(pre_class_record)); + memset(&sc_record, 0, sizeof(sc_record)); + + ret = aq_mss_set_ingress_preclass_record(hw, &pre_class_record, + 2 * rxsc_idx); + if (ret) { + netdev_err(nic->ndev, + "aq_mss_set_ingress_preclass_record failed with %d\n", + ret); + return ret; + } + + ret = aq_mss_set_ingress_preclass_record(hw, &pre_class_record, + 2 * rxsc_idx + 1); + if (ret) { + netdev_err(nic->ndev, + "aq_mss_set_ingress_preclass_record failed with %d\n", + ret); + return ret; + } + + sc_record.fresh = 1; + ret = aq_mss_set_ingress_sc_record(hw, &sc_record, + rx_sc->hw_sc_idx); + if (ret) + return ret; + } + + if (clear_type & AQ_CLEAR_SW) { + clear_bit(rxsc_idx, &nic->macsec_cfg->rxsc_idx_busy); + rx_sc->sw_secy = NULL; + rx_sc->sw_rxsc = NULL; + } + + return ret; } static int aq_mdo_del_rxsc(struct macsec_context *ctx) { - return -EOPNOTSUPP; + struct aq_nic_s *nic = netdev_priv(ctx->netdev); + enum aq_clear_type clear_type = AQ_CLEAR_SW; + int rxsc_idx; + int ret = 0; + + rxsc_idx = aq_get_rxsc_idx_from_rxsc(nic->macsec_cfg, ctx->rx_sc); + if (rxsc_idx < 0) + return -ENOENT; + + if (ctx->prepare) + return 0; + + if (netif_carrier_ok(nic->ndev)) + clear_type = AQ_CLEAR_ALL; + + ret = aq_clear_rxsc(nic, rxsc_idx, clear_type); + + return ret; +} + +static int aq_update_rxsa(struct aq_nic_s *nic, const unsigned int sc_idx, + const struct macsec_secy *secy, + const struct macsec_rx_sa *rx_sa, + const unsigned char *key, const unsigned char an) +{ + struct aq_mss_ingress_sakey_record sa_key_record; + struct aq_mss_ingress_sa_record sa_record; + struct aq_hw_s *hw = nic->aq_hw; + const int sa_idx = sc_idx | an; + int ret = 0; + + netdev_dbg(nic->ndev, "set rx_sa %d: 
active=%d, next_pn=%d\n", an, + rx_sa->active, rx_sa->next_pn); + + memset(&sa_record, 0, sizeof(sa_record)); + sa_record.valid = rx_sa->active; + sa_record.fresh = 1; + sa_record.next_pn = rx_sa->next_pn; + + ret = aq_mss_set_ingress_sa_record(hw, &sa_record, sa_idx); + if (ret) { + netdev_err(nic->ndev, + "aq_mss_set_ingress_sa_record failed with %d\n", + ret); + return ret; + } + + if (!key) + return ret; + + memset(&sa_key_record, 0, sizeof(sa_key_record)); + memcpy(&sa_key_record.key, key, secy->key_len); + + switch (secy->key_len) { + case AQ_MACSEC_KEY_LEN_128_BIT: + sa_key_record.key_len = 0; + break; + case AQ_MACSEC_KEY_LEN_192_BIT: + sa_key_record.key_len = 1; + break; + case AQ_MACSEC_KEY_LEN_256_BIT: + sa_key_record.key_len = 2; + break; + default: + return -1; + } + + aq_rotate_keys(&sa_key_record.key, secy->key_len); + + ret = aq_mss_set_ingress_sakey_record(hw, &sa_key_record, sa_idx); + if (ret) + netdev_err(nic->ndev, + "aq_mss_set_ingress_sakey_record failed with %d\n", + ret); + + return ret; } static int aq_mdo_add_rxsa(struct macsec_context *ctx) { - return -EOPNOTSUPP; + const struct macsec_rx_sc *rx_sc = ctx->sa.rx_sa->sc; + struct aq_nic_s *nic = netdev_priv(ctx->netdev); + const struct macsec_secy *secy = ctx->secy; + struct aq_macsec_rxsc *aq_rxsc; + int rxsc_idx; + int ret = 0; + + rxsc_idx = aq_get_rxsc_idx_from_rxsc(nic->macsec_cfg, rx_sc); + if (rxsc_idx < 0) + return -EINVAL; + + if (ctx->prepare) + return 0; + + aq_rxsc = &nic->macsec_cfg->aq_rxsc[rxsc_idx]; + set_bit(ctx->sa.assoc_num, &aq_rxsc->rx_sa_idx_busy); + + memcpy(aq_rxsc->rx_sa_key[ctx->sa.assoc_num], ctx->sa.key, + secy->key_len); + + if (netif_carrier_ok(nic->ndev) && netif_running(secy->netdev)) + ret = aq_update_rxsa(nic, aq_rxsc->hw_sc_idx, secy, + ctx->sa.rx_sa, ctx->sa.key, + ctx->sa.assoc_num); + + return ret; } static int aq_mdo_upd_rxsa(struct macsec_context *ctx) { - return -EOPNOTSUPP; + const struct macsec_rx_sc *rx_sc = ctx->sa.rx_sa->sc; + struct aq_nic_s 
*nic = netdev_priv(ctx->netdev); + struct aq_macsec_cfg *cfg = nic->macsec_cfg; + const struct macsec_secy *secy = ctx->secy; + int rxsc_idx; + int ret = 0; + + rxsc_idx = aq_get_rxsc_idx_from_rxsc(cfg, rx_sc); + if (rxsc_idx < 0) + return -EINVAL; + + if (ctx->prepare) + return 0; + + if (netif_carrier_ok(nic->ndev) && netif_running(secy->netdev)) + ret = aq_update_rxsa(nic, cfg->aq_rxsc[rxsc_idx].hw_sc_idx, + secy, ctx->sa.rx_sa, NULL, + ctx->sa.assoc_num); + + return ret; +} + +static int aq_clear_rxsa(struct aq_nic_s *nic, struct aq_macsec_rxsc *aq_rxsc, + const int sa_num, enum aq_clear_type clear_type) +{ + int sa_idx = aq_rxsc->hw_sc_idx | sa_num; + struct aq_hw_s *hw = nic->aq_hw; + int ret = 0; + + if (clear_type & AQ_CLEAR_SW) + clear_bit(sa_num, &aq_rxsc->rx_sa_idx_busy); + + if ((clear_type & AQ_CLEAR_HW) && netif_carrier_ok(nic->ndev)) { + struct aq_mss_ingress_sakey_record sa_key_record; + struct aq_mss_ingress_sa_record sa_record; + + memset(&sa_key_record, 0, sizeof(sa_key_record)); + memset(&sa_record, 0, sizeof(sa_record)); + sa_record.fresh = 1; + ret = aq_mss_set_ingress_sa_record(hw, &sa_record, sa_idx); + if (ret) + return ret; + + return aq_mss_set_ingress_sakey_record(hw, &sa_key_record, + sa_idx); + } + + return ret; } static int aq_mdo_del_rxsa(struct macsec_context *ctx) { - return -EOPNOTSUPP; + const struct macsec_rx_sc *rx_sc = ctx->sa.rx_sa->sc; + struct aq_nic_s *nic = netdev_priv(ctx->netdev); + struct aq_macsec_cfg *cfg = nic->macsec_cfg; + int rxsc_idx; + int ret = 0; + + rxsc_idx = aq_get_rxsc_idx_from_rxsc(cfg, rx_sc); + if (rxsc_idx < 0) + return -EINVAL; + + if (ctx->prepare) + return 0; + + ret = aq_clear_rxsa(nic, &cfg->aq_rxsc[rxsc_idx], ctx->sa.assoc_num, + AQ_CLEAR_ALL); + + return ret; } static int apply_txsc_cfg(struct aq_nic_s *nic, const int txsc_idx) @@ -590,10 +969,40 @@ static int apply_txsc_cfg(struct aq_nic_s *nic, const int txsc_idx) return ret; } +static int apply_rxsc_cfg(struct aq_nic_s *nic, const int 
rxsc_idx) +{ + struct aq_macsec_rxsc *aq_rxsc = &nic->macsec_cfg->aq_rxsc[rxsc_idx]; + const struct macsec_secy *secy = aq_rxsc->sw_secy; + struct macsec_rx_sa *rx_sa; + int ret = 0; + int i; + + if (!netif_running(secy->netdev)) + return ret; + + ret = aq_set_rxsc(nic, rxsc_idx); + if (ret) + return ret; + + for (i = 0; i < MACSEC_NUM_AN; i++) { + rx_sa = rcu_dereference_bh(aq_rxsc->sw_rxsc->sa[i]); + if (rx_sa) { + ret = aq_update_rxsa(nic, aq_rxsc->hw_sc_idx, secy, + rx_sa, aq_rxsc->rx_sa_key[i], i); + if (ret) + return ret; + } + } + + return ret; +} + static int aq_clear_secy(struct aq_nic_s *nic, const struct macsec_secy *secy, enum aq_clear_type clear_type) { + struct macsec_rx_sc *rx_sc; int txsc_idx; + int rxsc_idx; int ret = 0; txsc_idx = aq_get_txsc_idx_from_secy(nic->macsec_cfg, secy); @@ -603,19 +1012,43 @@ static int aq_clear_secy(struct aq_nic_s *nic, const struct macsec_secy *secy, return ret; } + for (rx_sc = rcu_dereference_bh(secy->rx_sc); rx_sc; + rx_sc = rcu_dereference_bh(rx_sc->next)) { + rxsc_idx = aq_get_rxsc_idx_from_rxsc(nic->macsec_cfg, rx_sc); + if (rxsc_idx < 0) + continue; + + ret = aq_clear_rxsc(nic, rxsc_idx, clear_type); + if (ret) + return ret; + } + return ret; } static int aq_apply_secy_cfg(struct aq_nic_s *nic, const struct macsec_secy *secy) { + struct macsec_rx_sc *rx_sc; int txsc_idx; + int rxsc_idx; int ret = 0; txsc_idx = aq_get_txsc_idx_from_secy(nic->macsec_cfg, secy); if (txsc_idx >= 0) apply_txsc_cfg(nic, txsc_idx); + for (rx_sc = rcu_dereference_bh(secy->rx_sc); rx_sc && rx_sc->active; + rx_sc = rcu_dereference_bh(rx_sc->next)) { + rxsc_idx = aq_get_rxsc_idx_from_rxsc(nic->macsec_cfg, rx_sc); + if (unlikely(rxsc_idx < 0)) + continue; + + ret = apply_rxsc_cfg(nic, rxsc_idx); + if (ret) + return ret; + } + return ret; } @@ -632,6 +1065,14 @@ static int aq_apply_macsec_cfg(struct aq_nic_s *nic) } } + for (i = 0; i < AQ_MACSEC_MAX_SC; i++) { + if (nic->macsec_cfg->rxsc_idx_busy & BIT(i)) { + ret = apply_rxsc_cfg(nic, i); 
+ if (ret) + return ret; + } + } + return ret; } @@ -807,6 +1248,7 @@ int aq_macsec_enable(struct aq_nic_s *nic) /* Init Ethertype bypass filters */ for (index = 0; index < ARRAY_SIZE(ctl_ether_types); index++) { + struct aq_mss_ingress_prectlf_record rx_prectlf_rec; struct aq_mss_egress_ctlf_record tx_ctlf_rec; if (ctl_ether_types[index] == 0) @@ -820,6 +1262,15 @@ int aq_macsec_enable(struct aq_nic_s *nic) tbl_idx = NUMROWS_EGRESSCTLFRECORD - num_ctl_ether_types - 1; aq_mss_set_egress_ctlf_record(hw, &tx_ctlf_rec, tbl_idx); + memset(&rx_prectlf_rec, 0, sizeof(rx_prectlf_rec)); + rx_prectlf_rec.eth_type = ctl_ether_types[index]; + rx_prectlf_rec.match_type = 4; /* Match eth_type only */ + rx_prectlf_rec.match_mask = 0xf; /* match for eth_type */ + rx_prectlf_rec.action = 0; /* Bypass MACSEC modules */ + tbl_idx = + NUMROWS_INGRESSPRECTLFRECORD - num_ctl_ether_types - 1; + aq_mss_set_ingress_prectlf_record(hw, &rx_prectlf_rec, tbl_idx); + num_ctl_ether_types++; } diff --git a/drivers/net/ethernet/aquantia/atlantic/aq_macsec.h b/drivers/net/ethernet/aquantia/atlantic/aq_macsec.h index 1378e0fb9b1d..ca4640a44848 100644 --- a/drivers/net/ethernet/aquantia/atlantic/aq_macsec.h +++ b/drivers/net/ethernet/aquantia/atlantic/aq_macsec.h @@ -36,6 +36,11 @@ struct aq_macsec_txsc { }; struct aq_macsec_rxsc { + u32 hw_sc_idx; + unsigned long rx_sa_idx_busy; + const struct macsec_secy *sw_secy; + const struct macsec_rx_sc *sw_rxsc; + u8 rx_sa_key[MACSEC_NUM_AN][MACSEC_KEYID_LEN]; }; struct aq_macsec_cfg { From patchwork Tue Mar 10 15:03:42 2020 X-Patchwork-Submitter: Igor Russkikh X-Patchwork-Id: 222758 From: Igor Russkikh To: CC: Mark Starovoytov , Sabrina Dubroca , Igor Russkikh , Dmitry Bogdanov Subject: [RFC v2 16/16] net: atlantic: MACSec offload statistics implementation Date: Tue, 10 Mar 2020 18:03:42 +0300 Message-ID: <20200310150342.1701-17-irusskikh@marvell.com> In-Reply-To: <20200310150342.1701-1-irusskikh@marvell.com> References: <20200310150342.1701-1-irusskikh@marvell.com> From: Dmitry Bogdanov This patch adds support for MACSec statistics on Atlantic network cards. 
Signed-off-by: Dmitry Bogdanov Signed-off-by: Mark Starovoytov Signed-off-by: Igor Russkikh --- .../ethernet/aquantia/atlantic/aq_ethtool.c | 160 ++++- .../ethernet/aquantia/atlantic/aq_macsec.c | 545 ++++++++++++++++++ .../ethernet/aquantia/atlantic/aq_macsec.h | 73 +++ .../net/ethernet/aquantia/atlantic/aq_nic.c | 5 +- .../net/ethernet/aquantia/atlantic/aq_nic.h | 2 +- 5 files changed, 766 insertions(+), 19 deletions(-) diff --git a/drivers/net/ethernet/aquantia/atlantic/aq_ethtool.c b/drivers/net/ethernet/aquantia/atlantic/aq_ethtool.c index 0bdaa0d785b7..652ecb093117 100644 --- a/drivers/net/ethernet/aquantia/atlantic/aq_ethtool.c +++ b/drivers/net/ethernet/aquantia/atlantic/aq_ethtool.c @@ -11,6 +11,7 @@ #include "aq_vec.h" #include "aq_ptp.h" #include "aq_filters.h" +#include "aq_macsec.h" #include @@ -96,6 +97,62 @@ static const char aq_ethtool_queue_stat_names[][ETH_GSTRING_LEN] = { "Queue[%d] InErrors", }; +#if IS_ENABLED(CONFIG_MACSEC) +static const char aq_macsec_stat_names[][ETH_GSTRING_LEN] = { + "MACSec InCtlPackets", + "MACSec InTaggedMissPackets", + "MACSec InUntaggedMissPackets", + "MACSec InNotagPackets", + "MACSec InUntaggedPackets", + "MACSec InBadTagPackets", + "MACSec InNoSciPackets", + "MACSec InUnknownSciPackets", + "MACSec InCtrlPortPassPackets", + "MACSec InUnctrlPortPassPackets", + "MACSec InCtrlPortFailPackets", + "MACSec InUnctrlPortFailPackets", + "MACSec InTooLongPackets", + "MACSec InIgpocCtlPackets", + "MACSec InEccErrorPackets", + "MACSec InUnctrlHitDropRedir", + "MACSec OutCtlPackets", + "MACSec OutUnknownSaPackets", + "MACSec OutUntaggedPackets", + "MACSec OutTooLong", + "MACSec OutEccErrorPackets", + "MACSec OutUnctrlHitDropRedir", +}; + +static const char aq_macsec_txsc_stat_names[][ETH_GSTRING_LEN + 1] = { + "MACSecTXSC%d ProtectedPkts\0", + "MACSecTXSC%d EncryptedPkts\0", + "MACSecTXSC%d ProtectedOctets\0", + "MACSecTXSC%d EncryptedOctets\0", +}; + +static const char aq_macsec_txsa_stat_names[][ETH_GSTRING_LEN + 1] = { + 
"MACSecTXSC%dSA%d HitDropRedirect\0", + "MACSecTXSC%dSA%d Protected2Pkts\0", + "MACSecTXSC%dSA%d ProtectedPkts\0", + "MACSecTXSC%dSA%d EncryptedPkts\0", +}; + +static const char aq_macsec_rxsa_stat_names[][ETH_GSTRING_LEN + 1] = { + "MACSecRXSC%dSA%d UntaggedHitPkts\0", + "MACSecRXSC%dSA%d CtrlHitDrpRedir\0", + "MACSecRXSC%dSA%d NotUsingSa\0", + "MACSecRXSC%dSA%d UnusedSa\0", + "MACSecRXSC%dSA%d NotValidPkts\0", + "MACSecRXSC%dSA%d InvalidPkts\0", + "MACSecRXSC%dSA%d OkPkts\0", + "MACSecRXSC%dSA%d LatePkts\0", + "MACSecRXSC%dSA%d DelayedPkts\0", + "MACSecRXSC%dSA%d UncheckedPkts\0", + "MACSecRXSC%dSA%d ValidatedOctets\0", + "MACSecRXSC%dSA%d DecryptedOctets\0", +}; +#endif + static const char aq_ethtool_priv_flag_names[][ETH_GSTRING_LEN] = { "DMASystemLoopback", "PKTSystemLoopback", @@ -104,18 +161,38 @@ static const char aq_ethtool_priv_flag_names[][ETH_GSTRING_LEN] = { "PHYExternalLoopback", }; +static u32 aq_ethtool_n_stats(struct net_device *ndev) +{ + struct aq_nic_s *nic = netdev_priv(ndev); + struct aq_nic_cfg_s *cfg = aq_nic_get_cfg(nic); + u32 n_stats = ARRAY_SIZE(aq_ethtool_stat_names) + + ARRAY_SIZE(aq_ethtool_queue_stat_names) * cfg->vecs; + +#if IS_ENABLED(CONFIG_MACSEC) + if (nic->macsec_cfg) { + n_stats += ARRAY_SIZE(aq_macsec_stat_names) + + ARRAY_SIZE(aq_macsec_txsc_stat_names) * + aq_macsec_tx_sc_cnt(nic) + + ARRAY_SIZE(aq_macsec_txsa_stat_names) * + aq_macsec_tx_sa_cnt(nic) + + ARRAY_SIZE(aq_macsec_rxsa_stat_names) * + aq_macsec_rx_sa_cnt(nic); + } +#endif + + return n_stats; +} + static void aq_ethtool_stats(struct net_device *ndev, struct ethtool_stats *stats, u64 *data) { struct aq_nic_s *aq_nic = netdev_priv(ndev); - struct aq_nic_cfg_s *cfg; - cfg = aq_nic_get_cfg(aq_nic); - - memset(data, 0, (ARRAY_SIZE(aq_ethtool_stat_names) + - ARRAY_SIZE(aq_ethtool_queue_stat_names) * - cfg->vecs) * sizeof(u64)); - aq_nic_get_stats(aq_nic, data); + memset(data, 0, aq_ethtool_n_stats(ndev) * sizeof(u64)); + data = aq_nic_get_stats(aq_nic, data); +#if 
IS_ENABLED(CONFIG_MACSEC) + data = aq_macsec_get_stats(aq_nic, data); +#endif } static void aq_ethtool_get_drvinfo(struct net_device *ndev, @@ -123,11 +200,9 @@ static void aq_ethtool_get_drvinfo(struct net_device *ndev, { struct pci_dev *pdev = to_pci_dev(ndev->dev.parent); struct aq_nic_s *aq_nic = netdev_priv(ndev); - struct aq_nic_cfg_s *cfg; u32 firmware_version; u32 regs_count; - cfg = aq_nic_get_cfg(aq_nic); firmware_version = aq_nic_get_fw_version(aq_nic); regs_count = aq_nic_get_regs_count(aq_nic); @@ -139,8 +214,7 @@ static void aq_ethtool_get_drvinfo(struct net_device *ndev, strlcpy(drvinfo->bus_info, pdev ? pci_name(pdev) : "", sizeof(drvinfo->bus_info)); - drvinfo->n_stats = ARRAY_SIZE(aq_ethtool_stat_names) + - cfg->vecs * ARRAY_SIZE(aq_ethtool_queue_stat_names); + drvinfo->n_stats = aq_ethtool_n_stats(ndev); drvinfo->testinfo_len = 0; drvinfo->regdump_len = regs_count; drvinfo->eedump_len = 0; @@ -153,6 +227,9 @@ static void aq_ethtool_get_strings(struct net_device *ndev, struct aq_nic_cfg_s *cfg; u8 *p = data; int i, si; +#if IS_ENABLED(CONFIG_MACSEC) + int sa; +#endif cfg = aq_nic_get_cfg(aq_nic); @@ -170,6 +247,60 @@ static void aq_ethtool_get_strings(struct net_device *ndev, p += ETH_GSTRING_LEN; } } +#if IS_ENABLED(CONFIG_MACSEC) + if (!aq_nic->macsec_cfg) + break; + + memcpy(p, aq_macsec_stat_names, sizeof(aq_macsec_stat_names)); + p = p + sizeof(aq_macsec_stat_names); + for (i = 0; i < AQ_MACSEC_MAX_SC; i++) { + struct aq_macsec_txsc *aq_txsc; + + if (!(test_bit(i, &aq_nic->macsec_cfg->txsc_idx_busy))) + continue; + + for (si = 0; + si < ARRAY_SIZE(aq_macsec_txsc_stat_names); + si++) { + snprintf(p, ETH_GSTRING_LEN, + aq_macsec_txsc_stat_names[si], i); + p += ETH_GSTRING_LEN; + } + aq_txsc = &aq_nic->macsec_cfg->aq_txsc[i]; + for (sa = 0; sa < MACSEC_NUM_AN; sa++) { + if (!(test_bit(sa, &aq_txsc->tx_sa_idx_busy))) + continue; + for (si = 0; + si < ARRAY_SIZE(aq_macsec_txsa_stat_names); + si++) { + snprintf(p, ETH_GSTRING_LEN, + 
aq_macsec_txsa_stat_names[si], + i, sa); + p += ETH_GSTRING_LEN; + } + } + } + for (i = 0; i < AQ_MACSEC_MAX_SC; i++) { + struct aq_macsec_rxsc *aq_rxsc; + + if (!(test_bit(i, &aq_nic->macsec_cfg->rxsc_idx_busy))) + continue; + + aq_rxsc = &aq_nic->macsec_cfg->aq_rxsc[i]; + for (sa = 0; sa < MACSEC_NUM_AN; sa++) { + if (!(test_bit(sa, &aq_rxsc->rx_sa_idx_busy))) + continue; + for (si = 0; + si < ARRAY_SIZE(aq_macsec_rxsa_stat_names); + si++) { + snprintf(p, ETH_GSTRING_LEN, + aq_macsec_rxsa_stat_names[si], + i, sa); + p += ETH_GSTRING_LEN; + } + } + } +#endif break; case ETH_SS_PRIV_FLAGS: memcpy(p, aq_ethtool_priv_flag_names, @@ -209,16 +340,11 @@ static int aq_ethtool_set_phys_id(struct net_device *ndev, static int aq_ethtool_get_sset_count(struct net_device *ndev, int stringset) { - struct aq_nic_s *aq_nic = netdev_priv(ndev); - struct aq_nic_cfg_s *cfg; int ret = 0; - cfg = aq_nic_get_cfg(aq_nic); - switch (stringset) { case ETH_SS_STATS: - ret = ARRAY_SIZE(aq_ethtool_stat_names) + - cfg->vecs * ARRAY_SIZE(aq_ethtool_queue_stat_names); + ret = aq_ethtool_n_stats(ndev); break; case ETH_SS_PRIV_FLAGS: ret = ARRAY_SIZE(aq_ethtool_priv_flag_names); diff --git a/drivers/net/ethernet/aquantia/atlantic/aq_macsec.c b/drivers/net/ethernet/aquantia/atlantic/aq_macsec.c index 4401202f9c3f..496502f3b2f3 100644 --- a/drivers/net/ethernet/aquantia/atlantic/aq_macsec.c +++ b/drivers/net/ethernet/aquantia/atlantic/aq_macsec.c @@ -132,6 +132,166 @@ static void aq_rotate_keys(u32 (*key)[8], const int key_len) } } +#define STATS_2x32_TO_64(stat_field) \ + (((u64)stat_field[1] << 32) | stat_field[0]) + +static int aq_get_macsec_common_stats(struct aq_hw_s *hw, + struct aq_macsec_common_stats *stats) +{ + struct aq_mss_ingress_common_counters ingress_counters; + struct aq_mss_egress_common_counters egress_counters; + int ret; + + /* MACSEC counters */ + ret = aq_mss_get_ingress_common_counters(hw, &ingress_counters); + if (unlikely(ret)) + return ret; + + stats->in.ctl_pkts = 
+		STATS_2x32_TO_64(ingress_counters.ctl_pkts);
+	stats->in.tagged_miss_pkts =
+		STATS_2x32_TO_64(ingress_counters.tagged_miss_pkts);
+	stats->in.untagged_miss_pkts =
+		STATS_2x32_TO_64(ingress_counters.untagged_miss_pkts);
+	stats->in.notag_pkts = STATS_2x32_TO_64(ingress_counters.notag_pkts);
+	stats->in.untagged_pkts =
+		STATS_2x32_TO_64(ingress_counters.untagged_pkts);
+	stats->in.bad_tag_pkts =
+		STATS_2x32_TO_64(ingress_counters.bad_tag_pkts);
+	stats->in.no_sci_pkts = STATS_2x32_TO_64(ingress_counters.no_sci_pkts);
+	stats->in.unknown_sci_pkts =
+		STATS_2x32_TO_64(ingress_counters.unknown_sci_pkts);
+	stats->in.ctrl_prt_pass_pkts =
+		STATS_2x32_TO_64(ingress_counters.ctrl_prt_pass_pkts);
+	stats->in.unctrl_prt_pass_pkts =
+		STATS_2x32_TO_64(ingress_counters.unctrl_prt_pass_pkts);
+	stats->in.ctrl_prt_fail_pkts =
+		STATS_2x32_TO_64(ingress_counters.ctrl_prt_fail_pkts);
+	stats->in.unctrl_prt_fail_pkts =
+		STATS_2x32_TO_64(ingress_counters.unctrl_prt_fail_pkts);
+	stats->in.too_long_pkts =
+		STATS_2x32_TO_64(ingress_counters.too_long_pkts);
+	stats->in.igpoc_ctl_pkts =
+		STATS_2x32_TO_64(ingress_counters.igpoc_ctl_pkts);
+	stats->in.ecc_error_pkts =
+		STATS_2x32_TO_64(ingress_counters.ecc_error_pkts);
+	stats->in.unctrl_hit_drop_redir =
+		STATS_2x32_TO_64(ingress_counters.unctrl_hit_drop_redir);
+
+	ret = aq_mss_get_egress_common_counters(hw, &egress_counters);
+	if (unlikely(ret))
+		return ret;
+	stats->out.ctl_pkts = STATS_2x32_TO_64(egress_counters.ctl_pkt);
+	stats->out.unknown_sa_pkts =
+		STATS_2x32_TO_64(egress_counters.unknown_sa_pkts);
+	stats->out.untagged_pkts =
+		STATS_2x32_TO_64(egress_counters.untagged_pkts);
+	stats->out.too_long = STATS_2x32_TO_64(egress_counters.too_long);
+	stats->out.ecc_error_pkts =
+		STATS_2x32_TO_64(egress_counters.ecc_error_pkts);
+	stats->out.unctrl_hit_drop_redir =
+		STATS_2x32_TO_64(egress_counters.unctrl_hit_drop_redir);
+
+	return 0;
+}
+
+static int aq_get_rxsa_stats(struct aq_hw_s *hw, const int sa_idx,
+			     struct
aq_macsec_rx_sa_stats *stats)
+{
+	struct aq_mss_ingress_sa_counters i_sa_counters;
+	int ret;
+
+	ret = aq_mss_get_ingress_sa_counters(hw, &i_sa_counters, sa_idx);
+	if (unlikely(ret))
+		return ret;
+
+	stats->untagged_hit_pkts =
+		STATS_2x32_TO_64(i_sa_counters.untagged_hit_pkts);
+	stats->ctrl_hit_drop_redir_pkts =
+		STATS_2x32_TO_64(i_sa_counters.ctrl_hit_drop_redir_pkts);
+	stats->not_using_sa = STATS_2x32_TO_64(i_sa_counters.not_using_sa);
+	stats->unused_sa = STATS_2x32_TO_64(i_sa_counters.unused_sa);
+	stats->not_valid_pkts = STATS_2x32_TO_64(i_sa_counters.not_valid_pkts);
+	stats->invalid_pkts = STATS_2x32_TO_64(i_sa_counters.invalid_pkts);
+	stats->ok_pkts = STATS_2x32_TO_64(i_sa_counters.ok_pkts);
+	stats->late_pkts = STATS_2x32_TO_64(i_sa_counters.late_pkts);
+	stats->delayed_pkts = STATS_2x32_TO_64(i_sa_counters.delayed_pkts);
+	stats->unchecked_pkts = STATS_2x32_TO_64(i_sa_counters.unchecked_pkts);
+	stats->validated_octets =
+		STATS_2x32_TO_64(i_sa_counters.validated_octets);
+	stats->decrypted_octets =
+		STATS_2x32_TO_64(i_sa_counters.decrypted_octets);
+
+	return 0;
+}
+
+static int aq_get_txsa_stats(struct aq_hw_s *hw, const int sa_idx,
+			     struct aq_macsec_tx_sa_stats *stats)
+{
+	struct aq_mss_egress_sa_counters e_sa_counters;
+	int ret;
+
+	ret = aq_mss_get_egress_sa_counters(hw, &e_sa_counters, sa_idx);
+	if (unlikely(ret))
+		return ret;
+
+	stats->sa_hit_drop_redirect =
+		STATS_2x32_TO_64(e_sa_counters.sa_hit_drop_redirect);
+	stats->sa_protected2_pkts =
+		STATS_2x32_TO_64(e_sa_counters.sa_protected2_pkts);
+	stats->sa_protected_pkts =
+		STATS_2x32_TO_64(e_sa_counters.sa_protected_pkts);
+	stats->sa_encrypted_pkts =
+		STATS_2x32_TO_64(e_sa_counters.sa_encrypted_pkts);
+
+	return 0;
+}
+
+static int aq_get_txsa_next_pn(struct aq_hw_s *hw, const int sa_idx, u32 *pn)
+{
+	struct aq_mss_egress_sa_record sa_rec;
+	int ret;
+
+	ret = aq_mss_get_egress_sa_record(hw, &sa_rec, sa_idx);
+	if (likely(!ret))
+		*pn = sa_rec.next_pn;
+
+	return ret;
+}
+
+static int aq_get_rxsa_next_pn(struct aq_hw_s *hw, const int sa_idx, u32 *pn)
+{
+	struct aq_mss_ingress_sa_record sa_rec;
+	int ret;
+
+	ret = aq_mss_get_ingress_sa_record(hw, &sa_rec, sa_idx);
+	if (likely(!ret))
+		*pn = (!sa_rec.sat_nextpn) ? sa_rec.next_pn : 0;
+
+	return ret;
+}
+
+static int aq_get_txsc_stats(struct aq_hw_s *hw, const int sc_idx,
+			     struct aq_macsec_tx_sc_stats *stats)
+{
+	struct aq_mss_egress_sc_counters e_sc_counters;
+	int ret;
+
+	ret = aq_mss_get_egress_sc_counters(hw, &e_sc_counters, sc_idx);
+	if (unlikely(ret))
+		return ret;
+
+	stats->sc_protected_pkts =
+		STATS_2x32_TO_64(e_sc_counters.sc_protected_pkts);
+	stats->sc_encrypted_pkts =
+		STATS_2x32_TO_64(e_sc_counters.sc_encrypted_pkts);
+	stats->sc_protected_octets =
+		STATS_2x32_TO_64(e_sc_counters.sc_protected_octets);
+	stats->sc_encrypted_octets =
+		STATS_2x32_TO_64(e_sc_counters.sc_encrypted_octets);
+
+	return 0;
+}
+
 static int aq_mdo_dev_open(struct macsec_context *ctx)
 {
 	struct aq_nic_s *nic = netdev_priv(ctx->netdev);
@@ -941,6 +1101,191 @@ static int aq_mdo_del_rxsa(struct macsec_context *ctx)
 	return ret;
 }
 
+static int aq_mdo_get_dev_stats(struct macsec_context *ctx)
+{
+	struct aq_nic_s *nic = netdev_priv(ctx->netdev);
+	struct aq_macsec_common_stats *stats = &nic->macsec_cfg->stats;
+	struct aq_hw_s *hw = nic->aq_hw;
+
+	if (ctx->prepare)
+		return 0;
+
+	aq_get_macsec_common_stats(hw, stats);
+
+	ctx->stats.dev_stats->OutPktsUntagged = stats->out.untagged_pkts;
+	ctx->stats.dev_stats->InPktsUntagged = stats->in.untagged_pkts;
+	ctx->stats.dev_stats->OutPktsTooLong = stats->out.too_long;
+	ctx->stats.dev_stats->InPktsNoTag = stats->in.notag_pkts;
+	ctx->stats.dev_stats->InPktsBadTag = stats->in.bad_tag_pkts;
+	ctx->stats.dev_stats->InPktsUnknownSCI = stats->in.unknown_sci_pkts;
+	ctx->stats.dev_stats->InPktsNoSCI = stats->in.no_sci_pkts;
+	ctx->stats.dev_stats->InPktsOverrun = 0;
+
+	return 0;
+}
+
+static int aq_mdo_get_tx_sc_stats(struct macsec_context *ctx)
+{
+	struct aq_nic_s *nic = netdev_priv(ctx->netdev);
+	struct aq_macsec_tx_sc_stats *stats;
+	struct aq_hw_s *hw = nic->aq_hw;
+	struct aq_macsec_txsc *aq_txsc;
+	int txsc_idx;
+
+	txsc_idx = aq_get_txsc_idx_from_secy(nic->macsec_cfg, ctx->secy);
+	if (txsc_idx < 0)
+		return -ENOENT;
+
+	if (ctx->prepare)
+		return 0;
+
+	aq_txsc = &nic->macsec_cfg->aq_txsc[txsc_idx];
+	stats = &aq_txsc->stats;
+	aq_get_txsc_stats(hw, aq_txsc->hw_sc_idx, stats);
+
+	ctx->stats.tx_sc_stats->OutPktsProtected = stats->sc_protected_pkts;
+	ctx->stats.tx_sc_stats->OutPktsEncrypted = stats->sc_encrypted_pkts;
+	ctx->stats.tx_sc_stats->OutOctetsProtected = stats->sc_protected_octets;
+	ctx->stats.tx_sc_stats->OutOctetsEncrypted = stats->sc_encrypted_octets;
+
+	return 0;
+}
+
+static int aq_mdo_get_tx_sa_stats(struct macsec_context *ctx)
+{
+	struct aq_nic_s *nic = netdev_priv(ctx->netdev);
+	struct aq_macsec_cfg *cfg = nic->macsec_cfg;
+	struct aq_macsec_tx_sa_stats *stats;
+	struct aq_hw_s *hw = nic->aq_hw;
+	const struct macsec_secy *secy;
+	struct aq_macsec_txsc *aq_txsc;
+	struct macsec_tx_sa *tx_sa;
+	unsigned int sa_idx;
+	int txsc_idx;
+	u32 next_pn;
+	int ret;
+
+	txsc_idx = aq_get_txsc_idx_from_secy(cfg, ctx->secy);
+	if (txsc_idx < 0)
+		return -EINVAL;
+
+	if (ctx->prepare)
+		return 0;
+
+	aq_txsc = &cfg->aq_txsc[txsc_idx];
+	sa_idx = aq_txsc->hw_sc_idx | ctx->sa.assoc_num;
+	stats = &aq_txsc->tx_sa_stats[ctx->sa.assoc_num];
+	ret = aq_get_txsa_stats(hw, sa_idx, stats);
+	if (ret)
+		return ret;
+
+	ctx->stats.tx_sa_stats->OutPktsProtected = stats->sa_protected_pkts;
+	ctx->stats.tx_sa_stats->OutPktsEncrypted = stats->sa_encrypted_pkts;
+
+	secy = aq_txsc->sw_secy;
+	tx_sa = rcu_dereference_bh(secy->tx_sc.sa[ctx->sa.assoc_num]);
+	ret = aq_get_txsa_next_pn(hw, sa_idx, &next_pn);
+	if (ret == 0) {
+		spin_lock_bh(&tx_sa->lock);
+		tx_sa->next_pn = next_pn;
+		spin_unlock_bh(&tx_sa->lock);
+	}
+
+	return ret;
+}
+
+static int aq_mdo_get_rx_sc_stats(struct macsec_context *ctx)
+{
+	struct
aq_nic_s *nic = netdev_priv(ctx->netdev);
+	struct aq_macsec_cfg *cfg = nic->macsec_cfg;
+	struct aq_macsec_rx_sa_stats *stats;
+	struct aq_hw_s *hw = nic->aq_hw;
+	struct aq_macsec_rxsc *aq_rxsc;
+	unsigned int sa_idx;
+	int rxsc_idx;
+	int ret = 0;
+	int i;
+
+	rxsc_idx = aq_get_rxsc_idx_from_rxsc(cfg, ctx->rx_sc);
+	if (rxsc_idx < 0)
+		return -ENOENT;
+
+	if (ctx->prepare)
+		return 0;
+
+	aq_rxsc = &cfg->aq_rxsc[rxsc_idx];
+	for (i = 0; i < MACSEC_NUM_AN; i++) {
+		if (!test_bit(i, &aq_rxsc->rx_sa_idx_busy))
+			continue;
+
+		stats = &aq_rxsc->rx_sa_stats[i];
+		sa_idx = aq_rxsc->hw_sc_idx | i;
+		ret = aq_get_rxsa_stats(hw, sa_idx, stats);
+		if (ret)
+			break;
+
+		ctx->stats.rx_sc_stats->InOctetsValidated +=
+			stats->validated_octets;
+		ctx->stats.rx_sc_stats->InOctetsDecrypted +=
+			stats->decrypted_octets;
+		ctx->stats.rx_sc_stats->InPktsUnchecked +=
+			stats->unchecked_pkts;
+		ctx->stats.rx_sc_stats->InPktsDelayed += stats->delayed_pkts;
+		ctx->stats.rx_sc_stats->InPktsOK += stats->ok_pkts;
+		ctx->stats.rx_sc_stats->InPktsInvalid += stats->invalid_pkts;
+		ctx->stats.rx_sc_stats->InPktsLate += stats->late_pkts;
+		ctx->stats.rx_sc_stats->InPktsNotValid += stats->not_valid_pkts;
+		ctx->stats.rx_sc_stats->InPktsNotUsingSA += stats->not_using_sa;
+		ctx->stats.rx_sc_stats->InPktsUnusedSA += stats->unused_sa;
+	}
+
+	return ret;
+}
+
+static int aq_mdo_get_rx_sa_stats(struct macsec_context *ctx)
+{
+	struct aq_nic_s *nic = netdev_priv(ctx->netdev);
+	struct aq_macsec_cfg *cfg = nic->macsec_cfg;
+	struct aq_macsec_rx_sa_stats *stats;
+	struct aq_hw_s *hw = nic->aq_hw;
+	struct aq_macsec_rxsc *aq_rxsc;
+	struct macsec_rx_sa *rx_sa;
+	unsigned int sa_idx;
+	int rxsc_idx;
+	u32 next_pn;
+	int ret;
+
+	rxsc_idx = aq_get_rxsc_idx_from_rxsc(cfg, ctx->rx_sc);
+	if (rxsc_idx < 0)
+		return -EINVAL;
+
+	if (ctx->prepare)
+		return 0;
+
+	aq_rxsc = &cfg->aq_rxsc[rxsc_idx];
+	stats = &aq_rxsc->rx_sa_stats[ctx->sa.assoc_num];
+	sa_idx = aq_rxsc->hw_sc_idx | ctx->sa.assoc_num;
+	ret =
aq_get_rxsa_stats(hw, sa_idx, stats);
+	if (ret)
+		return ret;
+
+	ctx->stats.rx_sa_stats->InPktsOK = stats->ok_pkts;
+	ctx->stats.rx_sa_stats->InPktsInvalid = stats->invalid_pkts;
+	ctx->stats.rx_sa_stats->InPktsNotValid = stats->not_valid_pkts;
+	ctx->stats.rx_sa_stats->InPktsNotUsingSA = stats->not_using_sa;
+	ctx->stats.rx_sa_stats->InPktsUnusedSA = stats->unused_sa;
+
+	rx_sa = rcu_dereference_bh(aq_rxsc->sw_rxsc->sa[ctx->sa.assoc_num]);
+	ret = aq_get_rxsa_next_pn(hw, sa_idx, &next_pn);
+	if (ret == 0) {
+		spin_lock_bh(&rx_sa->lock);
+		rx_sa->next_pn = next_pn;
+		spin_unlock_bh(&rx_sa->lock);
+	}
+
+	return ret;
+}
+
 static int apply_txsc_cfg(struct aq_nic_s *nic, const int txsc_idx)
 {
 	struct aq_macsec_txsc *aq_txsc = &nic->macsec_cfg->aq_txsc[txsc_idx];
@@ -1184,6 +1529,11 @@ const struct macsec_ops aq_macsec_ops = {
 	.mdo_add_txsa = aq_mdo_add_txsa,
 	.mdo_upd_txsa = aq_mdo_upd_txsa,
 	.mdo_del_txsa = aq_mdo_del_txsa,
+	.mdo_get_dev_stats = aq_mdo_get_dev_stats,
+	.mdo_get_tx_sc_stats = aq_mdo_get_tx_sc_stats,
+	.mdo_get_tx_sa_stats = aq_mdo_get_tx_sa_stats,
+	.mdo_get_rx_sc_stats = aq_mdo_get_rx_sc_stats,
+	.mdo_get_rx_sa_stats = aq_mdo_get_rx_sa_stats,
 };
 
 int aq_macsec_init(struct aq_nic_s *nic)
@@ -1293,3 +1643,198 @@ void aq_macsec_work(struct aq_nic_s *nic)
 	aq_check_txsa_expiration(nic);
 	rtnl_unlock();
 }
+
+int aq_macsec_rx_sa_cnt(struct aq_nic_s *nic)
+{
+	struct aq_macsec_cfg *cfg = nic->macsec_cfg;
+	int i, cnt = 0;
+
+	if (!cfg)
+		return 0;
+
+	for (i = 0; i < AQ_MACSEC_MAX_SC; i++) {
+		if (!test_bit(i, &cfg->rxsc_idx_busy))
+			continue;
+		cnt += hweight_long(cfg->aq_rxsc[i].rx_sa_idx_busy);
+	}
+
+	return cnt;
+}
+
+int aq_macsec_tx_sc_cnt(struct aq_nic_s *nic)
+{
+	if (!nic->macsec_cfg)
+		return 0;
+
+	return hweight_long(nic->macsec_cfg->txsc_idx_busy);
+}
+
+int aq_macsec_tx_sa_cnt(struct aq_nic_s *nic)
+{
+	struct aq_macsec_cfg *cfg = nic->macsec_cfg;
+	int i, cnt = 0;
+
+	if (!cfg)
+		return 0;
+
+	for (i = 0; i < AQ_MACSEC_MAX_SC; i++) {
+		if
 (!test_bit(i, &cfg->txsc_idx_busy))
+			continue;
+		cnt += hweight_long(cfg->aq_txsc[i].tx_sa_idx_busy);
+	}
+
+	return cnt;
+}
+
+static int aq_macsec_update_stats(struct aq_nic_s *nic)
+{
+	struct aq_macsec_cfg *cfg = nic->macsec_cfg;
+	struct aq_hw_s *hw = nic->aq_hw;
+	struct aq_macsec_txsc *aq_txsc;
+	struct aq_macsec_rxsc *aq_rxsc;
+	int i, sa_idx, assoc_num;
+	int ret = 0;
+
+	aq_get_macsec_common_stats(hw, &cfg->stats);
+
+	for (i = 0; i < AQ_MACSEC_MAX_SC; i++) {
+		if (!(cfg->txsc_idx_busy & BIT(i)))
+			continue;
+		aq_txsc = &cfg->aq_txsc[i];
+
+		ret = aq_get_txsc_stats(hw, aq_txsc->hw_sc_idx,
+					&aq_txsc->stats);
+		if (ret)
+			return ret;
+
+		for (assoc_num = 0; assoc_num < MACSEC_NUM_AN; assoc_num++) {
+			if (!test_bit(assoc_num, &aq_txsc->tx_sa_idx_busy))
+				continue;
+			sa_idx = aq_txsc->hw_sc_idx | assoc_num;
+			ret = aq_get_txsa_stats(hw, sa_idx,
+					&aq_txsc->tx_sa_stats[assoc_num]);
+			if (ret)
+				return ret;
+		}
+	}
+
+	for (i = 0; i < AQ_MACSEC_MAX_SC; i++) {
+		if (!(test_bit(i, &cfg->rxsc_idx_busy)))
+			continue;
+		aq_rxsc = &cfg->aq_rxsc[i];
+
+		for (assoc_num = 0; assoc_num < MACSEC_NUM_AN; assoc_num++) {
+			if (!test_bit(assoc_num, &aq_rxsc->rx_sa_idx_busy))
+				continue;
+			sa_idx = aq_rxsc->hw_sc_idx | assoc_num;
+
+			ret = aq_get_rxsa_stats(hw, sa_idx,
+					&aq_rxsc->rx_sa_stats[assoc_num]);
+			if (ret)
+				return ret;
+		}
+	}
+
+	return ret;
+}
+
+u64 *aq_macsec_get_stats(struct aq_nic_s *nic, u64 *data)
+{
+	struct aq_macsec_cfg *cfg = nic->macsec_cfg;
+	struct aq_macsec_common_stats *common_stats;
+	struct aq_macsec_tx_sc_stats *txsc_stats;
+	struct aq_macsec_tx_sa_stats *txsa_stats;
+	struct aq_macsec_rx_sa_stats *rxsa_stats;
+	struct aq_macsec_txsc *aq_txsc;
+	struct aq_macsec_rxsc *aq_rxsc;
+	unsigned int assoc_num;
+	unsigned int sc_num;
+	unsigned int i = 0U;
+
+	if (!cfg)
+		return data;
+
+	aq_macsec_update_stats(nic);
+
+	common_stats = &cfg->stats;
+	data[i] = common_stats->in.ctl_pkts;
+	data[++i] = common_stats->in.tagged_miss_pkts;
+	data[++i] =
common_stats->in.untagged_miss_pkts;
+	data[++i] = common_stats->in.notag_pkts;
+	data[++i] = common_stats->in.untagged_pkts;
+	data[++i] = common_stats->in.bad_tag_pkts;
+	data[++i] = common_stats->in.no_sci_pkts;
+	data[++i] = common_stats->in.unknown_sci_pkts;
+	data[++i] = common_stats->in.ctrl_prt_pass_pkts;
+	data[++i] = common_stats->in.unctrl_prt_pass_pkts;
+	data[++i] = common_stats->in.ctrl_prt_fail_pkts;
+	data[++i] = common_stats->in.unctrl_prt_fail_pkts;
+	data[++i] = common_stats->in.too_long_pkts;
+	data[++i] = common_stats->in.igpoc_ctl_pkts;
+	data[++i] = common_stats->in.ecc_error_pkts;
+	data[++i] = common_stats->in.unctrl_hit_drop_redir;
+	data[++i] = common_stats->out.ctl_pkts;
+	data[++i] = common_stats->out.unknown_sa_pkts;
+	data[++i] = common_stats->out.untagged_pkts;
+	data[++i] = common_stats->out.too_long;
+	data[++i] = common_stats->out.ecc_error_pkts;
+	data[++i] = common_stats->out.unctrl_hit_drop_redir;
+
+	for (sc_num = 0; sc_num < AQ_MACSEC_MAX_SC; sc_num++) {
+		if (!(test_bit(sc_num, &cfg->txsc_idx_busy)))
+			continue;
+
+		aq_txsc = &cfg->aq_txsc[sc_num];
+		txsc_stats = &aq_txsc->stats;
+
+		data[++i] = txsc_stats->sc_protected_pkts;
+		data[++i] = txsc_stats->sc_encrypted_pkts;
+		data[++i] = txsc_stats->sc_protected_octets;
+		data[++i] = txsc_stats->sc_encrypted_octets;
+
+		for (assoc_num = 0; assoc_num < MACSEC_NUM_AN; assoc_num++) {
+			if (!test_bit(assoc_num, &aq_txsc->tx_sa_idx_busy))
+				continue;
+
+			txsa_stats = &aq_txsc->tx_sa_stats[assoc_num];
+
+			data[++i] = txsa_stats->sa_hit_drop_redirect;
+			data[++i] = txsa_stats->sa_protected2_pkts;
+			data[++i] = txsa_stats->sa_protected_pkts;
+			data[++i] = txsa_stats->sa_encrypted_pkts;
+		}
+	}
+
+	for (sc_num = 0; sc_num < AQ_MACSEC_MAX_SC; sc_num++) {
+		if (!(test_bit(sc_num, &cfg->rxsc_idx_busy)))
+			continue;
+
+		aq_rxsc = &cfg->aq_rxsc[sc_num];
+
+		for (assoc_num = 0; assoc_num < MACSEC_NUM_AN; assoc_num++) {
+			if (!test_bit(assoc_num, &aq_rxsc->rx_sa_idx_busy))
+				continue;
+
+			rxsa_stats = &aq_rxsc->rx_sa_stats[assoc_num];
+
+			data[++i] = rxsa_stats->untagged_hit_pkts;
+			data[++i] = rxsa_stats->ctrl_hit_drop_redir_pkts;
+			data[++i] = rxsa_stats->not_using_sa;
+			data[++i] = rxsa_stats->unused_sa;
+			data[++i] = rxsa_stats->not_valid_pkts;
+			data[++i] = rxsa_stats->invalid_pkts;
+			data[++i] = rxsa_stats->ok_pkts;
+			data[++i] = rxsa_stats->late_pkts;
+			data[++i] = rxsa_stats->delayed_pkts;
+			data[++i] = rxsa_stats->unchecked_pkts;
+			data[++i] = rxsa_stats->validated_octets;
+			data[++i] = rxsa_stats->decrypted_octets;
+		}
+	}
+
+	i++;
+
+	data += i;
+
+	return data;
+}
diff --git a/drivers/net/ethernet/aquantia/atlantic/aq_macsec.h b/drivers/net/ethernet/aquantia/atlantic/aq_macsec.h
index ca4640a44848..1996de766675 100644
--- a/drivers/net/ethernet/aquantia/atlantic/aq_macsec.h
+++ b/drivers/net/ethernet/aquantia/atlantic/aq_macsec.h
@@ -28,11 +28,77 @@ enum aq_macsec_sc_sa {
 	aq_macsec_sa_sc_1sa_32sc,
 };
 
+struct aq_macsec_common_stats {
+	/* Ingress Common Counters */
+	struct {
+		u64 ctl_pkts;
+		u64 tagged_miss_pkts;
+		u64 untagged_miss_pkts;
+		u64 notag_pkts;
+		u64 untagged_pkts;
+		u64 bad_tag_pkts;
+		u64 no_sci_pkts;
+		u64 unknown_sci_pkts;
+		u64 ctrl_prt_pass_pkts;
+		u64 unctrl_prt_pass_pkts;
+		u64 ctrl_prt_fail_pkts;
+		u64 unctrl_prt_fail_pkts;
+		u64 too_long_pkts;
+		u64 igpoc_ctl_pkts;
+		u64 ecc_error_pkts;
+		u64 unctrl_hit_drop_redir;
+	} in;
+
+	/* Egress Common Counters */
+	struct {
+		u64 ctl_pkts;
+		u64 unknown_sa_pkts;
+		u64 untagged_pkts;
+		u64 too_long;
+		u64 ecc_error_pkts;
+		u64 unctrl_hit_drop_redir;
+	} out;
+};
+
+/* Ingress SA Counters */
+struct aq_macsec_rx_sa_stats {
+	u64 untagged_hit_pkts;
+	u64 ctrl_hit_drop_redir_pkts;
+	u64 not_using_sa;
+	u64 unused_sa;
+	u64 not_valid_pkts;
+	u64 invalid_pkts;
+	u64 ok_pkts;
+	u64 late_pkts;
+	u64 delayed_pkts;
+	u64 unchecked_pkts;
+	u64 validated_octets;
+	u64 decrypted_octets;
+};
+
+/* Egress SA Counters */
+struct aq_macsec_tx_sa_stats {
+	u64 sa_hit_drop_redirect;
+	u64
 sa_protected2_pkts;
+	u64 sa_protected_pkts;
+	u64 sa_encrypted_pkts;
+};
+
+/* Egress SC Counters */
+struct aq_macsec_tx_sc_stats {
+	u64 sc_protected_pkts;
+	u64 sc_encrypted_pkts;
+	u64 sc_protected_octets;
+	u64 sc_encrypted_octets;
+};
+
 struct aq_macsec_txsc {
 	u32 hw_sc_idx;
 	unsigned long tx_sa_idx_busy;
 	const struct macsec_secy *sw_secy;
 	u8 tx_sa_key[MACSEC_NUM_AN][MACSEC_KEYID_LEN];
+	struct aq_macsec_tx_sc_stats stats;
+	struct aq_macsec_tx_sa_stats tx_sa_stats[MACSEC_NUM_AN];
 };
 
 struct aq_macsec_rxsc {
@@ -41,6 +107,7 @@ struct aq_macsec_rxsc {
 	const struct macsec_secy *sw_secy;
 	const struct macsec_rx_sc *sw_rxsc;
 	u8 rx_sa_key[MACSEC_NUM_AN][MACSEC_KEYID_LEN];
+	struct aq_macsec_rx_sa_stats rx_sa_stats[MACSEC_NUM_AN];
 };
 
 struct aq_macsec_cfg {
@@ -51,6 +118,8 @@ struct aq_macsec_cfg {
 	/* Ingress channel configuration */
 	unsigned long rxsc_idx_busy;
 	struct aq_macsec_rxsc aq_rxsc[AQ_MACSEC_MAX_SC];
+	/* Statistics / counters */
+	struct aq_macsec_common_stats stats;
 };
 
 extern const struct macsec_ops aq_macsec_ops;
@@ -59,6 +128,10 @@ int aq_macsec_init(struct aq_nic_s *nic);
 void aq_macsec_free(struct aq_nic_s *nic);
 int aq_macsec_enable(struct aq_nic_s *nic);
 void aq_macsec_work(struct aq_nic_s *nic);
+u64 *aq_macsec_get_stats(struct aq_nic_s *nic, u64 *data);
+int aq_macsec_rx_sa_cnt(struct aq_nic_s *nic);
+int aq_macsec_tx_sc_cnt(struct aq_nic_s *nic);
+int aq_macsec_tx_sa_cnt(struct aq_nic_s *nic);
 
 #endif
diff --git a/drivers/net/ethernet/aquantia/atlantic/aq_nic.c b/drivers/net/ethernet/aquantia/atlantic/aq_nic.c
index 5d4c16d637c7..a369705a786a 100644
--- a/drivers/net/ethernet/aquantia/atlantic/aq_nic.c
+++ b/drivers/net/ethernet/aquantia/atlantic/aq_nic.c
@@ -781,7 +781,7 @@ int aq_nic_get_regs_count(struct aq_nic_s *self)
 	return self->aq_nic_cfg.aq_hw_caps->mac_regs_count;
 }
 
-void aq_nic_get_stats(struct aq_nic_s *self, u64 *data)
+u64 *aq_nic_get_stats(struct aq_nic_s *self, u64 *data)
 {
 	struct aq_vec_s *aq_vec = NULL;
 	struct aq_stats_s *stats;
@@ -831,7 +831,10 @@ void aq_nic_get_stats(struct aq_nic_s *self, u64 *data)
 		aq_vec_get_sw_stats(aq_vec, data, &count);
 	}
 
+	data += count;
+
 err_exit:;
+	return data;
 }
 
 static void aq_nic_update_ndev_stats(struct aq_nic_s *self)
diff --git a/drivers/net/ethernet/aquantia/atlantic/aq_nic.h b/drivers/net/ethernet/aquantia/atlantic/aq_nic.h
index 011db4094c93..0663b8d0220d 100644
--- a/drivers/net/ethernet/aquantia/atlantic/aq_nic.h
+++ b/drivers/net/ethernet/aquantia/atlantic/aq_nic.h
@@ -158,7 +158,7 @@ unsigned int aq_nic_map_skb(struct aq_nic_s *self, struct sk_buff *skb,
 int aq_nic_xmit(struct aq_nic_s *self, struct sk_buff *skb);
 int aq_nic_get_regs(struct aq_nic_s *self, struct ethtool_regs *regs, void *p);
 int aq_nic_get_regs_count(struct aq_nic_s *self);
-void aq_nic_get_stats(struct aq_nic_s *self, u64 *data);
+u64 *aq_nic_get_stats(struct aq_nic_s *self, u64 *data);
 int aq_nic_stop(struct aq_nic_s *self);
 void aq_nic_deinit(struct aq_nic_s *self, bool link_down);
 void aq_nic_set_power(struct aq_nic_s *self);