From patchwork Sat Apr 18 06:47:01 2020
X-Patchwork-Submitter: Huazhong Tan
X-Patchwork-Id: 221055
From: Huazhong Tan
Subject: [PATCH net-next 02/10] net: hns3: split out hclge_get_fd_rule_info()
Date: Sat, 18 Apr 2020 14:47:01 +0800
Message-ID: <1587192429-11463-3-git-send-email-tanhuazhong@huawei.com>
In-Reply-To: <1587192429-11463-1-git-send-email-tanhuazhong@huawei.com>
References: <1587192429-11463-1-git-send-email-tanhuazhong@huawei.com>

From: Jian Shen

hclge_get_fd_rule_info() is bloated, so this patch splits it into
several standalone helper functions to improve readability and
maintainability. (A standalone sketch of the mask-or-zero pattern the
new helpers share follows the patch.)

Signed-off-by: Jian Shen
Signed-off-by: Huazhong Tan
---
 .../ethernet/hisilicon/hns3/hns3pf/hclge_main.c | 303 +++++++++++----------
 1 file changed, 159 insertions(+), 144 deletions(-)

diff --git a/drivers/net/ethernet/hisilicon/hns3/hns3pf/hclge_main.c b/drivers/net/ethernet/hisilicon/hns3/hns3pf/hclge_main.c
index 6381c0f..0aa8db1 100644
--- a/drivers/net/ethernet/hisilicon/hns3/hns3pf/hclge_main.c
+++ b/drivers/net/ethernet/hisilicon/hns3/hns3pf/hclge_main.c
@@ -5938,6 +5938,149 @@ static int hclge_get_fd_rule_cnt(struct hnae3_handle *handle,
 	return 0;
 }
 
+static void hclge_fd_get_tcpip4_info(struct hclge_fd_rule *rule,
+				     struct ethtool_tcpip4_spec *spec,
+				     struct ethtool_tcpip4_spec *spec_mask)
+{
+	spec->ip4src = cpu_to_be32(rule->tuples.src_ip[IPV4_INDEX]);
+	spec_mask->ip4src = rule->unused_tuple & BIT(INNER_SRC_IP) ?
+			0 : cpu_to_be32(rule->tuples_mask.src_ip[IPV4_INDEX]);
+
+	spec->ip4dst = cpu_to_be32(rule->tuples.dst_ip[IPV4_INDEX]);
+	spec_mask->ip4dst = rule->unused_tuple & BIT(INNER_DST_IP) ?
+			0 : cpu_to_be32(rule->tuples_mask.dst_ip[IPV4_INDEX]);
+
+	spec->psrc = cpu_to_be16(rule->tuples.src_port);
+	spec_mask->psrc = rule->unused_tuple & BIT(INNER_SRC_PORT) ?
+			0 : cpu_to_be16(rule->tuples_mask.src_port);
+
+	spec->pdst = cpu_to_be16(rule->tuples.dst_port);
+	spec_mask->pdst = rule->unused_tuple & BIT(INNER_DST_PORT) ?
+			0 : cpu_to_be16(rule->tuples_mask.dst_port);
+
+	spec->tos = rule->tuples.ip_tos;
+	spec_mask->tos = rule->unused_tuple & BIT(INNER_IP_TOS) ?
+			0 : rule->tuples_mask.ip_tos;
+}
+
+static void hclge_fd_get_ip4_info(struct hclge_fd_rule *rule,
+				  struct ethtool_usrip4_spec *spec,
+				  struct ethtool_usrip4_spec *spec_mask)
+{
+	spec->ip4src = cpu_to_be32(rule->tuples.src_ip[IPV4_INDEX]);
+	spec_mask->ip4src = rule->unused_tuple & BIT(INNER_SRC_IP) ?
+			0 : cpu_to_be32(rule->tuples_mask.src_ip[IPV4_INDEX]);
+
+	spec->ip4dst = cpu_to_be32(rule->tuples.dst_ip[IPV4_INDEX]);
+	spec_mask->ip4dst = rule->unused_tuple & BIT(INNER_DST_IP) ?
+			0 : cpu_to_be32(rule->tuples_mask.dst_ip[IPV4_INDEX]);
+
+	spec->tos = rule->tuples.ip_tos;
+	spec_mask->tos = rule->unused_tuple & BIT(INNER_IP_TOS) ?
+			0 : rule->tuples_mask.ip_tos;
+
+	spec->proto = rule->tuples.ip_proto;
+	spec_mask->proto = rule->unused_tuple & BIT(INNER_IP_PROTO) ?
+			0 : rule->tuples_mask.ip_proto;
+
+	spec->ip_ver = ETH_RX_NFC_IP4;
+}
+
+static void hclge_fd_get_tcpip6_info(struct hclge_fd_rule *rule,
+				     struct ethtool_tcpip6_spec *spec,
+				     struct ethtool_tcpip6_spec *spec_mask)
+{
+	cpu_to_be32_array(spec->ip6src,
+			  rule->tuples.src_ip, IPV6_SIZE);
+	cpu_to_be32_array(spec->ip6dst,
+			  rule->tuples.dst_ip, IPV6_SIZE);
+	if (rule->unused_tuple & BIT(INNER_SRC_IP))
+		memset(spec_mask->ip6src, 0, sizeof(spec_mask->ip6src));
+	else
+		cpu_to_be32_array(spec_mask->ip6src, rule->tuples_mask.src_ip,
+				  IPV6_SIZE);
+
+	if (rule->unused_tuple & BIT(INNER_DST_IP))
+		memset(spec_mask->ip6dst, 0, sizeof(spec_mask->ip6dst));
+	else
+		cpu_to_be32_array(spec_mask->ip6dst, rule->tuples_mask.dst_ip,
+				  IPV6_SIZE);
+
+	spec->psrc = cpu_to_be16(rule->tuples.src_port);
+	spec_mask->psrc = rule->unused_tuple & BIT(INNER_SRC_PORT) ?
+			0 : cpu_to_be16(rule->tuples_mask.src_port);
+
+	spec->pdst = cpu_to_be16(rule->tuples.dst_port);
+	spec_mask->pdst = rule->unused_tuple & BIT(INNER_DST_PORT) ?
+			0 : cpu_to_be16(rule->tuples_mask.dst_port);
+}
+
+static void hclge_fd_get_ip6_info(struct hclge_fd_rule *rule,
+				  struct ethtool_usrip6_spec *spec,
+				  struct ethtool_usrip6_spec *spec_mask)
+{
+	cpu_to_be32_array(spec->ip6src, rule->tuples.src_ip, IPV6_SIZE);
+	cpu_to_be32_array(spec->ip6dst, rule->tuples.dst_ip, IPV6_SIZE);
+	if (rule->unused_tuple & BIT(INNER_SRC_IP))
+		memset(spec_mask->ip6src, 0, sizeof(spec_mask->ip6src));
+	else
+		cpu_to_be32_array(spec_mask->ip6src,
+				  rule->tuples_mask.src_ip, IPV6_SIZE);
+
+	if (rule->unused_tuple & BIT(INNER_DST_IP))
+		memset(spec_mask->ip6dst, 0, sizeof(spec_mask->ip6dst));
+	else
+		cpu_to_be32_array(spec_mask->ip6dst,
+				  rule->tuples_mask.dst_ip, IPV6_SIZE);
+
+	spec->l4_proto = rule->tuples.ip_proto;
+	spec_mask->l4_proto = rule->unused_tuple & BIT(INNER_IP_PROTO) ?
+			0 : rule->tuples_mask.ip_proto;
+}
+
+static void hclge_fd_get_ether_info(struct hclge_fd_rule *rule,
+				    struct ethhdr *spec,
+				    struct ethhdr *spec_mask)
+{
+	ether_addr_copy(spec->h_source, rule->tuples.src_mac);
+	ether_addr_copy(spec->h_dest, rule->tuples.dst_mac);
+
+	if (rule->unused_tuple & BIT(INNER_SRC_MAC))
+		eth_zero_addr(spec_mask->h_source);
+	else
+		ether_addr_copy(spec_mask->h_source, rule->tuples_mask.src_mac);
+
+	if (rule->unused_tuple & BIT(INNER_DST_MAC))
+		eth_zero_addr(spec_mask->h_dest);
+	else
+		ether_addr_copy(spec_mask->h_dest, rule->tuples_mask.dst_mac);
+
+	spec->h_proto = cpu_to_be16(rule->tuples.ether_proto);
+	spec_mask->h_proto = rule->unused_tuple & BIT(INNER_ETH_TYPE) ?
+			0 : cpu_to_be16(rule->tuples_mask.ether_proto);
+}
+
+static void hclge_fd_get_ext_info(struct ethtool_rx_flow_spec *fs,
+				  struct hclge_fd_rule *rule)
+{
+	if (fs->flow_type & FLOW_EXT) {
+		fs->h_ext.vlan_tci = cpu_to_be16(rule->tuples.vlan_tag1);
+		fs->m_ext.vlan_tci =
+			rule->unused_tuple & BIT(INNER_VLAN_TAG_FST) ?
+			cpu_to_be16(VLAN_VID_MASK) :
+			cpu_to_be16(rule->tuples_mask.vlan_tag1);
+	}
+
+	if (fs->flow_type & FLOW_MAC_EXT) {
+		ether_addr_copy(fs->h_ext.h_dest, rule->tuples.dst_mac);
+		if (rule->unused_tuple & BIT(INNER_DST_MAC))
+			eth_zero_addr(fs->m_u.ether_spec.h_dest);
+		else
+			ether_addr_copy(fs->m_u.ether_spec.h_dest,
+					rule->tuples_mask.dst_mac);
+	}
+}
+
 static int hclge_get_fd_rule_info(struct hnae3_handle *handle,
 				  struct ethtool_rxnfc *cmd)
 {
@@ -5970,162 +6113,34 @@ static int hclge_get_fd_rule_info(struct hnae3_handle *handle,
 	case SCTP_V4_FLOW:
 	case TCP_V4_FLOW:
 	case UDP_V4_FLOW:
-		fs->h_u.tcp_ip4_spec.ip4src =
-				cpu_to_be32(rule->tuples.src_ip[IPV4_INDEX]);
-		fs->m_u.tcp_ip4_spec.ip4src =
-			rule->unused_tuple & BIT(INNER_SRC_IP) ?
-			0 : cpu_to_be32(rule->tuples_mask.src_ip[IPV4_INDEX]);
-
-		fs->h_u.tcp_ip4_spec.ip4dst =
-				cpu_to_be32(rule->tuples.dst_ip[IPV4_INDEX]);
-		fs->m_u.tcp_ip4_spec.ip4dst =
-			rule->unused_tuple & BIT(INNER_DST_IP) ?
-			0 : cpu_to_be32(rule->tuples_mask.dst_ip[IPV4_INDEX]);
-
-		fs->h_u.tcp_ip4_spec.psrc = cpu_to_be16(rule->tuples.src_port);
-		fs->m_u.tcp_ip4_spec.psrc =
-				rule->unused_tuple & BIT(INNER_SRC_PORT) ?
-				0 : cpu_to_be16(rule->tuples_mask.src_port);
-
-		fs->h_u.tcp_ip4_spec.pdst = cpu_to_be16(rule->tuples.dst_port);
-		fs->m_u.tcp_ip4_spec.pdst =
-				rule->unused_tuple & BIT(INNER_DST_PORT) ?
-				0 : cpu_to_be16(rule->tuples_mask.dst_port);
-
-		fs->h_u.tcp_ip4_spec.tos = rule->tuples.ip_tos;
-		fs->m_u.tcp_ip4_spec.tos =
-				rule->unused_tuple & BIT(INNER_IP_TOS) ?
-				0 : rule->tuples_mask.ip_tos;
-
+		hclge_fd_get_tcpip4_info(rule, &fs->h_u.tcp_ip4_spec,
+					 &fs->m_u.tcp_ip4_spec);
 		break;
 	case IP_USER_FLOW:
-		fs->h_u.usr_ip4_spec.ip4src =
-				cpu_to_be32(rule->tuples.src_ip[IPV4_INDEX]);
-		fs->m_u.tcp_ip4_spec.ip4src =
-			rule->unused_tuple & BIT(INNER_SRC_IP) ?
-			0 : cpu_to_be32(rule->tuples_mask.src_ip[IPV4_INDEX]);
-
-		fs->h_u.usr_ip4_spec.ip4dst =
-				cpu_to_be32(rule->tuples.dst_ip[IPV4_INDEX]);
-		fs->m_u.usr_ip4_spec.ip4dst =
-			rule->unused_tuple & BIT(INNER_DST_IP) ?
-			0 : cpu_to_be32(rule->tuples_mask.dst_ip[IPV4_INDEX]);
-
-		fs->h_u.usr_ip4_spec.tos = rule->tuples.ip_tos;
-		fs->m_u.usr_ip4_spec.tos =
-				rule->unused_tuple & BIT(INNER_IP_TOS) ?
-				0 : rule->tuples_mask.ip_tos;
-
-		fs->h_u.usr_ip4_spec.proto = rule->tuples.ip_proto;
-		fs->m_u.usr_ip4_spec.proto =
-				rule->unused_tuple & BIT(INNER_IP_PROTO) ?
-				0 : rule->tuples_mask.ip_proto;
-
-		fs->h_u.usr_ip4_spec.ip_ver = ETH_RX_NFC_IP4;
-
+		hclge_fd_get_ip4_info(rule, &fs->h_u.usr_ip4_spec,
+				      &fs->m_u.usr_ip4_spec);
 		break;
 	case SCTP_V6_FLOW:
 	case TCP_V6_FLOW:
 	case UDP_V6_FLOW:
-		cpu_to_be32_array(fs->h_u.tcp_ip6_spec.ip6src,
-				  rule->tuples.src_ip, IPV6_SIZE);
-		if (rule->unused_tuple & BIT(INNER_SRC_IP))
-			memset(fs->m_u.tcp_ip6_spec.ip6src, 0,
-			       sizeof(int) * IPV6_SIZE);
-		else
-			cpu_to_be32_array(fs->m_u.tcp_ip6_spec.ip6src,
-					  rule->tuples_mask.src_ip, IPV6_SIZE);
-
-		cpu_to_be32_array(fs->h_u.tcp_ip6_spec.ip6dst,
-				  rule->tuples.dst_ip, IPV6_SIZE);
-		if (rule->unused_tuple & BIT(INNER_DST_IP))
-			memset(fs->m_u.tcp_ip6_spec.ip6dst, 0,
-			       sizeof(int) * IPV6_SIZE);
-		else
-			cpu_to_be32_array(fs->m_u.tcp_ip6_spec.ip6dst,
-					  rule->tuples_mask.dst_ip, IPV6_SIZE);
-
-		fs->h_u.tcp_ip6_spec.psrc = cpu_to_be16(rule->tuples.src_port);
-		fs->m_u.tcp_ip6_spec.psrc =
-				rule->unused_tuple & BIT(INNER_SRC_PORT) ?
-				0 : cpu_to_be16(rule->tuples_mask.src_port);
-
-		fs->h_u.tcp_ip6_spec.pdst = cpu_to_be16(rule->tuples.dst_port);
-		fs->m_u.tcp_ip6_spec.pdst =
-				rule->unused_tuple & BIT(INNER_DST_PORT) ?
-				0 : cpu_to_be16(rule->tuples_mask.dst_port);
-
+		hclge_fd_get_tcpip6_info(rule, &fs->h_u.tcp_ip6_spec,
+					 &fs->m_u.tcp_ip6_spec);
 		break;
 	case IPV6_USER_FLOW:
-		cpu_to_be32_array(fs->h_u.usr_ip6_spec.ip6src,
-				  rule->tuples.src_ip, IPV6_SIZE);
-		if (rule->unused_tuple & BIT(INNER_SRC_IP))
-			memset(fs->m_u.usr_ip6_spec.ip6src, 0,
-			       sizeof(int) * IPV6_SIZE);
-		else
-			cpu_to_be32_array(fs->m_u.usr_ip6_spec.ip6src,
-					  rule->tuples_mask.src_ip, IPV6_SIZE);
-
-		cpu_to_be32_array(fs->h_u.usr_ip6_spec.ip6dst,
-				  rule->tuples.dst_ip, IPV6_SIZE);
-		if (rule->unused_tuple & BIT(INNER_DST_IP))
-			memset(fs->m_u.usr_ip6_spec.ip6dst, 0,
-			       sizeof(int) * IPV6_SIZE);
-		else
-			cpu_to_be32_array(fs->m_u.usr_ip6_spec.ip6dst,
-					  rule->tuples_mask.dst_ip, IPV6_SIZE);
-
-		fs->h_u.usr_ip6_spec.l4_proto = rule->tuples.ip_proto;
-		fs->m_u.usr_ip6_spec.l4_proto =
-				rule->unused_tuple & BIT(INNER_IP_PROTO) ?
-				0 : rule->tuples_mask.ip_proto;
-
-		break;
-	case ETHER_FLOW:
-		ether_addr_copy(fs->h_u.ether_spec.h_source,
-				rule->tuples.src_mac);
-		if (rule->unused_tuple & BIT(INNER_SRC_MAC))
-			eth_zero_addr(fs->m_u.ether_spec.h_source);
-		else
-			ether_addr_copy(fs->m_u.ether_spec.h_source,
-					rule->tuples_mask.src_mac);
-
-		ether_addr_copy(fs->h_u.ether_spec.h_dest,
-				rule->tuples.dst_mac);
-		if (rule->unused_tuple & BIT(INNER_DST_MAC))
-			eth_zero_addr(fs->m_u.ether_spec.h_dest);
-		else
-			ether_addr_copy(fs->m_u.ether_spec.h_dest,
-					rule->tuples_mask.dst_mac);
-
-		fs->h_u.ether_spec.h_proto =
-				cpu_to_be16(rule->tuples.ether_proto);
-		fs->m_u.ether_spec.h_proto =
-				rule->unused_tuple & BIT(INNER_ETH_TYPE) ?
-				0 : cpu_to_be16(rule->tuples_mask.ether_proto);
-
+		hclge_fd_get_ip6_info(rule, &fs->h_u.usr_ip6_spec,
+				      &fs->m_u.usr_ip6_spec);
 		break;
+	/* The flow type of fd rule has been checked before adding it to the
+	 * rule list. As other flow types have been handled, it must be
+	 * ETHER_FLOW for the default case
+	 */
 	default:
-		spin_unlock_bh(&hdev->fd_rule_lock);
-		return -EOPNOTSUPP;
-	}
-
-	if (fs->flow_type & FLOW_EXT) {
-		fs->h_ext.vlan_tci = cpu_to_be16(rule->tuples.vlan_tag1);
-		fs->m_ext.vlan_tci =
-			rule->unused_tuple & BIT(INNER_VLAN_TAG_FST) ?
-			cpu_to_be16(VLAN_VID_MASK) :
-			cpu_to_be16(rule->tuples_mask.vlan_tag1);
+		hclge_fd_get_ether_info(rule, &fs->h_u.ether_spec,
+					&fs->m_u.ether_spec);
+		break;
 	}
 
-	if (fs->flow_type & FLOW_MAC_EXT) {
-		ether_addr_copy(fs->h_ext.h_dest, rule->tuples.dst_mac);
-		if (rule->unused_tuple & BIT(INNER_DST_MAC))
-			eth_zero_addr(fs->m_u.ether_spec.h_dest);
-		else
-			ether_addr_copy(fs->m_u.ether_spec.h_dest,
-					rule->tuples_mask.dst_mac);
-	}
+	hclge_fd_get_ext_info(fs, rule);
 
 	if (rule->action == HCLGE_FD_ACTION_DROP_PACKET) {
 		fs->ring_cookie = RX_CLS_FLOW_DISC;
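For reference, every helper above repeats one pattern: if the rule flags a
tuple bit as unused, report an all-zero mask back to ethtool; otherwise report
the stored mask. A minimal user-space sketch of that pattern follows; the
names (fake_rule, the UNUSED_* bits) are invented for illustration and are not
driver code.

/*
 * Standalone sketch of the "mask or zero" pattern used by the new
 * hclge_fd_get_*_info() helpers. All names here are hypothetical.
 */
#include <stdint.h>
#include <stdio.h>

#define BIT(n)		(1U << (n))
#define UNUSED_SRC_PORT	1
#define UNUSED_DST_PORT	2

struct fake_rule {
	uint32_t unused_tuple;	/* bitmap of tuples the rule ignores */
	uint16_t src_port_mask;
	uint16_t dst_port_mask;
};

int main(void)
{
	struct fake_rule rule = {
		.unused_tuple = BIT(UNUSED_DST_PORT),
		.src_port_mask = 0xffff,
		.dst_port_mask = 0xffff,
	};

	/* same shape as spec_mask->psrc/pdst in hclge_fd_get_tcpip4_info() */
	uint16_t psrc_mask = rule.unused_tuple & BIT(UNUSED_SRC_PORT) ?
			     0 : rule.src_port_mask;
	uint16_t pdst_mask = rule.unused_tuple & BIT(UNUSED_DST_PORT) ?
			     0 : rule.dst_port_mask;

	printf("psrc mask %#x, pdst mask %#x\n", psrc_mask, pdst_mask);
	return 0;	/* prints: psrc mask 0xffff, pdst mask 0 */
}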
From patchwork Sat Apr 18 06:47:02 2020
X-Patchwork-Submitter: Huazhong Tan
X-Patchwork-Id: 221057
From: Huazhong Tan
Subject: [PATCH net-next 03/10] net: hns3: remove an unnecessary case 0 in hclge_fd_convert_tuple()
Date: Sat, 18 Apr 2020 14:47:02 +0800
Message-ID: <1587192429-11463-4-git-send-email-tanhuazhong@huawei.com>
In-Reply-To: <1587192429-11463-1-git-send-email-tanhuazhong@huawei.com>
References: <1587192429-11463-1-git-send-email-tanhuazhong@huawei.com>

The default case already covers case 0, so remove this redundant
case 0. (See the sketch after this patch.)
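To see why the removed branch was dead weight, here is a minimal standalone
sketch; convert() and its case labels are invented stand-ins, not the driver's
hclge_fd_convert_tuple(). Any selector that matches no case label, including
0, already falls through to default, which returns the same value the removed
case did.

#include <stdbool.h>
#include <stdio.h>

/* Hypothetical stand-in for the switch in hclge_fd_convert_tuple(). */
static bool convert(unsigned int tuple_bit)
{
	switch (tuple_bit) {
	case 1u << 0:
		return true;	/* pretend this tuple is supported */
	case 1u << 1:
		return true;
	default:		/* covers 0 and every other value */
		return false;
	}
}

int main(void)
{
	printf("%d %d %d\n", convert(0), convert(1), convert(4));
	return 0;	/* prints: 0 1 0 */
}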
Signed-off-by: Huazhong Tan
---
 drivers/net/ethernet/hisilicon/hns3/hns3pf/hclge_main.c | 2 --
 1 file changed, 2 deletions(-)

diff --git a/drivers/net/ethernet/hisilicon/hns3/hns3pf/hclge_main.c b/drivers/net/ethernet/hisilicon/hns3/hns3pf/hclge_main.c
index 0aa8db1..5f1bea3 100644
--- a/drivers/net/ethernet/hisilicon/hns3/hns3pf/hclge_main.c
+++ b/drivers/net/ethernet/hisilicon/hns3/hns3pf/hclge_main.c
@@ -5006,8 +5006,6 @@ static bool hclge_fd_convert_tuple(u32 tuple_bit, u8 *key_x, u8 *key_y,
 		return true;
 
 	switch (tuple_bit) {
-	case 0:
-		return false;
 	case BIT(INNER_DST_MAC):
 		for (i = 0; i < ETH_ALEN; i++) {
 			calc_x(key_x[ETH_ALEN - 1 - i], rule->tuples.dst_mac[i],

From patchwork Sat Apr 18 06:47:03 2020
X-Patchwork-Submitter: Huazhong Tan
X-Patchwork-Id: 221056
From: Huazhong Tan
Subject: [PATCH net-next 04/10] net: hns3: remove useless proto_support field in struct hclge_fd_cfg
Date: Sat, 18 Apr 2020 14:47:03 +0800
Message-ID: <1587192429-11463-5-git-send-email-tanhuazhong@huawei.com>
In-Reply-To: <1587192429-11463-1-git-send-email-tanhuazhong@huawei.com>
References: <1587192429-11463-1-git-send-email-tanhuazhong@huawei.com>

From: Guojia Liao

The proto_support field in struct hclge_fd_cfg records which protocols
the flow director table currently supports. It is unnecessary, since
checking directly for the unsupported cases is more efficient, so this
patch removes it. (A sketch of the resulting check style follows the
patch.)
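A minimal sketch of the resulting style, with invented names (the enum,
check_spec() and has_400b_key are hypothetical): classify the flow type
directly in a switch and let anything unsupported land in default, rather than
maintaining a separate capability bitmask in the config structure.

#include <stdio.h>

/* Hypothetical reduction of the change in hclge_fd_check_spec(). */
enum flow { TCP_V4, UDP_V4, TCP_V6, ETHER, UNKNOWN };

static int check_spec(enum flow type, int has_400b_key)
{
	switch (type) {
	case TCP_V4:
	case UDP_V4:
	case TCP_V6:
		return 0;
	case ETHER:
		/* ether rules need the wide key, mirroring the fd_mode test */
		return has_400b_key ? 0 : -1;
	default:
		return -1;	/* -EOPNOTSUPP in the driver */
	}
}

int main(void)
{
	printf("%d %d %d\n", check_spec(TCP_V4, 0), check_spec(ETHER, 0),
	       check_spec(UNKNOWN, 1));
	return 0;	/* prints: 0 -1 -1 */
}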
Signed-off-by: Guojia Liao
Signed-off-by: Huazhong Tan
---
 drivers/net/ethernet/hisilicon/hns3/hns3pf/hclge_main.c | 17 ++++++-----------
 drivers/net/ethernet/hisilicon/hns3/hns3pf/hclge_main.h |  1 -
 2 files changed, 6 insertions(+), 12 deletions(-)

diff --git a/drivers/net/ethernet/hisilicon/hns3/hns3pf/hclge_main.c b/drivers/net/ethernet/hisilicon/hns3/hns3pf/hclge_main.c
index 5f1bea3..90d2c77 100644
--- a/drivers/net/ethernet/hisilicon/hns3/hns3pf/hclge_main.c
+++ b/drivers/net/ethernet/hisilicon/hns3/hns3pf/hclge_main.c
@@ -4876,9 +4876,6 @@ static int hclge_init_fd_config(struct hclge_dev *hdev)
 		return -EOPNOTSUPP;
 	}
 
-	hdev->fd_cfg.proto_support =
-		TCP_V4_FLOW | UDP_V4_FLOW | SCTP_V4_FLOW | TCP_V6_FLOW |
-		UDP_V6_FLOW | SCTP_V6_FLOW | IPV4_USER_FLOW | IPV6_USER_FLOW;
 	key_cfg = &hdev->fd_cfg.key_cfg[HCLGE_FD_STAGE_1];
 	key_cfg->key_sel = HCLGE_FD_KEY_BASE_ON_TUPLE,
 	key_cfg->inner_sipv6_word_en = LOW_2_WORDS;
@@ -4892,11 +4889,9 @@ static int hclge_init_fd_config(struct hclge_dev *hdev)
 				BIT(INNER_SRC_PORT) | BIT(INNER_DST_PORT);
 
 	/* If use max 400bit key, we can support tuples for ether type */
-	if (hdev->fd_cfg.max_key_length == MAX_KEY_LENGTH) {
-		hdev->fd_cfg.proto_support |= ETHER_FLOW;
+	if (hdev->fd_cfg.fd_mode == HCLGE_FD_MODE_DEPTH_2K_WIDTH_400B_STAGE_1)
 		key_cfg->tuple_active |=
 				BIT(INNER_DST_MAC) | BIT(INNER_SRC_MAC);
-	}
 
 	/* roce_type is used to filter roce frames
 	 * dst_vport is used to specify the rule
@@ -5397,7 +5392,8 @@ static int hclge_fd_check_ext_tuple(struct hclge_dev *hdev,
 	}
 
 	if (fs->flow_type & FLOW_MAC_EXT) {
-		if (!(hdev->fd_cfg.proto_support & ETHER_FLOW))
+		if (hdev->fd_cfg.fd_mode !=
+		    HCLGE_FD_MODE_DEPTH_2K_WIDTH_400B_STAGE_1)
 			return -EOPNOTSUPP;
 
 		if (is_zero_ether_addr(fs->h_ext.h_dest))
@@ -5413,21 +5409,20 @@ static int hclge_fd_check_spec(struct hclge_dev *hdev,
 			       struct ethtool_rx_flow_spec *fs,
 			       u32 *unused_tuple)
 {
+	u32 flow_type;
 	int ret = 0;
 
 	if (fs->location >= hdev->fd_cfg.rule_num[HCLGE_FD_STAGE_1])
 		return -EINVAL;
 
-	if (!(fs->flow_type & hdev->fd_cfg.proto_support))
-		return -EOPNOTSUPP;
-
 	if ((fs->flow_type & FLOW_EXT) &&
 	    (fs->h_ext.data[0] != 0 || fs->h_ext.data[1] != 0)) {
 		dev_err(&hdev->pdev->dev, "user-def bytes are not supported\n");
 		return -EOPNOTSUPP;
 	}
 
-	switch (fs->flow_type & ~(FLOW_EXT | FLOW_MAC_EXT)) {
+	flow_type = fs->flow_type & ~(FLOW_EXT | FLOW_MAC_EXT);
+	switch (flow_type) {
 	case SCTP_V4_FLOW:
 	case TCP_V4_FLOW:
 	case UDP_V4_FLOW:
diff --git a/drivers/net/ethernet/hisilicon/hns3/hns3pf/hclge_main.h b/drivers/net/ethernet/hisilicon/hns3/hns3pf/hclge_main.h
index 71df23d..a58c262 100644
--- a/drivers/net/ethernet/hisilicon/hns3/hns3pf/hclge_main.h
+++ b/drivers/net/ethernet/hisilicon/hns3/hns3pf/hclge_main.h
@@ -580,7 +580,6 @@ struct hclge_fd_key_cfg {
 struct hclge_fd_cfg {
 	u8 fd_mode;
 	u16 max_key_length; /* use bit as unit */
-	u32 proto_support;
 	u32 rule_num[MAX_STAGE_NUM]; /* rule entry number */
 	u16 cnt_num[MAX_STAGE_NUM]; /* rule hit counter number */
 	struct hclge_fd_key_cfg key_cfg[MAX_STAGE_NUM];

From patchwork Sat Apr 18 06:47:07 2020
X-Patchwork-Submitter: Huazhong Tan
X-Patchwork-Id: 221054
From: Huazhong Tan
Subject: [PATCH net-next 08/10] net: hns3: add debug information for flow table when failed
Date: Sat, 18 Apr 2020 14:47:07 +0800
Message-ID: <1587192429-11463-9-git-send-email-tanhuazhong@huawei.com>
In-Reply-To: <1587192429-11463-1-git-send-email-tanhuazhong@huawei.com>
References: <1587192429-11463-1-git-send-email-tanhuazhong@huawei.com>

From: Guojia Liao

Add some debug information for the cases where processing the flow
table fails. (The logging style is sketched after the patch.)

Signed-off-by: Guojia Liao
Signed-off-by: Huazhong Tan
---
 .../ethernet/hisilicon/hns3/hns3pf/hclge_main.c | 40 ++++++++++++++++------
 1 file changed, 29 insertions(+), 11 deletions(-)

diff --git a/drivers/net/ethernet/hisilicon/hns3/hns3pf/hclge_main.c b/drivers/net/ethernet/hisilicon/hns3/hns3pf/hclge_main.c
index 74efd95..20216e1 100644
--- a/drivers/net/ethernet/hisilicon/hns3/hns3pf/hclge_main.c
+++ b/drivers/net/ethernet/hisilicon/hns3/hns3pf/hclge_main.c
@@ -5381,22 +5381,31 @@ static int hclge_fd_check_ext_tuple(struct hclge_dev *hdev,
 				    u32 *unused_tuple)
 {
 	if (fs->flow_type & FLOW_EXT) {
-		if (fs->h_ext.vlan_etype)
+		if (fs->h_ext.vlan_etype) {
+			dev_err(&hdev->pdev->dev, "vlan-etype is not supported!\n");
 			return -EOPNOTSUPP;
+		}
+
 		if (!fs->h_ext.vlan_tci)
 			*unused_tuple |= BIT(INNER_VLAN_TAG_FST);
 
 		if (fs->m_ext.vlan_tci &&
-		    be16_to_cpu(fs->h_ext.vlan_tci) >= VLAN_N_VID)
+		    be16_to_cpu(fs->h_ext.vlan_tci) >= VLAN_N_VID) {
+			dev_err(&hdev->pdev->dev,
+				"failed to config vlan_tci, invalid vlan_tci: %u, max is %u.\n",
+				ntohs(fs->h_ext.vlan_tci), VLAN_N_VID - 1);
 			return -EINVAL;
+		}
 	} else {
 		*unused_tuple |= BIT(INNER_VLAN_TAG_FST);
 	}
 
 	if (fs->flow_type & FLOW_MAC_EXT) {
 		if (hdev->fd_cfg.fd_mode !=
-		    HCLGE_FD_MODE_DEPTH_2K_WIDTH_400B_STAGE_1)
+		    HCLGE_FD_MODE_DEPTH_2K_WIDTH_400B_STAGE_1) {
+			dev_err(&hdev->pdev->dev,
+				"FLOW_MAC_EXT is not supported in current fd mode!\n");
 			return -EOPNOTSUPP;
+		}
 
 		if (is_zero_ether_addr(fs->h_ext.h_dest))
 			*unused_tuple |= BIT(INNER_DST_MAC);
@@ -5414,8 +5423,12 @@ static int hclge_fd_check_spec(struct hclge_dev *hdev,
 	u32 flow_type;
 	int ret = 0;
 
-	if (fs->location >= hdev->fd_cfg.rule_num[HCLGE_FD_STAGE_1])
+	if (fs->location >= hdev->fd_cfg.rule_num[HCLGE_FD_STAGE_1]) {
+		dev_err(&hdev->pdev->dev,
+			"failed to config fd rules, invalid rule location: %u, max is %u.\n",
+			fs->location,
+			hdev->fd_cfg.rule_num[HCLGE_FD_STAGE_1] - 1);
 		return -EINVAL;
+	}
 
 	if ((fs->flow_type & FLOW_EXT) &&
 	    (fs->h_ext.data[0] != 0 || fs->h_ext.data[1] != 0)) {
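The style being added, reduced to a hypothetical user-space sketch (MAX_RULES
and check_location() are invented): when validation fails, log the offending
value alongside the limit so the failure is diagnosable from the log, instead
of returning a bare error code.

#include <stdio.h>

#define MAX_RULES	128

/* Mirrors the shape of the rule-location check in hclge_fd_check_spec(). */
static int check_location(unsigned int location)
{
	if (location >= MAX_RULES) {
		fprintf(stderr,
			"failed to config fd rules, invalid rule location: %u, max is %u.\n",
			location, MAX_RULES - 1);
		return -1;	/* -EINVAL in the driver */
	}
	return 0;
}

int main(void)
{
	return check_location(200) ? 1 : 0;
}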
@@ -5457,11 +5470,16 @@ static int hclge_fd_check_spec(struct hclge_dev *hdev,
 						  unused_tuple);
 		break;
 	default:
+		dev_err(&hdev->pdev->dev,
+			"unsupported protocol type, protocol type = %#x\n",
+			flow_type);
 		return -EOPNOTSUPP;
 	}
 
-	if (ret)
+	if (ret) {
+		dev_err(&hdev->pdev->dev,
+			"failed to check flow union tuple, ret = %d\n",
+			ret);
 		return ret;
+	}
 
 	return hclge_fd_check_ext_tuple(hdev, fs, unused_tuple);
 }
@@ -5729,22 +5747,22 @@ static int hclge_add_fd_entry(struct hnae3_handle *handle,
 	u8 action;
 	int ret;
 
-	if (!hnae3_dev_fd_supported(hdev))
+	if (!hnae3_dev_fd_supported(hdev)) {
+		dev_err(&hdev->pdev->dev, "flow director is not supported\n");
 		return -EOPNOTSUPP;
+	}
 
 	if (!hdev->fd_en) {
-		dev_warn(&hdev->pdev->dev,
-			 "Please enable flow director first\n");
+		dev_err(&hdev->pdev->dev,
+			"please enable flow director first\n");
 		return -EOPNOTSUPP;
 	}
 
 	fs = (struct ethtool_rx_flow_spec *)&cmd->fs;
 
 	ret = hclge_fd_check_spec(hdev, fs, &unused);
-	if (ret) {
-		dev_err(&hdev->pdev->dev, "Check fd spec failed\n");
+	if (ret)
 		return ret;
-	}
 
 	if (fs->ring_cookie == RX_CLS_FLOW_DISC) {
 		action = HCLGE_FD_ACTION_DROP_PACKET;

From patchwork Sat Apr 18 06:47:08 2020
X-Patchwork-Submitter: Huazhong Tan
X-Patchwork-Id: 221053
From: Huazhong Tan
Subject: [PATCH net-next 09/10] net: hns3: add support of dumping MAC reg in debugfs
Date: Sat, 18 Apr 2020 14:47:08 +0800
Message-ID: <1587192429-11463-10-git-send-email-tanhuazhong@huawei.com>
In-Reply-To: <1587192429-11463-1-git-send-email-tanhuazhong@huawei.com>
References: <1587192429-11463-1-git-send-email-tanhuazhong@huawei.com>

From: Yufeng Mo

This patch adds support for dumping MAC registers in debugfs, which is
helpful for debugging. (The decode pattern the dump helpers share is
sketched after the patch.)

Signed-off-by: Yufeng Mo
Signed-off-by: Huazhong Tan
---
 drivers/net/ethernet/hisilicon/hns3/hns3_debugfs.c |   2 +-
 .../ethernet/hisilicon/hns3/hns3pf/hclge_debugfs.c | 113 +++++++++++++++++++++
 2 files changed, 114 insertions(+), 1 deletion(-)

diff --git a/drivers/net/ethernet/hisilicon/hns3/hns3_debugfs.c b/drivers/net/ethernet/hisilicon/hns3/hns3_debugfs.c
index e1d8809..c934f32 100644
--- a/drivers/net/ethernet/hisilicon/hns3/hns3_debugfs.c
+++ b/drivers/net/ethernet/hisilicon/hns3/hns3_debugfs.c
@@ -270,7 +270,7 @@ static void hns3_dbg_help(struct hnae3_handle *h)
 		" [igu egu <port_id>] [rpu <tc_queue_num>]",
 		HNS3_DBG_BUF_LEN - strlen(printf_buf) - 1);
 	strncat(printf_buf + strlen(printf_buf),
-		" [rtc] [ppp] [rcb] [tqp <queue_num>]]\n",
+		" [rtc] [ppp] [rcb] [tqp <queue_num>] [mac]]\n",
 		HNS3_DBG_BUF_LEN - strlen(printf_buf) - 1);
 
 	dev_info(&h->pdev->dev, "%s", printf_buf);
diff --git a/drivers/net/ethernet/hisilicon/hns3/hns3pf/hclge_debugfs.c b/drivers/net/ethernet/hisilicon/hns3/hns3pf/hclge_debugfs.c
index cfc9300..66c1ad3 100644
--- a/drivers/net/ethernet/hisilicon/hns3/hns3pf/hclge_debugfs.c
+++ b/drivers/net/ethernet/hisilicon/hns3/hns3pf/hclge_debugfs.c
@@ -173,6 +173,114 @@ static void hclge_dbg_dump_reg_common(struct hclge_dev *hdev,
 	kfree(desc_src);
 }
 
+static void hclge_dbg_dump_mac_enable_status(struct hclge_dev *hdev)
+{
+	struct hclge_config_mac_mode_cmd *req;
+	struct hclge_desc desc;
+	u32 loop_en;
+	int ret;
+
+	hclge_cmd_setup_basic_desc(&desc, HCLGE_OPC_CONFIG_MAC_MODE, true);
+
+	ret = hclge_cmd_send(&hdev->hw, &desc, 1);
+	if (ret) {
+		dev_err(&hdev->pdev->dev,
+			"failed to dump mac enable status, ret = %d\n", ret);
+		return;
+	}
+
+	req = (struct hclge_config_mac_mode_cmd *)desc.data;
+	loop_en = le32_to_cpu(req->txrx_pad_fcs_loop_en);
+
+	dev_info(&hdev->pdev->dev, "config_mac_trans_en: %#x\n",
+		 hnae3_get_bit(loop_en, HCLGE_MAC_TX_EN_B));
+	dev_info(&hdev->pdev->dev, "config_mac_rcv_en: %#x\n",
+		 hnae3_get_bit(loop_en, HCLGE_MAC_RX_EN_B));
+	dev_info(&hdev->pdev->dev, "config_pad_trans_en: %#x\n",
+		 hnae3_get_bit(loop_en, HCLGE_MAC_PAD_TX_B));
+	dev_info(&hdev->pdev->dev, "config_pad_rcv_en: %#x\n",
+		 hnae3_get_bit(loop_en, HCLGE_MAC_PAD_RX_B));
+	dev_info(&hdev->pdev->dev, "config_1588_trans_en: %#x\n",
+		 hnae3_get_bit(loop_en, HCLGE_MAC_1588_TX_B));
+	dev_info(&hdev->pdev->dev, "config_1588_rcv_en: %#x\n",
+		 hnae3_get_bit(loop_en, HCLGE_MAC_1588_RX_B));
+	dev_info(&hdev->pdev->dev, "config_mac_app_loop_en: %#x\n",
+		 hnae3_get_bit(loop_en, HCLGE_MAC_APP_LP_B));
+	dev_info(&hdev->pdev->dev, "config_mac_line_loop_en: %#x\n",
+		 hnae3_get_bit(loop_en, HCLGE_MAC_LINE_LP_B));
+	dev_info(&hdev->pdev->dev, "config_mac_fcs_tx_en: %#x\n",
+		 hnae3_get_bit(loop_en, HCLGE_MAC_FCS_TX_B));
+	dev_info(&hdev->pdev->dev, "config_mac_rx_oversize_truncate_en: %#x\n",
+		 hnae3_get_bit(loop_en, HCLGE_MAC_RX_OVERSIZE_TRUNCATE_B));
+	dev_info(&hdev->pdev->dev, "config_mac_rx_fcs_strip_en: %#x\n",
+		 hnae3_get_bit(loop_en, HCLGE_MAC_RX_FCS_STRIP_B));
+	dev_info(&hdev->pdev->dev, "config_mac_rx_fcs_en: %#x\n",
+		 hnae3_get_bit(loop_en, HCLGE_MAC_RX_FCS_B));
+	dev_info(&hdev->pdev->dev, "config_mac_tx_under_min_err_en: %#x\n",
+		 hnae3_get_bit(loop_en, HCLGE_MAC_TX_UNDER_MIN_ERR_B));
+	dev_info(&hdev->pdev->dev, "config_mac_tx_oversize_truncate_en: %#x\n",
+		 hnae3_get_bit(loop_en, HCLGE_MAC_TX_OVERSIZE_TRUNCATE_B));
+}
+
+static void hclge_dbg_dump_mac_frame_size(struct hclge_dev *hdev)
+{
+	struct hclge_config_max_frm_size_cmd *req;
+	struct hclge_desc desc;
+	int ret;
+
+	hclge_cmd_setup_basic_desc(&desc, HCLGE_OPC_CONFIG_MAX_FRM_SIZE, true);
+
+	ret = hclge_cmd_send(&hdev->hw, &desc, 1);
+	if (ret) {
+		dev_err(&hdev->pdev->dev,
+			"failed to dump mac frame size, ret = %d\n", ret);
+		return;
+	}
+
+	req = (struct hclge_config_max_frm_size_cmd *)desc.data;
+
+	dev_info(&hdev->pdev->dev, "max_frame_size: %u\n",
+		 le16_to_cpu(req->max_frm_size));
+	dev_info(&hdev->pdev->dev, "min_frame_size: %u\n", req->min_frm_size);
+}
+
+static void hclge_dbg_dump_mac_speed_duplex(struct hclge_dev *hdev)
+{
+#define HCLGE_MAC_SPEED_SHIFT	0
+#define HCLGE_MAC_SPEED_MASK	GENMASK(5, 0)
+#define HCLGE_MAC_DUPLEX_SHIFT	7
+
+	struct hclge_config_mac_speed_dup_cmd *req;
+	struct hclge_desc desc;
+	int ret;
+
+	hclge_cmd_setup_basic_desc(&desc, HCLGE_OPC_CONFIG_SPEED_DUP, true);
+
+	ret = hclge_cmd_send(&hdev->hw, &desc, 1);
+	if (ret) {
+		dev_err(&hdev->pdev->dev,
+			"failed to dump mac speed duplex, ret = %d\n", ret);
+		return;
+	}
+
+	req = (struct hclge_config_mac_speed_dup_cmd *)desc.data;
+
+	dev_info(&hdev->pdev->dev, "speed: %#lx\n",
+		 hnae3_get_field(req->speed_dup, HCLGE_MAC_SPEED_MASK,
+				 HCLGE_MAC_SPEED_SHIFT));
+	dev_info(&hdev->pdev->dev, "duplex: %#x\n",
+		 hnae3_get_bit(req->speed_dup, HCLGE_MAC_DUPLEX_SHIFT));
+}
+
+static void hclge_dbg_dump_mac(struct hclge_dev *hdev)
+{
+	hclge_dbg_dump_mac_enable_status(hdev);
+
+	hclge_dbg_dump_mac_frame_size(hdev);
+
+	hclge_dbg_dump_mac_speed_duplex(hdev);
+}
+
 static void hclge_dbg_dump_dcb(struct hclge_dev *hdev, const char *cmd_buf)
 {
 	struct device *dev = &hdev->pdev->dev;
@@ -304,6 +412,11 @@ static void hclge_dbg_dump_reg_cmd(struct hclge_dev *hdev, const char *cmd_buf)
 		}
 	}
 
+	if (strncmp(cmd_buf, "mac", strlen("mac")) == 0) {
+		hclge_dbg_dump_mac(hdev);
+		has_dump = true;
+	}
+
 	if (strncmp(cmd_buf, "dcb", 3) == 0) {
 		hclge_dbg_dump_dcb(hdev, &cmd_buf[sizeof("dcb")]);
 		has_dump = true;
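Each dump helper above follows the same pattern: read one firmware descriptor,
then decode named bit-fields from it. A standalone user-space sketch of the
decode half follows; get_bit()/get_field() only mirror the shape of
hnae3_get_bit()/hnae3_get_field(), and the register layout here is invented
for illustration.

#include <stdint.h>
#include <stdio.h>

#define GENMASK(h, l)	(((~0U) << (l)) & (~0U >> (31 - (h))))

static unsigned int get_field(uint32_t v, uint32_t mask, unsigned int shift)
{
	return (v & mask) >> shift;
}

static unsigned int get_bit(uint32_t v, unsigned int shift)
{
	return get_field(v, 1U << shift, shift);
}

int main(void)
{
	uint32_t speed_dup = 0x86;	/* pretend firmware reply word */

	/* same decode shape as hclge_dbg_dump_mac_speed_duplex() */
	printf("speed: %#x\n", get_field(speed_dup, GENMASK(5, 0), 0));
	printf("duplex: %#x\n", get_bit(speed_dup, 7));
	return 0;	/* prints: speed: 0x6, duplex: 0x1 */
}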