From patchwork Wed Feb 10 23:24:31 2021
X-Patchwork-Submitter: Tony Nguyen
X-Patchwork-Id: 380674
From: Tony Nguyen
To: davem@davemloft.net, kuba@kernel.org
Cc: Arkadiusz Kubalewski, netdev@vger.kernel.org, sassmann@redhat.com,
    anthony.l.nguyen@intel.com, Aleksandr Loktionov, Tony Brelinski
Subject: [PATCH net-next 2/7] i40e: Add init and default config of software based DCB
Date: Wed, 10 Feb 2021 15:24:31 -0800
Message-Id: <20210210232436.4084373-3-anthony.l.nguyen@intel.com>
In-Reply-To: <20210210232436.4084373-1-anthony.l.nguyen@intel.com>
References: <20210210232436.4084373-1-anthony.l.nguyen@intel.com>
X-Mailing-List: netdev@vger.kernel.org

From: Arkadiusz Kubalewski

Add extra handling when the "disable-fw-lldp" private flag is changed, to
properly initialize the software based DCB feature. Add a default DCB
configuration for when the Firmware LLDP agent is turned off, in case of
driver probe and device reset on reconfiguration. Update copyright dates
as appropriate.

Software based DCB is a new feature in the i40e driver. Previously, DCB was
implemented by the Firmware LLDP agent only; the agent was responsible for
handling incoming DCB-related LLDP frames and applying the received DCB
configuration to hardware. A default configuration and a new initialization
flow are therefore required for software based DCB. If the LLDP agent is
turned off in the BIOS, or after setting the private flag
("disable-fw-lldp on"), the driver initializes DCB functionality with
default values: one traffic class with 100% of the bandwidth allocated.
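
For illustration only (not part of the change itself): the feature is driven
from user space through the existing ethtool private flag. A minimal usage
sketch, assuming a hypothetical interface name "eth0":

    # Show current private flag state, including disable-fw-lldp
    ethtool --show-priv-flags eth0
    # Stop the FW LLDP agent; the driver falls back to software based DCB
    # with the default single-TC configuration described above
    ethtool --set-priv-flags eth0 disable-fw-lldp on
    # Re-enable the FW LLDP agent and return DCB control to firmware
    ethtool --set-priv-flags eth0 disable-fw-lldp off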
Signed-off-by: Aleksandr Loktionov Signed-off-by: Arkadiusz Kubalewski Tested-by: Tony Brelinski Signed-off-by: Tony Nguyen --- drivers/net/ethernet/intel/i40e/i40e.h | 15 +- .../net/ethernet/intel/i40e/i40e_ethtool.c | 20 +- drivers/net/ethernet/intel/i40e/i40e_main.c | 520 ++++++++++++++++-- 3 files changed, 497 insertions(+), 58 deletions(-) diff --git a/drivers/net/ethernet/intel/i40e/i40e.h b/drivers/net/ethernet/intel/i40e/i40e.h index 118473dfdcbd..b2303f86ad1d 100644 --- a/drivers/net/ethernet/intel/i40e/i40e.h +++ b/drivers/net/ethernet/intel/i40e/i40e.h @@ -1,5 +1,5 @@ /* SPDX-License-Identifier: GPL-2.0 */ -/* Copyright(c) 2013 - 2018 Intel Corporation. */ +/* Copyright(c) 2013 - 2021 Intel Corporation. */ #ifndef _I40E_H_ #define _I40E_H_ @@ -289,6 +289,9 @@ struct i40e_cloud_filter { u8 tunnel_type; }; +#define I40E_DCB_PRIO_TYPE_STRICT 0 +#define I40E_DCB_PRIO_TYPE_ETS 1 +#define I40E_DCB_STRICT_PRIO_CREDITS 127 /* DCB per TC information data structure */ struct i40e_tc_info { u16 qoffset; /* Queue offset from base queue */ @@ -626,6 +629,8 @@ struct i40e_pf { u16 dcbx_cap; struct i40e_filter_control_settings filter_settings; + struct i40e_rx_pb_config pb_cfg; /* Current Rx packet buffer config */ + struct i40e_dcbx_config tmp_cfg; struct ptp_clock *ptp_clock; struct ptp_clock_info ptp_caps; @@ -1122,6 +1127,12 @@ bool i40e_is_vsi_in_vlan(struct i40e_vsi *vsi); int i40e_count_filters(struct i40e_vsi *vsi); struct i40e_mac_filter *i40e_find_mac(struct i40e_vsi *vsi, const u8 *macaddr); void i40e_vlan_stripping_enable(struct i40e_vsi *vsi); +static inline bool i40e_is_sw_dcb(struct i40e_pf *pf) +{ + return !!(pf->flags & I40E_FLAG_DISABLE_FW_LLDP); +} + +void i40e_set_lldp_forwarding(struct i40e_pf *pf, bool enable); #ifdef CONFIG_I40E_DCB void i40e_dcbnl_flush_apps(struct i40e_pf *pf, struct i40e_dcbx_config *old_cfg, @@ -1131,6 +1142,8 @@ void i40e_dcbnl_setup(struct i40e_vsi *vsi); bool i40e_dcb_need_reconfig(struct i40e_pf *pf, struct i40e_dcbx_config *old_cfg, struct i40e_dcbx_config *new_cfg); +int i40e_hw_dcb_config(struct i40e_pf *pf, struct i40e_dcbx_config *new_cfg); +int i40e_dcb_sw_default_config(struct i40e_pf *pf); #endif /* CONFIG_I40E_DCB */ void i40e_ptp_rx_hang(struct i40e_pf *pf); void i40e_ptp_tx_hang(struct i40e_pf *pf); diff --git a/drivers/net/ethernet/intel/i40e/i40e_ethtool.c b/drivers/net/ethernet/intel/i40e/i40e_ethtool.c index 26ba1f3eb2d8..204bd4116aa4 100644 --- a/drivers/net/ethernet/intel/i40e/i40e_ethtool.c +++ b/drivers/net/ethernet/intel/i40e/i40e_ethtool.c @@ -5033,23 +5033,13 @@ static int i40e_set_priv_flags(struct net_device *dev, u32 flags) if (changed_flags & I40E_FLAG_DISABLE_FW_LLDP) { if (new_flags & I40E_FLAG_DISABLE_FW_LLDP) { - struct i40e_dcbx_config *dcbcfg; - +#ifdef CONFIG_I40E_DCB + i40e_dcb_sw_default_config(pf); +#endif /* CONFIG_I40E_DCB */ + i40e_aq_cfg_lldp_mib_change_event(&pf->hw, false, NULL); i40e_aq_stop_lldp(&pf->hw, true, false, NULL); - i40e_aq_set_dcb_parameters(&pf->hw, true, NULL); - /* reset local_dcbx_config to default */ - dcbcfg = &pf->hw.local_dcbx_config; - dcbcfg->etscfg.willing = 1; - dcbcfg->etscfg.maxtcs = 0; - dcbcfg->etscfg.tcbwtable[0] = 100; - for (i = 1; i < I40E_MAX_TRAFFIC_CLASS; i++) - dcbcfg->etscfg.tcbwtable[i] = 0; - for (i = 0; i < I40E_MAX_USER_PRIORITY; i++) - dcbcfg->etscfg.prioritytable[i] = 0; - dcbcfg->etscfg.tsatable[0] = I40E_IEEE_TSA_ETS; - dcbcfg->pfc.willing = 1; - dcbcfg->pfc.pfccap = I40E_MAX_TRAFFIC_CLASS; } else { + i40e_set_lldp_forwarding(pf, false); status = 
i40e_aq_start_lldp(&pf->hw, false, NULL); if (status) { adq_err = pf->hw.aq.asq_last_status; diff --git a/drivers/net/ethernet/intel/i40e/i40e_main.c b/drivers/net/ethernet/intel/i40e/i40e_main.c index 3e241681ddd5..cfb3b74ee58c 100644 --- a/drivers/net/ethernet/intel/i40e/i40e_main.c +++ b/drivers/net/ethernet/intel/i40e/i40e_main.c @@ -1,5 +1,5 @@ // SPDX-License-Identifier: GPL-2.0 -/* Copyright(c) 2013 - 2018 Intel Corporation. */ +/* Copyright(c) 2013 - 2021 Intel Corporation. */ #include #include @@ -35,7 +35,7 @@ static int i40e_setup_pf_switch(struct i40e_pf *pf, bool reinit); static int i40e_setup_misc_vector(struct i40e_pf *pf); static void i40e_determine_queue_usage(struct i40e_pf *pf); static int i40e_setup_pf_filter_control(struct i40e_pf *pf); -static void i40e_prep_for_reset(struct i40e_pf *pf, bool lock_acquired); +static void i40e_prep_for_reset(struct i40e_pf *pf); static void i40e_reset_and_rebuild(struct i40e_pf *pf, bool reinit, bool lock_acquired); static int i40e_reset(struct i40e_pf *pf); @@ -5291,6 +5291,7 @@ static int i40e_vsi_configure_bw_alloc(struct i40e_vsi *vsi, u8 enabled_tc, vsi->seid); return ret; } + memset(&bw_data, 0, sizeof(bw_data)); bw_data.tc_valid_bits = enabled_tc; for (i = 0; i < I40E_MAX_TRAFFIC_CLASS; i++) bw_data.tc_bw_credits[i] = bw_share[i]; @@ -5943,6 +5944,7 @@ static int i40e_channel_config_bw(struct i40e_vsi *vsi, struct i40e_channel *ch, i40e_status ret; int i; + memset(&bw_data, 0, sizeof(bw_data)); bw_data.tc_valid_bits = ch->enabled_tc; for (i = 0; i < I40E_MAX_TRAFFIC_CLASS; i++) bw_data.tc_bw_credits[i] = bw_share[i]; @@ -6398,6 +6400,9 @@ static void i40e_dcb_reconfigure(struct i40e_pf *pf) /* Enable the TCs available on PF to all VEBs */ tc_map = i40e_pf_get_tc_map(pf); + if (tc_map == I40E_DEFAULT_TRAFFIC_CLASS) + return; + for (v = 0; v < I40E_MAX_VEB; v++) { if (!pf->veb[v]) continue; @@ -6464,6 +6469,316 @@ static int i40e_resume_port_tx(struct i40e_pf *pf) return ret; } +/** + * i40e_suspend_port_tx - Suspend port Tx + * @pf: PF struct + * + * Suspend a port's Tx and issue a PF reset in case of failure. + **/ +static int i40e_suspend_port_tx(struct i40e_pf *pf) +{ + struct i40e_hw *hw = &pf->hw; + int ret; + + ret = i40e_aq_suspend_port_tx(hw, pf->mac_seid, NULL); + if (ret) { + dev_info(&pf->pdev->dev, + "Suspend Port Tx failed, err %s aq_err %s\n", + i40e_stat_str(&pf->hw, ret), + i40e_aq_str(&pf->hw, pf->hw.aq.asq_last_status)); + /* Schedule PF reset to recover */ + set_bit(__I40E_PF_RESET_REQUESTED, pf->state); + i40e_service_event_schedule(pf); + } + + return ret; +} + +/** + * i40e_hw_set_dcb_config - Program new DCBX settings into HW + * @pf: PF being configured + * @new_cfg: New DCBX configuration + * + * Program DCB settings into HW and reconfigure VEB/VSIs on + * given PF. Uses "Set LLDP MIB" AQC to program the hardware. 
+ **/ +static int i40e_hw_set_dcb_config(struct i40e_pf *pf, + struct i40e_dcbx_config *new_cfg) +{ + struct i40e_dcbx_config *old_cfg = &pf->hw.local_dcbx_config; + int ret; + + /* Check if need reconfiguration */ + if (!memcmp(&new_cfg, &old_cfg, sizeof(new_cfg))) { + dev_dbg(&pf->pdev->dev, "No Change in DCB Config required.\n"); + return 0; + } + + /* Config change disable all VSIs */ + i40e_pf_quiesce_all_vsi(pf); + + /* Copy the new config to the current config */ + *old_cfg = *new_cfg; + old_cfg->etsrec = old_cfg->etscfg; + ret = i40e_set_dcb_config(&pf->hw); + if (ret) { + dev_info(&pf->pdev->dev, + "Set DCB Config failed, err %s aq_err %s\n", + i40e_stat_str(&pf->hw, ret), + i40e_aq_str(&pf->hw, pf->hw.aq.asq_last_status)); + goto out; + } + + /* Changes in configuration update VEB/VSI */ + i40e_dcb_reconfigure(pf); +out: + /* In case of reset do not try to resume anything */ + if (!test_bit(__I40E_RESET_RECOVERY_PENDING, pf->state)) { + /* Re-start the VSIs if disabled */ + ret = i40e_resume_port_tx(pf); + /* In case of error no point in resuming VSIs */ + if (ret) + goto err; + i40e_pf_unquiesce_all_vsi(pf); + } +err: + return ret; +} + +/** + * i40e_hw_dcb_config - Program new DCBX settings into HW + * @pf: PF being configured + * @new_cfg: New DCBX configuration + * + * Program DCB settings into HW and reconfigure VEB/VSIs on + * given PF + **/ +int i40e_hw_dcb_config(struct i40e_pf *pf, struct i40e_dcbx_config *new_cfg) +{ + struct i40e_aqc_configure_switching_comp_ets_data ets_data; + u8 prio_type[I40E_MAX_TRAFFIC_CLASS] = {0}; + u32 mfs_tc[I40E_MAX_TRAFFIC_CLASS]; + struct i40e_dcbx_config *old_cfg; + u8 mode[I40E_MAX_TRAFFIC_CLASS]; + struct i40e_rx_pb_config pb_cfg; + struct i40e_hw *hw = &pf->hw; + u8 num_ports = hw->num_ports; + bool need_reconfig; + int ret = -EINVAL; + u8 lltc_map = 0; + u8 tc_map = 0; + u8 new_numtc; + u8 i; + + dev_dbg(&pf->pdev->dev, "Configuring DCB registers directly\n"); + /* Un-pack information to Program ETS HW via shared API + * numtc, tcmap + * LLTC map + * ETS/NON-ETS arbiter mode + * max exponent (credit refills) + * Total number of ports + * PFC priority bit-map + * Priority Table + * BW % per TC + * Arbiter mode between UPs sharing same TC + * TSA table (ETS or non-ETS) + * EEE enabled or not + * MFS TC table + */ + + new_numtc = i40e_dcb_get_num_tc(new_cfg); + + memset(&ets_data, 0, sizeof(ets_data)); + for (i = 0; i < new_numtc; i++) { + tc_map |= BIT(i); + switch (new_cfg->etscfg.tsatable[i]) { + case I40E_IEEE_TSA_ETS: + prio_type[i] = I40E_DCB_PRIO_TYPE_ETS; + ets_data.tc_bw_share_credits[i] = + new_cfg->etscfg.tcbwtable[i]; + break; + case I40E_IEEE_TSA_STRICT: + prio_type[i] = I40E_DCB_PRIO_TYPE_STRICT; + lltc_map |= BIT(i); + ets_data.tc_bw_share_credits[i] = + I40E_DCB_STRICT_PRIO_CREDITS; + break; + default: + /* Invalid TSA type */ + need_reconfig = false; + goto out; + } + } + + old_cfg = &hw->local_dcbx_config; + /* Check if need reconfiguration */ + need_reconfig = i40e_dcb_need_reconfig(pf, old_cfg, new_cfg); + + /* If needed, enable/disable frame tagging, disable all VSIs + * and suspend port tx + */ + if (need_reconfig) { + /* Enable DCB tagging only when more than one TC */ + if (new_numtc > 1) + pf->flags |= I40E_FLAG_DCB_ENABLED; + else + pf->flags &= ~I40E_FLAG_DCB_ENABLED; + + set_bit(__I40E_PORT_SUSPENDED, pf->state); + /* Reconfiguration needed quiesce all VSIs */ + i40e_pf_quiesce_all_vsi(pf); + ret = i40e_suspend_port_tx(pf); + if (ret) + goto err; + } + + /* Configure Port ETS Tx Scheduler */ + 
ets_data.tc_valid_bits = tc_map; + ets_data.tc_strict_priority_flags = lltc_map; + ret = i40e_aq_config_switch_comp_ets + (hw, pf->mac_seid, &ets_data, + i40e_aqc_opc_modify_switching_comp_ets, NULL); + if (ret) { + dev_info(&pf->pdev->dev, + "Modify Port ETS failed, err %s aq_err %s\n", + i40e_stat_str(&pf->hw, ret), + i40e_aq_str(&pf->hw, pf->hw.aq.asq_last_status)); + goto out; + } + + /* Configure Rx ETS HW */ + memset(&mode, I40E_DCB_ARB_MODE_ROUND_ROBIN, sizeof(mode)); + i40e_dcb_hw_set_num_tc(hw, new_numtc); + i40e_dcb_hw_rx_fifo_config(hw, I40E_DCB_ARB_MODE_ROUND_ROBIN, + I40E_DCB_ARB_MODE_STRICT_PRIORITY, + I40E_DCB_DEFAULT_MAX_EXPONENT, + lltc_map); + i40e_dcb_hw_rx_cmd_monitor_config(hw, new_numtc, num_ports); + i40e_dcb_hw_rx_ets_bw_config(hw, new_cfg->etscfg.tcbwtable, mode, + prio_type); + i40e_dcb_hw_pfc_config(hw, new_cfg->pfc.pfcenable, + new_cfg->etscfg.prioritytable); + i40e_dcb_hw_rx_up2tc_config(hw, new_cfg->etscfg.prioritytable); + + /* Configure Rx Packet Buffers in HW */ + for (i = 0; i < I40E_MAX_TRAFFIC_CLASS; i++) { + mfs_tc[i] = pf->vsi[pf->lan_vsi]->netdev->mtu; + mfs_tc[i] += I40E_PACKET_HDR_PAD; + } + + i40e_dcb_hw_calculate_pool_sizes(hw, num_ports, + false, new_cfg->pfc.pfcenable, + mfs_tc, &pb_cfg); + i40e_dcb_hw_rx_pb_config(hw, &pf->pb_cfg, &pb_cfg); + + /* Update the local Rx Packet buffer config */ + pf->pb_cfg = pb_cfg; + + /* Inform the FW about changes to DCB configuration */ + ret = i40e_aq_dcb_updated(&pf->hw, NULL); + if (ret) { + dev_info(&pf->pdev->dev, + "DCB Updated failed, err %s aq_err %s\n", + i40e_stat_str(&pf->hw, ret), + i40e_aq_str(&pf->hw, pf->hw.aq.asq_last_status)); + goto out; + } + + /* Update the port DCBx configuration */ + *old_cfg = *new_cfg; + + /* Changes in configuration update VEB/VSI */ + i40e_dcb_reconfigure(pf); +out: + /* Re-start the VSIs if disabled */ + if (need_reconfig) { + ret = i40e_resume_port_tx(pf); + + clear_bit(__I40E_PORT_SUSPENDED, pf->state); + /* In case of error no point in resuming VSIs */ + if (ret) + goto err; + + /* Wait for the PF's queues to be disabled */ + ret = i40e_pf_wait_queues_disabled(pf); + if (ret) { + /* Schedule PF reset to recover */ + set_bit(__I40E_PF_RESET_REQUESTED, pf->state); + i40e_service_event_schedule(pf); + goto err; + } else { + i40e_pf_unquiesce_all_vsi(pf); + set_bit(__I40E_CLIENT_SERVICE_REQUESTED, pf->state); + set_bit(__I40E_CLIENT_L2_CHANGE, pf->state); + } + /* registers are set, lets apply */ + if (pf->hw_features & I40E_HW_USE_SET_LLDP_MIB) + ret = i40e_hw_set_dcb_config(pf, new_cfg); + } + +err: + return ret; +} + +/** + * i40e_dcb_sw_default_config - Set default DCB configuration when DCB in SW + * @pf: PF being queried + * + * Set default DCB configuration in case DCB is to be done in SW. 
+ **/ +int i40e_dcb_sw_default_config(struct i40e_pf *pf) +{ + struct i40e_dcbx_config *dcb_cfg = &pf->hw.local_dcbx_config; + struct i40e_aqc_configure_switching_comp_ets_data ets_data; + struct i40e_hw *hw = &pf->hw; + int err; + + if (pf->hw_features & I40E_HW_USE_SET_LLDP_MIB) { + /* Update the local cached instance with TC0 ETS */ + memset(&pf->tmp_cfg, 0, sizeof(struct i40e_dcbx_config)); + pf->tmp_cfg.etscfg.willing = I40E_IEEE_DEFAULT_ETS_WILLING; + pf->tmp_cfg.etscfg.maxtcs = 0; + pf->tmp_cfg.etscfg.tcbwtable[0] = I40E_IEEE_DEFAULT_ETS_TCBW; + pf->tmp_cfg.etscfg.tsatable[0] = I40E_IEEE_TSA_ETS; + pf->tmp_cfg.pfc.willing = I40E_IEEE_DEFAULT_PFC_WILLING; + pf->tmp_cfg.pfc.pfccap = I40E_MAX_TRAFFIC_CLASS; + /* FW needs one App to configure HW */ + pf->tmp_cfg.numapps = I40E_IEEE_DEFAULT_NUM_APPS; + pf->tmp_cfg.app[0].selector = I40E_APP_SEL_ETHTYPE; + pf->tmp_cfg.app[0].priority = I40E_IEEE_DEFAULT_APP_PRIO; + pf->tmp_cfg.app[0].protocolid = I40E_APP_PROTOID_FCOE; + + return i40e_hw_set_dcb_config(pf, &pf->tmp_cfg); + } + + memset(&ets_data, 0, sizeof(ets_data)); + ets_data.tc_valid_bits = I40E_DEFAULT_TRAFFIC_CLASS; /* TC0 only */ + ets_data.tc_strict_priority_flags = 0; /* ETS */ + ets_data.tc_bw_share_credits[0] = I40E_IEEE_DEFAULT_ETS_TCBW; /* 100% to TC0 */ + + /* Enable ETS on the Physical port */ + err = i40e_aq_config_switch_comp_ets + (hw, pf->mac_seid, &ets_data, + i40e_aqc_opc_enable_switching_comp_ets, NULL); + if (err) { + dev_info(&pf->pdev->dev, + "Enable Port ETS failed, err %s aq_err %s\n", + i40e_stat_str(&pf->hw, err), + i40e_aq_str(&pf->hw, pf->hw.aq.asq_last_status)); + err = -ENOENT; + goto out; + } + + /* Update the local cached instance with TC0 ETS */ + dcb_cfg->etscfg.willing = I40E_IEEE_DEFAULT_ETS_WILLING; + dcb_cfg->etscfg.cbs = 0; + dcb_cfg->etscfg.maxtcs = I40E_MAX_TRAFFIC_CLASS; + dcb_cfg->etscfg.tcbwtable[0] = I40E_IEEE_DEFAULT_ETS_TCBW; + +out: + return err; +} + /** * i40e_init_pf_dcb - Initialize DCB configuration * @pf: PF being configured @@ -6474,18 +6789,31 @@ static int i40e_resume_port_tx(struct i40e_pf *pf) static int i40e_init_pf_dcb(struct i40e_pf *pf) { struct i40e_hw *hw = &pf->hw; - int err = 0; + int err; /* Do not enable DCB for SW1 and SW2 images even if the FW is capable * Also do not enable DCBx if FW LLDP agent is disabled */ - if ((pf->hw_features & I40E_HW_NO_DCB_SUPPORT) || - (pf->flags & I40E_FLAG_DISABLE_FW_LLDP)) { - dev_info(&pf->pdev->dev, "DCB is not supported or FW LLDP is disabled\n"); + if (pf->hw_features & I40E_HW_NO_DCB_SUPPORT) { + dev_info(&pf->pdev->dev, "DCB is not supported.\n"); err = I40E_NOT_SUPPORTED; goto out; } - + if (pf->flags & I40E_FLAG_DISABLE_FW_LLDP) { + dev_info(&pf->pdev->dev, "FW LLDP is disabled, attempting SW DCB\n"); + err = i40e_dcb_sw_default_config(pf); + if (err) { + dev_info(&pf->pdev->dev, "Could not initialize SW DCB\n"); + goto out; + } + dev_info(&pf->pdev->dev, "SW DCB initialization succeeded.\n"); + pf->dcbx_cap = DCB_CAP_DCBX_HOST | + DCB_CAP_DCBX_VER_IEEE; + /* at init capable but disabled */ + pf->flags |= I40E_FLAG_DCB_CAPABLE; + pf->flags &= ~I40E_FLAG_DCB_ENABLED; + goto out; + } err = i40e_init_dcb(hw, true); if (!err) { /* Device/Function is not DCBX capable */ @@ -6524,6 +6852,40 @@ static int i40e_init_pf_dcb(struct i40e_pf *pf) } #endif /* CONFIG_I40E_DCB */ +/** + * i40e_set_lldp_forwarding - set forwarding of lldp frames + * @pf: PF being configured + * @enable: if forwarding to OS shall be enabled + * + * Toggle forwarding of lldp frames behavior, + * When passing 
DCB control from firmware to software + * lldp frames must be forwarded to the software based + * lldp agent. + */ +void i40e_set_lldp_forwarding(struct i40e_pf *pf, bool enable) +{ + if (pf->lan_vsi == I40E_NO_VSI) + return; + + if (!pf->vsi[pf->lan_vsi]) + return; + + /* No need to check the outcome, commands may fail + * if desired value is already set + */ + i40e_aq_add_rem_control_packet_filter(&pf->hw, NULL, ETH_P_LLDP, + I40E_AQC_ADD_CONTROL_PACKET_FLAGS_TX | + I40E_AQC_ADD_CONTROL_PACKET_FLAGS_IGNORE_MAC, + pf->vsi[pf->lan_vsi]->seid, 0, + enable, NULL, NULL); + + i40e_aq_add_rem_control_packet_filter(&pf->hw, NULL, ETH_P_LLDP, + I40E_AQC_ADD_CONTROL_PACKET_FLAGS_RX | + I40E_AQC_ADD_CONTROL_PACKET_FLAGS_IGNORE_MAC, + pf->vsi[pf->lan_vsi]->seid, 0, + enable, NULL, NULL); +} + /** * i40e_print_link_message - print link up or down * @vsi: the VSI for which link needs a message @@ -8287,7 +8649,6 @@ int i40e_open(struct net_device *netdev) TCP_FLAG_FIN | TCP_FLAG_CWR) >> 16); wr32(&pf->hw, I40E_GLLAN_TSOMSK_L, be32_to_cpu(TCP_FLAG_CWR) >> 16); - udp_tunnel_get_rx_info(netdev); return 0; @@ -8543,7 +8904,7 @@ void i40e_do_reset(struct i40e_pf *pf, u32 reset_flags, bool lock_acquired) * * Resets PF and reinitializes PFs VSI. */ - i40e_prep_for_reset(pf, lock_acquired); + i40e_prep_for_reset(pf); i40e_reset_and_rebuild(pf, true, lock_acquired); } else if (reset_flags & BIT_ULL(__I40E_REINIT_REQUESTED)) { @@ -8653,6 +9014,14 @@ static int i40e_handle_lldp_event(struct i40e_pf *pf, int ret = 0; u8 type; + /* X710-T*L 2.5G and 5G speeds don't support DCB */ + if (I40E_IS_X710TL_DEVICE(hw->device_id) && + (hw->phy.link_info.link_speed & + ~(I40E_LINK_SPEED_2_5GB | I40E_LINK_SPEED_5GB)) && + !(pf->flags & I40E_FLAG_DCB_CAPABLE)) + /* let firmware decide if the DCB should be disabled */ + pf->flags |= I40E_FLAG_DCB_CAPABLE; + /* Not DCB capable or capability disabled */ if (!(pf->flags & I40E_FLAG_DCB_CAPABLE)) return ret; @@ -8684,10 +9053,20 @@ static int i40e_handle_lldp_event(struct i40e_pf *pf, /* Get updated DCBX data from firmware */ ret = i40e_get_dcb_config(&pf->hw); if (ret) { - dev_info(&pf->pdev->dev, - "Failed querying DCB configuration data from firmware, err %s aq_err %s\n", - i40e_stat_str(&pf->hw, ret), - i40e_aq_str(&pf->hw, pf->hw.aq.asq_last_status)); + /* X710-T*L 2.5G and 5G speeds don't support DCB */ + if (I40E_IS_X710TL_DEVICE(hw->device_id) && + (hw->phy.link_info.link_speed & + (I40E_LINK_SPEED_2_5GB | I40E_LINK_SPEED_5GB))) { + dev_warn(&pf->pdev->dev, + "DCB is not supported for X710-T*L 2.5/5G speeds\n"); + pf->flags &= ~I40E_FLAG_DCB_CAPABLE; + } else { + dev_info(&pf->pdev->dev, + "Failed querying DCB configuration data from firmware, err %s aq_err %s\n", + i40e_stat_str(&pf->hw, ret), + i40e_aq_str(&pf->hw, + pf->hw.aq.asq_last_status)); + } goto exit; } @@ -9109,6 +9488,9 @@ static void i40e_link_event(struct i40e_pf *pf) u8 new_link_speed, old_link_speed; i40e_status status; bool new_link, old_link; +#ifdef CONFIG_I40E_DCB + int err; +#endif /* CONFIG_I40E_DCB */ /* set this to force the get_link_status call to refresh state */ pf->hw.phy.get_link_info = true; @@ -9152,6 +9534,31 @@ static void i40e_link_event(struct i40e_pf *pf) if (pf->flags & I40E_FLAG_PTP) i40e_ptp_set_increment(pf); +#ifdef CONFIG_I40E_DCB + if (new_link == old_link) + return; + /* Not SW DCB so firmware will take care of default settings */ + if (pf->dcbx_cap & DCB_CAP_DCBX_LLD_MANAGED) + return; + + /* We cover here only link down, as after link up in case of SW DCB + * SW LLDP agent 
will take care of setting it up + */ + if (!new_link) { + dev_dbg(&pf->pdev->dev, "Reconfig DCB to single TC as result of Link Down\n"); + memset(&pf->tmp_cfg, 0, sizeof(pf->tmp_cfg)); + err = i40e_dcb_sw_default_config(pf); + if (err) { + pf->flags &= ~(I40E_FLAG_DCB_CAPABLE | + I40E_FLAG_DCB_ENABLED); + } else { + pf->dcbx_cap = DCB_CAP_DCBX_HOST | + DCB_CAP_DCBX_VER_IEEE; + pf->flags |= I40E_FLAG_DCB_CAPABLE; + pf->flags &= ~I40E_FLAG_DCB_ENABLED; + } + } +#endif /* CONFIG_I40E_DCB */ } /** @@ -9228,7 +9635,7 @@ static void i40e_reset_subtask(struct i40e_pf *pf) * precedence before starting a new reset sequence. */ if (test_bit(__I40E_RESET_INTR_RECEIVED, pf->state)) { - i40e_prep_for_reset(pf, false); + i40e_prep_for_reset(pf); i40e_reset(pf); i40e_rebuild(pf, false, false); } @@ -9360,7 +9767,9 @@ static void i40e_clean_adminq_subtask(struct i40e_pf *pf) switch (opcode) { case i40e_aqc_opc_get_link_status: + rtnl_lock(); i40e_handle_link_event(pf, &event); + rtnl_unlock(); break; case i40e_aqc_opc_send_msg_to_pf: ret = i40e_vc_process_vf_msg(pf, @@ -9860,12 +10269,10 @@ static int i40e_rebuild_channels(struct i40e_vsi *vsi) /** * i40e_prep_for_reset - prep for the core to reset * @pf: board private structure - * @lock_acquired: indicates whether or not the lock has been acquired - * before this function was called. * * Close up the VFs and other things in prep for PF Reset. **/ -static void i40e_prep_for_reset(struct i40e_pf *pf, bool lock_acquired) +static void i40e_prep_for_reset(struct i40e_pf *pf) { struct i40e_hw *hw = &pf->hw; i40e_status ret = 0; @@ -9880,12 +10287,7 @@ static void i40e_prep_for_reset(struct i40e_pf *pf, bool lock_acquired) dev_dbg(&pf->pdev->dev, "Tearing down internal switch for reset\n"); /* quiesce the VSIs and their queues that are not already DOWN */ - /* pf_quiesce_all_vsi modifies netdev structures -rtnl_lock needed */ - if (!lock_acquired) - rtnl_lock(); i40e_pf_quiesce_all_vsi(pf); - if (!lock_acquired) - rtnl_unlock(); for (v = 0; v < pf->num_alloc_vsi; v++) { if (pf->vsi[v]) @@ -10097,24 +10499,41 @@ static void i40e_rebuild(struct i40e_pf *pf, bool reinit, bool lock_acquired) goto end_core_reset; } - /* Enable FW to write a default DCB config on link-up */ - i40e_aq_set_dcb_parameters(hw, true, NULL); - -#ifdef CONFIG_I40E_DCB - ret = i40e_init_pf_dcb(pf); - if (ret) { - dev_info(&pf->pdev->dev, "DCB init failed %d, disabled\n", ret); - pf->flags &= ~I40E_FLAG_DCB_CAPABLE; - /* Continue without DCB enabled */ - } -#endif /* CONFIG_I40E_DCB */ - /* do basic switch setup */ if (!lock_acquired) rtnl_lock(); ret = i40e_setup_pf_switch(pf, reinit); if (ret) goto end_unlock; +#ifdef CONFIG_I40E_DCB + /* Enable FW to write a default DCB config on link-up + * unless I40E_FLAG_TC_MQPRIO was enabled or DCB + * is not supported with new link speed + */ + if (pf->flags & I40E_FLAG_TC_MQPRIO) { + i40e_aq_set_dcb_parameters(hw, false, NULL); + } else { + if (I40E_IS_X710TL_DEVICE(hw->device_id) && + (hw->phy.link_info.link_speed & + (I40E_LINK_SPEED_2_5GB | I40E_LINK_SPEED_5GB))) { + i40e_aq_set_dcb_parameters(hw, false, NULL); + dev_warn(&pf->pdev->dev, + "DCB is not supported for X710-T*L 2.5/5G speeds\n"); + pf->flags &= ~I40E_FLAG_DCB_CAPABLE; + } else { + i40e_aq_set_dcb_parameters(hw, true, NULL); + ret = i40e_init_pf_dcb(pf); + if (ret) { + dev_info(&pf->pdev->dev, "DCB init failed %d, disabled\n", + ret); + pf->flags &= ~I40E_FLAG_DCB_CAPABLE; + /* Continue without DCB enabled */ + } + } + } + +#endif /* CONFIG_I40E_DCB */ + /* The driver only wants link 
up/down and module qualification * reports from firmware. Note the negative logic. */ @@ -10251,6 +10670,10 @@ static void i40e_rebuild(struct i40e_pf *pf, bool reinit, bool lock_acquired) */ i40e_add_filter_to_drop_tx_flow_control_frames(&pf->hw, pf->main_vsi_seid); +#ifdef CONFIG_I40E_DCB + if (pf->flags & I40E_FLAG_DISABLE_FW_LLDP) + i40e_set_lldp_forwarding(pf, true); +#endif /* CONFIG_I40E_DCB */ /* restart the VSIs that were rebuilt and running before the reset */ i40e_pf_unquiesce_all_vsi(pf); @@ -10317,7 +10740,7 @@ static void i40e_reset_and_rebuild(struct i40e_pf *pf, bool reinit, **/ static void i40e_handle_reset_warning(struct i40e_pf *pf, bool lock_acquired) { - i40e_prep_for_reset(pf, lock_acquired); + i40e_prep_for_reset(pf); i40e_reset_and_rebuild(pf, false, lock_acquired); } @@ -11651,7 +12074,7 @@ int i40e_reconfig_rss_queues(struct i40e_pf *pf, int queue_count) u16 qcount; vsi->req_queue_pairs = queue_count; - i40e_prep_for_reset(pf, true); + i40e_prep_for_reset(pf); pf->alloc_rss_size = new_rss_size; @@ -12472,7 +12895,7 @@ static int i40e_xdp_setup(struct i40e_vsi *vsi, struct bpf_prog *prog, need_reset = (i40e_enabled_xdp_vsi(vsi) != !!prog); if (need_reset) - i40e_prep_for_reset(pf, true); + i40e_prep_for_reset(pf); old_prog = xchg(&vsi->xdp_prog, prog); @@ -14707,6 +15130,10 @@ static int i40e_init_recovery_mode(struct i40e_pf *pf, struct i40e_hw *hw) static int i40e_probe(struct pci_dev *pdev, const struct pci_device_id *ent) { struct i40e_aq_get_phy_abilities_resp abilities; +#ifdef CONFIG_I40E_DCB + enum i40e_get_fw_lldp_status_resp lldp_status; + i40e_status status; +#endif /* CONFIG_I40E_DCB */ struct i40e_pf *pf; struct i40e_hw *hw; static u16 pfs_found; @@ -14963,6 +15390,12 @@ static int i40e_probe(struct pci_dev *pdev, const struct pci_device_id *ent) pci_set_drvdata(pdev, pf); pci_save_state(pdev); +#ifdef CONFIG_I40E_DCB + status = i40e_get_fw_lldp_status(&pf->hw, &lldp_status); + (!status && + lldp_status == I40E_GET_FW_LLDP_STATUS_ENABLED) ? + (pf->flags &= ~I40E_FLAG_DISABLE_FW_LLDP) : + (pf->flags |= I40E_FLAG_DISABLE_FW_LLDP); dev_info(&pdev->dev, (pf->flags & I40E_FLAG_DISABLE_FW_LLDP) ? 
"FW LLDP is disabled\n" : @@ -14971,7 +15404,6 @@ static int i40e_probe(struct pci_dev *pdev, const struct pci_device_id *ent) /* Enable FW to write default DCB config on link-up */ i40e_aq_set_dcb_parameters(hw, true, NULL); -#ifdef CONFIG_I40E_DCB err = i40e_init_pf_dcb(pf); if (err) { dev_info(&pdev->dev, "DCB init failed %d, disabled\n", err); @@ -15264,6 +15696,10 @@ static int i40e_probe(struct pci_dev *pdev, const struct pci_device_id *ent) */ i40e_add_filter_to_drop_tx_flow_control_frames(&pf->hw, pf->main_vsi_seid); +#ifdef CONFIG_I40E_DCB + if (pf->flags & I40E_FLAG_DISABLE_FW_LLDP) + i40e_set_lldp_forwarding(pf, true); +#endif /* CONFIG_I40E_DCB */ if ((pf->hw.device_id == I40E_DEV_ID_10G_BASE_T) || (pf->hw.device_id == I40E_DEV_ID_10G_BASE_T4)) @@ -15466,7 +15902,7 @@ static pci_ers_result_t i40e_pci_error_detected(struct pci_dev *pdev, /* shutdown all operations */ if (!test_bit(__I40E_SUSPENDED, pf->state)) - i40e_prep_for_reset(pf, false); + i40e_prep_for_reset(pf); /* Request a slot reset */ return PCI_ERS_RESULT_NEED_RESET; @@ -15516,7 +15952,7 @@ static void i40e_pci_error_reset_prepare(struct pci_dev *pdev) { struct i40e_pf *pf = pci_get_drvdata(pdev); - i40e_prep_for_reset(pf, false); + i40e_prep_for_reset(pf); } /** @@ -15620,7 +16056,7 @@ static void i40e_shutdown(struct pci_dev *pdev) if (pf->wol_en && (pf->hw_features & I40E_HW_WOL_MC_MAGIC_PKT_WAKE)) i40e_enable_mc_magic_wake(pf); - i40e_prep_for_reset(pf, false); + i40e_prep_for_reset(pf); wr32(hw, I40E_PFPM_APM, (pf->wol_en ? I40E_PFPM_APM_APME_MASK : 0)); @@ -15679,7 +16115,7 @@ static int __maybe_unused i40e_suspend(struct device *dev) */ rtnl_lock(); - i40e_prep_for_reset(pf, true); + i40e_prep_for_reset(pf); wr32(hw, I40E_PFPM_APM, (pf->wol_en ? I40E_PFPM_APM_APME_MASK : 0)); wr32(hw, I40E_PFPM_WUFC, (pf->wol_en ? 
I40E_PFPM_WUFC_MAG_MASK : 0));

From patchwork Wed Feb 10 23:24:33 2021
X-Patchwork-Submitter: Tony Nguyen
X-Patchwork-Id: 380673
From: Tony Nguyen
To: davem@davemloft.net, kuba@kernel.org
Cc: Aleksandr Loktionov, netdev@vger.kernel.org, sassmann@redhat.com,
    anthony.l.nguyen@intel.com, Tony Brelinski
Subject: [PATCH net-next 4/7] i40e: Add EEE status getting & setting implementation
Date: Wed, 10 Feb 2021 15:24:33 -0800
Message-Id: <20210210232436.4084373-5-anthony.l.nguyen@intel.com>
In-Reply-To: <20210210232436.4084373-1-anthony.l.nguyen@intel.com>
References: <20210210232436.4084373-1-anthony.l.nguyen@intel.com>
X-Mailing-List: netdev@vger.kernel.org

From: Aleksandr Loktionov

Implement Energy Efficient Ethernet (EEE) status getting and setting.
The i40e_get_eee() function requests PHY EEE capabilities from firmware
and reports the EEE status. The i40e_set_eee() function requests PHY EEE
capabilities from firmware and sets the PHY EEE advertisement either to
the full set of abilities or to 0, depending on whether EEE is to be
enabled or disabled.
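
As a usage sketch only, these callbacks are exercised through the standard
ethtool EEE commands; "eth0" below is a placeholder interface name:

    # Query EEE status (exercises the driver's get_eee callback, i40e_get_eee)
    ethtool --show-eee eth0
    # Enable EEE advertising (exercises the set_eee callback, i40e_set_eee)
    ethtool --set-eee eth0 eee on
    # Disable EEE advertising
    ethtool --set-eee eth0 eee off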
Signed-off-by: Aleksandr Loktionov Tested-by: Tony Brelinski Signed-off-by: Tony Nguyen --- .../net/ethernet/intel/i40e/i40e_ethtool.c | 123 +++++++++++++++++- .../net/ethernet/intel/i40e/i40e_register.h | 2 + 2 files changed, 123 insertions(+), 2 deletions(-) diff --git a/drivers/net/ethernet/intel/i40e/i40e_ethtool.c b/drivers/net/ethernet/intel/i40e/i40e_ethtool.c index 204bd4116aa4..4e57d3f38ce7 100644 --- a/drivers/net/ethernet/intel/i40e/i40e_ethtool.c +++ b/drivers/net/ethernet/intel/i40e/i40e_ethtool.c @@ -5242,12 +5242,131 @@ static int i40e_get_module_eeprom(struct net_device *netdev, static int i40e_get_eee(struct net_device *netdev, struct ethtool_eee *edata) { - return -EOPNOTSUPP; + struct i40e_netdev_priv *np = netdev_priv(netdev); + struct i40e_aq_get_phy_abilities_resp phy_cfg; + enum i40e_status_code status = 0; + struct i40e_vsi *vsi = np->vsi; + struct i40e_pf *pf = vsi->back; + struct i40e_hw *hw = &pf->hw; + + /* Get initial PHY capabilities */ + status = i40e_aq_get_phy_capabilities(hw, false, true, &phy_cfg, NULL); + if (status) + return -EAGAIN; + + /* Check whether NIC configuration is compatible with Energy Efficient + * Ethernet (EEE) mode. + */ + if (phy_cfg.eee_capability == 0) + return -EOPNOTSUPP; + + edata->supported = SUPPORTED_Autoneg; + edata->lp_advertised = edata->supported; + + /* Get current configuration */ + status = i40e_aq_get_phy_capabilities(hw, false, false, &phy_cfg, NULL); + if (status) + return -EAGAIN; + + edata->advertised = phy_cfg.eee_capability ? SUPPORTED_Autoneg : 0U; + edata->eee_enabled = !!edata->advertised; + edata->tx_lpi_enabled = pf->stats.tx_lpi_status; + + edata->eee_active = pf->stats.tx_lpi_status && pf->stats.rx_lpi_status; + + return 0; +} + +static int i40e_is_eee_param_supported(struct net_device *netdev, + struct ethtool_eee *edata) +{ + struct i40e_netdev_priv *np = netdev_priv(netdev); + struct i40e_vsi *vsi = np->vsi; + struct i40e_pf *pf = vsi->back; + struct i40e_ethtool_not_used { + u32 value; + const char *name; + } param[] = { + {edata->advertised & ~SUPPORTED_Autoneg, "advertise"}, + {edata->tx_lpi_timer, "tx-timer"}, + {edata->tx_lpi_enabled != pf->stats.tx_lpi_status, "tx-lpi"} + }; + int i; + + for (i = 0; i < ARRAY_SIZE(param); i++) { + if (param[i].value) { + netdev_info(netdev, + "EEE setting %s not supported\n", + param[i].name); + return -EOPNOTSUPP; + } + } + + return 0; } static int i40e_set_eee(struct net_device *netdev, struct ethtool_eee *edata) { - return -EOPNOTSUPP; + struct i40e_netdev_priv *np = netdev_priv(netdev); + struct i40e_aq_get_phy_abilities_resp abilities; + enum i40e_status_code status = I40E_SUCCESS; + struct i40e_aq_set_phy_config config; + struct i40e_vsi *vsi = np->vsi; + struct i40e_pf *pf = vsi->back; + struct i40e_hw *hw = &pf->hw; + __le16 eee_capability; + + /* Deny parameters we don't support */ + if (i40e_is_eee_param_supported(netdev, edata)) + return -EOPNOTSUPP; + + /* Get initial PHY capabilities */ + status = i40e_aq_get_phy_capabilities(hw, false, true, &abilities, + NULL); + if (status) + return -EAGAIN; + + /* Check whether NIC configuration is compatible with Energy Efficient + * Ethernet (EEE) mode. 
+ */ + if (abilities.eee_capability == 0) + return -EOPNOTSUPP; + + /* Cache initial EEE capability */ + eee_capability = abilities.eee_capability; + + /* Get current PHY configuration */ + status = i40e_aq_get_phy_capabilities(hw, false, false, &abilities, + NULL); + if (status) + return -EAGAIN; + + /* Cache current PHY configuration */ + config.phy_type = abilities.phy_type; + config.phy_type_ext = abilities.phy_type_ext; + config.link_speed = abilities.link_speed; + config.abilities = abilities.abilities | + I40E_AQ_PHY_ENABLE_ATOMIC_LINK; + config.eeer = abilities.eeer_val; + config.low_power_ctrl = abilities.d3_lpan; + config.fec_config = abilities.fec_cfg_curr_mod_ext_info & + I40E_AQ_PHY_FEC_CONFIG_MASK; + + /* Set desired EEE state */ + if (edata->eee_enabled) { + config.eee_capability = eee_capability; + config.eeer |= cpu_to_le32(I40E_PRTPM_EEER_TX_LPI_EN_MASK); + } else { + config.eee_capability = 0; + config.eeer &= cpu_to_le32(~I40E_PRTPM_EEER_TX_LPI_EN_MASK); + } + + /* Apply modified PHY configuration */ + status = i40e_aq_set_phy_config(hw, &config, NULL); + if (status) + return -EAGAIN; + + return 0; } static const struct ethtool_ops i40e_ethtool_recovery_mode_ops = { diff --git a/drivers/net/ethernet/intel/i40e/i40e_register.h b/drivers/net/ethernet/intel/i40e/i40e_register.h index f79355b1dde6..36f7b27a04ae 100644 --- a/drivers/net/ethernet/intel/i40e/i40e_register.h +++ b/drivers/net/ethernet/intel/i40e/i40e_register.h @@ -544,6 +544,8 @@ #define I40E_PRTPM_EEE_STAT_RX_LPI_STATUS_MASK I40E_MASK(0x1, I40E_PRTPM_EEE_STAT_RX_LPI_STATUS_SHIFT) #define I40E_PRTPM_EEE_STAT_TX_LPI_STATUS_SHIFT 31 #define I40E_PRTPM_EEE_STAT_TX_LPI_STATUS_MASK I40E_MASK(0x1, I40E_PRTPM_EEE_STAT_TX_LPI_STATUS_SHIFT) +#define I40E_PRTPM_EEER_TX_LPI_EN_SHIFT 16 +#define I40E_PRTPM_EEER_TX_LPI_EN_MASK I40E_MASK(0x1, I40E_PRTPM_EEER_TX_LPI_EN_SHIFT) #define I40E_PRTPM_RLPIC 0x001E43A0 /* Reset: GLOBR */ #define I40E_PRTPM_TLPIC 0x001E43C0 /* Reset: GLOBR */ #define I40E_PRTRPB_DHW(_i) (0x000AC100 + ((_i) * 32)) /* _i=0...7 */ /* Reset: CORER */ From patchwork Wed Feb 10 23:24:34 2021 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Tony Nguyen X-Patchwork-Id: 380672 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org X-Spam-Level: X-Spam-Status: No, score=-16.8 required=3.0 tests=BAYES_00, HEADER_FROM_DIFFERENT_DOMAINS,INCLUDES_CR_TRAILER,INCLUDES_PATCH, MAILING_LIST_MULTI, SPF_HELO_NONE, SPF_PASS, URIBL_BLOCKED, USER_AGENT_GIT autolearn=ham autolearn_force=no version=3.4.0 Received: from mail.kernel.org (mail.kernel.org [198.145.29.99]) by smtp.lore.kernel.org (Postfix) with ESMTP id 51373C433E0 for ; Wed, 10 Feb 2021 23:25:57 +0000 (UTC) Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by mail.kernel.org (Postfix) with ESMTP id 0540B64ED3 for ; Wed, 10 Feb 2021 23:25:56 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S234095AbhBJXZk (ORCPT ); Wed, 10 Feb 2021 18:25:40 -0500 Received: from mga01.intel.com ([192.55.52.88]:20960 "EHLO mga01.intel.com" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S233927AbhBJXYn (ORCPT ); Wed, 10 Feb 2021 18:24:43 -0500 IronPort-SDR: Ph4HJ72lrXSK0eAwVGaFLX+21yQu0jDWJzhW8HoB56qyMsrLZ/5bFiRoHQvmTkYubgNK65/NyU BfNuS+ZxNlnA== X-IronPort-AV: E=McAfee;i="6000,8403,9891"; a="201287988" X-IronPort-AV: E=Sophos;i="5.81,169,1610438400"; d="scan'208";a="201287988" Received: from 
fmsmga007.fm.intel.com ([10.253.24.52]) by fmsmga101.fm.intel.com with
 ESMTP/TLS/ECDHE-RSA-AES256-GCM-SHA384; 10 Feb 2021 15:23:44 -0800
From: Tony Nguyen
To: davem@davemloft.net, kuba@kernel.org
Cc: Przemyslaw Patynowski, netdev@vger.kernel.org, sassmann@redhat.com,
    anthony.l.nguyen@intel.com, Tony Brelinski
Subject: [PATCH net-next 5/7] i40e: Add flow director support for IPv6
Date: Wed, 10 Feb 2021 15:24:34 -0800
Message-Id: <20210210232436.4084373-6-anthony.l.nguyen@intel.com>
In-Reply-To: <20210210232436.4084373-1-anthony.l.nguyen@intel.com>
References: <20210210232436.4084373-1-anthony.l.nguyen@intel.com>
X-Mailing-List: netdev@vger.kernel.org

From: Przemyslaw Patynowski

Flow director for IPv6 was not supported. This patch adds it:
1) Implementation of support for IPv6 flow director.
2) Added handlers for addition of TCP6, UDP6, SCTP6, IPv6.
3) Refactored legacy code to make it more generic.
4) Added packet templates for TCP6, UDP6, SCTP6, IPv6.
5) Added handling of IPv6 source and destination addresses for flow director.
6) Improved argument passing for source and destination port in TCP6, UDP6
   and SCTP6.
7) Added handling of ethtool -n for IPv6, TCP6, UDP6, SCTP6.
8) Used the correct bit flag for the FLEXOFF field of the flow director data
   descriptor.
Without this patch, there would be no flow director support for IPv6, TCP6,
UDP6 and SCTP6.

Tested based on the X710 datasheet by using:
ethtool -N enp133s0f0 flow-type tcp4 src-port 13 dst-port 37 user-def 0x44142 action 1
ethtool -N enp133s0f0 flow-type tcp6 src-port 13 dst-port 40 user-def 0x44142 action 2
ethtool -N enp133s0f0 flow-type udp4 src-port 20 dst-port 40 user-def 0x44142 action 3
ethtool -N enp133s0f0 flow-type udp6 src-port 25 dst-port 40 user-def 0x44142 action 4
ethtool -N enp133s0f0 flow-type sctp4 src-port 55 dst-port 65 user-def 0x44142 action 5
ethtool -N enp133s0f0 flow-type sctp6 src-port 60 dst-port 40 user-def 0x44142 action 6
ethtool -N enp133s0f0 flow-type ip4 src-ip 1.1.1.1 dst-ip 1.1.1.4 user-def 0x44142 action 7
ethtool -N enp133s0f0 flow-type ip6 src-ip fe80::3efd:feff:fe6f:bbbb dst-ip fe80::3efd:feff:fe6f:aaaa user-def 0x44142 action 8
Then send traffic from a client which matches the criteria provided to ethtool.
Observe that packets are redirected to user set queues with ethtool -S Signed-off-by: Przemyslaw Patynowski Tested-by: Tony Brelinski Signed-off-by: Tony Nguyen --- drivers/net/ethernet/intel/i40e/i40e.h | 9 +- .../net/ethernet/intel/i40e/i40e_ethtool.c | 211 +++++++++- drivers/net/ethernet/intel/i40e/i40e_main.c | 79 +++- drivers/net/ethernet/intel/i40e/i40e_txrx.c | 371 +++++++++++++----- 4 files changed, 551 insertions(+), 119 deletions(-) diff --git a/drivers/net/ethernet/intel/i40e/i40e.h b/drivers/net/ethernet/intel/i40e/i40e.h index b2303f86ad1d..6cdfdcbaefe9 100644 --- a/drivers/net/ethernet/intel/i40e/i40e.h +++ b/drivers/net/ethernet/intel/i40e/i40e.h @@ -213,10 +213,12 @@ struct i40e_fdir_filter { struct hlist_node fdir_node; /* filter ipnut set */ u8 flow_type; - u8 ip4_proto; + u8 ipl4_proto; /* TX packet view of src and dst */ __be32 dst_ip; __be32 src_ip; + __be32 dst_ip6[4]; + __be32 src_ip6[4]; __be16 src_port; __be16 dst_port; __be32 sctp_v_tag; @@ -477,6 +479,11 @@ struct i40e_pf { u16 fd_sctp4_filter_cnt; u16 fd_ip4_filter_cnt; + u16 fd_tcp6_filter_cnt; + u16 fd_udp6_filter_cnt; + u16 fd_sctp6_filter_cnt; + u16 fd_ip6_filter_cnt; + /* Flexible filter table values that need to be programmed into * hardware, which expects L3 and L4 to be programmed separately. We * need to ensure that the values are in ascended order and don't have diff --git a/drivers/net/ethernet/intel/i40e/i40e_ethtool.c b/drivers/net/ethernet/intel/i40e/i40e_ethtool.c index 4e57d3f38ce7..b002ac4be8ce 100644 --- a/drivers/net/ethernet/intel/i40e/i40e_ethtool.c +++ b/drivers/net/ethernet/intel/i40e/i40e_ethtool.c @@ -3222,13 +3222,30 @@ static int i40e_get_ethtool_fdir_entry(struct i40e_pf *pf, fsp->m_u.usr_ip4_spec.proto = 0; } - /* Reverse the src and dest notion, since the HW views them from - * Tx perspective where as the user expects it from Rx filter view. - */ - fsp->h_u.tcp_ip4_spec.psrc = rule->dst_port; - fsp->h_u.tcp_ip4_spec.pdst = rule->src_port; - fsp->h_u.tcp_ip4_spec.ip4src = rule->dst_ip; - fsp->h_u.tcp_ip4_spec.ip4dst = rule->src_ip; + if (fsp->flow_type == IPV6_USER_FLOW || + fsp->flow_type == UDP_V6_FLOW || + fsp->flow_type == TCP_V6_FLOW || + fsp->flow_type == SCTP_V6_FLOW) { + /* Reverse the src and dest notion, since the HW views them + * from Tx perspective where as the user expects it from + * Rx filter view. + */ + fsp->h_u.tcp_ip6_spec.psrc = rule->dst_port; + fsp->h_u.tcp_ip6_spec.pdst = rule->src_port; + memcpy(fsp->h_u.tcp_ip6_spec.ip6dst, rule->src_ip6, + sizeof(__be32) * 4); + memcpy(fsp->h_u.tcp_ip6_spec.ip6src, rule->dst_ip6, + sizeof(__be32) * 4); + } else { + /* Reverse the src and dest notion, since the HW views them + * from Tx perspective where as the user expects it from + * Rx filter view. 
+ */ + fsp->h_u.tcp_ip4_spec.psrc = rule->dst_port; + fsp->h_u.tcp_ip4_spec.pdst = rule->src_port; + fsp->h_u.tcp_ip4_spec.ip4src = rule->dst_ip; + fsp->h_u.tcp_ip4_spec.ip4dst = rule->src_ip; + } switch (rule->flow_type) { case SCTP_V4_FLOW: @@ -3240,9 +3257,21 @@ static int i40e_get_ethtool_fdir_entry(struct i40e_pf *pf, case UDP_V4_FLOW: index = I40E_FILTER_PCTYPE_NONF_IPV4_UDP; break; + case SCTP_V6_FLOW: + index = I40E_FILTER_PCTYPE_NONF_IPV6_SCTP; + break; + case TCP_V6_FLOW: + index = I40E_FILTER_PCTYPE_NONF_IPV6_TCP; + break; + case UDP_V6_FLOW: + index = I40E_FILTER_PCTYPE_NONF_IPV6_UDP; + break; case IP_USER_FLOW: index = I40E_FILTER_PCTYPE_NONF_IPV4_OTHER; break; + case IPV6_USER_FLOW: + index = I40E_FILTER_PCTYPE_NONF_IPV6_OTHER; + break; default: /* If we have stored a filter with a flow type not listed here * it is almost certainly a driver bug. WARN(), and then @@ -3258,6 +3287,20 @@ static int i40e_get_ethtool_fdir_entry(struct i40e_pf *pf, input_set = i40e_read_fd_input_set(pf, index); no_input_set: + if (input_set & I40E_L3_V6_SRC_MASK) { + fsp->m_u.tcp_ip6_spec.ip6src[0] = htonl(0xFFFFFFFF); + fsp->m_u.tcp_ip6_spec.ip6src[1] = htonl(0xFFFFFFFF); + fsp->m_u.tcp_ip6_spec.ip6src[2] = htonl(0xFFFFFFFF); + fsp->m_u.tcp_ip6_spec.ip6src[3] = htonl(0xFFFFFFFF); + } + + if (input_set & I40E_L3_V6_DST_MASK) { + fsp->m_u.tcp_ip6_spec.ip6dst[0] = htonl(0xFFFFFFFF); + fsp->m_u.tcp_ip6_spec.ip6dst[1] = htonl(0xFFFFFFFF); + fsp->m_u.tcp_ip6_spec.ip6dst[2] = htonl(0xFFFFFFFF); + fsp->m_u.tcp_ip6_spec.ip6dst[3] = htonl(0xFFFFFFFF); + } + if (input_set & I40E_L3_SRC_MASK) fsp->m_u.tcp_ip4_spec.ip4src = htonl(0xFFFFFFFF); @@ -3921,6 +3964,14 @@ static const char *i40e_flow_str(struct ethtool_rx_flow_spec *fsp) return "sctp4"; case IP_USER_FLOW: return "ip4"; + case TCP_V6_FLOW: + return "tcp6"; + case UDP_V6_FLOW: + return "udp6"; + case SCTP_V6_FLOW: + return "sctp6"; + case IPV6_USER_FLOW: + return "ip6"; default: return "unknown"; } @@ -4056,9 +4107,14 @@ static int i40e_check_fdir_input_set(struct i40e_vsi *vsi, struct ethtool_rx_flow_spec *fsp, struct i40e_rx_flow_userdef *userdef) { - struct i40e_pf *pf = vsi->back; + static const __be32 ipv6_full_mask[4] = {cpu_to_be32(0xffffffff), + cpu_to_be32(0xffffffff), cpu_to_be32(0xffffffff), + cpu_to_be32(0xffffffff)}; + struct ethtool_tcpip6_spec *tcp_ip6_spec; + struct ethtool_usrip6_spec *usr_ip6_spec; struct ethtool_tcpip4_spec *tcp_ip4_spec; struct ethtool_usrip4_spec *usr_ip4_spec; + struct i40e_pf *pf = vsi->back; u64 current_mask, new_mask; bool new_flex_offset = false; bool flex_l3 = false; @@ -4080,11 +4136,28 @@ static int i40e_check_fdir_input_set(struct i40e_vsi *vsi, index = I40E_FILTER_PCTYPE_NONF_IPV4_UDP; fdir_filter_count = &pf->fd_udp4_filter_cnt; break; + case SCTP_V6_FLOW: + index = I40E_FILTER_PCTYPE_NONF_IPV6_SCTP; + fdir_filter_count = &pf->fd_sctp6_filter_cnt; + break; + case TCP_V6_FLOW: + index = I40E_FILTER_PCTYPE_NONF_IPV6_TCP; + fdir_filter_count = &pf->fd_tcp6_filter_cnt; + break; + case UDP_V6_FLOW: + index = I40E_FILTER_PCTYPE_NONF_IPV6_UDP; + fdir_filter_count = &pf->fd_udp6_filter_cnt; + break; case IP_USER_FLOW: index = I40E_FILTER_PCTYPE_NONF_IPV4_OTHER; fdir_filter_count = &pf->fd_ip4_filter_cnt; flex_l3 = true; break; + case IPV6_USER_FLOW: + index = I40E_FILTER_PCTYPE_NONF_IPV6_OTHER; + fdir_filter_count = &pf->fd_ip6_filter_cnt; + flex_l3 = true; + break; default: return -EOPNOTSUPP; } @@ -4147,6 +4220,53 @@ static int i40e_check_fdir_input_set(struct i40e_vsi *vsi, return -EOPNOTSUPP; break; + case 
SCTP_V6_FLOW: + new_mask &= ~I40E_VERIFY_TAG_MASK; + fallthrough; + case TCP_V6_FLOW: + case UDP_V6_FLOW: + tcp_ip6_spec = &fsp->m_u.tcp_ip6_spec; + + /* Check if user provided IPv6 source address. */ + if (ipv6_addr_equal((struct in6_addr *)&tcp_ip6_spec->ip6src, + (struct in6_addr *)&ipv6_full_mask)) + new_mask |= I40E_L3_V6_SRC_MASK; + else if (ipv6_addr_any((struct in6_addr *) + &tcp_ip6_spec->ip6src)) + new_mask &= ~I40E_L3_V6_SRC_MASK; + else + return -EOPNOTSUPP; + + /* Check if user provided destination address. */ + if (ipv6_addr_equal((struct in6_addr *)&tcp_ip6_spec->ip6dst, + (struct in6_addr *)&ipv6_full_mask)) + new_mask |= I40E_L3_V6_DST_MASK; + else if (ipv6_addr_any((struct in6_addr *) + &tcp_ip6_spec->ip6src)) + new_mask &= ~I40E_L3_V6_DST_MASK; + else + return -EOPNOTSUPP; + + /* L4 source port */ + if (tcp_ip6_spec->psrc == htons(0xFFFF)) + new_mask |= I40E_L4_SRC_MASK; + else if (!tcp_ip6_spec->psrc) + new_mask &= ~I40E_L4_SRC_MASK; + else + return -EOPNOTSUPP; + + /* L4 destination port */ + if (tcp_ip6_spec->pdst == htons(0xFFFF)) + new_mask |= I40E_L4_DST_MASK; + else if (!tcp_ip6_spec->pdst) + new_mask &= ~I40E_L4_DST_MASK; + else + return -EOPNOTSUPP; + + /* Filtering on Traffic Classes is not supported. */ + if (tcp_ip6_spec->tclass) + return -EOPNOTSUPP; + break; case IP_USER_FLOW: usr_ip4_spec = &fsp->m_u.usr_ip4_spec; @@ -4186,6 +4306,45 @@ static int i40e_check_fdir_input_set(struct i40e_vsi *vsi, if (usr_ip4_spec->proto) return -EINVAL; + break; + case IPV6_USER_FLOW: + usr_ip6_spec = &fsp->m_u.usr_ip6_spec; + + /* Check if user provided IPv6 source address. */ + if (ipv6_addr_equal((struct in6_addr *)&usr_ip6_spec->ip6src, + (struct in6_addr *)&ipv6_full_mask)) + new_mask |= I40E_L3_V6_SRC_MASK; + else if (ipv6_addr_any((struct in6_addr *) + &usr_ip6_spec->ip6src)) + new_mask &= ~I40E_L3_V6_SRC_MASK; + else + return -EOPNOTSUPP; + + /* Check if user provided destination address. */ + if (ipv6_addr_equal((struct in6_addr *)&usr_ip6_spec->ip6dst, + (struct in6_addr *)&ipv6_full_mask)) + new_mask |= I40E_L3_V6_DST_MASK; + else if (ipv6_addr_any((struct in6_addr *) + &usr_ip6_spec->ip6src)) + new_mask &= ~I40E_L3_V6_DST_MASK; + else + return -EOPNOTSUPP; + + if (usr_ip6_spec->l4_4_bytes == htonl(0xFFFFFFFF)) + new_mask |= I40E_L4_SRC_MASK | I40E_L4_DST_MASK; + else if (!usr_ip6_spec->l4_4_bytes) + new_mask &= ~(I40E_L4_SRC_MASK | I40E_L4_DST_MASK); + else + return -EOPNOTSUPP; + + /* Filtering on Traffic class is not supported. */ + if (usr_ip6_spec->tclass) + return -EOPNOTSUPP; + + /* Filtering on L4 protocol is not supported */ + if (usr_ip6_spec->l4_proto) + return -EINVAL; + break; default: return -EOPNOTSUPP; @@ -4370,7 +4529,7 @@ static bool i40e_match_fdir_filter(struct i40e_fdir_filter *a, a->dst_port != b->dst_port || a->src_port != b->src_port || a->flow_type != b->flow_type || - a->ip4_proto != b->ip4_proto) + a->ipl4_proto != b->ipl4_proto) return false; return true; @@ -4528,15 +4687,33 @@ static int i40e_add_fdir_ethtool(struct i40e_vsi *vsi, input->dst_ip = fsp->h_u.tcp_ip4_spec.ip4src; input->src_ip = fsp->h_u.tcp_ip4_spec.ip4dst; input->flow_type = fsp->flow_type & ~FLOW_EXT; - input->ip4_proto = fsp->h_u.usr_ip4_spec.proto; - /* Reverse the src and dest notion, since the HW expects them to be from - * Tx perspective where as the input from user is from Rx filter view. 
- */ - input->dst_port = fsp->h_u.tcp_ip4_spec.psrc; - input->src_port = fsp->h_u.tcp_ip4_spec.pdst; - input->dst_ip = fsp->h_u.tcp_ip4_spec.ip4src; - input->src_ip = fsp->h_u.tcp_ip4_spec.ip4dst; + if (input->flow_type == IPV6_USER_FLOW || + input->flow_type == UDP_V6_FLOW || + input->flow_type == TCP_V6_FLOW || + input->flow_type == SCTP_V6_FLOW) { + /* Reverse the src and dest notion, since the HW expects them + * to be from Tx perspective where as the input from user is + * from Rx filter view. + */ + input->ipl4_proto = fsp->h_u.usr_ip6_spec.l4_proto; + input->dst_port = fsp->h_u.tcp_ip6_spec.psrc; + input->src_port = fsp->h_u.tcp_ip6_spec.pdst; + memcpy(input->dst_ip6, fsp->h_u.ah_ip6_spec.ip6src, + sizeof(__be32) * 4); + memcpy(input->src_ip6, fsp->h_u.ah_ip6_spec.ip6dst, + sizeof(__be32) * 4); + } else { + /* Reverse the src and dest notion, since the HW expects them + * to be from Tx perspective where as the input from user is + * from Rx filter view. + */ + input->ipl4_proto = fsp->h_u.usr_ip4_spec.proto; + input->dst_port = fsp->h_u.tcp_ip4_spec.psrc; + input->src_port = fsp->h_u.tcp_ip4_spec.pdst; + input->dst_ip = fsp->h_u.tcp_ip4_spec.ip4src; + input->src_ip = fsp->h_u.tcp_ip4_spec.ip4dst; + } if (userdef.flex_filter) { input->flex_filter = true; diff --git a/drivers/net/ethernet/intel/i40e/i40e_main.c b/drivers/net/ethernet/intel/i40e/i40e_main.c index cfb3b74ee58c..ccddf5ca0644 100644 --- a/drivers/net/ethernet/intel/i40e/i40e_main.c +++ b/drivers/net/ethernet/intel/i40e/i40e_main.c @@ -3495,6 +3495,24 @@ static void i40e_set_vsi_rx_mode(struct i40e_vsi *vsi) i40e_set_rx_mode(vsi->netdev); } +/** + * i40e_reset_fdir_filter_cnt - Reset flow director filter counters + * @pf: Pointer to the targeted PF + * + * Set all flow director counters to 0. 
+ */ +static void i40e_reset_fdir_filter_cnt(struct i40e_pf *pf) +{ + pf->fd_tcp4_filter_cnt = 0; + pf->fd_udp4_filter_cnt = 0; + pf->fd_sctp4_filter_cnt = 0; + pf->fd_ip4_filter_cnt = 0; + pf->fd_tcp6_filter_cnt = 0; + pf->fd_udp6_filter_cnt = 0; + pf->fd_sctp6_filter_cnt = 0; + pf->fd_ip6_filter_cnt = 0; +} + /** * i40e_fdir_filter_restore - Restore the Sideband Flow Director filters * @vsi: Pointer to the targeted VSI @@ -3512,10 +3530,7 @@ static void i40e_fdir_filter_restore(struct i40e_vsi *vsi) return; /* Reset FDir counters as we're replaying all existing filters */ - pf->fd_tcp4_filter_cnt = 0; - pf->fd_udp4_filter_cnt = 0; - pf->fd_sctp4_filter_cnt = 0; - pf->fd_ip4_filter_cnt = 0; + i40e_reset_fdir_filter_cnt(pf); hlist_for_each_entry_safe(filter, node, &pf->fdir_filter_list, fdir_node) { @@ -8763,32 +8778,51 @@ static void i40e_fdir_filter_exit(struct i40e_pf *pf) INIT_LIST_HEAD(&pf->l4_flex_pit_list); pf->fdir_pf_active_filters = 0; - pf->fd_tcp4_filter_cnt = 0; - pf->fd_udp4_filter_cnt = 0; - pf->fd_sctp4_filter_cnt = 0; - pf->fd_ip4_filter_cnt = 0; + i40e_reset_fdir_filter_cnt(pf); /* Reprogram the default input set for TCP/IPv4 */ i40e_write_fd_input_set(pf, I40E_FILTER_PCTYPE_NONF_IPV4_TCP, I40E_L3_SRC_MASK | I40E_L3_DST_MASK | I40E_L4_SRC_MASK | I40E_L4_DST_MASK); + /* Reprogram the default input set for TCP/IPv6 */ + i40e_write_fd_input_set(pf, I40E_FILTER_PCTYPE_NONF_IPV6_TCP, + I40E_L3_V6_SRC_MASK | I40E_L3_V6_DST_MASK | + I40E_L4_SRC_MASK | I40E_L4_DST_MASK); + /* Reprogram the default input set for UDP/IPv4 */ i40e_write_fd_input_set(pf, I40E_FILTER_PCTYPE_NONF_IPV4_UDP, I40E_L3_SRC_MASK | I40E_L3_DST_MASK | I40E_L4_SRC_MASK | I40E_L4_DST_MASK); + /* Reprogram the default input set for UDP/IPv6 */ + i40e_write_fd_input_set(pf, I40E_FILTER_PCTYPE_NONF_IPV6_UDP, + I40E_L3_V6_SRC_MASK | I40E_L3_V6_DST_MASK | + I40E_L4_SRC_MASK | I40E_L4_DST_MASK); + /* Reprogram the default input set for SCTP/IPv4 */ i40e_write_fd_input_set(pf, I40E_FILTER_PCTYPE_NONF_IPV4_SCTP, I40E_L3_SRC_MASK | I40E_L3_DST_MASK | I40E_L4_SRC_MASK | I40E_L4_DST_MASK); + /* Reprogram the default input set for SCTP/IPv6 */ + i40e_write_fd_input_set(pf, I40E_FILTER_PCTYPE_NONF_IPV6_SCTP, + I40E_L3_V6_SRC_MASK | I40E_L3_V6_DST_MASK | + I40E_L4_SRC_MASK | I40E_L4_DST_MASK); + /* Reprogram the default input set for Other/IPv4 */ i40e_write_fd_input_set(pf, I40E_FILTER_PCTYPE_NONF_IPV4_OTHER, I40E_L3_SRC_MASK | I40E_L3_DST_MASK); i40e_write_fd_input_set(pf, I40E_FILTER_PCTYPE_FRAG_IPV4, I40E_L3_SRC_MASK | I40E_L3_DST_MASK); + + /* Reprogram the default input set for Other/IPv6 */ + i40e_write_fd_input_set(pf, I40E_FILTER_PCTYPE_NONF_IPV6_OTHER, + I40E_L3_SRC_MASK | I40E_L3_DST_MASK); + + i40e_write_fd_input_set(pf, I40E_FILTER_PCTYPE_FRAG_IPV6, + I40E_L3_SRC_MASK | I40E_L3_DST_MASK); } /** @@ -9270,8 +9304,17 @@ static void i40e_delete_invalid_filter(struct i40e_pf *pf, case SCTP_V4_FLOW: pf->fd_sctp4_filter_cnt--; break; + case TCP_V6_FLOW: + pf->fd_tcp6_filter_cnt--; + break; + case UDP_V6_FLOW: + pf->fd_udp6_filter_cnt--; + break; + case SCTP_V6_FLOW: + pf->fd_udp6_filter_cnt--; + break; case IP_USER_FLOW: - switch (filter->ip4_proto) { + switch (filter->ipl4_proto) { case IPPROTO_TCP: pf->fd_tcp4_filter_cnt--; break; @@ -9286,6 +9329,22 @@ static void i40e_delete_invalid_filter(struct i40e_pf *pf, break; } break; + case IPV6_USER_FLOW: + switch (filter->ipl4_proto) { + case IPPROTO_TCP: + pf->fd_tcp6_filter_cnt--; + break; + case IPPROTO_UDP: + pf->fd_udp6_filter_cnt--; + break; + case IPPROTO_SCTP: + 
pf->fd_sctp6_filter_cnt--; + break; + case IPPROTO_IP: + pf->fd_ip6_filter_cnt--; + break; + } + break; } /* Remove the filter from the list and free memory */ @@ -9319,7 +9378,7 @@ void i40e_fdir_check_and_reenable(struct i40e_pf *pf) * rules active. */ if ((fcnt_prog < (fcnt_avail - I40E_FDIR_BUFFER_HEAD_ROOM_FOR_ATR)) && - (pf->fd_tcp4_filter_cnt == 0)) + pf->fd_tcp4_filter_cnt == 0 && pf->fd_tcp6_filter_cnt == 0) i40e_reenable_fdir_atr(pf); /* if hw had a problem adding a filter, delete it */ diff --git a/drivers/net/ethernet/intel/i40e/i40e_txrx.c b/drivers/net/ethernet/intel/i40e/i40e_txrx.c index 8d2ea4293d69..72c521d7c189 100644 --- a/drivers/net/ethernet/intel/i40e/i40e_txrx.c +++ b/drivers/net/ethernet/intel/i40e/i40e_txrx.c @@ -42,9 +42,6 @@ static void i40e_fdir(struct i40e_ring *tx_ring, flex_ptype |= I40E_TXD_FLTR_QW0_PCTYPE_MASK & (fdata->pctype << I40E_TXD_FLTR_QW0_PCTYPE_SHIFT); - flex_ptype |= I40E_TXD_FLTR_QW0_PCTYPE_MASK & - (fdata->flex_offset << I40E_TXD_FLTR_QW0_FLEXOFF_SHIFT); - /* Use LAN VSI Id if not programmed by user */ flex_ptype |= I40E_TXD_FLTR_QW0_DEST_VSI_MASK & ((u32)(fdata->dest_vsi ? : pf->vsi[pf->lan_vsi]->id) << @@ -160,52 +157,83 @@ static int i40e_program_fdir_filter(struct i40e_fdir_filter *fdir_data, return -1; } -#define IP_HEADER_OFFSET 14 -#define I40E_UDPIP_DUMMY_PACKET_LEN 42 +#define IP_HEADER_OFFSET 14 +#define I40E_UDPIP_DUMMY_PACKET_LEN 42 +#define I40E_UDPIP6_DUMMY_PACKET_LEN 62 /** - * i40e_add_del_fdir_udpv4 - Add/Remove UDPv4 filters + * i40e_add_del_fdir_udp - Add/Remove UDP filters * @vsi: pointer to the targeted VSI * @fd_data: the flow director data required for the FDir descriptor * @add: true adds a filter, false removes it + * @ipv4: true is v4, false is v6 * * Returns 0 if the filters were successfully added or removed **/ -static int i40e_add_del_fdir_udpv4(struct i40e_vsi *vsi, - struct i40e_fdir_filter *fd_data, - bool add) +static int i40e_add_del_fdir_udp(struct i40e_vsi *vsi, + struct i40e_fdir_filter *fd_data, + bool add, + bool ipv4) { struct i40e_pf *pf = vsi->back; + struct ipv6hdr *ipv6; struct udphdr *udp; struct iphdr *ip; u8 *raw_packet; int ret; - static char packet[] = {0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0x08, 0, - 0x45, 0, 0, 0x1c, 0, 0, 0x40, 0, 0x40, 0x11, 0, 0, 0, 0, 0, 0, - 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0}; + static char packet_ipv4[] = {0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0x08, + 0, 0x45, 0, 0, 0x1c, 0, 0, 0x40, 0, 0x40, 0x11, 0, 0, 0, 0, 0, + 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0}; + static char packet_ipv6[] = {0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0x86, + 0xdd, 0x60, 0, 0, 0, 0, 0, 0x11, 0, + /*src address*/0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, + /*dst address*/0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, + /*udp header*/ + 0, 0, 0, 0, 0, 0, 0, 0}; raw_packet = kzalloc(I40E_FDIR_MAX_RAW_PACKET_SIZE, GFP_KERNEL); if (!raw_packet) return -ENOMEM; - memcpy(raw_packet, packet, I40E_UDPIP_DUMMY_PACKET_LEN); + if (ipv4) { + memcpy(raw_packet, packet_ipv4, I40E_UDPIP_DUMMY_PACKET_LEN); - ip = (struct iphdr *)(raw_packet + IP_HEADER_OFFSET); - udp = (struct udphdr *)(raw_packet + IP_HEADER_OFFSET - + sizeof(struct iphdr)); + ip = (struct iphdr *)(raw_packet + IP_HEADER_OFFSET); + udp = (struct udphdr *)(raw_packet + IP_HEADER_OFFSET + + sizeof(struct iphdr)); - ip->daddr = fd_data->dst_ip; + ip->daddr = fd_data->dst_ip; + ip->saddr = fd_data->src_ip; + } else { + memcpy(raw_packet, packet_ipv6, I40E_UDPIP6_DUMMY_PACKET_LEN); + ipv6 = (struct ipv6hdr *)(raw_packet + IP_HEADER_OFFSET); + udp = 
(struct udphdr *)(raw_packet + IP_HEADER_OFFSET + + sizeof(struct ipv6hdr)); + + memcpy(ipv6->saddr.in6_u.u6_addr32, + fd_data->src_ip6, sizeof(__be32) * 4); + memcpy(ipv6->daddr.in6_u.u6_addr32, + fd_data->dst_ip6, sizeof(__be32) * 4); + } udp->dest = fd_data->dst_port; - ip->saddr = fd_data->src_ip; udp->source = fd_data->src_port; if (fd_data->flex_filter) { - u8 *payload = raw_packet + I40E_UDPIP_DUMMY_PACKET_LEN; + u8 *payload; __be16 pattern = fd_data->flex_word; u16 off = fd_data->flex_offset; + if (ipv4) + payload = raw_packet + I40E_UDPIP_DUMMY_PACKET_LEN; + else + payload = raw_packet + I40E_UDPIP6_DUMMY_PACKET_LEN; + *((__force __be16 *)(payload + off)) = pattern; } - fd_data->pctype = I40E_FILTER_PCTYPE_NONF_IPV4_UDP; + if (ipv4) + fd_data->pctype = I40E_FILTER_PCTYPE_NONF_IPV4_UDP; + else + fd_data->pctype = I40E_FILTER_PCTYPE_NONF_IPV6_UDP; + ret = i40e_program_fdir_filter(fd_data, raw_packet, pf, add); if (ret) { dev_info(&pf->pdev->dev, @@ -225,61 +253,104 @@ static int i40e_add_del_fdir_udpv4(struct i40e_vsi *vsi, fd_data->pctype, fd_data->fd_id); } - if (add) - pf->fd_udp4_filter_cnt++; - else - pf->fd_udp4_filter_cnt--; + if (add) { + if (ipv4) + pf->fd_udp4_filter_cnt++; + else + pf->fd_udp6_filter_cnt++; + } else { + if (ipv4) + pf->fd_udp4_filter_cnt--; + else + pf->fd_udp6_filter_cnt--; + } return 0; } -#define I40E_TCPIP_DUMMY_PACKET_LEN 54 +#define I40E_TCPIP_DUMMY_PACKET_LEN 54 +#define I40E_TCPIP6_DUMMY_PACKET_LEN 74 /** - * i40e_add_del_fdir_tcpv4 - Add/Remove TCPv4 filters + * i40e_add_del_fdir_tcp - Add/Remove TCPv4 filters * @vsi: pointer to the targeted VSI * @fd_data: the flow director data required for the FDir descriptor * @add: true adds a filter, false removes it + * @ipv4: true is v4, false is v6 * * Returns 0 if the filters were successfully added or removed **/ -static int i40e_add_del_fdir_tcpv4(struct i40e_vsi *vsi, - struct i40e_fdir_filter *fd_data, - bool add) +static int i40e_add_del_fdir_tcp(struct i40e_vsi *vsi, + struct i40e_fdir_filter *fd_data, + bool add, + bool ipv4) { struct i40e_pf *pf = vsi->back; + struct ipv6hdr *ipv6; struct tcphdr *tcp; struct iphdr *ip; u8 *raw_packet; int ret; /* Dummy packet */ - static char packet[] = {0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0x08, 0, - 0x45, 0, 0, 0x28, 0, 0, 0x40, 0, 0x40, 0x6, 0, 0, 0, 0, 0, 0, - 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0x80, 0x11, + static char packet_ipv4[] = {0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0x08, + 0, 0x45, 0, 0, 0x28, 0, 0, 0x40, 0, 0x40, 0x6, 0, 0, 0, 0, 0, + 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0x50, 0x11, + 0x0, 0x72, 0, 0, 0, 0}; + static char packet_ipv6[] = {0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0x86, + 0xdd, 0x60, 0, 0, 0, 0, 0, 0x6, 0, + /*src address*/0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, + /*dst address*/0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, + 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0x50, 0x11, 0x0, 0x72, 0, 0, 0, 0}; raw_packet = kzalloc(I40E_FDIR_MAX_RAW_PACKET_SIZE, GFP_KERNEL); if (!raw_packet) return -ENOMEM; - memcpy(raw_packet, packet, I40E_TCPIP_DUMMY_PACKET_LEN); + if (ipv4) { + memcpy(raw_packet, packet_ipv4, I40E_TCPIP_DUMMY_PACKET_LEN); + + ip = (struct iphdr *)(raw_packet + IP_HEADER_OFFSET); + tcp = (struct tcphdr *)(raw_packet + IP_HEADER_OFFSET + + sizeof(struct iphdr)); + + ip->daddr = fd_data->dst_ip; + ip->saddr = fd_data->src_ip; + } else { + memcpy(raw_packet, packet_ipv6, I40E_TCPIP6_DUMMY_PACKET_LEN); + + tcp = (struct tcphdr *)(raw_packet + IP_HEADER_OFFSET + + sizeof(struct ipv6hdr)); + ipv6 = (struct 
ipv6hdr *)(raw_packet + IP_HEADER_OFFSET); + + memcpy(ipv6->saddr.in6_u.u6_addr32, + fd_data->src_ip6, sizeof(__be32) * 4); + memcpy(ipv6->daddr.in6_u.u6_addr32, + fd_data->dst_ip6, sizeof(__be32) * 4); + } - ip = (struct iphdr *)(raw_packet + IP_HEADER_OFFSET); - tcp = (struct tcphdr *)(raw_packet + IP_HEADER_OFFSET - + sizeof(struct iphdr)); - ip->daddr = fd_data->dst_ip; tcp->dest = fd_data->dst_port; - ip->saddr = fd_data->src_ip; tcp->source = fd_data->src_port; if (fd_data->flex_filter) { - u8 *payload = raw_packet + I40E_TCPIP_DUMMY_PACKET_LEN; + u8 *payload; __be16 pattern = fd_data->flex_word; u16 off = fd_data->flex_offset; + if (ipv4) + payload = raw_packet + I40E_TCPIP_DUMMY_PACKET_LEN; + else + payload = raw_packet + I40E_TCPIP6_DUMMY_PACKET_LEN; + *((__force __be16 *)(payload + off)) = pattern; } - fd_data->pctype = I40E_FILTER_PCTYPE_NONF_IPV4_TCP; + if (ipv4) + fd_data->pctype = I40E_FILTER_PCTYPE_NONF_IPV4_TCP; + else + fd_data->pctype = I40E_FILTER_PCTYPE_NONF_IPV6_TCP; ret = i40e_program_fdir_filter(fd_data, raw_packet, pf, add); if (ret) { dev_info(&pf->pdev->dev, @@ -299,65 +370,102 @@ static int i40e_add_del_fdir_tcpv4(struct i40e_vsi *vsi, } if (add) { - pf->fd_tcp4_filter_cnt++; + if (ipv4) + pf->fd_tcp4_filter_cnt++; + else + pf->fd_tcp6_filter_cnt++; if ((pf->flags & I40E_FLAG_FD_ATR_ENABLED) && I40E_DEBUG_FD & pf->hw.debug_mask) dev_info(&pf->pdev->dev, "Forcing ATR off, sideband rules for TCP/IPv4 flow being applied\n"); set_bit(__I40E_FD_ATR_AUTO_DISABLED, pf->state); } else { - pf->fd_tcp4_filter_cnt--; + if (ipv4) + pf->fd_tcp4_filter_cnt--; + else + pf->fd_tcp6_filter_cnt--; } return 0; } -#define I40E_SCTPIP_DUMMY_PACKET_LEN 46 +#define I40E_SCTPIP_DUMMY_PACKET_LEN 46 +#define I40E_SCTPIP6_DUMMY_PACKET_LEN 66 /** - * i40e_add_del_fdir_sctpv4 - Add/Remove SCTPv4 Flow Director filters for + * i40e_add_del_fdir_sctp - Add/Remove SCTPv4 Flow Director filters for * a specific flow spec * @vsi: pointer to the targeted VSI * @fd_data: the flow director data required for the FDir descriptor * @add: true adds a filter, false removes it + * @ipv4: true is v4, false is v6 * * Returns 0 if the filters were successfully added or removed **/ -static int i40e_add_del_fdir_sctpv4(struct i40e_vsi *vsi, - struct i40e_fdir_filter *fd_data, - bool add) +static int i40e_add_del_fdir_sctp(struct i40e_vsi *vsi, + struct i40e_fdir_filter *fd_data, + bool add, + bool ipv4) { struct i40e_pf *pf = vsi->back; + struct ipv6hdr *ipv6; struct sctphdr *sctp; struct iphdr *ip; u8 *raw_packet; int ret; - /* Dummy packet */ - static char packet[] = {0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0x08, 0, - 0x45, 0, 0, 0x20, 0, 0, 0x40, 0, 0x40, 0x84, 0, 0, 0, 0, 0, 0, - 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0}; - + /* Dummy packets */ + static char packet_ipv4[] = {0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0x08, + 0, 0x45, 0, 0, 0x20, 0, 0, 0x40, 0, 0x40, 0x84, 0, 0, 0, 0, 0, + 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0}; + + static char packet_ipv6[] = {0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0x86, + 0xdd, 0x60, 0, 0, 0, 0, 0, 0x84, 0, + /*src address*/0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, + /*dst address*/0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, + 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0}; raw_packet = kzalloc(I40E_FDIR_MAX_RAW_PACKET_SIZE, GFP_KERNEL); if (!raw_packet) return -ENOMEM; - memcpy(raw_packet, packet, I40E_SCTPIP_DUMMY_PACKET_LEN); + if (ipv4) { + memcpy(raw_packet, packet_ipv4, I40E_SCTPIP_DUMMY_PACKET_LEN); - ip = (struct iphdr *)(raw_packet + IP_HEADER_OFFSET); - sctp = 
(struct sctphdr *)(raw_packet + IP_HEADER_OFFSET - + sizeof(struct iphdr)); + ip = (struct iphdr *)(raw_packet + IP_HEADER_OFFSET); + sctp = (struct sctphdr *)(raw_packet + IP_HEADER_OFFSET + + sizeof(struct iphdr)); + + ip->daddr = fd_data->dst_ip; + ip->saddr = fd_data->src_ip; + } else { + memcpy(raw_packet, packet_ipv6, I40E_SCTPIP6_DUMMY_PACKET_LEN); + + ipv6 = (struct ipv6hdr *)(raw_packet + IP_HEADER_OFFSET); + sctp = (struct sctphdr *)(raw_packet + IP_HEADER_OFFSET + + sizeof(struct ipv6hdr)); + + memcpy(ipv6->saddr.in6_u.u6_addr32, + fd_data->src_ip6, sizeof(__be32) * 4); + memcpy(ipv6->daddr.in6_u.u6_addr32, + fd_data->dst_ip6, sizeof(__be32) * 4); + } - ip->daddr = fd_data->dst_ip; sctp->dest = fd_data->dst_port; - ip->saddr = fd_data->src_ip; sctp->source = fd_data->src_port; if (fd_data->flex_filter) { - u8 *payload = raw_packet + I40E_SCTPIP_DUMMY_PACKET_LEN; + u8 *payload; __be16 pattern = fd_data->flex_word; u16 off = fd_data->flex_offset; + if (ipv4) + payload = raw_packet + I40E_SCTPIP_DUMMY_PACKET_LEN; + else + payload = raw_packet + I40E_SCTPIP6_DUMMY_PACKET_LEN; *((__force __be16 *)(payload + off)) = pattern; } - fd_data->pctype = I40E_FILTER_PCTYPE_NONF_IPV4_SCTP; + if (ipv4) + fd_data->pctype = I40E_FILTER_PCTYPE_NONF_IPV4_SCTP; + else + fd_data->pctype = I40E_FILTER_PCTYPE_NONF_IPV6_SCTP; + ret = i40e_program_fdir_filter(fd_data, raw_packet, pf, add); if (ret) { dev_info(&pf->pdev->dev, @@ -377,54 +485,97 @@ static int i40e_add_del_fdir_sctpv4(struct i40e_vsi *vsi, fd_data->pctype, fd_data->fd_id); } - if (add) - pf->fd_sctp4_filter_cnt++; - else - pf->fd_sctp4_filter_cnt--; + if (add) { + if (ipv4) + pf->fd_sctp4_filter_cnt++; + else + pf->fd_sctp6_filter_cnt++; + } else { + if (ipv4) + pf->fd_sctp4_filter_cnt--; + else + pf->fd_sctp6_filter_cnt--; + } return 0; } -#define I40E_IP_DUMMY_PACKET_LEN 34 +#define I40E_IP_DUMMY_PACKET_LEN 34 +#define I40E_IP6_DUMMY_PACKET_LEN 54 /** - * i40e_add_del_fdir_ipv4 - Add/Remove IPv4 Flow Director filters for + * i40e_add_del_fdir_ip - Add/Remove IPv4 Flow Director filters for * a specific flow spec * @vsi: pointer to the targeted VSI * @fd_data: the flow director data required for the FDir descriptor * @add: true adds a filter, false removes it + * @ipv4: true is v4, false is v6 * * Returns 0 if the filters were successfully added or removed **/ -static int i40e_add_del_fdir_ipv4(struct i40e_vsi *vsi, - struct i40e_fdir_filter *fd_data, - bool add) +static int i40e_add_del_fdir_ip(struct i40e_vsi *vsi, + struct i40e_fdir_filter *fd_data, + bool add, + bool ipv4) { struct i40e_pf *pf = vsi->back; + struct ipv6hdr *ipv6; struct iphdr *ip; u8 *raw_packet; + int iter_start; + int iter_end; int ret; int i; - static char packet[] = {0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0x08, 0, - 0x45, 0, 0, 0x14, 0, 0, 0x40, 0, 0x40, 0x10, 0, 0, 0, 0, 0, 0, - 0, 0, 0, 0}; + static char packet_ipv4[] = {0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0x08, + 0, 0x45, 0, 0, 0x14, 0, 0, 0x40, 0, 0x40, 0x10, 0, 0, 0, 0, 0, + 0, 0, 0, 0, 0}; + static char packet_ipv6[] = {0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0x86, + 0xdd, 0x60, 0, 0, 0, 0, 0, 0, 0, + /*src address*/0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, + /*dst address*/0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0}; + + if (ipv4) { + iter_start = I40E_FILTER_PCTYPE_NONF_IPV4_OTHER; + iter_end = I40E_FILTER_PCTYPE_FRAG_IPV4; + } else { + iter_start = I40E_FILTER_PCTYPE_NONF_IPV6_OTHER; + iter_end = I40E_FILTER_PCTYPE_FRAG_IPV6; + } - for (i = I40E_FILTER_PCTYPE_NONF_IPV4_OTHER; - i <= 
I40E_FILTER_PCTYPE_FRAG_IPV4; i++) { + for (i = iter_start; i <= iter_end; i++) { raw_packet = kzalloc(I40E_FDIR_MAX_RAW_PACKET_SIZE, GFP_KERNEL); if (!raw_packet) return -ENOMEM; - memcpy(raw_packet, packet, I40E_IP_DUMMY_PACKET_LEN); - ip = (struct iphdr *)(raw_packet + IP_HEADER_OFFSET); - - ip->saddr = fd_data->src_ip; - ip->daddr = fd_data->dst_ip; - ip->protocol = 0; + if (ipv4) { + memcpy(raw_packet, packet_ipv4, + I40E_IP_DUMMY_PACKET_LEN); + ip = (struct iphdr *)(raw_packet + IP_HEADER_OFFSET); + + ip->saddr = fd_data->src_ip; + ip->daddr = fd_data->dst_ip; + ip->protocol = IPPROTO_IP; + } else { + memcpy(raw_packet, packet_ipv6, + I40E_IP6_DUMMY_PACKET_LEN); + ipv6 = (struct ipv6hdr *)(raw_packet + + IP_HEADER_OFFSET); + memcpy(ipv6->saddr.in6_u.u6_addr32, + fd_data->src_ip6, sizeof(__be32) * 4); + memcpy(ipv6->daddr.in6_u.u6_addr32, + fd_data->dst_ip6, sizeof(__be32) * 4); + + ipv6->nexthdr = IPPROTO_NONE; + } if (fd_data->flex_filter) { - u8 *payload = raw_packet + I40E_IP_DUMMY_PACKET_LEN; + u8 *payload; __be16 pattern = fd_data->flex_word; u16 off = fd_data->flex_offset; + if (ipv4) + payload = raw_packet + I40E_IP_DUMMY_PACKET_LEN; + else + payload = raw_packet + + I40E_IP6_DUMMY_PACKET_LEN; *((__force __be16 *)(payload + off)) = pattern; } @@ -451,10 +602,17 @@ static int i40e_add_del_fdir_ipv4(struct i40e_vsi *vsi, } } - if (add) - pf->fd_ip4_filter_cnt++; - else - pf->fd_ip4_filter_cnt--; + if (add) { + if (ipv4) + pf->fd_ip4_filter_cnt++; + else + pf->fd_ip6_filter_cnt++; + } else { + if (ipv4) + pf->fd_ip4_filter_cnt--; + else + pf->fd_ip6_filter_cnt--; + } return 0; } @@ -469,37 +627,68 @@ static int i40e_add_del_fdir_ipv4(struct i40e_vsi *vsi, int i40e_add_del_fdir(struct i40e_vsi *vsi, struct i40e_fdir_filter *input, bool add) { + enum ip_ver { ipv6 = 0, ipv4 = 1 }; struct i40e_pf *pf = vsi->back; int ret; switch (input->flow_type & ~FLOW_EXT) { case TCP_V4_FLOW: - ret = i40e_add_del_fdir_tcpv4(vsi, input, add); + ret = i40e_add_del_fdir_tcp(vsi, input, add, ipv4); break; case UDP_V4_FLOW: - ret = i40e_add_del_fdir_udpv4(vsi, input, add); + ret = i40e_add_del_fdir_udp(vsi, input, add, ipv4); break; case SCTP_V4_FLOW: - ret = i40e_add_del_fdir_sctpv4(vsi, input, add); + ret = i40e_add_del_fdir_sctp(vsi, input, add, ipv4); + break; + case TCP_V6_FLOW: + ret = i40e_add_del_fdir_tcp(vsi, input, add, ipv6); + break; + case UDP_V6_FLOW: + ret = i40e_add_del_fdir_udp(vsi, input, add, ipv6); + break; + case SCTP_V6_FLOW: + ret = i40e_add_del_fdir_sctp(vsi, input, add, ipv6); break; case IP_USER_FLOW: - switch (input->ip4_proto) { + switch (input->ipl4_proto) { case IPPROTO_TCP: - ret = i40e_add_del_fdir_tcpv4(vsi, input, add); + ret = i40e_add_del_fdir_tcp(vsi, input, add, ipv4); break; case IPPROTO_UDP: - ret = i40e_add_del_fdir_udpv4(vsi, input, add); + ret = i40e_add_del_fdir_udp(vsi, input, add, ipv4); break; case IPPROTO_SCTP: - ret = i40e_add_del_fdir_sctpv4(vsi, input, add); + ret = i40e_add_del_fdir_sctp(vsi, input, add, ipv4); break; case IPPROTO_IP: - ret = i40e_add_del_fdir_ipv4(vsi, input, add); + ret = i40e_add_del_fdir_ip(vsi, input, add, ipv4); break; default: /* We cannot support masking based on protocol */ dev_info(&pf->pdev->dev, "Unsupported IPv4 protocol 0x%02x\n", - input->ip4_proto); + input->ipl4_proto); + return -EINVAL; + } + break; + case IPV6_USER_FLOW: + switch (input->ipl4_proto) { + case IPPROTO_TCP: + ret = i40e_add_del_fdir_tcp(vsi, input, add, ipv6); + break; + case IPPROTO_UDP: + ret = i40e_add_del_fdir_udp(vsi, input, add, ipv6); + break; 
+ case IPPROTO_SCTP: + ret = i40e_add_del_fdir_sctp(vsi, input, add, ipv6); + break; + case IPPROTO_IP: + ret = i40e_add_del_fdir_ip(vsi, input, add, ipv6); + break; + default: + /* We cannot support masking based on protocol */ + dev_info(&pf->pdev->dev, "Unsupported IPv6 protocol 0x%02x\n", + input->ipl4_proto); return -EINVAL; } break; From patchwork Wed Feb 10 23:24:36 2021 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Tony Nguyen X-Patchwork-Id: 380671 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org X-Spam-Level: X-Spam-Status: No, score=-16.8 required=3.0 tests=BAYES_00, HEADER_FROM_DIFFERENT_DOMAINS,INCLUDES_CR_TRAILER,INCLUDES_PATCH, MAILING_LIST_MULTI, SPF_HELO_NONE, SPF_PASS, URIBL_BLOCKED, USER_AGENT_GIT autolearn=ham autolearn_force=no version=3.4.0 Received: from mail.kernel.org (mail.kernel.org [198.145.29.99]) by smtp.lore.kernel.org (Postfix) with ESMTP id 085F6C433DB for ; Wed, 10 Feb 2021 23:26:40 +0000 (UTC) Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by mail.kernel.org (Postfix) with ESMTP id CDEBA64E0D for ; Wed, 10 Feb 2021 23:26:39 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S234105AbhBJX0V (ORCPT ); Wed, 10 Feb 2021 18:26:21 -0500 Received: from mga01.intel.com ([192.55.52.88]:20916 "EHLO mga01.intel.com" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S234079AbhBJXZL (ORCPT ); Wed, 10 Feb 2021 18:25:11 -0500 IronPort-SDR: pUgLt3sh6bJC3q8WMulLmv2h0GxW5aOEmSHasxiViAZXdWV4L1/FiqeRKkoqWX2ZtN1pMjQ0fV Uktxa0DLYJAw== X-IronPort-AV: E=McAfee;i="6000,8403,9891"; a="201287991" X-IronPort-AV: E=Sophos;i="5.81,169,1610438400"; d="scan'208";a="201287991" Received: from fmsmga007.fm.intel.com ([10.253.24.52]) by fmsmga101.fm.intel.com with ESMTP/TLS/ECDHE-RSA-AES256-GCM-SHA384; 10 Feb 2021 15:23:45 -0800 IronPort-SDR: mmEn9ybtKILAVnuFdR2OHfeZxazIwMbtcCeXWLU879bC4ZlGDii7qunPlPzLHPgvnjkI+I5qKc J+Dlxyd1l8MA== X-ExtLoop1: 1 X-IronPort-AV: E=Sophos;i="5.81,169,1610438400"; d="scan'208";a="361512359" Received: from anguy11-desk2.jf.intel.com ([10.166.244.147]) by fmsmga007.fm.intel.com with ESMTP; 10 Feb 2021 15:23:45 -0800 From: Tony Nguyen To: davem@davemloft.net, kuba@kernel.org Cc: Kaixu Xia , netdev@vger.kernel.org, sassmann@redhat.com, anthony.l.nguyen@intel.com, Tosk Robot , Tony Brelinski Subject: [PATCH net-next 7/7] i40e: remove the useless value assignment in i40e_clean_adminq_subtask Date: Wed, 10 Feb 2021 15:24:36 -0800 Message-Id: <20210210232436.4084373-8-anthony.l.nguyen@intel.com> X-Mailer: git-send-email 2.26.2 In-Reply-To: <20210210232436.4084373-1-anthony.l.nguyen@intel.com> References: <20210210232436.4084373-1-anthony.l.nguyen@intel.com> MIME-Version: 1.0 Precedence: bulk List-ID: X-Mailing-List: netdev@vger.kernel.org From: Kaixu Xia The variable ret is overwritten by the following call i40e_clean_arq_element() and the assignment is useless, so remove it. 
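[Editorial illustration, not part of the patch: the snippet below is a minimal standalone sketch of the dead-store shape being removed. The helpers clean_arq_element() and handle_lldp_event() are simplified stand-ins for the driver's i40e_clean_arq_element() and i40e_handle_lldp_event(); only the control flow mirrors i40e_clean_adminq_subtask().]

/* Sketch only: 'ret' assigned in the loop body is never read before the
 * next clean_arq_element() call overwrites it, so that assignment is dead.
 */
#include <stdio.h>

static int clean_arq_element(int *pending)
{
	/* 0 = another event fetched, -1 = queue empty */
	return (*pending)-- > 0 ? 0 : -1;
}

static int handle_lldp_event(void)
{
	return 0;	/* caller does not need this result */
}

int main(void)
{
	int pending = 3;
	int ret = clean_arq_element(&pending);

	while (!ret) {
		handle_lldp_event();			/* was: ret = handle_lldp_event(); */
		ret = clean_arq_element(&pending);	/* overwrites ret regardless */
	}
	printf("admin queue drained\n");
	return 0;
}
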
Reported-by: Tosk Robot Signed-off-by: Kaixu Xia Tested-by: Tony Brelinski Signed-off-by: Tony Nguyen --- drivers/net/ethernet/intel/i40e/i40e_main.c | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/drivers/net/ethernet/intel/i40e/i40e_main.c b/drivers/net/ethernet/intel/i40e/i40e_main.c index ccddf5ca0644..63e19d2e3301 100644 --- a/drivers/net/ethernet/intel/i40e/i40e_main.c +++ b/drivers/net/ethernet/intel/i40e/i40e_main.c @@ -9842,7 +9842,7 @@ static void i40e_clean_adminq_subtask(struct i40e_pf *pf) dev_dbg(&pf->pdev->dev, "ARQ: Update LLDP MIB event received\n"); #ifdef CONFIG_I40E_DCB rtnl_lock(); - ret = i40e_handle_lldp_event(pf, &event); + i40e_handle_lldp_event(pf, &event); rtnl_unlock(); #endif /* CONFIG_I40E_DCB */ break;