From patchwork Mon Aug 24 17:32:54 2020
X-Patchwork-Submitter: Tony Nguyen
X-Patchwork-Id: 261998
From: Tony Nguyen
To: davem@davemloft.net
Cc: Alice Michael, netdev@vger.kernel.org, nhorman@redhat.com, sassmann@redhat.com, jeffrey.t.kirsher@intel.com, anthony.l.nguyen@intel.com, Alan Brady, Phani Burra, Joshua Hay, Madhu Chittim, Pavan Kumar Linga, Donald Skidmore, Jesse Brandeburg, Sridhar Samudrala
Subject: [net-next v5 03/15] iecm: Add TX/RX header files
Date: Mon, 24 Aug 2020 10:32:54 -0700
Message-Id: <20200824173306.3178343-4-anthony.l.nguyen@intel.com>
In-Reply-To: <20200824173306.3178343-1-anthony.l.nguyen@intel.com>
References: <20200824173306.3178343-1-anthony.l.nguyen@intel.com>
X-Mailing-List: netdev@vger.kernel.org

From: Alice Michael

Introduces the data for the TX/RX paths for use by the common module.
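The register definitions in this patch follow one pattern throughout: each field gets an _S shift and an _M mask, built with BIT() or MAKEMASK(). As a minimal sketch of how such pairs are typically consumed -- assuming MAKEMASK(m, s) expands to ((m) << (s)), which this patch does not show, and using hypothetical rd32()/wr32() register accessors -- field access looks like:

/* Illustrative only; these helpers are not part of this series. */
static inline u32 iecm_get_field(u32 reg_val, u32 mask, u32 shift)
{
	/* Mask off the field, then shift it down to bit 0. */
	return (reg_val & mask) >> shift;
}

static inline u32 iecm_set_field(u32 reg_val, u32 mask, u32 shift, u32 val)
{
	/* Clear the field, then insert the new value at its position. */
	return (reg_val & ~mask) | ((val << shift) & mask);
}

/* e.g. reading the ARQ length and setting the enable bit on PF_FW_ARQLEN:
 *	u32 reg = rd32(hw, PF_FW_ARQLEN);
 *	u16 len = iecm_get_field(reg, PF_FW_ARQLEN_ARQLEN_M,
 *				 PF_FW_ARQLEN_ARQLEN_S);
 *	wr32(hw, PF_FW_ARQLEN, reg | PF_FW_ARQLEN_ARQENABLE_M);
 */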
Signed-off-by: Alice Michael Signed-off-by: Alan Brady Signed-off-by: Phani Burra Signed-off-by: Joshua Hay Signed-off-by: Madhu Chittim Signed-off-by: Pavan Kumar Linga Reviewed-by: Donald Skidmore Reviewed-by: Jesse Brandeburg Reviewed-by: Sridhar Samudrala Signed-off-by: Jeff Kirsher Signed-off-by: Tony Nguyen --- include/linux/net/intel/iecm_lan_pf_regs.h | 120 ++++ include/linux/net/intel/iecm_lan_txrx.h | 635 +++++++++++++++++++++ include/linux/net/intel/iecm_txrx.h | 582 +++++++++++++++++++ 3 files changed, 1337 insertions(+) create mode 100644 include/linux/net/intel/iecm_lan_pf_regs.h create mode 100644 include/linux/net/intel/iecm_lan_txrx.h create mode 100644 include/linux/net/intel/iecm_txrx.h diff --git a/include/linux/net/intel/iecm_lan_pf_regs.h b/include/linux/net/intel/iecm_lan_pf_regs.h new file mode 100644 index 000000000000..6690a2645608 --- /dev/null +++ b/include/linux/net/intel/iecm_lan_pf_regs.h @@ -0,0 +1,120 @@ +/* SPDX-License-Identifier: GPL-2.0 */ +/* Copyright (c) 2020, Intel Corporation. */ + +#ifndef _IECM_LAN_PF_REGS_H_ +#define _IECM_LAN_PF_REGS_H_ + +/* Receive queues */ +#define PF_QRX_BASE 0x00000000 +#define PF_QRX_TAIL(_QRX) (PF_QRX_BASE + (((_QRX) * 0x1000))) +#define PF_QRX_BUFFQ_BASE 0x03000000 +#define PF_QRX_BUFFQ_TAIL(_QRX) \ + (PF_QRX_BUFFQ_BASE + (((_QRX) * 0x1000))) + +/* Transmit queues */ +#define PF_QTX_BASE 0x05000000 +#define PF_QTX_COMM_DBELL(_DBQM) (PF_QTX_BASE + ((_DBQM) * 0x1000)) + +/* Control(PF Mailbox) Queue */ +#define PF_FW_BASE 0x08400000 + +#define PF_FW_ARQBAL (PF_FW_BASE) +#define PF_FW_ARQBAH (PF_FW_BASE + 0x4) +#define PF_FW_ARQLEN (PF_FW_BASE + 0x8) +#define PF_FW_ARQLEN_ARQLEN_S 0 +#define PF_FW_ARQLEN_ARQLEN_M MAKEMASK(0x1FFF, PF_FW_ARQLEN_ARQLEN_S) +#define PF_FW_ARQLEN_ARQVFE_S 28 +#define PF_FW_ARQLEN_ARQVFE_M BIT(PF_FW_ARQLEN_ARQVFE_S) +#define PF_FW_ARQLEN_ARQOVFL_S 29 +#define PF_FW_ARQLEN_ARQOVFL_M BIT(PF_FW_ARQLEN_ARQOVFL_S) +#define PF_FW_ARQLEN_ARQCRIT_S 30 +#define PF_FW_ARQLEN_ARQCRIT_M BIT(PF_FW_ARQLEN_ARQCRIT_S) +#define PF_FW_ARQLEN_ARQENABLE_S 31 +#define PF_FW_ARQLEN_ARQENABLE_M BIT(PF_FW_ARQLEN_ARQENABLE_S) +#define PF_FW_ARQH (PF_FW_BASE + 0xC) +#define PF_FW_ARQH_ARQH_S 0 +#define PF_FW_ARQH_ARQH_M MAKEMASK(0x1FFF, PF_FW_ARQH_ARQH_S) +#define PF_FW_ARQT (PF_FW_BASE + 0x10) + +#define PF_FW_ATQBAL (PF_FW_BASE + 0x14) +#define PF_FW_ATQBAH (PF_FW_BASE + 0x18) +#define PF_FW_ATQLEN (PF_FW_BASE + 0x1C) +#define PF_FW_ATQLEN_ATQLEN_S 0 +#define PF_FW_ATQLEN_ATQLEN_M MAKEMASK(0x3FF, PF_FW_ATQLEN_ATQLEN_S) +#define PF_FW_ATQLEN_ATQVFE_S 28 +#define PF_FW_ATQLEN_ATQVFE_M BIT(PF_FW_ATQLEN_ATQVFE_S) +#define PF_FW_ATQLEN_ATQOVFL_S 29 +#define PF_FW_ATQLEN_ATQOVFL_M BIT(PF_FW_ATQLEN_ATQOVFL_S) +#define PF_FW_ATQLEN_ATQCRIT_S 30 +#define PF_FW_ATQLEN_ATQCRIT_M BIT(PF_FW_ATQLEN_ATQCRIT_S) +#define PF_FW_ATQLEN_ATQENABLE_S 31 +#define PF_FW_ATQLEN_ATQENABLE_M BIT(PF_FW_ATQLEN_ATQENABLE_S) +#define PF_FW_ATQH (PF_FW_BASE + 0x20) +#define PF_FW_ATQH_ATQH_S 0 +#define PF_FW_ATQH_ATQH_M MAKEMASK(0x3FF, PF_FW_ATQH_ATQH_S) +#define PF_FW_ATQT (PF_FW_BASE + 0x24) + +/* Interrupts */ +#define PF_GLINT_BASE 0x08900000 +#define PF_GLINT_DYN_CTL(_INT) (PF_GLINT_BASE + ((_INT) * 0x1000)) +#define PF_GLINT_DYN_CTL_INTENA_S 0 +#define PF_GLINT_DYN_CTL_INTENA_M BIT(PF_GLINT_DYN_CTL_INTENA_S) +#define PF_GLINT_DYN_CTL_CLEARPBA_S 1 +#define PF_GLINT_DYN_CTL_CLEARPBA_M BIT(PF_GLINT_DYN_CTL_CLEARPBA_S) +#define PF_GLINT_DYN_CTL_SWINT_TRIG_S 2 +#define PF_GLINT_DYN_CTL_SWINT_TRIG_M BIT(PF_GLINT_DYN_CTL_SWINT_TRIG_S) +#define 
PF_GLINT_DYN_CTL_ITR_INDX_S 3 +#define PF_GLINT_DYN_CTL_INTERVAL_S 5 +#define PF_GLINT_DYN_CTL_INTERVAL_M BIT(PF_GLINT_DYN_CTL_INTERVAL_S) +#define PF_GLINT_DYN_CTL_SW_ITR_INDX_ENA_S 24 +#define PF_GLINT_DYN_CTL_SW_ITR_INDX_ENA_M \ + BIT(PF_GLINT_DYN_CTL_SW_ITR_INDX_ENA_S) +#define PF_GLINT_DYN_CTL_SW_ITR_INDX_S 25 +#define PF_GLINT_DYN_CTL_SW_ITR_INDX_M BIT(PF_GLINT_DYN_CTL_SW_ITR_INDX_S) +#define PF_GLINT_DYN_CTL_INTENA_MSK_S 31 +#define PF_GLINT_DYN_CTL_INTENA_MSK_M BIT(PF_GLINT_DYN_CTL_INTENA_MSK_S) +#define PF_GLINT_ITR(_i, _INT) (PF_GLINT_BASE + (((_INT) + \ + (((_i) + 1) * 4)) * 0x1000)) +#define PF_GLINT_ITR_MAX_INDEX 2 +#define PF_GLINT_ITR_INTERVAL_S 0 +#define PF_GLINT_ITR_INTERVAL_M MAKEMASK(0xFFF, PF_GLINT_ITR_INTERVAL_S) + +/* Generic registers */ +#define PF_INT_DIR_OICR_ENA 0x08406000 +#define PF_INT_DIR_OICR_ENA_S 0 +#define PF_INT_DIR_OICR_ENA_M MAKEMASK(0xFFFFFFFF, PF_INT_DIR_OICR_ENA_S) +#define PF_INT_DIR_OICR 0x08406004 +#define PF_INT_DIR_OICR_TSYN_EVNT 0 +#define PF_INT_DIR_OICR_PHY_TS_0 BIT(1) +#define PF_INT_DIR_OICR_PHY_TS_1 BIT(2) +#define PF_INT_DIR_OICR_CAUSE 0x08406008 +#define PF_INT_DIR_OICR_CAUSE_CAUSE_S 0 +#define PF_INT_DIR_OICR_CAUSE_CAUSE_M \ + MAKEMASK(0xFFFFFFFF, PF_INT_DIR_OICR_CAUSE_CAUSE_S) +#define PF_INT_PBA_CLEAR 0x0840600C + +#define PF_FUNC_RID 0x08406010 +#define PF_FUNC_RID_FUNCTION_NUMBER_S 0 +#define PF_FUNC_RID_FUNCTION_NUMBER_M \ + MAKEMASK(0x7, PF_FUNC_RID_FUNCTION_NUMBER_S) +#define PF_FUNC_RID_DEVICE_NUMBER_S 3 +#define PF_FUNC_RID_DEVICE_NUMBER_M \ + MAKEMASK(0x1F, PF_FUNC_RID_DEVICE_NUMBER_S) +#define PF_FUNC_RID_BUS_NUMBER_S 8 +#define PF_FUNC_RID_BUS_NUMBER_M MAKEMASK(0xFF, PF_FUNC_RID_BUS_NUMBER_S) + +/* Reset registers */ +#define PFGEN_RTRIG 0x08407000 +#define PFGEN_RTRIG_CORER_S 0 +#define PFGEN_RTRIG_CORER_M BIT(0) +#define PFGEN_RTRIG_LINKR_S 1 +#define PFGEN_RTRIG_LINKR_M BIT(1) +#define PFGEN_RTRIG_IMCR_S 2 +#define PFGEN_RTRIG_IMCR_M BIT(2) +#define PFGEN_RSTAT 0x08407008 /* PFR Status */ +#define PFGEN_RSTAT_PFR_STATE_S 0 +#define PFGEN_RSTAT_PFR_STATE_M MAKEMASK(0x3, PFGEN_RSTAT_PFR_STATE_S) +#define PFGEN_CTRL 0x0840700C +#define PFGEN_CTRL_PFSWR BIT(0) + +#endif diff --git a/include/linux/net/intel/iecm_lan_txrx.h b/include/linux/net/intel/iecm_lan_txrx.h new file mode 100644 index 000000000000..68d02239f319 --- /dev/null +++ b/include/linux/net/intel/iecm_lan_txrx.h @@ -0,0 +1,635 @@ +/* SPDX-License-Identifier: GPL-2.0 */ +/* Copyright (c) 2020, Intel Corporation. 
*/ + +#ifndef _IECM_LAN_TXRX_H_ +#define _IECM_LAN_TXRX_H_ + +enum iecm_rss_hash { + /* Values 0 - 28 are reserved for future use */ + IECM_HASH_INVALID = 0, + IECM_HASH_NONF_UNICAST_IPV4_UDP = 29, + IECM_HASH_NONF_MULTICAST_IPV4_UDP, + IECM_HASH_NONF_IPV4_UDP, + IECM_HASH_NONF_IPV4_TCP_SYN_NO_ACK, + IECM_HASH_NONF_IPV4_TCP, + IECM_HASH_NONF_IPV4_SCTP, + IECM_HASH_NONF_IPV4_OTHER, + IECM_HASH_FRAG_IPV4, + /* Values 37-38 are reserved */ + IECM_HASH_NONF_UNICAST_IPV6_UDP = 39, + IECM_HASH_NONF_MULTICAST_IPV6_UDP, + IECM_HASH_NONF_IPV6_UDP, + IECM_HASH_NONF_IPV6_TCP_SYN_NO_ACK, + IECM_HASH_NONF_IPV6_TCP, + IECM_HASH_NONF_IPV6_SCTP, + IECM_HASH_NONF_IPV6_OTHER, + IECM_HASH_FRAG_IPV6, + IECM_HASH_NONF_RSVD47, + IECM_HASH_NONF_FCOE_OX, + IECM_HASH_NONF_FCOE_RX, + IECM_HASH_NONF_FCOE_OTHER, + /* Values 51-62 are reserved */ + IECM_HASH_L2_PAYLOAD = 63, + IECM_HASH_MAX +}; + +/* Supported RSS offloads */ +#define IECM_DEFAULT_RSS_HASH ( \ + BIT_ULL(IECM_HASH_NONF_IPV4_UDP) | \ + BIT_ULL(IECM_HASH_NONF_IPV4_SCTP) | \ + BIT_ULL(IECM_HASH_NONF_IPV4_TCP) | \ + BIT_ULL(IECM_HASH_NONF_IPV4_OTHER) | \ + BIT_ULL(IECM_HASH_FRAG_IPV4) | \ + BIT_ULL(IECM_HASH_NONF_IPV6_UDP) | \ + BIT_ULL(IECM_HASH_NONF_IPV6_TCP) | \ + BIT_ULL(IECM_HASH_NONF_IPV6_SCTP) | \ + BIT_ULL(IECM_HASH_NONF_IPV6_OTHER) | \ + BIT_ULL(IECM_HASH_FRAG_IPV6) | \ + BIT_ULL(IECM_HASH_L2_PAYLOAD)) + +#define IECM_DEFAULT_RSS_HASH_EXPANDED (IECM_DEFAULT_RSS_HASH | \ + BIT_ULL(IECM_HASH_NONF_IPV4_TCP_SYN_NO_ACK) | \ + BIT_ULL(IECM_HASH_NONF_UNICAST_IPV4_UDP) | \ + BIT_ULL(IECM_HASH_NONF_MULTICAST_IPV4_UDP) | \ + BIT_ULL(IECM_HASH_NONF_IPV6_TCP_SYN_NO_ACK) | \ + BIT_ULL(IECM_HASH_NONF_UNICAST_IPV6_UDP) | \ + BIT_ULL(IECM_HASH_NONF_MULTICAST_IPV6_UDP)) + +/* For iecm_splitq_base_tx_compl_desc */ +#define IECM_TXD_COMPLQ_GEN_S 15 +#define IECM_TXD_COMPLQ_GEN_M BIT_ULL(IECM_TXD_COMPLQ_GEN_S) +#define IECM_TXD_COMPLQ_COMPL_TYPE_S 11 +#define IECM_TXD_COMPLQ_COMPL_TYPE_M \ + MAKEMASK(0x7UL, IECM_TXD_COMPLQ_COMPL_TYPE_S) +#define IECM_TXD_COMPLQ_QID_S 0 +#define IECM_TXD_COMPLQ_QID_M MAKEMASK(0x3FFUL, IECM_TXD_COMPLQ_QID_S) + +/* For base mode TX descriptors */ +#define IECM_TXD_CTX_QW1_MSS_S 50 +#define IECM_TXD_CTX_QW1_MSS_M \ + MAKEMASK(0x3FFFULL, IECM_TXD_CTX_QW1_MSS_S) +#define IECM_TXD_CTX_QW1_TSO_LEN_S 30 +#define IECM_TXD_CTX_QW1_TSO_LEN_M \ + MAKEMASK(0x3FFFFULL, IECM_TXD_CTX_QW1_TSO_LEN_S) +#define IECM_TXD_CTX_QW1_CMD_S 4 +#define IECM_TXD_CTX_QW1_CMD_M \ + MAKEMASK(0xFFFUL, IECM_TXD_CTX_QW1_CMD_S) +#define IECM_TXD_CTX_QW1_DTYPE_S 0 +#define IECM_TXD_CTX_QW1_DTYPE_M \ + MAKEMASK(0xFUL, IECM_TXD_CTX_QW1_DTYPE_S) +#define IECM_TXD_QW1_L2TAG1_S 48 +#define IECM_TXD_QW1_L2TAG1_M \ + MAKEMASK(0xFFFFULL, IECM_TXD_QW1_L2TAG1_S) +#define IECM_TXD_QW1_TX_BUF_SZ_S 34 +#define IECM_TXD_QW1_TX_BUF_SZ_M \ + MAKEMASK(0x3FFFULL, IECM_TXD_QW1_TX_BUF_SZ_S) +#define IECM_TXD_QW1_OFFSET_S 16 +#define IECM_TXD_QW1_OFFSET_M \ + MAKEMASK(0x3FFFFULL, IECM_TXD_QW1_OFFSET_S) +#define IECM_TXD_QW1_CMD_S 4 +#define IECM_TXD_QW1_CMD_M MAKEMASK(0xFFFUL, IECM_TXD_QW1_CMD_S) +#define IECM_TXD_QW1_DTYPE_S 0 +#define IECM_TXD_QW1_DTYPE_M MAKEMASK(0xFUL, IECM_TXD_QW1_DTYPE_S) + +/* TX Completion Descriptor Completion Types */ +#define IECM_TXD_COMPLT_ITR_FLUSH 0 +#define IECM_TXD_COMPLT_RULE_MISS 1 +#define IECM_TXD_COMPLT_RS 2 +#define IECM_TXD_COMPLT_REINJECTED 3 +#define IECM_TXD_COMPLT_RE 4 +#define IECM_TXD_COMPLT_SW_MARKER 5 + +enum iecm_tx_desc_dtype_value { + IECM_TX_DESC_DTYPE_DATA = 0, + IECM_TX_DESC_DTYPE_CTX = 1, + IECM_TX_DESC_DTYPE_REINJECT_CTX = 2, + 
IECM_TX_DESC_DTYPE_FLEX_DATA = 3, + IECM_TX_DESC_DTYPE_FLEX_CTX = 4, + IECM_TX_DESC_DTYPE_FLEX_TSO_CTX = 5, + IECM_TX_DESC_DTYPE_FLEX_TSYN_L2TAG1 = 6, + IECM_TX_DESC_DTYPE_FLEX_L2TAG1_L2TAG2 = 7, + IECM_TX_DESC_DTYPE_FLEX_TSO_L2TAG2_PARSTAG_CTX = 8, + IECM_TX_DESC_DTYPE_FLEX_HOSTSPLIT_SA_TSO_CTX = 9, + IECM_TX_DESC_DTYPE_FLEX_HOSTSPLIT_SA_CTX = 10, + IECM_TX_DESC_DTYPE_FLEX_L2TAG2_CTX = 11, + IECM_TX_DESC_DTYPE_FLEX_FLOW_SCHE = 12, + IECM_TX_DESC_DTYPE_FLEX_HOSTSPLIT_TSO_CTX = 13, + IECM_TX_DESC_DTYPE_FLEX_HOSTSPLIT_CTX = 14, + /* DESC_DONE - HW has completed write-back of descriptor */ + IECM_TX_DESC_DTYPE_DESC_DONE = 15, +}; + +enum iecm_tx_ctx_desc_cmd_bits { + IECM_TX_CTX_DESC_TSO = 0x01, + IECM_TX_CTX_DESC_TSYN = 0x02, + IECM_TX_CTX_DESC_IL2TAG2 = 0x04, + IECM_TX_CTX_DESC_RSVD = 0x08, + IECM_TX_CTX_DESC_SWTCH_NOTAG = 0x00, + IECM_TX_CTX_DESC_SWTCH_UPLINK = 0x10, + IECM_TX_CTX_DESC_SWTCH_LOCAL = 0x20, + IECM_TX_CTX_DESC_SWTCH_VSI = 0x30, + IECM_TX_CTX_DESC_FILT_AU_EN = 0x40, + IECM_TX_CTX_DESC_FILT_AU_EVICT = 0x80, + IECM_TX_CTX_DESC_RSVD1 = 0xF00 +}; + +enum iecm_tx_desc_len_fields { + /* Note: These are predefined bit offsets */ + IECM_TX_DESC_LEN_MACLEN_S = 0, /* 7 BITS */ + IECM_TX_DESC_LEN_IPLEN_S = 7, /* 7 BITS */ + IECM_TX_DESC_LEN_L4_LEN_S = 14 /* 4 BITS */ +}; + +enum iecm_tx_base_desc_cmd_bits { + IECM_TX_DESC_CMD_EOP = 0x0001, + IECM_TX_DESC_CMD_RS = 0x0002, + /* only on VFs else RSVD */ + IECM_TX_DESC_CMD_ICRC = 0x0004, + IECM_TX_DESC_CMD_IL2TAG1 = 0x0008, + IECM_TX_DESC_CMD_RSVD1 = 0x0010, + IECM_TX_DESC_CMD_IIPT_NONIP = 0x0000, /* 2 BITS */ + IECM_TX_DESC_CMD_IIPT_IPV6 = 0x0020, /* 2 BITS */ + IECM_TX_DESC_CMD_IIPT_IPV4 = 0x0040, /* 2 BITS */ + IECM_TX_DESC_CMD_IIPT_IPV4_CSUM = 0x0060, /* 2 BITS */ + IECM_TX_DESC_CMD_RSVD2 = 0x0080, + IECM_TX_DESC_CMD_L4T_EOFT_UNK = 0x0000, /* 2 BITS */ + IECM_TX_DESC_CMD_L4T_EOFT_TCP = 0x0100, /* 2 BITS */ + IECM_TX_DESC_CMD_L4T_EOFT_SCTP = 0x0200, /* 2 BITS */ + IECM_TX_DESC_CMD_L4T_EOFT_UDP = 0x0300, /* 2 BITS */ + IECM_TX_DESC_CMD_RSVD3 = 0x0400, + IECM_TX_DESC_CMD_RSVD4 = 0x0800, +}; + +/* Transmit descriptors */ +/* splitq Tx buf, singleq Tx buf and singleq compl desc */ +struct iecm_base_tx_desc { + __le64 buf_addr; /* Address of descriptor's data buf */ + __le64 qw1; /* type_cmd_offset_bsz_l2tag1 */ +};/* read used with buffer queues*/ + +struct iecm_splitq_tx_compl_desc { + /* qid=[10:0] comptype=[13:11] rsvd=[14] gen=[15] */ + __le16 qid_comptype_gen; + union { + __le16 q_head; /* Queue head */ + __le16 compl_tag; /* Completion tag */ + } q_head_compl_tag; + u8 ts[3]; + u8 rsvd; /* Reserved */ +};/* writeback used with completion queues*/ + +/* Context descriptors */ +struct iecm_base_tx_ctx_desc { + struct { + __le32 rsvd0; + __le16 l2tag2; + __le16 rsvd1; + } qw0; + __le64 qw1; /* type_cmd_tlen_mss/rt_hint */ +}; + +/* Common cmd field defines for all desc except Flex Flow Scheduler (0x0C) */ +enum iecm_tx_flex_desc_cmd_bits { + IECM_TX_FLEX_DESC_CMD_EOP = 0x01, + IECM_TX_FLEX_DESC_CMD_RS = 0x02, + IECM_TX_FLEX_DESC_CMD_RE = 0x04, + IECM_TX_FLEX_DESC_CMD_IL2TAG1 = 0x08, + IECM_TX_FLEX_DESC_CMD_DUMMY = 0x10, + IECM_TX_FLEX_DESC_CMD_CS_EN = 0x20, + IECM_TX_FLEX_DESC_CMD_FILT_AU_EN = 0x40, + IECM_TX_FLEX_DESC_CMD_FILT_AU_EVICT = 0x80, +}; + +struct iecm_flex_tx_desc { + __le64 buf_addr; /* Packet buffer address */ + struct { + __le16 cmd_dtype; +#define IECM_FLEX_TXD_QW1_DTYPE_S 0 +#define IECM_FLEX_TXD_QW1_DTYPE_M \ + (0x1FUL << IECM_FLEX_TXD_QW1_DTYPE_S) +#define IECM_FLEX_TXD_QW1_CMD_S 5 +#define IECM_FLEX_TXD_QW1_CMD_M 
MAKEMASK(0x7FFUL, IECM_TXD_QW1_CMD_S) + union { + /* DTYPE = IECM_TX_DESC_DTYPE_FLEX_DATA_(0x03) */ + u8 raw[4]; + + /* DTYPE = IECM_TX_DESC_DTYPE_FLEX_TSYN_L2TAG1 (0x06) */ + struct { + __le16 l2tag1; + u8 flex; + u8 tsync; + } tsync; + + /* DTYPE=IECM_TX_DESC_DTYPE_FLEX_L2TAG1_L2TAG2 (0x07) */ + struct { + __le16 l2tag1; + __le16 l2tag2; + } l2tags; + } flex; + __le16 buf_size; + } qw1; +}; + +struct iecm_flex_tx_sched_desc { + __le64 buf_addr; /* Packet buffer address */ + + /* DTYPE = IECM_TX_DESC_DTYPE_FLEX_FLOW_SCHE_16B (0x0C) */ + struct { + u8 cmd_dtype; +#define IECM_TXD_FLEX_FLOW_DTYPE_M 0x1F +#define IECM_TXD_FLEX_FLOW_CMD_EOP 0x20 +#define IECM_TXD_FLEX_FLOW_CMD_CS_EN 0x40 +#define IECM_TXD_FLEX_FLOW_CMD_RE 0x80 + + /* [23:23] Horizon Overflow bit, [22:0] timestamp */ + u8 ts[3]; +#define IECM_TXD_FLOW_SCH_HORIZON_OVERFLOW_M 0x80 + + __le16 compl_tag; + __le16 rxr_bufsize; +#define IECM_TXD_FLEX_FLOW_RXR 0x4000 +#define IECM_TXD_FLEX_FLOW_BUFSIZE_M 0x3FFF + } qw1; +}; + +/* Common cmd fields for all flex context descriptors + * Note: these defines already account for the 5 bit dtype in the cmd_dtype + * field + */ +enum iecm_tx_flex_ctx_desc_cmd_bits { + IECM_TX_FLEX_CTX_DESC_CMD_TSO = 0x0020, + IECM_TX_FLEX_CTX_DESC_CMD_TSYN_EN = 0x0040, + IECM_TX_FLEX_CTX_DESC_CMD_L2TAG2 = 0x0080, + IECM_TX_FLEX_CTX_DESC_CMD_SWTCH_UPLNK = 0x0200, /* 2 bits */ + IECM_TX_FLEX_CTX_DESC_CMD_SWTCH_LOCAL = 0x0400, /* 2 bits */ + IECM_TX_FLEX_CTX_DESC_CMD_SWTCH_TARGETVSI = 0x0600, /* 2 bits */ +}; + +/* Standard flex descriptor TSO context quad word */ +struct iecm_flex_tx_tso_ctx_qw { + __le32 flex_tlen; +#define IECM_TXD_FLEX_CTX_TLEN_M 0x1FFFF +#define IECM_TXD_FLEX_TSO_CTX_FLEX_S 24 + __le16 mss_rt; +#define IECM_TXD_FLEX_CTX_MSS_RT_M 0x3FFF + u8 hdr_len; + u8 flex; +}; + +union iecm_flex_tx_ctx_desc { + /* DTYPE = IECM_TX_DESC_DTYPE_FLEX_CTX (0x04) */ + struct { + u8 qw0_flex[8]; + struct { + __le16 cmd_dtype; + __le16 l2tag1; + u8 qw1_flex[4]; + } qw1; + } gen; + + /* DTYPE = IECM_TX_DESC_DTYPE_FLEX_TSO_CTX (0x05) */ + struct { + struct iecm_flex_tx_tso_ctx_qw qw0; + struct { + __le16 cmd_dtype; + u8 flex[6]; + } qw1; + } tso; + + /* DTYPE = IECM_TX_DESC_DTYPE_FLEX_TSO_L2TAG2_PARSTAG_CTX (0x08) */ + struct { + struct iecm_flex_tx_tso_ctx_qw qw0; + struct { + __le16 cmd_dtype; + __le16 l2tag2; + u8 flex0; + u8 ptag; + u8 flex1[2]; + } qw1; + } tso_l2tag2_ptag; + + /* DTYPE = IECM_TX_DESC_DTYPE_FLEX_L2TAG2_CTX (0x0B) */ + struct { + u8 qw0_flex[8]; + struct { + __le16 cmd_dtype; + __le16 l2tag2; + u8 flex[4]; + } qw1; + } l2tag2; + + /* DTYPE = IECM_TX_DESC_DTYPE_REINJECT_CTX (0x02) */ + struct { + struct { + __le32 sa_domain; +#define IECM_TXD_FLEX_CTX_SA_DOM_M 0xFFFF +#define IECM_TXD_FLEX_CTX_SA_DOM_VAL 0x10000 + __le32 sa_idx; +#define IECM_TXD_FLEX_CTX_SAIDX_M 0x1FFFFF + } qw0; + struct { + __le16 cmd_dtype; + __le16 txr2comp; +#define IECM_TXD_FLEX_CTX_TXR2COMP 0x1 + __le16 miss_txq_comp_tag; + __le16 miss_txq_id; + } qw1; + } reinjection_pkt; +}; + +/* Host Split Context Descriptors */ +struct iecm_flex_tx_hs_ctx_desc { + union { + struct { + __le32 host_fnum_tlen; +#define IECM_TXD_FLEX_CTX_TLEN_S 0 +#define IECM_TXD_FLEX_CTX_TLEN_M 0x1FFFF +#define IECM_TXD_FLEX_CTX_FNUM_S 18 +#define IECM_TXD_FLEX_CTX_FNUM_M 0x7FF +#define IECM_TXD_FLEX_CTX_HOST_S 29 +#define IECM_TXD_FLEX_CTX_HOST_M 0x7 + __le16 ftype_mss_rt; +#define IECM_TXD_FLEX_CTX_MSS_RT_0 0 +#define IECM_TXD_FLEX_CTX_MSS_RT_M 0x3FFF +#define IECM_TXD_FLEX_CTX_FTYPE_S 14 +#define IECM_TXD_FLEX_CTX_FTYPE_VF MAKEMASK(0x0, 
IECM_TXD_FLEX_CTX_FTYPE_S) +#define IECM_TXD_FLEX_CTX_FTYPE_VDEV MAKEMASK(0x1, IECM_TXD_FLEX_CTX_FTYPE_S) +#define IECM_TXD_FLEX_CTX_FTYPE_PF MAKEMASK(0x2, IECM_TXD_FLEX_CTX_FTYPE_S) + u8 hdr_len; + u8 ptag; + } tso; + struct { + u8 flex0[2]; + __le16 host_fnum_ftype; + u8 flex1[3]; + u8 ptag; + } no_tso; + } qw0; + + __le64 qw1_cmd_dtype; +#define IECM_TXD_FLEX_CTX_QW1_PASID_S 16 +#define IECM_TXD_FLEX_CTX_QW1_PASID_M 0xFFFFF +#define IECM_TXD_FLEX_CTX_QW1_PASID_VALID_S 36 +#define IECM_TXD_FLEX_CTX_QW1_PASID_VALID \ + MAKEMASK(0x1, IECM_TXD_FLEX_CTX_PASID_VALID_S) +#define IECM_TXD_FLEX_CTX_QW1_TPH_S 37 +#define IECM_TXD_FLEX_CTX_QW1_TPH \ + MAKEMASK(0x1, IECM_TXD_FLEX_CTX_TPH_S) +#define IECM_TXD_FLEX_CTX_QW1_PFNUM_S 38 +#define IECM_TXD_FLEX_CTX_QW1_PFNUM_M 0xF +/* The following are only valid for DTYPE = 0x09 and DTYPE = 0x0A */ +#define IECM_TXD_FLEX_CTX_QW1_SAIDX_S 42 +#define IECM_TXD_FLEX_CTX_QW1_SAIDX_M 0x1FFFFF +#define IECM_TXD_FLEX_CTX_QW1_SAIDX_VAL_S 63 +#define IECM_TXD_FLEX_CTX_QW1_SAIDX_VALID \ + MAKEMASK(0x1, IECM_TXD_FLEX_CTX_QW1_SAIDX_VAL_S) +/* The following are only valid for DTYPE = 0x0D and DTYPE = 0x0E */ +#define IECM_TXD_FLEX_CTX_QW1_FLEX0_S 48 +#define IECM_TXD_FLEX_CTX_QW1_FLEX0_M 0xFF +#define IECM_TXD_FLEX_CTX_QW1_FLEX1_S 56 +#define IECM_TXD_FLEX_CTX_QW1_FLEX1_M 0xFF +}; + +/* Rx */ +/* For iecm_splitq_base_rx_flex desc members */ +#define IECM_RXD_FLEX_PTYPE_S 0 +#define IECM_RXD_FLEX_PTYPE_M MAKEMASK(0x3FFUL, IECM_RXD_FLEX_PTYPE_S) +#define IECM_RXD_FLEX_UMBCAST_S 10 +#define IECM_RXD_FLEX_UMBCAST_M MAKEMASK(0x3UL, IECM_RXD_FLEX_UMBCAST_S) +#define IECM_RXD_FLEX_FF0_S 12 +#define IECM_RXD_FLEX_FF0_M MAKEMASK(0xFUL, IECM_RXD_FLEX_FF0_S) +#define IECM_RXD_FLEX_LEN_PBUF_S 0 +#define IECM_RXD_FLEX_LEN_PBUF_M \ + MAKEMASK(0x3FFFUL, IECM_RXD_FLEX_LEN_PBUF_S) +#define IECM_RXD_FLEX_GEN_S 14 +#define IECM_RXD_FLEX_GEN_M BIT_ULL(IECM_RXD_FLEX_GEN_S) +#define IECM_RXD_FLEX_BUFQ_ID_S 15 +#define IECM_RXD_FLEX_BUFQ_ID_M BIT_ULL(IECM_RXD_FLEX_BUFQ_ID_S) +#define IECM_RXD_FLEX_LEN_HDR_S 0 +#define IECM_RXD_FLEX_LEN_HDR_M \ + MAKEMASK(0x3FFUL, IECM_RXD_FLEX_LEN_HDR_S) +#define IECM_RXD_FLEX_RSC_S 10 +#define IECM_RXD_FLEX_RSC_M BIT_ULL(IECM_RXD_FLEX_RSC_S) +#define IECM_RXD_FLEX_SPH_S 11 +#define IECM_RXD_FLEX_SPH_M BIT_ULL(IECM_RXD_FLEX_SPH_S) +#define IECM_RXD_FLEX_MISS_S 12 +#define IECM_RXD_FLEX_MISS_M BIT_ULL(IECM_RXD_FLEX_MISS_S) +#define IECM_RXD_FLEX_FF1_S 13 +#define IECM_RXD_FLEX_FF1_M MAKEMASK(0x7UL, IECM_RXD_FLEX_FF1_M) + +/* For iecm_singleq_base_rx_legacy desc members */ +#define IECM_RXD_QW1_LEN_SPH_S 63 +#define IECM_RXD_QW1_LEN_SPH_M BIT_ULL(IECM_RXD_QW1_LEN_SPH_S) +#define IECM_RXD_QW1_LEN_HBUF_S 52 +#define IECM_RXD_QW1_LEN_HBUF_M MAKEMASK(0x7FFULL, IECM_RXD_QW1_LEN_HBUF_S) +#define IECM_RXD_QW1_LEN_PBUF_S 38 +#define IECM_RXD_QW1_LEN_PBUF_M MAKEMASK(0x3FFFULL, IECM_RXD_QW1_LEN_PBUF_S) +#define IECM_RXD_QW1_PTYPE_S 30 +#define IECM_RXD_QW1_PTYPE_M MAKEMASK(0xFFULL, IECM_RXD_QW1_PTYPE_S) +#define IECM_RXD_QW1_ERROR_S 19 +#define IECM_RXD_QW1_ERROR_M MAKEMASK(0xFFUL, IECM_RXD_QW1_ERROR_S) +#define IECM_RXD_QW1_STATUS_S 0 +#define IECM_RXD_QW1_STATUS_M MAKEMASK(0x7FFFFUL, IECM_RXD_QW1_STATUS_S) + +enum iecm_rx_flex_desc_status_error_0_qw1_bits { + /* Note: These are predefined bit offsets */ + IECM_RX_FLEX_DESC_STATUS0_DD_S = 0, + IECM_RX_FLEX_DESC_STATUS0_EOF_S, + IECM_RX_FLEX_DESC_STATUS0_HBO_S, + IECM_RX_FLEX_DESC_STATUS0_L3L4P_S, + IECM_RX_FLEX_DESC_STATUS0_XSUM_IPE_S, + IECM_RX_FLEX_DESC_STATUS0_XSUM_L4E_S, + IECM_RX_FLEX_DESC_STATUS0_XSUM_EIPE_S, + 
IECM_RX_FLEX_DESC_STATUS0_XSUM_EUDPE_S, +}; + +enum iecm_rx_flex_desc_status_error_0_qw0_bits { + IECM_RX_FLEX_DESC_STATUS0_LPBK_S = 0, + IECM_RX_FLEX_DESC_STATUS0_IPV6EXADD_S, + IECM_RX_FLEX_DESC_STATUS0_RXE_S, + IECM_RX_FLEX_DESC_STATUS0_CRCP_S, + IECM_RX_FLEX_DESC_STATUS0_RSS_VALID_S, + IECM_RX_FLEX_DESC_STATUS0_L2TAG1P_S, + IECM_RX_FLEX_DESC_STATUS0_XTRMD0_VALID_S, + IECM_RX_FLEX_DESC_STATUS0_XTRMD1_VALID_S, + IECM_RX_FLEX_DESC_STATUS0_LAST /* this entry must be last!!! */ +}; + +enum iecm_rx_flex_desc_status_error_1_bits { + /* Note: These are predefined bit offsets */ + IECM_RX_FLEX_DESC_STATUS1_RSVD_S = 0, /* 2 bits */ + IECM_RX_FLEX_DESC_STATUS1_ATRAEFAIL_S = 2, + IECM_RX_FLEX_DESC_STATUS1_L2TAG2P_S = 3, + IECM_RX_FLEX_DESC_STATUS1_XTRMD2_VALID_S = 4, + IECM_RX_FLEX_DESC_STATUS1_XTRMD3_VALID_S = 5, + IECM_RX_FLEX_DESC_STATUS1_XTRMD4_VALID_S = 6, + IECM_RX_FLEX_DESC_STATUS1_XTRMD5_VALID_S = 7, + IECM_RX_FLEX_DESC_STATUS1_LAST /* this entry must be last!!! */ +}; + +enum iecm_rx_base_desc_status_bits { + /* Note: These are predefined bit offsets */ + IECM_RX_BASE_DESC_STATUS_DD_S = 0, + IECM_RX_BASE_DESC_STATUS_EOF_S = 1, + IECM_RX_BASE_DESC_STATUS_L2TAG1P_S = 2, + IECM_RX_BASE_DESC_STATUS_L3L4P_S = 3, + IECM_RX_BASE_DESC_STATUS_CRCP_S = 4, + IECM_RX_BASE_DESC_STATUS_RSVD_S = 5, /* 3 BITS */ + IECM_RX_BASE_DESC_STATUS_EXT_UDP_0_S = 8, + IECM_RX_BASE_DESC_STATUS_UMBCAST_S = 9, /* 2 BITS */ + IECM_RX_BASE_DESC_STATUS_FLM_S = 11, + IECM_RX_BASE_DESC_STATUS_FLTSTAT_S = 12, /* 2 BITS */ + IECM_RX_BASE_DESC_STATUS_LPBK_S = 14, + IECM_RX_BASE_DESC_STATUS_IPV6EXADD_S = 15, + IECM_RX_BASE_DESC_STATUS_RSVD1_S = 16, /* 2 BITS */ + IECM_RX_BASE_DESC_STATUS_INT_UDP_0_S = 18, + IECM_RX_BASE_DESC_STATUS_LAST /* this entry must be last!!! */ +}; + +enum iecm_rx_desc_fltstat_values { + IECM_RX_DESC_FLTSTAT_NO_DATA = 0, + IECM_RX_DESC_FLTSTAT_RSV_FD_ID = 1, /* 16byte desc? 
FD_ID : RSV */ + IECM_RX_DESC_FLTSTAT_RSV = 2, + IECM_RX_DESC_FLTSTAT_RSS_HASH = 3, +}; + +enum iecm_rx_base_desc_error_bits { + /* Note: These are predefined bit offsets */ + IECM_RX_BASE_DESC_ERROR_RXE_S = 0, + IECM_RX_BASE_DESC_ERROR_ATRAEFAIL_S = 1, + IECM_RX_BASE_DESC_ERROR_HBO_S = 2, + IECM_RX_BASE_DESC_ERROR_L3L4E_S = 3, /* 3 BITS */ + IECM_RX_BASE_DESC_ERROR_IPE_S = 3, + IECM_RX_BASE_DESC_ERROR_L4E_S = 4, + IECM_RX_BASE_DESC_ERROR_EIPE_S = 5, + IECM_RX_BASE_DESC_ERROR_OVERSIZE_S = 6, + IECM_RX_BASE_DESC_ERROR_RSVD_S = 7 +}; + +/* Receive Descriptors */ +/* splitq buf*/ +struct iecm_splitq_rx_buf_desc { + struct { + __le16 buf_id; /* Buffer Identifier */ + __le16 rsvd0; + __le32 rsvd1; + } qword0; + __le64 pkt_addr; /* Packet buffer address */ + __le64 hdr_addr; /* Header buffer address */ + __le64 rsvd2; +}; /* read used with buffer queues*/ + +/* singleq buf */ +struct iecm_singleq_rx_buf_desc { + __le64 pkt_addr; /* Packet buffer address */ + __le64 hdr_addr; /* Header buffer address */ + __le64 rsvd1; + __le64 rsvd2; +}; /* read used with buffer queues*/ + +union iecm_rx_buf_desc { + struct iecm_singleq_rx_buf_desc read; + struct iecm_splitq_rx_buf_desc split_rd; +}; + +/* splitq compl */ +struct iecm_flex_rx_desc { + /* Qword 0 */ + u8 rxdid_ucast; /* profile_id=[3:0] */ + /* rsvd=[5:4] */ + /* ucast=[7:6] */ + u8 status_err0_qw0; + __le16 ptype_err_fflags0; /* ptype=[9:0] */ + /* ip_hdr_err=[10:10] */ + /* udp_len_err=[11:11] */ + /* ff0=[15:12] */ + __le16 pktlen_gen_bufq_id; /* plen=[13:0] */ + /* gen=[14:14] only in splitq */ + /* bufq_id=[15:15] only in splitq */ + __le16 hdrlen_flags; /* header=[9:0] */ + /* rsc=[10:10] only in splitq */ + /* sph=[11:11] only in splitq */ + /* ext_udp_0=[12:12] */ + /* int_udp_0=[13:13] */ + /* trunc_mirr=[14:14] */ + /* miss_prepend=[15:15] */ + /* Qword 1 */ + u8 status_err0_qw1; + u8 status_err1; + u8 fflags1; + u8 ts_low; + union { + __le16 fmd0; + __le16 buf_id; /* only in splitq */ + } fmd0_bufid; + union { + __le16 fmd1; + __le16 raw_cs; + __le16 l2tag1; + __le16 rscseglen; + } fmd1_misc; + /* Qword 2 */ + union { + __le16 fmd2; + __le16 hash1; + } fmd2_hash1; + union { + u8 fflags2; + u8 mirrorid; + u8 hash2; + } ff2_mirrid_hash2; + u8 hash3; + union { + __le16 fmd3; + __le16 l2tag2; + } fmd3_l2tag2; + __le16 fmd4; + /* Qword 3 */ + union { + __le16 fmd5; + __le16 l2tag1; + } fmd5_l2tag1; + __le16 fmd6; + union { + struct { + __le16 fmd7_0; + __le16 fmd7_1; + } fmd7; + __le32 ts_high; + } flex_ts; +}; /* writeback */ + +/* singleq wb(compl) */ +struct iecm_singleq_base_rx_desc { + struct { + struct { + __le16 mirroring_status; + __le16 l2tag1; + } lo_dword; + union { + __le32 rss; /* RSS Hash */ + __le32 fd_id; /* Flow Director filter id */ + } hi_dword; + } qword0; + struct { + /* status/error/PTYPE/length */ + __le64 status_error_ptype_len; + } qword1; + struct { + __le16 ext_status; /* extended status */ + __le16 rsvd; + __le16 l2tag2_1; + __le16 l2tag2_2; + } qword2; + struct { + __le32 reserved; + __le32 fd_id; + } qword3; +}; /* writeback */ + +union iecm_rx_desc { + struct iecm_singleq_rx_buf_desc read; + struct iecm_singleq_base_rx_desc base_wb; + struct iecm_flex_rx_desc flex_wb; +}; +#endif /* _IECM_LAN_TXRX_H_ */ diff --git a/include/linux/net/intel/iecm_txrx.h b/include/linux/net/intel/iecm_txrx.h new file mode 100644 index 000000000000..acac414eae05 --- /dev/null +++ b/include/linux/net/intel/iecm_txrx.h @@ -0,0 +1,582 @@ +/* SPDX-License-Identifier: GPL-2.0-only */ +/* Copyright (C) 2020 Intel Corporation */ + 
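One illustrative note on the iecm_flex_rx_desc write-back layout defined above before the next header: the second 16-bit word packs the packet length, the generation bit and the buffer-queue ID, and the IECM_RXD_FLEX_* shift/mask pairs are meant to pull them apart. A hedged sketch, assuming only the definitions above (the helper name is hypothetical and does not appear in this series):

/* Illustrative only -- relies on the masks and struct from iecm_lan_txrx.h above. */
static inline void
iecm_rx_splitq_parse_len_gen(const struct iecm_flex_rx_desc *rx_desc,
			     u16 *pkt_len, bool *gen, bool *bufq_id)
{
	u16 qw = le16_to_cpu(rx_desc->pktlen_gen_bufq_id);

	*pkt_len = qw & IECM_RXD_FLEX_LEN_PBUF_M;	/* plen = [13:0] */
	*gen = !!(qw & IECM_RXD_FLEX_GEN_M);		/* gen = [14], splitq only */
	*bufq_id = !!(qw & IECM_RXD_FLEX_BUFQ_ID_M);	/* bufq_id = [15], splitq only */
}

The gen bit is presumably compared against the queue's stored generation check (__IECM_Q_GEN_CHK in the header below), which the struct comment says flips each time hardware wraps the ring.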
+#ifndef _IECM_TXRX_H_ +#define _IECM_TXRX_H_ + +#define IECM_MAX_Q 16 +/* Mailbox Queue */ +#define IECM_MAX_NONQ 1 +#define IECM_MAX_TXQ_DESC 512 +#define IECM_MAX_RXQ_DESC 512 +#define IECM_MIN_TXQ_DESC 128 +#define IECM_MIN_RXQ_DESC 128 +#define IECM_REQ_DESC_MULTIPLE 32 + +#define IECM_DFLT_SINGLEQ_TX_Q_GROUPS 1 +#define IECM_DFLT_SINGLEQ_RX_Q_GROUPS 1 +#define IECM_DFLT_SINGLEQ_TXQ_PER_GROUP 4 +#define IECM_DFLT_SINGLEQ_RXQ_PER_GROUP 4 + +#define IECM_COMPLQ_PER_GROUP 1 +#define IECM_BUFQS_PER_RXQ_SET 2 + +#define IECM_DFLT_SPLITQ_TX_Q_GROUPS 4 +#define IECM_DFLT_SPLITQ_RX_Q_GROUPS 4 +#define IECM_DFLT_SPLITQ_TXQ_PER_GROUP 1 +#define IECM_DFLT_SPLITQ_RXQ_PER_GROUP 1 + +/* Default vector sharing */ +#define IECM_MAX_NONQ_VEC 1 +#define IECM_MAX_Q_VEC 4 /* For Tx Completion queue and Rx queue */ +#define IECM_MAX_RDMA_VEC 2 /* To share with RDMA */ +#define IECM_MIN_RDMA_VEC 1 /* Minimum vectors to be shared with RDMA */ +#define IECM_MIN_VEC 3 /* One for mailbox, one for data queues, one + * for RDMA + */ + +#define IECM_DFLT_TX_Q_DESC_COUNT 512 +#define IECM_DFLT_TX_COMPLQ_DESC_COUNT 512 +#define IECM_DFLT_RX_Q_DESC_COUNT 512 +#define IECM_DFLT_RX_BUFQ_DESC_COUNT 512 + +#define IECM_RX_BUF_WRITE 16 /* Must be power of 2 */ +#define IECM_RX_HDR_SIZE 256 +#define IECM_RX_BUF_2048 2048 +#define IECM_RX_BUF_STRIDE 64 +#define IECM_LOW_WATERMARK 64 +#define IECM_HDR_BUF_SIZE 256 +#define IECM_PACKET_HDR_PAD \ + (ETH_HLEN + ETH_FCS_LEN + (VLAN_HLEN * 2)) +#define IECM_MAX_RXBUFFER 9728 +#define IECM_MAX_MTU \ + (IECM_MAX_RXBUFFER - IECM_PACKET_HDR_PAD) + +#define IECM_SINGLEQ_RX_BUF_DESC(R, i) \ + (&(((struct iecm_singleq_rx_buf_desc *)((R)->desc_ring))[i])) +#define IECM_SPLITQ_RX_BUF_DESC(R, i) \ + (&(((struct iecm_splitq_rx_buf_desc *)((R)->desc_ring))[i])) + +#define IECM_BASE_TX_DESC(R, i) \ + (&(((struct iecm_base_tx_desc *)((R)->desc_ring))[i])) +#define IECM_SPLITQ_TX_COMPLQ_DESC(R, i) \ + (&(((struct iecm_splitq_tx_compl_desc *)((R)->desc_ring))[i])) + +#define IECM_FLEX_TX_DESC(R, i) \ + (&(((union iecm_tx_flex_desc *)((R)->desc_ring))[i])) +#define IECM_FLEX_TX_CTX_DESC(R, i) \ + (&(((union iecm_flex_tx_ctx_desc *)((R)->desc_ring))[i])) + +#define IECM_DESC_UNUSED(R) \ + ((((R)->next_to_clean > (R)->next_to_use) ? 
0 : (R)->desc_count) + \ + (R)->next_to_clean - (R)->next_to_use - 1) + +union iecm_tx_flex_desc { + struct iecm_flex_tx_desc q; /* queue based scheduling */ + struct iecm_flex_tx_sched_desc flow; /* flow based scheduling */ +}; + +struct iecm_tx_buf { + struct hlist_node hlist; + void *next_to_watch; + struct sk_buff *skb; + unsigned int bytecount; + unsigned short gso_segs; +#define IECM_TX_FLAGS_TSO BIT(0) + u32 tx_flags; + DEFINE_DMA_UNMAP_ADDR(dma); + DEFINE_DMA_UNMAP_LEN(len); + u16 compl_tag; /* Unique identifier for buffer; used to + * compare with completion tag returned + * in buffer completion event + */ +}; + +struct iecm_buf_lifo { + u16 top; + u16 size; + struct iecm_tx_buf **bufs; +}; + +struct iecm_tx_offload_params { + u16 td_cmd; /* command field to be inserted into descriptor */ + u32 tso_len; /* total length of payload to segment */ + u16 mss; + u8 tso_hdr_len; /* length of headers to be duplicated */ + + /* Flow scheduling offload timestamp, formatting as hw expects it */ +#define IECM_TW_TIME_STAMP_GRAN_512_DIV_S 9 +#define IECM_TW_TIME_STAMP_GRAN_1024_DIV_S 10 +#define IECM_TW_TIME_STAMP_GRAN_2048_DIV_S 11 +#define IECM_TW_TIME_STAMP_GRAN_4096_DIV_S 12 + u64 desc_ts; + + /* For legacy offloads */ + u32 hdr_offsets; +}; + +struct iecm_tx_splitq_params { + /* Descriptor build function pointer */ + void (*splitq_build_ctb)(union iecm_tx_flex_desc *desc, + struct iecm_tx_splitq_params *params, + u16 td_cmd, u16 size); + + /* General descriptor info */ + enum iecm_tx_desc_dtype_value dtype; + u16 eop_cmd; + u16 compl_tag; /* only relevant for flow scheduling */ + + struct iecm_tx_offload_params offload; +}; + +#define IECM_TX_COMPLQ_CLEAN_BUDGET 256 +#define IECM_TX_MIN_LEN 17 +#define IECM_TX_DESCS_FOR_SKB_DATA_PTR 1 +#define IECM_TX_MAX_BUF 8 +#define IECM_TX_DESCS_PER_CACHE_LINE 4 +#define IECM_TX_DESCS_FOR_CTX 1 +/* TX descriptors needed, worst case */ +#define IECM_TX_DESC_NEEDED (MAX_SKB_FRAGS + IECM_TX_DESCS_FOR_CTX + \ + IECM_TX_DESCS_PER_CACHE_LINE + \ + IECM_TX_DESCS_FOR_SKB_DATA_PTR) + +/* The size limit for a transmit buffer in a descriptor is (16K - 1). + * In order to align with the read requests we will align the value to + * the nearest 4K which represents our maximum read request size. 
+ */ +#define IECM_TX_MAX_READ_REQ_SIZE 4096 +#define IECM_TX_MAX_DESC_DATA (16 * 1024 - 1) +#define IECM_TX_MAX_DESC_DATA_ALIGNED \ + (~(IECM_TX_MAX_READ_REQ_SIZE - 1) & IECM_TX_MAX_DESC_DATA) + +#define IECM_RX_DMA_ATTR \ + (DMA_ATTR_SKIP_CPU_SYNC | DMA_ATTR_WEAK_ORDERING) +#define IECM_RX_DESC(R, i) \ + (&(((union iecm_rx_desc *)((R)->desc_ring))[i])) + +struct iecm_rx_buf { + struct sk_buff *skb; + dma_addr_t dma; + struct page *page; + unsigned int page_offset; + u16 pagecnt_bias; + u16 buf_id; +}; + +/* Packet type non-ip values */ +enum iecm_rx_ptype_l2 { + IECM_RX_PTYPE_L2_RESERVED = 0, + IECM_RX_PTYPE_L2_MAC_PAY2 = 1, + IECM_RX_PTYPE_L2_TIMESYNC_PAY2 = 2, + IECM_RX_PTYPE_L2_FIP_PAY2 = 3, + IECM_RX_PTYPE_L2_OUI_PAY2 = 4, + IECM_RX_PTYPE_L2_MACCNTRL_PAY2 = 5, + IECM_RX_PTYPE_L2_LLDP_PAY2 = 6, + IECM_RX_PTYPE_L2_ECP_PAY2 = 7, + IECM_RX_PTYPE_L2_EVB_PAY2 = 8, + IECM_RX_PTYPE_L2_QCN_PAY2 = 9, + IECM_RX_PTYPE_L2_EAPOL_PAY2 = 10, + IECM_RX_PTYPE_L2_ARP = 11, +}; + +enum iecm_rx_ptype_outer_ip { + IECM_RX_PTYPE_OUTER_L2 = 0, + IECM_RX_PTYPE_OUTER_IP = 1, +}; + +enum iecm_rx_ptype_outer_ip_ver { + IECM_RX_PTYPE_OUTER_NONE = 0, + IECM_RX_PTYPE_OUTER_IPV4 = 1, + IECM_RX_PTYPE_OUTER_IPV6 = 2, +}; + +enum iecm_rx_ptype_outer_fragmented { + IECM_RX_PTYPE_NOT_FRAG = 0, + IECM_RX_PTYPE_FRAG = 1, +}; + +enum iecm_rx_ptype_tunnel_type { + IECM_RX_PTYPE_TUNNEL_NONE = 0, + IECM_RX_PTYPE_TUNNEL_IP_IP = 1, + IECM_RX_PTYPE_TUNNEL_IP_GRENAT = 2, + IECM_RX_PTYPE_TUNNEL_IP_GRENAT_MAC = 3, + IECM_RX_PTYPE_TUNNEL_IP_GRENAT_MAC_VLAN = 4, +}; + +enum iecm_rx_ptype_tunnel_end_prot { + IECM_RX_PTYPE_TUNNEL_END_NONE = 0, + IECM_RX_PTYPE_TUNNEL_END_IPV4 = 1, + IECM_RX_PTYPE_TUNNEL_END_IPV6 = 2, +}; + +enum iecm_rx_ptype_inner_prot { + IECM_RX_PTYPE_INNER_PROT_NONE = 0, + IECM_RX_PTYPE_INNER_PROT_UDP = 1, + IECM_RX_PTYPE_INNER_PROT_TCP = 2, + IECM_RX_PTYPE_INNER_PROT_SCTP = 3, + IECM_RX_PTYPE_INNER_PROT_ICMP = 4, + IECM_RX_PTYPE_INNER_PROT_TIMESYNC = 5, +}; + +enum iecm_rx_ptype_payload_layer { + IECM_RX_PTYPE_PAYLOAD_LAYER_NONE = 0, + IECM_RX_PTYPE_PAYLOAD_LAYER_PAY2 = 1, + IECM_RX_PTYPE_PAYLOAD_LAYER_PAY3 = 2, + IECM_RX_PTYPE_PAYLOAD_LAYER_PAY4 = 3, +}; + +struct iecm_rx_ptype_decoded { + u32 ptype:10; + u32 known:1; + u32 outer_ip:1; + u32 outer_ip_ver:2; + u32 outer_frag:1; + u32 tunnel_type:3; + u32 tunnel_end_prot:2; + u32 tunnel_end_frag:1; + u32 inner_prot:4; + u32 payload_layer:3; +}; + +enum iecm_rx_hsplit { + IECM_RX_NO_HDR_SPLIT = 0, + IECM_RX_HDR_SPLIT = 1, + IECM_RX_HDR_SPLIT_PERF = 2, +}; + +/* The iecm_ptype_lkup table is used to convert from the 10-bit ptype in the + * hardware to a bit-field that can be used by SW to more easily determine the + * packet type. + * + * Macros are used to shorten the table lines and make this table human + * readable. + * + * We store the PTYPE in the top byte of the bit field - this is just so that + * we can check that the table doesn't have a row missing, as the index into + * the table should be the PTYPE. 
+ * + * Typical work flow: + * + * IF NOT iecm_ptype_lkup[ptype].known + * THEN + * Packet is unknown + * ELSE IF iecm_ptype_lkup[ptype].outer_ip == IECM_RX_PTYPE_OUTER_IP + * Use the rest of the fields to look at the tunnels, inner protocols, etc + * ELSE + * Use the enum iecm_rx_ptype_l2 to decode the packet type + * ENDIF + */ +/* macro to make the table lines short */ +#define IECM_PTT(PTYPE, OUTER_IP, OUTER_IP_VER, OUTER_FRAG, T, TE, TEF, I, PL)\ + { PTYPE, \ + 1, \ + IECM_RX_PTYPE_OUTER_##OUTER_IP, \ + IECM_RX_PTYPE_OUTER_##OUTER_IP_VER, \ + IECM_RX_PTYPE_##OUTER_FRAG, \ + IECM_RX_PTYPE_TUNNEL_##T, \ + IECM_RX_PTYPE_TUNNEL_END_##TE, \ + IECM_RX_PTYPE_##TEF, \ + IECM_RX_PTYPE_INNER_PROT_##I, \ + IECM_RX_PTYPE_PAYLOAD_LAYER_##PL } + +#define IECM_PTT_UNUSED_ENTRY(PTYPE) { PTYPE, 0, 0, 0, 0, 0, 0, 0, 0, 0 } + +/* shorter macros makes the table fit but are terse */ +#define IECM_RX_PTYPE_NOF IECM_RX_PTYPE_NOT_FRAG +#define IECM_RX_PTYPE_FRG IECM_RX_PTYPE_FRAG +#define IECM_RX_PTYPE_INNER_PROT_TS IECM_RX_PTYPE_INNER_PROT_TIMESYNC +#define IECM_RX_SUPP_PTYPE 18 +#define IECM_RX_MAX_PTYPE 1024 + +#define IECM_INT_NAME_STR_LEN (IFNAMSIZ + 16) + +enum iecm_queue_flags_t { + __IECM_Q_GEN_CHK, + __IECM_Q_FLOW_SCH_EN, + __IECM_Q_SW_MARKER, + __IECM_Q_FLAGS_NBITS, +}; + +struct iecm_intr_reg { + u32 dyn_ctl; + u32 dyn_ctl_intena_m; + u32 dyn_ctl_clrpba_m; + u32 dyn_ctl_itridx_s; + u32 dyn_ctl_itridx_m; + u32 dyn_ctl_intrvl_s; + u32 itr; +}; + +struct iecm_q_vector { + struct iecm_vport *vport; + cpumask_t affinity_mask; + struct napi_struct napi; + u16 v_idx; /* index in the vport->q_vector array */ + u8 itr_countdown; /* when 0 should adjust ITR */ + struct iecm_intr_reg intr_reg; + int num_txq; + struct iecm_queue **tx; + int num_rxq; + struct iecm_queue **rx; + char name[IECM_INT_NAME_STR_LEN]; +}; + +struct iecm_rx_queue_stats { + u64 packets; + u64 bytes; + u64 csum_complete; + u64 csum_unnecessary; + u64 csum_err; + u64 hsplit; + u64 hsplit_hbo; +}; + +struct iecm_tx_queue_stats { + u64 packets; + u64 bytes; +}; + +union iecm_queue_stats { + struct iecm_rx_queue_stats rx; + struct iecm_tx_queue_stats tx; +}; + +enum iecm_latency_range { + IECM_LOWEST_LATENCY = 0, + IECM_LOW_LATENCY = 1, + IECM_BULK_LATENCY = 2, +}; + +struct iecm_itr { + u16 current_itr; + u16 target_itr; + enum virtchnl_itr_idx itr_idx; + union iecm_queue_stats stats; /* will reset to 0 when adjusting ITR */ + enum iecm_latency_range latency_range; + unsigned long next_update; /* jiffies of last ITR update */ +}; + +/* indices into GLINT_ITR registers */ +#define IECM_ITR_ADAPTIVE_MIN_INC 0x0002 +#define IECM_ITR_ADAPTIVE_MIN_USECS 0x0002 +#define IECM_ITR_ADAPTIVE_MAX_USECS 0x007e +#define IECM_ITR_ADAPTIVE_LATENCY 0x8000 +#define IECM_ITR_ADAPTIVE_BULK 0x0000 +#define ITR_IS_BULK(x) (!((x) & IECM_ITR_ADAPTIVE_LATENCY)) + +#define IECM_ITR_DYNAMIC 0X8000 /* use top bit as a flag */ +#define IECM_ITR_MAX 0x1FE0 +#define IECM_ITR_100K 0x000A +#define IECM_ITR_50K 0x0014 +#define IECM_ITR_20K 0x0032 +#define IECM_ITR_18K 0x003C +#define IECM_ITR_GRAN_S 1 /* Assume ITR granularity is 2us */ +#define IECM_ITR_MASK 0x1FFE /* ITR register value alignment mask */ +#define ITR_REG_ALIGN(setting) __ALIGN_MASK(setting, ~IECM_ITR_MASK) +#define IECM_ITR_IS_DYNAMIC(setting) (!!((setting) & IECM_ITR_DYNAMIC)) +#define IECM_ITR_SETTING(setting) ((setting) & ~IECM_ITR_DYNAMIC) +#define ITR_COUNTDOWN_START 100 +#define IECM_ITR_TX_DEF IECM_ITR_20K +#define IECM_ITR_RX_DEF IECM_ITR_50K + +/* queue associated with a vport */ +struct 
iecm_queue { + struct device *dev; /* Used for DMA mapping */ + struct iecm_vport *vport; /* Back reference to associated vport */ + union { + struct iecm_txq_group *txq_grp; + struct iecm_rxq_group *rxq_grp; + }; + /* bufq: Used as group id, either 0 or 1, on clean Buf Q uses this + * index to determine which group of refill queues to clean. + * Bufqs are use in splitq only. + * txq: Index to map between Tx Q group and hot path Tx ptrs stored in + * vport. Used in both single Q/split Q. + */ + u16 idx; + /* Used for both Q models single and split. In split Q model relevant + * only to Tx Q and Rx Q + */ + u8 __iomem *tail; + /* Used in both single and split Q. In single Q, Tx Q uses tx_buf and + * Rx Q uses rx_buf. In split Q, Tx Q uses tx_buf, Rx Q uses skb, and + * Buf Q uses rx_buf. + */ + union { + struct iecm_tx_buf *tx_buf; + struct { + struct iecm_rx_buf *buf; + struct iecm_rx_buf *hdr_buf; + } rx_buf; + struct sk_buff *skb; + }; + enum virtchnl_queue_type q_type; + /* Queue id(Tx/Tx compl/Rx/Bufq) */ + u16 q_id; + u16 desc_count; /* Number of descriptors */ + + /* Relevant in both split & single Tx Q & Buf Q*/ + u16 next_to_use; + /* In split q model only relevant for Tx Compl Q and Rx Q */ + u16 next_to_clean; /* used in interrupt processing */ + /* Used only for Rx. In split Q model only relevant to Rx Q */ + u16 next_to_alloc; + /* Generation bit check stored, as HW flips the bit at Queue end */ + DECLARE_BITMAP(flags, __IECM_Q_FLAGS_NBITS); + + union iecm_queue_stats q_stats; + struct u64_stats_sync stats_sync; + + enum iecm_rx_hsplit rx_hsplit_en; + + u16 rx_hbuf_size; /* Header buffer size */ + u16 rx_buf_size; + u16 rx_max_pkt_size; + u16 rx_buf_stride; + u8 rsc_low_watermark; + /* Used for both Q models single and split. In split Q model relevant + * only to Tx compl Q and Rx compl Q + */ + struct iecm_q_vector *q_vector; /* Back reference to associated vector */ + struct iecm_itr itr; + unsigned int size; /* length of descriptor ring in bytes */ + dma_addr_t dma; /* physical address of ring */ + void *desc_ring; /* Descriptor ring memory */ + + struct iecm_buf_lifo buf_stack; /* Stack of empty buffers to store + * buffer info for out of order + * buffer completions + */ + u16 tx_buf_key; /* 16 bit unique "identifier" (index) + * to be used as the completion tag when + * queue is using flow based scheduling + */ + DECLARE_HASHTABLE(sched_buf_hash, 12); +} ____cacheline_internodealigned_in_smp; + +/* Software queues are used in splitq mode to manage buffers between rxq + * producer and the bufq consumer. These are required in order to maintain a + * lockless buffer management system and are strictly software only constructs. + */ +struct iecm_sw_queue { + u16 next_to_clean ____cacheline_aligned_in_smp; + u16 next_to_alloc ____cacheline_aligned_in_smp; + DECLARE_BITMAP(flags, __IECM_Q_FLAGS_NBITS) + ____cacheline_aligned_in_smp; + u16 *ring ____cacheline_aligned_in_smp; + u16 q_entries; +} ____cacheline_internodealigned_in_smp; + +/* Splitq only. iecm_rxq_set associates an rxq with at most two refillqs. + * Each rxq needs a refillq to return used buffers back to the respective bufq. + * Bufqs then clean these refillqs for buffers to give to hardware. + */ +struct iecm_rxq_set { + struct iecm_queue rxq; + /* refillqs assoc with bufqX mapped to this rxq */ + struct iecm_sw_queue *refillq0; + struct iecm_sw_queue *refillq1; +}; + +/* Splitq only. iecm_bufq_set associates a bufq to an overflow and array of + * refillqs. 
In this bufq_set, there will be one refillq for each rxq in this + * rxq_group. Used buffers received by rxqs will be put on refillqs which + * bufqs will clean to return new buffers back to hardware. + * + * Buffers needed by some number of rxqs associated in this rxq_group are + * managed by at most two bufqs (depending on performance configuration). + */ +struct iecm_bufq_set { + struct iecm_queue bufq; + struct iecm_sw_queue overflowq; + /* This is always equal to num_rxq_sets in idfp_rxq_group */ + int num_refillqs; + struct iecm_sw_queue *refillqs; +}; + +/* In singleq mode, an rxq_group is simply an array of rxqs. In splitq, a + * rxq_group contains all the rxqs, bufqs, refillqs, and overflowqs needed to + * manage buffers in splitq mode. + */ +struct iecm_rxq_group { + struct iecm_vport *vport; /* back pointer */ + + union { + struct { + int num_rxq; + struct iecm_queue *rxqs; + } singleq; + struct { + int num_rxq_sets; + struct iecm_rxq_set *rxq_sets; + struct iecm_bufq_set *bufq_sets; + } splitq; + }; +}; + +/* Between singleq and splitq, a txq_group is largely the same except for the + * complq. In splitq a single complq is responsible for handling completions + * for some number of txqs associated in this txq_group. + */ +struct iecm_txq_group { + struct iecm_vport *vport; /* back pointer */ + + int num_txq; + struct iecm_queue *txqs; + + /* splitq only */ + struct iecm_queue *complq; +}; + +int iecm_vport_singleq_napi_poll(struct napi_struct *napi, int budget); +void iecm_vport_init_num_qs(struct iecm_vport *vport, + struct virtchnl_create_vport *vport_msg); +void iecm_vport_calc_num_q_desc(struct iecm_vport *vport); +void iecm_vport_calc_total_qs(struct virtchnl_create_vport *vport_msg, + int num_req_qs); +void iecm_vport_calc_num_q_groups(struct iecm_vport *vport); +int iecm_vport_queues_alloc(struct iecm_vport *vport); +void iecm_vport_queues_rel(struct iecm_vport *vport); +void iecm_vport_calc_num_q_vec(struct iecm_vport *vport); +void iecm_vport_intr_dis_irq_all(struct iecm_vport *vport); +void iecm_vport_intr_clear_dflt_itr(struct iecm_vport *vport); +void iecm_vport_intr_update_itr_ena_irq(struct iecm_q_vector *q_vector); +void iecm_vport_intr_deinit(struct iecm_vport *vport); +int iecm_vport_intr_init(struct iecm_vport *vport); +irqreturn_t +iecm_vport_intr_clean_queues(int __always_unused irq, void *data); +void iecm_vport_intr_ena_irq_all(struct iecm_vport *vport); +int iecm_config_rss(struct iecm_vport *vport); +void iecm_get_rx_qid_list(struct iecm_vport *vport, u16 *qid_list); +void iecm_fill_dflt_rss_lut(struct iecm_vport *vport, u16 *qid_list); +int iecm_init_rss(struct iecm_vport *vport); +void iecm_deinit_rss(struct iecm_vport *vport); +int iecm_config_rss(struct iecm_vport *vport); +void iecm_rx_reuse_page(struct iecm_queue *rx_bufq, bool hsplit, + struct iecm_rx_buf *old_buf); +void iecm_rx_add_frag(struct iecm_rx_buf *rx_buf, struct sk_buff *skb, + unsigned int size); +struct sk_buff *iecm_rx_construct_skb(struct iecm_queue *rxq, + struct iecm_rx_buf *rx_buf, + unsigned int size); +bool iecm_rx_cleanup_headers(struct sk_buff *skb); +bool iecm_rx_recycle_buf(struct iecm_queue *rx_bufq, bool hsplit, + struct iecm_rx_buf *rx_buf); +void iecm_rx_skb(struct iecm_queue *rxq, struct sk_buff *skb); +bool iecm_rx_buf_hw_alloc(struct iecm_queue *rxq, struct iecm_rx_buf *buf); +void iecm_rx_buf_hw_update(struct iecm_queue *rxq, u32 val); +void iecm_tx_buf_hw_update(struct iecm_queue *tx_q, u32 val); +void iecm_tx_buf_rel(struct iecm_queue *tx_q, struct iecm_tx_buf 
*tx_buf); +unsigned int iecm_tx_desc_count_required(struct sk_buff *skb); +int iecm_tx_maybe_stop(struct iecm_queue *tx_q, unsigned int size); +void iecm_tx_timeout(struct net_device *netdev, + unsigned int __always_unused txqueue); +netdev_tx_t iecm_tx_splitq_start(struct sk_buff *skb, + struct net_device *netdev); +netdev_tx_t iecm_tx_singleq_start(struct sk_buff *skb, + struct net_device *netdev); +bool iecm_rx_singleq_buf_hw_alloc_all(struct iecm_queue *rxq, + u16 cleaned_count); +void iecm_get_stats64(struct net_device *netdev, + struct rtnl_link_stats64 *stats); +#endif /* !_IECM_TXRX_H_ */

From patchwork Mon Aug 24 17:32:55 2020
X-Patchwork-Submitter: Tony Nguyen
X-Patchwork-Id: 261999
From: Tony Nguyen
To: davem@davemloft.net
Cc: Alice Michael, netdev@vger.kernel.org, nhorman@redhat.com, sassmann@redhat.com, jeffrey.t.kirsher@intel.com, anthony.l.nguyen@intel.com, Alan Brady, Phani Burra, Joshua Hay, Madhu Chittim, Pavan Kumar Linga, Donald Skidmore, Jesse Brandeburg, Sridhar Samudrala
Subject: [net-next v5 04/15] iecm: Common module introduction and function stubs
Date: Mon, 24 Aug 2020 10:32:55 -0700
Message-Id: <20200824173306.3178343-5-anthony.l.nguyen@intel.com>
In-Reply-To: <20200824173306.3178343-1-anthony.l.nguyen@intel.com>
References: <20200824173306.3178343-1-anthony.l.nguyen@intel.com>
X-Mailing-List: netdev@vger.kernel.org

From: Alice Michael

This introduces function stubs for the framework of the common module.
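Among the paths stubbed out here are the Tx/Rx hot paths, which will likely rely on the ring-space check IECM_DESC_UNUSED() from iecm_txrx.h in the previous patch. Restated as a plain function (illustrative only, not part of the series), the wrap-around arithmetic reads:

/* Equivalent to IECM_DESC_UNUSED(R); not part of this series. */
static inline u16 iecm_desc_unused(const struct iecm_queue *q)
{
	/* When next_to_clean has not passed next_to_use, add the ring size so
	 * the subtraction below cannot go negative.
	 */
	u16 wrap = (q->next_to_clean > q->next_to_use) ? 0 : q->desc_count;

	/* One slot is always left empty so a full ring is distinguishable
	 * from an empty one.
	 */
	return wrap + q->next_to_clean - q->next_to_use - 1;
}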
Signed-off-by: Alice Michael Signed-off-by: Alan Brady Signed-off-by: Phani Burra Signed-off-by: Joshua Hay Signed-off-by: Madhu Chittim Signed-off-by: Pavan Kumar Linga Reviewed-by: Donald Skidmore Reviewed-by: Jesse Brandeburg Reviewed-by: Sridhar Samudrala Signed-off-by: Jeff Kirsher Signed-off-by: Tony Nguyen --- .../net/ethernet/intel/iecm/iecm_controlq.c | 194 +++ .../ethernet/intel/iecm/iecm_controlq_setup.c | 83 ++ .../net/ethernet/intel/iecm/iecm_ethtool.c | 16 + drivers/net/ethernet/intel/iecm/iecm_lib.c | 440 ++++++ drivers/net/ethernet/intel/iecm/iecm_main.c | 38 + .../ethernet/intel/iecm/iecm_singleq_txrx.c | 258 ++++ drivers/net/ethernet/intel/iecm/iecm_txrx.c | 1265 +++++++++++++++++ .../net/ethernet/intel/iecm/iecm_virtchnl.c | 577 ++++++++ 8 files changed, 2871 insertions(+) create mode 100644 drivers/net/ethernet/intel/iecm/iecm_controlq.c create mode 100644 drivers/net/ethernet/intel/iecm/iecm_controlq_setup.c create mode 100644 drivers/net/ethernet/intel/iecm/iecm_ethtool.c create mode 100644 drivers/net/ethernet/intel/iecm/iecm_lib.c create mode 100644 drivers/net/ethernet/intel/iecm/iecm_main.c create mode 100644 drivers/net/ethernet/intel/iecm/iecm_singleq_txrx.c create mode 100644 drivers/net/ethernet/intel/iecm/iecm_txrx.c create mode 100644 drivers/net/ethernet/intel/iecm/iecm_virtchnl.c diff --git a/drivers/net/ethernet/intel/iecm/iecm_controlq.c b/drivers/net/ethernet/intel/iecm/iecm_controlq.c new file mode 100644 index 000000000000..585392e4d72a --- /dev/null +++ b/drivers/net/ethernet/intel/iecm/iecm_controlq.c @@ -0,0 +1,194 @@ +// SPDX-License-Identifier: GPL-2.0-only +/* Copyright (c) 2020, Intel Corporation. */ + +#include + +/** + * iecm_ctlq_setup_regs - initialize control queue registers + * @cq: pointer to the specific control queue + * @q_create_info: structs containing info for each queue to be initialized + */ +static void +iecm_ctlq_setup_regs(struct iecm_ctlq_info *cq, + struct iecm_ctlq_create_info *q_create_info) +{ + /* stub */ +} + +/** + * iecm_ctlq_init_regs - Initialize control queue registers + * @hw: pointer to hw struct + * @cq: pointer to the specific Control queue + * @is_rxq: true if receive control queue, false otherwise + * + * Initialize registers. The caller is expected to have already initialized the + * descriptor ring memory and buffer memory + */ +static enum iecm_status iecm_ctlq_init_regs(struct iecm_hw *hw, + struct iecm_ctlq_info *cq, + bool is_rxq) +{ + /* stub */ +} + +/** + * iecm_ctlq_init_rxq_bufs - populate receive queue descriptors with buf + * @cq: pointer to the specific Control queue + * + * Record the address of the receive queue DMA buffers in the descriptors. + * The buffers must have been previously allocated. + */ +static void iecm_ctlq_init_rxq_bufs(struct iecm_ctlq_info *cq) +{ + /* stub */ +} + +/** + * iecm_ctlq_shutdown - shutdown the CQ + * @hw: pointer to hw struct + * @cq: pointer to the specific Control queue + * + * The main shutdown routine for any controq queue + */ +static void iecm_ctlq_shutdown(struct iecm_hw *hw, struct iecm_ctlq_info *cq) +{ + /* stub */ +} + +/** + * iecm_ctlq_add - add one control queue + * @hw: pointer to hardware struct + * @qinfo: info for queue to be created + * @cq_out: (output) double pointer to control queue to be created + * + * Allocate and initialize a control queue and add it to the control queue list. + * The cq parameter will be allocated/initialized and passed back to the caller + * if no errors occur. 
+ * + * Note: iecm_ctlq_init must be called prior to any calls to iecm_ctlq_add + */ +enum iecm_status iecm_ctlq_add(struct iecm_hw *hw, + struct iecm_ctlq_create_info *qinfo, + struct iecm_ctlq_info **cq_out) +{ + /* stub */ +} + +/** + * iecm_ctlq_remove - deallocate and remove specified control queue + * @hw: pointer to hardware struct + * @cq: pointer to control queue to be removed + */ +void iecm_ctlq_remove(struct iecm_hw *hw, + struct iecm_ctlq_info *cq) +{ + /* stub */ +} + +/** + * iecm_ctlq_init - main initialization routine for all control queues + * @hw: pointer to hardware struct + * @num_q: number of queues to initialize + * @q_info: array of structs containing info for each queue to be initialized + * + * This initializes any number and any type of control queues. This is an all + * or nothing routine; if one fails, all previously allocated queues will be + * destroyed. This must be called prior to using the individual add/remove + * APIs. + */ +enum iecm_status iecm_ctlq_init(struct iecm_hw *hw, u8 num_q, + struct iecm_ctlq_create_info *q_info) +{ + /* stub */ +} + +/** + * iecm_ctlq_deinit - destroy all control queues + * @hw: pointer to hw struct + */ +enum iecm_status iecm_ctlq_deinit(struct iecm_hw *hw) +{ + /* stub */ +} + +/** + * iecm_ctlq_send - send command to Control Queue (CTQ) + * @hw: pointer to hw struct + * @cq: handle to control queue struct to send on + * @num_q_msg: number of messages to send on control queue + * @q_msg: pointer to array of queue messages to be sent + * + * The caller is expected to allocate DMAable buffers and pass them to the + * send routine via the q_msg struct / control queue specific data struct. + * The control queue will hold a reference to each send message until + * the completion for that message has been cleaned. + */ +enum iecm_status iecm_ctlq_send(struct iecm_hw *hw, + struct iecm_ctlq_info *cq, + u16 num_q_msg, + struct iecm_ctlq_msg q_msg[]) +{ + /* stub */ +} + +/** + * iecm_ctlq_clean_sq - reclaim send descriptors on HW write back for the + * requested queue + * @cq: pointer to the specific Control queue + * @clean_count: (input|output) number of descriptors to clean as input, and + * number of descriptors actually cleaned as output + * @msg_status: (output) pointer to msg pointer array to be populated; needs + * to be allocated by caller + * + * Returns an array of message pointers associated with the cleaned + * descriptors. The pointers are to the original ctlq_msgs sent on the cleaned + * descriptors. The status will be returned for each; any messages that failed + * to send will have a non-zero status. The caller is expected to free original + * ctlq_msgs and free or reuse the DMA buffers. + */ +enum iecm_status iecm_ctlq_clean_sq(struct iecm_ctlq_info *cq, + u16 *clean_count, + struct iecm_ctlq_msg *msg_status[]) +{ + /* stub */ +} + +/** + * iecm_ctlq_post_rx_buffs - post buffers to descriptor ring + * @hw: pointer to hw struct + * @cq: pointer to control queue handle + * @buff_count: (input|output) input is number of buffers caller is trying to + * return; output is number of buffers that were not posted + * @buffs: array of pointers to DMA mem structs to be given to hardware + * + * Caller uses this function to return DMA buffers to the descriptor ring after + * consuming them; buff_count will be the number of buffers. + * + * Note: this function needs to be called after a receive call even + * if there are no DMA buffers to be returned, i.e. 
buff_count = 0, + * buffs = NULL to support direct commands + */ +enum iecm_status iecm_ctlq_post_rx_buffs(struct iecm_hw *hw, + struct iecm_ctlq_info *cq, + u16 *buff_count, + struct iecm_dma_mem **buffs) +{ + /* stub */ +} + +/** + * iecm_ctlq_recv - receive control queue message call back + * @cq: pointer to control queue handle to receive on + * @num_q_msg: (input|output) input number of messages that should be received; + * output number of messages actually received + * @q_msg: (output) array of received control queue messages on this q; + * needs to be pre-allocated by caller for as many messages as requested + * + * Called by interrupt handler or polling mechanism. Caller is expected + * to free buffers + */ +enum iecm_status iecm_ctlq_recv(struct iecm_ctlq_info *cq, + u16 *num_q_msg, struct iecm_ctlq_msg *q_msg) +{ + /* stub */ +} diff --git a/drivers/net/ethernet/intel/iecm/iecm_controlq_setup.c b/drivers/net/ethernet/intel/iecm/iecm_controlq_setup.c new file mode 100644 index 000000000000..d4402be5aa7f --- /dev/null +++ b/drivers/net/ethernet/intel/iecm/iecm_controlq_setup.c @@ -0,0 +1,83 @@ +// SPDX-License-Identifier: GPL-2.0-only +/* Copyright (c) 2020, Intel Corporation. */ + +#include + +/** + * iecm_ctlq_alloc_desc_ring - Allocate Control Queue (CQ) rings + * @hw: pointer to hw struct + * @cq: pointer to the specific Control queue + */ +static enum iecm_status +iecm_ctlq_alloc_desc_ring(struct iecm_hw *hw, + struct iecm_ctlq_info *cq) +{ + /* stub */ +} + +/** + * iecm_ctlq_alloc_bufs - Allocate Control Queue (CQ) buffers + * @hw: pointer to hw struct + * @cq: pointer to the specific Control queue + * + * Allocate the buffer head for all control queues, and if it's a receive + * queue, allocate DMA buffers + */ +static enum iecm_status iecm_ctlq_alloc_bufs(struct iecm_hw *hw, + struct iecm_ctlq_info *cq) +{ + /* stub */ +} + +/** + * iecm_ctlq_free_desc_ring - Free Control Queue (CQ) rings + * @hw: pointer to hw struct + * @cq: pointer to the specific Control queue + * + * This assumes the posted send buffers have already been cleaned + * and de-allocated + */ +static void iecm_ctlq_free_desc_ring(struct iecm_hw *hw, + struct iecm_ctlq_info *cq) +{ + /* stub */ +} + +/** + * iecm_ctlq_free_bufs - Free CQ buffer info elements + * @hw: pointer to hw struct + * @cq: pointer to the specific Control queue + * + * Free the DMA buffers for RX queues, and DMA buffer header for both RX and TX + * queues. 
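For illustration, a minimal caller-side sketch (not part of this patch) of the receive flow the control queue API above implies: pull a message with iecm_ctlq_recv(), then hand buffers back with iecm_ctlq_post_rx_buffs() even when there are none to return. The single on-stack message and the assumption that a zero status means success are choices made only for this example.

/* Hypothetical helper, for illustration only */
static enum iecm_status iecm_example_mbx_poll(struct iecm_hw *hw,
					      struct iecm_ctlq_info *cq)
{
	struct iecm_ctlq_msg msg = {};
	enum iecm_status err;
	u16 num_recv = 1;
	u16 num_bufs = 0;

	/* pull at most one message off the receive queue */
	err = iecm_ctlq_recv(cq, &num_recv, &msg);
	if (err || !num_recv)
		return err;

	/* ... process msg here ... */

	/*
	 * Return buffers after every receive call, even with
	 * buff_count = 0 and buffs = NULL, to support direct commands.
	 */
	return iecm_ctlq_post_rx_buffs(hw, cq, &num_bufs, NULL);
}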
The upper layers are expected to manage freeing of TX DMA buffers + */ +static void iecm_ctlq_free_bufs(struct iecm_hw *hw, struct iecm_ctlq_info *cq) +{ + /* stub */ +} + +/** + * iecm_ctlq_dealloc_ring_res - Free memory allocated for control queue + * @hw: pointer to hw struct + * @cq: pointer to the specific Control queue + * + * Free the memory used by the ring, buffers and other related structures + */ +void iecm_ctlq_dealloc_ring_res(struct iecm_hw *hw, struct iecm_ctlq_info *cq) +{ + /* stub */ +} + +/** + * iecm_ctlq_alloc_ring_res - allocate memory for descriptor ring and bufs + * @hw: pointer to hw struct + * @cq: pointer to control queue struct + * + * Do *NOT* hold the lock when calling this as the memory allocation routines + * called are not going to be atomic context safe + */ +enum iecm_status iecm_ctlq_alloc_ring_res(struct iecm_hw *hw, + struct iecm_ctlq_info *cq) +{ + /* stub */ +} diff --git a/drivers/net/ethernet/intel/iecm/iecm_ethtool.c b/drivers/net/ethernet/intel/iecm/iecm_ethtool.c new file mode 100644 index 000000000000..a6532592f2f4 --- /dev/null +++ b/drivers/net/ethernet/intel/iecm/iecm_ethtool.c @@ -0,0 +1,16 @@ +// SPDX-License-Identifier: GPL-2.0-only +/* Copyright (C) 2020 Intel Corporation */ + +#include + +/** + * iecm_set_ethtool_ops - Initialize ethtool ops struct + * @netdev: network interface device structure + * + * Sets ethtool ops struct in our netdev so that ethtool can call + * our functions. + */ +void iecm_set_ethtool_ops(struct net_device *netdev) +{ + /* stub */ +} diff --git a/drivers/net/ethernet/intel/iecm/iecm_lib.c b/drivers/net/ethernet/intel/iecm/iecm_lib.c new file mode 100644 index 000000000000..a5ff9209d290 --- /dev/null +++ b/drivers/net/ethernet/intel/iecm/iecm_lib.c @@ -0,0 +1,440 @@ +// SPDX-License-Identifier: GPL-2.0-only +/* Copyright (C) 2020 Intel Corporation */ + +#define pr_fmt(fmt) KBUILD_MODNAME ": " fmt + +#include + +static const struct net_device_ops iecm_netdev_ops_splitq; +static const struct net_device_ops iecm_netdev_ops_singleq; + +/** + * iecm_mb_intr_rel_irq - Free the IRQ association with the OS + * @adapter: adapter structure + */ +static void iecm_mb_intr_rel_irq(struct iecm_adapter *adapter) +{ + /* stub */ +} + +/** + * iecm_intr_rel - Release interrupt capabilities and free memory + * @adapter: adapter to disable interrupts on + */ +static void iecm_intr_rel(struct iecm_adapter *adapter) +{ + /* stub */ +} + +/** + * iecm_mb_intr_clean - Interrupt handler for the mailbox + * @irq: interrupt number + * @data: pointer to the adapter structure + */ +static irqreturn_t iecm_mb_intr_clean(int __always_unused irq, void *data) +{ + /* stub */ +} + +/** + * iecm_mb_irq_enable - Enable MSIX interrupt for the mailbox + * @adapter: adapter to get the hardware address for register write + */ +static void iecm_mb_irq_enable(struct iecm_adapter *adapter) +{ + /* stub */ +} + +/** + * iecm_mb_intr_req_irq - Request IRQ for the mailbox interrupt + * @adapter: adapter structure to pass to the mailbox IRQ handler + */ +static int iecm_mb_intr_req_irq(struct iecm_adapter *adapter) +{ + /* stub */ +} + +/** + * iecm_get_mb_vec_id - Get vector index for mailbox + * @adapter: adapter structure to access the vector chunks + * + * The first vector id in the requested vector chunks from the CP is for + * the mailbox + */ +static void iecm_get_mb_vec_id(struct iecm_adapter *adapter) +{ + /* stub */ +} + +/** + * iecm_mb_intr_init - Initialize the mailbox interrupt + * @adapter: adapter structure to store the mailbox vector + */ 
+static int iecm_mb_intr_init(struct iecm_adapter *adapter) +{ + /* stub */ +} + +/** + * iecm_intr_distribute - Distribute MSIX vectors + * @adapter: adapter structure to get the vports + * + * Distribute the MSIX vectors acquired from the OS to the vports based on the + * num of vectors requested by each vport + */ +static void iecm_intr_distribute(struct iecm_adapter *adapter) +{ + /* stub */ +} + +/** + * iecm_intr_req - Request interrupt capabilities + * @adapter: adapter to enable interrupts on + * + * Returns 0 on success, negative on failure + */ +static int iecm_intr_req(struct iecm_adapter *adapter) +{ + /* stub */ +} + +/** + * iecm_cfg_netdev - Allocate, configure and register a netdev + * @vport: main vport structure + * + * Returns 0 on success, negative value on failure + */ +static int iecm_cfg_netdev(struct iecm_vport *vport) +{ + /* stub */ +} + +/** + * iecm_cfg_hw - Initialize HW struct + * @adapter: adapter to setup hw struct for + * + * Returns 0 on success, negative on failure + */ +static int iecm_cfg_hw(struct iecm_adapter *adapter) +{ + /* stub */ +} + +/** + * iecm_vport_res_alloc - Allocate vport major memory resources + * @vport: virtual port private structure + */ +static int iecm_vport_res_alloc(struct iecm_vport *vport) +{ + /* stub */ +} + +/** + * iecm_vport_res_free - Free vport major memory resources + * @vport: virtual port private structure + */ +static void iecm_vport_res_free(struct iecm_vport *vport) +{ + /* stub */ +} + +/** + * iecm_get_free_slot - get the next non-NULL location index in array + * @array: array to search + * @size: size of the array + * @curr: last known occupied index to be used as a search hint + * + * void * is being used to keep the functionality generic. This lets us use this + * function on any array of pointers. + */ +static int iecm_get_free_slot(void *array, int size, int curr) +{ + /* stub */ +} + +/** + * iecm_netdev_to_vport - get a vport handle from a netdev + * @netdev: network interface device structure + */ +struct iecm_vport *iecm_netdev_to_vport(struct net_device *netdev) +{ + /* stub */ +} + +/** + * iecm_netdev_to_adapter - get an adapter handle from a netdev + * @netdev: network interface device structure + */ +struct iecm_adapter *iecm_netdev_to_adapter(struct net_device *netdev) +{ + /* stub */ +} + +/** + * iecm_vport_stop - Disable a vport + * @vport: vport to disable + */ +static void iecm_vport_stop(struct iecm_vport *vport) +{ + /* stub */ +} + +/** + * iecm_stop - Disables a network interface + * @netdev: network interface device structure + * + * The stop entry point is called when an interface is de-activated by the OS, + * and the netdevice enters the DOWN state. The hardware is still under the + * driver's control, but the netdev interface is disabled. 
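As a rough sketch of the generic array search iecm_get_free_slot() above describes (assuming, per the function name, that a NULL entry marks a free slot; the stub's actual behaviour is filled in later in the series):

/* Illustrative only; not the implementation used by the driver */
static int iecm_example_get_free_slot(void *array, int size, int curr)
{
	void **tmp_array = (void **)array;
	int i;

	/* start from the hint and wrap once around the array */
	for (i = 1; i <= size; i++) {
		int idx = (curr + i) % size;

		if (!tmp_array[idx])
			return idx;
	}

	return -ENOMEM;
}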
+ * + * Returns success only - not allowed to fail + */ +static int iecm_stop(struct net_device *netdev) +{ + /* stub */ +} + +/** + * iecm_vport_rel - Delete a vport and free its resources + * @vport: the vport being removed + */ +static void iecm_vport_rel(struct iecm_vport *vport) +{ + /* stub */ +} + +/** + * iecm_vport_rel_all - Delete all vports + * @adapter: adapter from which all vports are being removed + */ +static void iecm_vport_rel_all(struct iecm_adapter *adapter) +{ + /* stub */ +} + +/** + * iecm_vport_set_hsplit - enable or disable header split on a given vport + * @vport: virtual port + * @prog: bpf_program attached to an interface or NULL + */ +void iecm_vport_set_hsplit(struct iecm_vport *vport, + struct bpf_prog __always_unused *prog) +{ + /* stub */ +} + +/** + * iecm_vport_alloc - Allocates the next available struct vport in the adapter + * @adapter: board private structure + * @vport_id: vport identifier + * + * returns a pointer to a vport on success, NULL on failure. + */ +static struct iecm_vport * +iecm_vport_alloc(struct iecm_adapter *adapter, int vport_id) +{ + /* stub */ +} + +/** + * iecm_service_task - Delayed task for handling mailbox responses + * @work: work_struct handle to our data + * + */ +static void iecm_service_task(struct work_struct *work) +{ + /* stub */ +} + +/** + * iecm_up_complete - Complete interface up sequence + * @vport: virtual port structure + */ +static int iecm_up_complete(struct iecm_vport *vport) +{ + /* stub */ +} + +/** + * iecm_rx_init_buf_tail - Write initial buffer ring tail value + * @vport: virtual port struct + */ +static void iecm_rx_init_buf_tail(struct iecm_vport *vport) +{ + /* stub */ +} + +/** + * iecm_vport_open - Bring up a vport + * @vport: vport to bring up + */ +static int iecm_vport_open(struct iecm_vport *vport) +{ + /* stub */ +} + +/** + * iecm_init_task - Delayed initialization task + * @work: work_struct handle to our data + * + * Init task finishes up pending work started in probe. Due to the asynchronous + * nature in which the device communicates with hardware, we may have to wait + * several milliseconds to get a response. Instead of busy polling in probe, + * pulling it out into a delayed work task prevents us from bogging down the + * whole system waiting for a response from hardware. + */ +static void iecm_init_task(struct work_struct *work) +{ + /* stub */ +} + +/** + * iecm_api_init - Initialize and verify device API + * @adapter: driver specific private structure + * + * Returns 0 on success, negative on failure + */ +static int iecm_api_init(struct iecm_adapter *adapter) +{ + /* stub */ +} + +/** + * iecm_deinit_task - Device deinit routine + * @adapter: Driver specific private structure + * + * Extended remove logic which will be used for + * hard reset as well + */ +static void iecm_deinit_task(struct iecm_adapter *adapter) +{ + /* stub */ +} + +/** + * iecm_init_hard_reset - Initiate a hardware reset + * @adapter: Driver specific private structure + * + * Deallocate the vports and all the resources associated with them and + * reallocate. Also reinitialize the mailbox. Return 0 on success, + * negative on failure. 
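To make the deferral described for iecm_init_task() above concrete, a hypothetical probe-side sketch follows; the init_task and init_wq members of struct iecm_adapter, and the 30 ms delay, are assumptions made for illustration rather than values taken from this series.

/* Hypothetical scheduling helper, for illustration only */
static void iecm_example_schedule_init(struct iecm_adapter *adapter)
{
	/* assumed members of struct iecm_adapter */
	INIT_DELAYED_WORK(&adapter->init_task, iecm_init_task);

	/* give the device time to respond instead of busy polling in probe */
	queue_delayed_work(adapter->init_wq, &adapter->init_task,
			   msecs_to_jiffies(30));
}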
+ */ +static int iecm_init_hard_reset(struct iecm_adapter *adapter) +{ + /* stub */ +} + +/** + * iecm_vc_event_task - Handle virtchannel event logic + * @work: work queue struct + */ +static void iecm_vc_event_task(struct work_struct *work) +{ + /* stub */ +} + +/** + * iecm_initiate_soft_reset - Initiate a software reset + * @vport: virtual port data struct + * @reset_cause: reason for the soft reset + * + * Soft reset only reallocs vport queue resources. Returns 0 on success, + * negative on failure. + */ +int iecm_initiate_soft_reset(struct iecm_vport *vport, + enum iecm_flags reset_cause) +{ + /* stub */ +} + +/** + * iecm_probe - Device initialization routine + * @pdev: PCI device information struct + * @ent: entry in iecm_pci_tbl + * @adapter: driver specific private structure + * + * Returns 0 on success, negative on failure + */ +int iecm_probe(struct pci_dev *pdev, + const struct pci_device_id __always_unused *ent, + struct iecm_adapter *adapter) +{ + /* stub */ +} +EXPORT_SYMBOL(iecm_probe); + +/** + * iecm_remove - Device removal routine + * @pdev: PCI device information struct + */ +void iecm_remove(struct pci_dev *pdev) +{ + /* stub */ +} +EXPORT_SYMBOL(iecm_remove); + +/** + * iecm_shutdown - PCI callback for shutting down device + * @pdev: PCI device information struct + */ +void iecm_shutdown(struct pci_dev *pdev) +{ + /* stub */ +} +EXPORT_SYMBOL(iecm_shutdown); + +/** + * iecm_open - Called when a network interface becomes active + * @netdev: network interface device structure + * + * The open entry point is called when a network interface is made + * active by the system (IFF_UP). At this point all resources needed + * for transmit and receive operations are allocated, the interrupt + * handler is registered with the OS, the netdev watchdog is enabled, + * and the stack is notified that the interface is ready. 
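For illustration, the open path described above might reduce to something like the following, using only helpers declared earlier in this file; this is a guess at the shape, not the implementation added later in the series.

/* Hypothetical sketch of the ndo_open path, for illustration only */
static int iecm_example_open(struct net_device *netdev)
{
	/* resolve the vport backing this netdev and bring it up */
	return iecm_vport_open(iecm_netdev_to_vport(netdev));
}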
+ * + * Returns 0 on success, negative value on failure + */ +static int iecm_open(struct net_device *netdev) +{ + /* stub */ +} + +/** + * iecm_change_mtu - NDO callback to change the MTU + * @netdev: network interface device structure + * @new_mtu: new value for maximum frame size + * + * Returns 0 on success, negative on failure + */ +static int iecm_change_mtu(struct net_device *netdev, int new_mtu) +{ + /* stub */ +} + +void *iecm_alloc_dma_mem(struct iecm_hw *hw, struct iecm_dma_mem *mem, u64 size) +{ + /* stub */ +} + +void iecm_free_dma_mem(struct iecm_hw *hw, struct iecm_dma_mem *mem) +{ + /* stub */ +} + +static const struct net_device_ops iecm_netdev_ops_splitq = { + .ndo_open = iecm_open, + .ndo_stop = iecm_stop, + .ndo_start_xmit = iecm_tx_splitq_start, + .ndo_validate_addr = eth_validate_addr, + .ndo_get_stats64 = iecm_get_stats64, +}; + +static const struct net_device_ops iecm_netdev_ops_singleq = { + .ndo_open = iecm_open, + .ndo_stop = iecm_stop, + .ndo_start_xmit = iecm_tx_singleq_start, + .ndo_validate_addr = eth_validate_addr, + .ndo_change_mtu = iecm_change_mtu, + .ndo_get_stats64 = iecm_get_stats64, +}; diff --git a/drivers/net/ethernet/intel/iecm/iecm_main.c b/drivers/net/ethernet/intel/iecm/iecm_main.c new file mode 100644 index 000000000000..68d9f2e6445b --- /dev/null +++ b/drivers/net/ethernet/intel/iecm/iecm_main.c @@ -0,0 +1,38 @@ +// SPDX-License-Identifier: GPL-2.0-only +/* Copyright (C) 2020 Intel Corporation */ + +#define pr_fmt(fmt) KBUILD_MODNAME ": " fmt + +#include + +char iecm_drv_name[] = "iecm"; +#define DRV_SUMMARY "Intel(R) Data Plane Function Linux Driver" +static const char iecm_driver_string[] = DRV_SUMMARY; +static const char iecm_copyright[] = "Copyright (c) 2020, Intel Corporation."; + +MODULE_DESCRIPTION(DRV_SUMMARY); +MODULE_LICENSE("GPL v2"); + +/** + * iecm_module_init - Driver registration routine + * + * iecm_module_init is the first routine called when the driver is + * loaded. All it does is register with the PCI subsystem. + */ +static int __init iecm_module_init(void) +{ + /* stub */ +} +module_init(iecm_module_init); + +/** + * iecm_module_exit - Driver exit cleanup routine + * + * iecm_module_exit is called just before the driver is removed + * from memory. 
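One plausible backing for iecm_alloc_dma_mem()/iecm_free_dma_mem() above is the generic coherent DMA API, sketched below; the explicit device pointer parameter and the va/pa/size members of struct iecm_dma_mem are assumptions made for the example.

/* Illustrative only; iecm_dma_mem member names are assumed */
static void *iecm_example_alloc_dma(struct device *dev,
				    struct iecm_dma_mem *mem, u64 size)
{
	mem->size = size;
	mem->va = dma_alloc_coherent(dev, size, &mem->pa, GFP_KERNEL);
	return mem->va;
}

static void iecm_example_free_dma(struct device *dev, struct iecm_dma_mem *mem)
{
	dma_free_coherent(dev, mem->size, mem->va, mem->pa);
	mem->va = NULL;
}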
+ */ +static void __exit iecm_module_exit(void) +{ + /* stub */ +} +module_exit(iecm_module_exit); diff --git a/drivers/net/ethernet/intel/iecm/iecm_singleq_txrx.c b/drivers/net/ethernet/intel/iecm/iecm_singleq_txrx.c new file mode 100644 index 000000000000..063c35274f38 --- /dev/null +++ b/drivers/net/ethernet/intel/iecm/iecm_singleq_txrx.c @@ -0,0 +1,258 @@ +// SPDX-License-Identifier: GPL-2.0-only +/* Copyright (C) 2020 Intel Corporation */ + +#include +#include + +/** + * iecm_tx_singleq_build_ctob - populate command tag offset and size + * @td_cmd: Command to be filled in desc + * @td_offset: Offset to be filled in desc + * @size: Size of the buffer + * @td_tag: VLAN tag to be filled + * + * Returns the 64 bit value populated with the input parameters + */ +static __le64 +iecm_tx_singleq_build_ctob(u64 td_cmd, u64 td_offset, unsigned int size, + u64 td_tag) +{ + /* stub */ +} + +/** + * iecm_tx_singleq_csum - Enable Tx checksum offloads + * @first: pointer to first descriptor + * @off: pointer to struct that holds offload parameters + * + * Returns 0 or error (negative) if checksum offload + */ +static +int iecm_tx_singleq_csum(struct iecm_tx_buf *first, + struct iecm_tx_offload_params *off) +{ + /* stub */ +} + +/** + * iecm_tx_singleq_map - Build the Tx base descriptor + * @tx_q: queue to send buffer on + * @first: first buffer info buffer to use + * @offloads: pointer to struct that holds offload parameters + * + * This function loops over the skb data pointed to by *first + * and gets a physical address for each memory location and programs + * it and the length into the transmit base mode descriptor. + */ +static void +iecm_tx_singleq_map(struct iecm_queue *tx_q, struct iecm_tx_buf *first, + struct iecm_tx_offload_params *offloads) +{ + /* stub */ +} + +/** + * iecm_tx_singleq_frame - Sends buffer on Tx ring using base descriptors + * @skb: send buffer + * @tx_q: queue to send buffer on + * + * Returns NETDEV_TX_OK if sent, else an error code + */ +static netdev_tx_t +iecm_tx_singleq_frame(struct sk_buff *skb, struct iecm_queue *tx_q) +{ + /* stub */ +} + +/** + * iecm_tx_singleq_start - Selects the right Tx queue to send buffer + * @skb: send buffer + * @netdev: network interface device structure + * + * Returns NETDEV_TX_OK if sent, else an error code + */ +netdev_tx_t iecm_tx_singleq_start(struct sk_buff *skb, + struct net_device *netdev) +{ + /* stub */ +} + +/** + * iecm_tx_singleq_clean - Reclaim resources from queue + * @tx_q: Tx queue to clean + * @napi_budget: Used to determine if we are in netpoll + * + */ +static bool iecm_tx_singleq_clean(struct iecm_queue *tx_q, int napi_budget) +{ + /* stub */ +} + +/** + * iecm_tx_singleq_clean_all - Clean all Tx queues + * @q_vec: queue vector + * @budget: Used to determine if we are in netpoll + * + * Returns false if clean is not complete else returns true + */ +static inline bool +iecm_tx_singleq_clean_all(struct iecm_q_vector *q_vec, int budget) +{ + /* stub */ +} + +/** + * iecm_rx_singleq_test_staterr - tests bits in Rx descriptor + * status and error fields + * @rx_desc: pointer to receive descriptor (in le64 format) + * @stat_err_bits: value to mask + * + * This function does some fast chicanery in order to return the + * value of the mask which is really only used for boolean tests. + * The status_error_ptype_len doesn't need to be shifted because it begins + * at offset zero. 
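The bit test described for iecm_rx_singleq_test_staterr() above boils down to masking the first descriptor qword without any shift, since the field begins at offset zero; a hedged sketch, with the qword name assumed:

/* Illustrative only; the descriptor field name is an assumption */
static bool iecm_example_test_staterr(__le64 status_error_ptype_len,
				      const u64 stat_err_bits)
{
	/* the field starts at bit 0, so the mask applies directly */
	return !!(le64_to_cpu(status_error_ptype_len) & stat_err_bits);
}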
+ */ +static bool +iecm_rx_singleq_test_staterr(struct iecm_singleq_base_rx_desc *rx_desc, + const u64 stat_err_bits) +{ + /* stub */ +} + +/** + * iecm_rx_singleq_is_non_eop - process handling of non-EOP buffers + * @rxq: Rx ring being processed + * @rx_desc: Rx descriptor for current buffer + * @skb: Current socket buffer containing buffer in progress + */ +static bool iecm_rx_singleq_is_non_eop(struct iecm_queue *rxq, + struct iecm_singleq_base_rx_desc + *rx_desc, struct sk_buff *skb) +{ + /* stub */ +} + +/** + * iecm_rx_singleq_csum - Indicate in skb if checksum is good + * @rxq: Rx descriptor ring packet is being transacted on + * @skb: skb currently being received and modified + * @rx_desc: the receive descriptor + * @ptype: the packet type decoded by hardware + * + * skb->protocol must be set before this function is called + */ +static void iecm_rx_singleq_csum(struct iecm_queue *rxq, struct sk_buff *skb, + struct iecm_singleq_base_rx_desc *rx_desc, + u8 ptype) +{ + /* stub */ +} + +/** + * iecm_rx_singleq_process_skb_fields - Populate skb header fields from Rx + * descriptor + * @rxq: Rx descriptor ring packet is being transacted on + * @skb: pointer to current skb being populated + * @rx_desc: descriptor for skb + * @ptype: packet type + * + * This function checks the ring, descriptor, and packet information in + * order to populate the hash, checksum, VLAN, protocol, and + * other fields within the skb. + */ +static void +iecm_rx_singleq_process_skb_fields(struct iecm_queue *rxq, struct sk_buff *skb, + struct iecm_singleq_base_rx_desc *rx_desc, + u8 ptype) +{ + /* stub */ +} + +/** + * iecm_rx_singleq_buf_hw_alloc_all - Replace used receive buffers + * @rx_q: queue for which the hw buffers are allocated + * @cleaned_count: number of buffers to replace + * + * Returns false if all allocations were successful, true if any fail + */ +bool iecm_rx_singleq_buf_hw_alloc_all(struct iecm_queue *rx_q, + u16 cleaned_count) +{ + /* stub */ +} + +/** + * iecm_singleq_rx_put_buf - wrapper function to clean and recycle buffers + * @rx_bufq: Rx descriptor queue to transact packets on + * @rx_buf: Rx buffer to pull data from + * + * This function will update the next_to_use/next_to_alloc if the current + * buffer is recycled. + */ +static void iecm_singleq_rx_put_buf(struct iecm_queue *rx_bufq, + struct iecm_rx_buf *rx_buf) +{ + /* stub */ +} + +/** + * iecm_rx_bump_ntc - Bump and wrap q->next_to_clean value + * @q: queue to bump + */ +static void iecm_singleq_rx_bump_ntc(struct iecm_queue *q) +{ + /* stub */ +} + +/** + * iecm_singleq_rx_get_buf_page - Fetch Rx buffer page and synchronize data + * @dev: device struct + * @rx_buf: Rx buf to fetch page for + * @size: size of buffer to add to skb + * + * This function will pull an Rx buffer page from the ring and synchronize it + * for use by the CPU. + */ +static struct sk_buff * +iecm_singleq_rx_get_buf_page(struct device *dev, struct iecm_rx_buf *rx_buf, + const unsigned int size) +{ + /* stub */ +} + +/** + * iecm_rx_singleq_clean - Reclaim resources after receive completes + * @rx_q: Rx queue to clean + * @budget: Total limit on number of packets to process + * + * Returns true if there's any budget left (e.g. 
the clean is finished) + */ +static int iecm_rx_singleq_clean(struct iecm_queue *rx_q, int budget) +{ + /* stub */ +} + +/** + * iecm_rx_singleq_clean_all - Clean all Rx queues + * @q_vec: queue vector + * @budget: Used to determine if we are in netpoll + * @cleaned: returns number of packets cleaned + * + * Returns false if clean is not complete else returns true + */ +static inline bool +iecm_rx_singleq_clean_all(struct iecm_q_vector *q_vec, int budget, + int *cleaned) +{ + /* stub */ +} + +/** + * iecm_vport_singleq_napi_poll - NAPI handler + * @napi: struct from which you get q_vector + * @budget: budget provided by stack + */ +int iecm_vport_singleq_napi_poll(struct napi_struct *napi, int budget) +{ + /* stub */ +} diff --git a/drivers/net/ethernet/intel/iecm/iecm_txrx.c b/drivers/net/ethernet/intel/iecm/iecm_txrx.c new file mode 100644 index 000000000000..6df39810264a --- /dev/null +++ b/drivers/net/ethernet/intel/iecm/iecm_txrx.c @@ -0,0 +1,1265 @@ +// SPDX-License-Identifier: GPL-2.0-only +/* Copyright (C) 2020 Intel Corporation */ + +#include + +/** + * iecm_buf_lifo_push - push a buffer pointer onto stack + * @stack: pointer to stack struct + * @buf: pointer to buf to push + * + * Returns 0 on success, negative on failure + **/ +static int iecm_buf_lifo_push(struct iecm_buf_lifo *stack, + struct iecm_tx_buf *buf) +{ + /* stub */ +} + +/** + * iecm_buf_lifo_pop - pop a buffer pointer from stack + * @stack: pointer to stack struct + **/ +static struct iecm_tx_buf *iecm_buf_lifo_pop(struct iecm_buf_lifo *stack) +{ + /* stub */ +} + +/** + * iecm_get_stats64 - get statistics for network device structure + * @netdev: network interface device structure + * @stats: main device statistics structure + */ +void iecm_get_stats64(struct net_device *netdev, + struct rtnl_link_stats64 *stats) +{ + /* stub */ +} + +/** + * iecm_tx_buf_rel - Release a Tx buffer + * @tx_q: the queue that owns the buffer + * @tx_buf: the buffer to free + */ +void iecm_tx_buf_rel(struct iecm_queue *tx_q, struct iecm_tx_buf *tx_buf) +{ + /* stub */ +} + +/** + * iecm_tx_buf_rel all - Free any empty Tx buffers + * @txq: queue to be cleaned + */ +static void iecm_tx_buf_rel_all(struct iecm_queue *txq) +{ + /* stub */ +} + +/** + * iecm_tx_desc_rel - Free Tx resources per queue + * @txq: Tx descriptor ring for a specific queue + * @bufq: buffer q or completion q + * + * Free all transmit software resources + */ +static void iecm_tx_desc_rel(struct iecm_queue *txq, bool bufq) +{ + /* stub */ +} + +/** + * iecm_tx_desc_rel_all - Free Tx Resources for All Queues + * @vport: virtual port structure + * + * Free all transmit software resources + */ +static void iecm_tx_desc_rel_all(struct iecm_vport *vport) +{ + /* stub */ +} + +/** + * iecm_tx_buf_alloc_all - Allocate memory for all buffer resources + * @tx_q: queue for which the buffers are allocated + * + * Returns 0 on success, negative on failure + */ +static int iecm_tx_buf_alloc_all(struct iecm_queue *tx_q) +{ + /* stub */ +} + +/** + * iecm_tx_desc_alloc - Allocate the Tx descriptors + * @tx_q: the Tx ring to set up + * @bufq: buffer or completion queue + * + * Returns 0 on success, negative on failure + */ +static int iecm_tx_desc_alloc(struct iecm_queue *tx_q, bool bufq) +{ + /* stub */ +} + +/** + * iecm_tx_desc_alloc_all - allocate all queues Tx resources + * @vport: virtual port private structure + * + * Returns 0 on success, negative on failure + */ +static int iecm_tx_desc_alloc_all(struct iecm_vport *vport) +{ + /* stub */ +} + +/** + * iecm_rx_buf_rel - 
Release a Rx buffer + * @rxq: the queue that owns the buffer + * @rx_buf: the buffer to free + */ +static void iecm_rx_buf_rel(struct iecm_queue *rxq, + struct iecm_rx_buf *rx_buf) +{ + /* stub */ +} + +/** + * iecm_rx_buf_rel_all - Free all Rx buffer resources for a queue + * @rxq: queue to be cleaned + */ +static void iecm_rx_buf_rel_all(struct iecm_queue *rxq) +{ + /* stub */ +} + +/** + * iecm_rx_desc_rel - Free a specific Rx q resources + * @rxq: queue to clean the resources from + * @bufq: buffer q or completion q + * @q_model: single or split q model + * + * Free a specific Rx queue resources + */ +static void iecm_rx_desc_rel(struct iecm_queue *rxq, bool bufq, + enum virtchnl_queue_model q_model) +{ + /* stub */ +} + +/** + * iecm_rx_desc_rel_all - Free Rx Resources for All Queues + * @vport: virtual port structure + * + * Free all Rx queues resources + */ +static void iecm_rx_desc_rel_all(struct iecm_vport *vport) +{ + /* stub */ +} + +/** + * iecm_rx_buf_hw_update - Store the new tail and head values + * @rxq: queue to bump + * @val: new head index + */ +void iecm_rx_buf_hw_update(struct iecm_queue *rxq, u32 val) +{ + /* stub */ +} + +/** + * iecm_rx_buf_hw_alloc - recycle or make a new page + * @rxq: ring to use + * @buf: rx_buffer struct to modify + * + * Returns true if the page was successfully allocated or + * reused. + */ +bool iecm_rx_buf_hw_alloc(struct iecm_queue *rxq, struct iecm_rx_buf *buf) +{ + /* stub */ +} + +/** + * iecm_rx_hdr_buf_hw_alloc - recycle or make a new page for header buffer + * @rxq: ring to use + * @hdr_buf: rx_buffer struct to modify + * + * Returns true if the page was successfully allocated or + * reused. + */ +static bool iecm_rx_hdr_buf_hw_alloc(struct iecm_queue *rxq, + struct iecm_rx_buf *hdr_buf) +{ + /* stub */ +} + +/** + * iecm_rx_buf_hw_alloc_all - Replace used receive buffers + * @rxq: queue for which the hw buffers are allocated + * @cleaned_count: number of buffers to replace + * + * Returns false if all allocations were successful, true if any fail + */ +static bool +iecm_rx_buf_hw_alloc_all(struct iecm_queue *rxq, + u16 cleaned_count) +{ + /* stub */ +} + +/** + * iecm_rx_buf_alloc_all - Allocate memory for all buffer resources + * @rxq: queue for which the buffers are allocated + * + * Returns 0 on success, negative on failure + */ +static int iecm_rx_buf_alloc_all(struct iecm_queue *rxq) +{ + /* stub */ +} + +/** + * iecm_rx_desc_alloc - Allocate queue Rx resources + * @rxq: Rx queue for which the resources are setup + * @bufq: buffer or completion queue + * @q_model: single or split queue model + * + * Returns 0 on success, negative on failure + */ +static int iecm_rx_desc_alloc(struct iecm_queue *rxq, bool bufq, + enum virtchnl_queue_model q_model) +{ + /* stub */ +} + +/** + * iecm_rx_desc_alloc_all - allocate all RX queues resources + * @vport: virtual port structure + * + * Returns 0 on success, negative on failure + */ +static int iecm_rx_desc_alloc_all(struct iecm_vport *vport) +{ + /* stub */ +} + +/** + * iecm_txq_group_rel - Release all resources for txq groups + * @vport: vport to release txq groups on + */ +static void iecm_txq_group_rel(struct iecm_vport *vport) +{ + /* stub */ +} + +/** + * iecm_rxq_group_rel - Release all resources for rxq groups + * @vport: vport to release rxq groups on + */ +static void iecm_rxq_group_rel(struct iecm_vport *vport) +{ + /* stub */ +} + +/** + * iecm_vport_queue_grp_rel_all - Release all queue groups + * @vport: vport to release queue groups for + */ +static void 
iecm_vport_queue_grp_rel_all(struct iecm_vport *vport) +{ + /* stub */ +} + +/** + * iecm_vport_queues_rel - Free memory for all queues + * @vport: virtual port + * + * Free the memory allocated for queues associated to a vport + */ +void iecm_vport_queues_rel(struct iecm_vport *vport) +{ + /* stub */ +} + +/** + * iecm_vport_init_fast_path_txqs - Initialize fast path txq array + * @vport: vport to init txqs on + * + * We get a queue index from skb->queue_mapping and we need a fast way to + * dereference the queue from queue groups. This allows us to quickly pull a + * txq based on a queue index. + * + * Returns 0 on success, negative on failure + */ +static int iecm_vport_init_fast_path_txqs(struct iecm_vport *vport) +{ + /* stub */ +} + +/** + * iecm_vport_init_num_qs - Initialize number of queues + * @vport: vport to initialize qs + * @vport_msg: data to be filled into vport + */ +void iecm_vport_init_num_qs(struct iecm_vport *vport, + struct virtchnl_create_vport *vport_msg) +{ + /* stub */ +} + +/** + * iecm_vport_calc_num_q_desc - Calculate number of queue groups + * @vport: vport to calculate q groups for + */ +void iecm_vport_calc_num_q_desc(struct iecm_vport *vport) +{ + /* stub */ +} +EXPORT_SYMBOL(iecm_vport_calc_num_q_desc); + +/** + * iecm_vport_calc_total_qs - Calculate total number of queues + * @vport_msg: message to fill with data + * @num_req_qs: user requested queues + */ +void iecm_vport_calc_total_qs(struct virtchnl_create_vport *vport_msg, + int num_req_qs) +{ + /* stub */ +} + +/** + * iecm_vport_calc_num_q_groups - Calculate number of queue groups + * @vport: vport to calculate q groups for + */ +void iecm_vport_calc_num_q_groups(struct iecm_vport *vport) +{ + /* stub */ +} +EXPORT_SYMBOL(iecm_vport_calc_num_q_groups); + +/** + * iecm_vport_calc_numq_per_grp - Calculate number of queues per group + * @vport: vport to calculate queues for + * @num_txq: int return parameter + * @num_rxq: int return parameter + */ +static void iecm_vport_calc_numq_per_grp(struct iecm_vport *vport, + int *num_txq, int *num_rxq) +{ + /* stub */ +} + +/** + * iecm_vport_calc_num_q_vec - Calculate total number of vectors required for + * this vport + * @vport: virtual port + * + */ +void iecm_vport_calc_num_q_vec(struct iecm_vport *vport) +{ + /* stub */ +} + +/** + * iecm_txq_group_alloc - Allocate all txq group resources + * @vport: vport to allocate txq groups for + * @num_txq: number of txqs to allocate for each group + * + * Returns 0 on success, negative on failure + */ +static int iecm_txq_group_alloc(struct iecm_vport *vport, int num_txq) +{ + /* stub */ +} + +/** + * iecm_rxq_group_alloc - Allocate all rxq group resources + * @vport: vport to allocate rxq groups for + * @num_rxq: number of rxqs to allocate for each group + * + * Returns 0 on success, negative on failure + */ +static int iecm_rxq_group_alloc(struct iecm_vport *vport, int num_rxq) +{ + /* stub */ +} + +/** + * iecm_vport_queue_grp_alloc_all - Allocate all queue groups/resources + * @vport: vport with qgrps to allocate + * + * Returns 0 on success, negative on failure + */ +static int iecm_vport_queue_grp_alloc_all(struct iecm_vport *vport) +{ + /* stub */ +} + +/** + * iecm_vport_queues_alloc - Allocate memory for all queues + * @vport: virtual port + * + * Allocate memory for queues associated with a vport. Returns 0 on success, + * negative on failure. 
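The fast path that iecm_vport_init_fast_path_txqs() above prepares can be pictured as a flat array indexed by the skb's queue mapping; a sketch, with the txqs member of struct iecm_vport assumed:

/* Illustrative only; vport->txqs is an assumed member */
static struct iecm_queue *
iecm_example_skb_to_txq(struct iecm_vport *vport, struct sk_buff *skb)
{
	/* queue_mapping was chosen by the stack before ndo_start_xmit */
	return vport->txqs[skb_get_queue_mapping(skb)];
}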
+ */ +int iecm_vport_queues_alloc(struct iecm_vport *vport) +{ + /* stub */ +} + +/** + * iecm_tx_find_q - Find the Tx q based on q id + * @vport: the vport we care about + * @q_id: Id of the queue + * + * Returns queue ptr if found else returns NULL + */ +static struct iecm_queue * +iecm_tx_find_q(struct iecm_vport *vport, int q_id) +{ + /* stub */ +} + +/** + * iecm_tx_handle_sw_marker - Handle queue marker packet + * @tx_q: Tx queue to handle software marker + */ +static void iecm_tx_handle_sw_marker(struct iecm_queue *tx_q) +{ + /* stub */ +} + +/** + * iecm_tx_splitq_clean_buf - Clean TX buffer resources + * @tx_q: Tx queue to clean buffer from + * @tx_buf: buffer to be cleaned + * @napi_budget: Used to determine if we are in netpoll + * + * Returns the stats (bytes/packets) cleaned from this buffer + */ +static struct iecm_tx_queue_stats +iecm_tx_splitq_clean_buf(struct iecm_queue *tx_q, struct iecm_tx_buf *tx_buf, + int napi_budget) +{ + /* stub */ +} + +/** + * iecm_stash_flow_sch_buffers - store buffere parameter info to be freed at a + * later time (only relevant for flow scheduling mode) + * @txq: Tx queue to clean + * @tx_buf: buffer to store + */ +static int +iecm_stash_flow_sch_buffers(struct iecm_queue *txq, struct iecm_tx_buf *tx_buf) +{ + /* stub */ +} + +/** + * iecm_tx_splitq_clean - Reclaim resources from buffer queue + * @tx_q: Tx queue to clean + * @end: queue index until which it should be cleaned + * @napi_budget: Used to determine if we are in netpoll + * @descs_only: true if queue is using flow-based scheduling and should + * not clean buffers at this time + * + * Cleans the queue descriptor ring. If the queue is using queue-based + * scheduling, the buffers will be cleaned as well and this function will + * return the number of bytes/packets cleaned. If the queue is using flow-based + * scheduling, only the descriptors are cleaned at this time. Separate packet + * completion events will be reported on the completion queue, and the + * buffers will be cleaned separately. The stats returned from this function + * when using flow-based scheduling are irrelevant. + */ +static struct iecm_tx_queue_stats +iecm_tx_splitq_clean(struct iecm_queue *tx_q, u16 end, int napi_budget, + bool descs_only) +{ + /* stub */ +} + +/** + * iecm_tx_clean_flow_sch_bufs - clean bufs that were stored for + * out of order completions + * @txq: queue to clean + * @compl_tag: completion tag of packet to clean (from completion descriptor) + * @desc_ts: pointer to 3 byte timestamp from descriptor + * @budget: Used to determine if we are in netpoll + */ +static struct iecm_tx_queue_stats +iecm_tx_clean_flow_sch_bufs(struct iecm_queue *txq, u16 compl_tag, + u8 *desc_ts, int budget) +{ + /* stub */ +} + +/** + * iecm_tx_clean_complq - Reclaim resources on completion queue + * @complq: Tx ring to clean + * @budget: Used to determine if we are in netpoll + * + * Returns true if there's any budget left (e.g. 
the clean is finished) + */ +static bool +iecm_tx_clean_complq(struct iecm_queue *complq, int budget) +{ + /* stub */ +} + +/** + * iecm_tx_splitq_build_ctb - populate command tag and size for queue + * based scheduling descriptors + * @desc: descriptor to populate + * @parms: pointer to Tx params struct + * @td_cmd: command to be filled in desc + * @size: size of buffer + */ +static inline void +iecm_tx_splitq_build_ctb(union iecm_tx_flex_desc *desc, + struct iecm_tx_splitq_params *parms, + u16 td_cmd, u16 size) +{ + /* stub */ +} + +/** + * iecm_tx_splitq_build_flow_desc - populate command tag and size for flow + * scheduling descriptors + * @desc: descriptor to populate + * @parms: pointer to Tx params struct + * @td_cmd: command to be filled in desc + * @size: size of buffer + */ +static inline void +iecm_tx_splitq_build_flow_desc(union iecm_tx_flex_desc *desc, + struct iecm_tx_splitq_params *parms, + u16 td_cmd, u16 size) +{ + /* stub */ +} + +/** + * __iecm_tx_maybe_stop - 2nd level check for Tx stop conditions + * @tx_q: the queue to be checked + * @size: the size buffer we want to assure is available + * + * Returns -EBUSY if a stop is needed, else 0 + */ +static int +__iecm_tx_maybe_stop(struct iecm_queue *tx_q, unsigned int size) +{ + /* stub */ +} + +/** + * iecm_tx_maybe_stop - 1st level check for Tx stop conditions + * @tx_q: the queue to be checked + * @size: number of descriptors we want to assure is available + * + * Returns 0 if stop is not needed + */ +int iecm_tx_maybe_stop(struct iecm_queue *tx_q, unsigned int size) +{ + /* stub */ +} + +/** + * iecm_tx_buf_hw_update - Store the new tail and head values + * @tx_q: queue to bump + * @val: new head index + */ +void iecm_tx_buf_hw_update(struct iecm_queue *tx_q, u32 val) +{ + /* stub */ +} + +/** + * __iecm_tx_desc_count required - Get the number of descriptors needed for Tx + * @size: transmit request size in bytes + * + * Due to hardware alignment restrictions (4K alignment), we need to + * assume that we can have no more than 12K of data per descriptor, even + * though each descriptor can take up to 16K - 1 bytes of aligned memory. + * Thus, we need to divide by 12K. But division is slow! Instead, + * we decompose the operation into shifts and one relatively cheap + * multiply operation. + * + * To divide by 12K, we first divide by 4K, then divide by 3: + * To divide by 4K, shift right by 12 bits + * To divide by 3, multiply by 85, then divide by 256 + * (Divide by 256 is done by shifting right by 8 bits) + * Finally, we add one to round up. Because 256 isn't an exact multiple of + * 3, we'll underestimate near each multiple of 12K. This is actually more + * accurate as we have 4K - 1 of wiggle room that we can fit into the last + * segment. For our purposes this is accurate out to 1M which is orders of + * magnitude greater than our largest possible GSO size. + * + * This would then be implemented as: + * return (((size >> 12) * 85) >> 8) + IECM_TX_DESCS_FOR_SKB_DATA_PTR; + * + * Since multiplication and division are commutative, we can reorder + * operations into: + * return ((size * 85) >> 20) + IECM_TX_DESCS_FOR_SKB_DATA_PTR; + */ +static unsigned int __iecm_tx_desc_count_required(unsigned int size) +{ + /* stub */ +} + +/** + * iecm_tx_desc_count_required - calculate number of Tx descriptors needed + * @skb: send buffer + * + * Returns number of data descriptors needed for this skb. 
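Plugging numbers into the shift/multiply trick derived above: for a 32 KB fragment, (32768 * 85) >> 20 = 2, and the final add rounds that up to 3 descriptors, which matches splitting 32 KB across 12 KB-max descriptors. As a sketch (IECM_TX_DESCS_FOR_SKB_DATA_PTR is defined in iecm_txrx.h and is assumed here to be the "add one to round up" term):

/* Illustrative restatement of the formula from the comment above */
static unsigned int iecm_example_desc_count(unsigned int size)
{
	/* e.g. size = 32768: (32768 * 85) >> 20 = 2, rounded up to 3 */
	return ((size * 85) >> 20) + IECM_TX_DESCS_FOR_SKB_DATA_PTR;
}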
+ */ +unsigned int iecm_tx_desc_count_required(struct sk_buff *skb) +{ + /* stub */ +} + +/** + * iecm_tx_splitq_map - Build the Tx flex descriptor + * @tx_q: queue to send buffer on + * @off: pointer to offload params struct + * @first: first buffer info buffer to use + * + * This function loops over the skb data pointed to by *first + * and gets a physical address for each memory location and programs + * it and the length into the transmit flex descriptor. + */ +static void +iecm_tx_splitq_map(struct iecm_queue *tx_q, + struct iecm_tx_offload_params *off, + struct iecm_tx_buf *first) +{ + /* stub */ +} + +/** + * iecm_tso - computes mss and TSO length to prepare for TSO + * @first: pointer to struct iecm_tx_buf + * @off: pointer to struct that holds offload parameters + * + * Returns error (negative) if TSO doesn't apply to the given skb, + * 0 otherwise. + * + * Note: this function can be used in the splitq and singleq paths + */ +static int iecm_tso(struct iecm_tx_buf *first, + struct iecm_tx_offload_params *off) +{ + /* stub */ +} + +/** + * iecm_tx_splitq_frame - Sends buffer on Tx ring using flex descriptors + * @skb: send buffer + * @tx_q: queue to send buffer on + * + * Returns NETDEV_TX_OK if sent, else an error code + */ +static netdev_tx_t +iecm_tx_splitq_frame(struct sk_buff *skb, struct iecm_queue *tx_q) +{ + /* stub */ +} + +/** + * iecm_tx_splitq_start - Selects the right Tx queue to send buffer + * @skb: send buffer + * @netdev: network interface device structure + * + * Returns NETDEV_TX_OK if sent, else an error code + */ +netdev_tx_t iecm_tx_splitq_start(struct sk_buff *skb, + struct net_device *netdev) +{ + /* stub */ +} + +/** + * iecm_ptype_to_htype - get a hash type + * @vport: virtual port data + * @ptype: the ptype value from the descriptor + * + * Returns appropriate hash type (such as PKT_HASH_TYPE_L2/L3/L4) to be used by + * skb_set_hash based on PTYPE as parsed by HW Rx pipeline and is part of + * Rx desc. + */ +static enum pkt_hash_types iecm_ptype_to_htype(struct iecm_vport *vport, + u16 ptype) +{ + /* stub */ +} + +/** + * iecm_rx_hash - set the hash value in the skb + * @rxq: Rx descriptor ring packet is being transacted on + * @skb: pointer to current skb being populated + * @rx_desc: Receive descriptor + * @ptype: the packet type decoded by hardware + */ +static void +iecm_rx_hash(struct iecm_queue *rxq, struct sk_buff *skb, + struct iecm_flex_rx_desc *rx_desc, u16 ptype) +{ + /* stub */ +} + +/** + * iecm_rx_csum - Indicate in skb if checksum is good + * @rxq: Rx descriptor ring packet is being transacted on + * @skb: pointer to current skb being populated + * @rx_desc: Receive descriptor + * @ptype: the packet type decoded by hardware + * + * skb->protocol must be set before this function is called + */ +static void +iecm_rx_csum(struct iecm_queue *rxq, struct sk_buff *skb, + struct iecm_flex_rx_desc *rx_desc, u16 ptype) +{ + /* stub */ +} + +/** + * iecm_rx_rsc - Set the RSC fields in the skb + * @rxq : Rx descriptor ring packet is being transacted on + * @skb : pointer to current skb being populated + * @rx_desc: Receive descriptor + * @ptype: the packet type decoded by hardware + * + * Populate the skb fields with the total number of RSC segments, RSC payload + * length and packet type. 
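For illustration, the hash hand-off that iecm_ptype_to_htype() and iecm_rx_hash() above describe amounts to feeding skb_set_hash() a value from the flex Rx descriptor; the hash argument below stands in for that descriptor field, and NETIF_F_RXHASH gating is left out of the sketch.

/* Illustrative only; the hash value is assumed to come from the Rx desc */
static void iecm_example_rx_hash(struct iecm_vport *vport, struct sk_buff *skb,
				 u32 hash, u16 ptype)
{
	skb_set_hash(skb, hash, iecm_ptype_to_htype(vport, ptype));
}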
+ */ +static bool iecm_rx_rsc(struct iecm_queue *rxq, struct sk_buff *skb, + struct iecm_flex_rx_desc *rx_desc, u16 ptype) +{ + /* stub */ +} + +/** + * iecm_rx_hwtstamp - check for an RX timestamp and pass up + * the stack + * @rx_desc: pointer to Rx descriptor containing timestamp + * @skb: skb to put timestamp in + */ +static void iecm_rx_hwtstamp(struct iecm_flex_rx_desc *rx_desc, + struct sk_buff __maybe_unused *skb) +{ + /* stub */ +} + +/** + * iecm_rx_process_skb_fields - Populate skb header fields from Rx descriptor + * @rxq: Rx descriptor ring packet is being transacted on + * @skb: pointer to current skb being populated + * @rx_desc: Receive descriptor + * + * This function checks the ring, descriptor, and packet information in + * order to populate the hash, checksum, VLAN, protocol, and + * other fields within the skb. + */ +static bool +iecm_rx_process_skb_fields(struct iecm_queue *rxq, struct sk_buff *skb, + struct iecm_flex_rx_desc *rx_desc) +{ + /* stub */ +} + +/** + * iecm_rx_skb - Send a completed packet up the stack + * @rxq: Rx ring in play + * @skb: packet to send up + * + * This function sends the completed packet (via. skb) up the stack using + * GRO receive functions + */ +void iecm_rx_skb(struct iecm_queue *rxq, struct sk_buff *skb) +{ + /* stub */ +} + +/** + * iecm_rx_page_is_reserved - check if reuse is possible + * @page: page struct to check + */ +static bool iecm_rx_page_is_reserved(struct page *page) +{ + /* stub */ +} + +/** + * iecm_rx_buf_adjust_pg_offset - Prepare Rx buffer for reuse + * @rx_buf: Rx buffer to adjust + * @size: Size of adjustment + * + * Update the offset within page so that Rx buf will be ready to be reused. + * For systems with PAGE_SIZE < 8192 this function will flip the page offset + * so the second half of page assigned to Rx buffer will be used, otherwise + * the offset is moved by the @size bytes + */ +static void +iecm_rx_buf_adjust_pg_offset(struct iecm_rx_buf *rx_buf, unsigned int size) +{ + /* stub */ +} + +/** + * iecm_rx_can_reuse_page - Determine if page can be reused for another Rx + * @rx_buf: buffer containing the page + * + * If page is reusable, we have a green light for calling iecm_reuse_rx_page, + * which will assign the current buffer to the buffer that next_to_alloc is + * pointing to; otherwise, the DMA mapping needs to be destroyed and + * page freed + */ +static bool iecm_rx_can_reuse_page(struct iecm_rx_buf *rx_buf) +{ + /* stub */ +} + +/** + * iecm_rx_add_frag - Add contents of Rx buffer to sk_buff as a frag + * @rx_buf: buffer containing page to add + * @skb: sk_buff to place the data into + * @size: packet length from rx_desc + * + * This function will add the data contained in rx_buf->page to the skb. + * It will just attach the page as a frag to the skb. + * The function will then update the page offset. 
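A hedged sketch of the frag attach described for iecm_rx_add_frag() above, using the stock skb_add_rx_frag() helper; the page, page_offset and truesize members of struct iecm_rx_buf are assumptions.

/* Illustrative only; rx_buf member names are assumed */
static void iecm_example_add_frag(struct iecm_rx_buf *rx_buf,
				  struct sk_buff *skb, unsigned int size)
{
	skb_add_rx_frag(skb, skb_shinfo(skb)->nr_frags, rx_buf->page,
			rx_buf->page_offset, size, rx_buf->truesize);

	/* advance/flip the offset so the rest of the page can be reused */
	iecm_rx_buf_adjust_pg_offset(rx_buf, size);
}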
+ */ +void iecm_rx_add_frag(struct iecm_rx_buf *rx_buf, struct sk_buff *skb, + unsigned int size) +{ + /* stub */ +} + +/** + * iecm_rx_reuse_page - page flip buffer and store it back on the queue + * @rx_bufq: Rx descriptor ring to store buffers on + * @hsplit: true if header buffer, false otherwise + * @old_buf: donor buffer to have page reused + * + * Synchronizes page for reuse by the adapter + */ +void iecm_rx_reuse_page(struct iecm_queue *rx_bufq, + bool hsplit, + struct iecm_rx_buf *old_buf) +{ + /* stub */ +} + +/** + * iecm_rx_get_buf_page - Fetch Rx buffer page and synchronize data for use + * @dev: device struct + * @rx_buf: Rx buf to fetch page for + * @size: size of buffer to add to skb + * @dev: device struct + * + * This function will pull an Rx buffer page from the ring and synchronize it + * for use by the CPU. + */ +static void +iecm_rx_get_buf_page(struct device *dev, struct iecm_rx_buf *rx_buf, + const unsigned int size) +{ + /* stub */ +} + +/** + * iecm_rx_construct_skb - Allocate skb and populate it + * @rxq: Rx descriptor queue + * @rx_buf: Rx buffer to pull data from + * @size: the length of the packet + * + * This function allocates an skb. It then populates it with the page + * data from the current receive descriptor, taking care to set up the + * skb correctly. + */ +struct sk_buff * +iecm_rx_construct_skb(struct iecm_queue *rxq, struct iecm_rx_buf *rx_buf, + unsigned int size) +{ + /* stub */ +} + +/** + * iecm_rx_cleanup_headers - Correct empty headers + * @skb: pointer to current skb being fixed + * + * Also address the case where we are pulling data in on pages only + * and as such no data is present in the skb header. + * + * In addition if skb is not at least 60 bytes we need to pad it so that + * it is large enough to qualify as a valid Ethernet frame. + * + * Returns true if an error was encountered and skb was freed. + */ +bool iecm_rx_cleanup_headers(struct sk_buff *skb) +{ + /* stub */ +} + +/** + * iecm_rx_splitq_test_staterr - tests bits in Rx descriptor + * status and error fields + * @stat_err_field: field from descriptor to test bits in + * @stat_err_bits: value to mask + * + */ +static bool +iecm_rx_splitq_test_staterr(u8 stat_err_field, const u8 stat_err_bits) +{ + /* stub */ +} + +/** + * iecm_rx_splitq_is_non_eop - process handling of non-EOP buffers + * @rx_desc: Rx descriptor for current buffer + * + * If the buffer is an EOP buffer, this function exits returning false, + * otherwise return true indicating that this is in fact a non-EOP buffer. + */ +static bool +iecm_rx_splitq_is_non_eop(struct iecm_flex_rx_desc *rx_desc) +{ + /* stub */ +} + +/** + * iecm_rx_recycle_buf - Clean up used buffer and either recycle or free + * @rx_bufq: Rx descriptor queue to transact packets on + * @hsplit: true if buffer is a header buffer + * @rx_buf: Rx buffer to pull data from + * + * This function will clean up the contents of the rx_buf. It will either + * recycle the buffer or unmap it and free the associated resources. + * + * Returns true if the buffer is reused, false if the buffer is freed. + */ +bool iecm_rx_recycle_buf(struct iecm_queue *rx_bufq, bool hsplit, + struct iecm_rx_buf *rx_buf) +{ + /* stub */ +} + +/** + * iecm_rx_splitq_put_bufs - wrapper function to clean and recycle buffers + * @rx_bufq: Rx descriptor queue to transact packets on + * @hdr_buf: Rx header buffer to pull data from + * @rx_buf: Rx buffer to pull data from + * + * This function will update the next_to_use/next_to_alloc if the current + * buffer is recycled. 
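The 60-byte padding requirement mentioned for iecm_rx_cleanup_headers() above maps onto the standard eth_skb_pad() helper, which pads to ETH_ZLEN and frees the skb on failure; a sketch:

/* Illustrative only; mirrors the padding rule described above */
static bool iecm_example_cleanup_headers(struct sk_buff *skb)
{
	/* eth_skb_pad() consumes the skb and returns non-zero on failure */
	if (eth_skb_pad(skb))
		return true;

	return false;
}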
+ */ +static void iecm_rx_splitq_put_bufs(struct iecm_queue *rx_bufq, + struct iecm_rx_buf *hdr_buf, + struct iecm_rx_buf *rx_buf) +{ + /* stub */ +} + +/** + * iecm_rx_bump_ntc - Bump and wrap q->next_to_clean value + * @q: queue to bump + */ +static void iecm_rx_bump_ntc(struct iecm_queue *q) +{ + /* stub */ +} + +/** + * iecm_rx_splitq_clean - Clean completed descriptors from Rx queue + * @rxq: Rx descriptor queue to retrieve receive buffer queue + * @budget: Total limit on number of packets to process + * + * This function provides a "bounce buffer" approach to Rx interrupt + * processing. The advantage to this is that on systems that have + * expensive overhead for IOMMU access this provides a means of avoiding + * it by maintaining the mapping of the page to the system. + * + * Returns amount of work completed + */ +static int iecm_rx_splitq_clean(struct iecm_queue *rxq, int budget) +{ + /* stub */ +} + +/** + * iecm_vport_intr_clean_queues - MSIX mode Interrupt Handler + * @irq: interrupt number + * @data: pointer to a q_vector + * + */ +irqreturn_t +iecm_vport_intr_clean_queues(int __always_unused irq, void *data) +{ + /* stub */ +} + +/** + * iecm_vport_intr_napi_dis_all - Disable NAPI for all q_vectors in the vport + * @vport: main vport structure + */ +static void iecm_vport_intr_napi_dis_all(struct iecm_vport *vport) +{ + /* stub */ +} + +/** + * iecm_vport_intr_rel - Free memory allocated for interrupt vectors + * @vport: virtual port + * + * Free the memory allocated for interrupt vectors associated to a vport + */ +void iecm_vport_intr_rel(struct iecm_vport *vport) +{ + /* stub */ +} + +/** + * iecm_vport_intr_rel_irq - Free the IRQ association with the OS + * @vport: main vport structure + */ +static void iecm_vport_intr_rel_irq(struct iecm_vport *vport) +{ + /* stub */ +} + +/** + * iecm_vport_intr_dis_irq_all - Disable each interrupt + * @vport: main vport structure + */ +void iecm_vport_intr_dis_irq_all(struct iecm_vport *vport) +{ + /* stub */ +} + +/** + * iecm_vport_intr_buildreg_itr - Enable default interrupt generation settings + * @q_vector: pointer to q_vector + * @type: ITR index + * @itr: ITR value + */ +static u32 iecm_vport_intr_buildreg_itr(struct iecm_q_vector *q_vector, + const int type, u16 itr) +{ + /* stub */ +} + +static inline unsigned int iecm_itr_divisor(struct iecm_q_vector *q_vector) +{ + /* stub */ +} + +/** + * iecm_vport_intr_set_new_itr - update the ITR value based on statistics + * @q_vector: structure containing interrupt and ring information + * @itr: structure containing queue performance data + * @q_type: queue type + * + * Stores a new ITR value based on packets and byte + * counts during the last interrupt. The advantage of per interrupt + * computation is faster updates and more accurate ITR for the current + * traffic pattern. Constants in this function were computed + * based on theoretical maximum wire speed and thresholds were set based + * on testing data as well as attempting to minimize response time + * while increasing bulk throughput. 
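The wrap-around that iecm_rx_bump_ntc() above performs is the usual ring-index bump; a sketch, with desc_count and next_to_clean assumed to be members of struct iecm_queue:

/* Illustrative only; queue member names are assumed */
static void iecm_example_bump_ntc(struct iecm_queue *q)
{
	u16 ntc = q->next_to_clean + 1;

	/* wrap back to the start of the ring at the end */
	if (ntc == q->desc_count)
		ntc = 0;

	q->next_to_clean = ntc;
}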
+ */ +static void iecm_vport_intr_set_new_itr(struct iecm_q_vector *q_vector, + struct iecm_itr *itr, + enum virtchnl_queue_type q_type) +{ + /* stub */ +} + +/** + * iecm_vport_intr_update_itr_ena_irq - Update ITR and re-enable MSIX interrupt + * @q_vector: q_vector for which ITR is being updated and interrupt enabled + */ +void iecm_vport_intr_update_itr_ena_irq(struct iecm_q_vector *q_vector) +{ + /* stub */ +} + +/** + * iecm_vport_intr_req_irq - get MSI-X vectors from the OS for the vport + * @vport: main vport structure + * @basename: name for the vector + */ +static int +iecm_vport_intr_req_irq(struct iecm_vport *vport, char *basename) +{ + /* stub */ +} + +/** + * iecm_vport_intr_ena_irq_all - Enable IRQ for the given vport + * @vport: main vport structure + */ +void iecm_vport_intr_ena_irq_all(struct iecm_vport *vport) +{ + /* stub */ +} + +/** + * iecm_vport_intr_deinit - Release all vector associations for the vport + * @vport: main vport structure + */ +void iecm_vport_intr_deinit(struct iecm_vport *vport) +{ + /* stub */ +} + +/** + * iecm_vport_intr_napi_ena_all - Enable NAPI for all q_vectors in the vport + * @vport: main vport structure + */ +static void +iecm_vport_intr_napi_ena_all(struct iecm_vport *vport) +{ + /* stub */ +} + +/** + * iecm_tx_splitq_clean_all- Clean completion queues + * @q_vec: queue vector + * @budget: Used to determine if we are in netpoll + * + * Returns false if clean is not complete else returns true + */ +static inline bool +iecm_tx_splitq_clean_all(struct iecm_q_vector *q_vec, int budget) +{ + /* stub */ +} + +/** + * iecm_rx_splitq_clean_all- Clean completion queues + * @q_vec: queue vector + * @budget: Used to determine if we are in netpoll + * @cleaned: returns number of packets cleaned + * + * Returns false if clean is not complete else returns true + */ +static inline bool +iecm_rx_splitq_clean_all(struct iecm_q_vector *q_vec, int budget, + int *cleaned) +{ + /* stub */ +} + +/** + * iecm_vport_splitq_napi_poll - NAPI handler + * @napi: struct from which you get q_vector + * @budget: budget provided by stack + */ +static int iecm_vport_splitq_napi_poll(struct napi_struct *napi, int budget) +{ + /* stub */ +} + +/** + * iecm_vport_intr_map_vector_to_qs - Map vectors to queues + * @vport: virtual port + * + * Mapping for vectors to queues + */ +static void iecm_vport_intr_map_vector_to_qs(struct iecm_vport *vport) +{ + /* stub */ +} + +/** + * iecm_vport_intr_init_vec_idx - Initialize the vector indexes + * @vport: virtual port + * + * Initialize vector indexes with values returned over mailbox + */ +static int iecm_vport_intr_init_vec_idx(struct iecm_vport *vport) +{ + /* stub */ +} + +/** + * iecm_vport_intr_alloc - Allocate memory for interrupt vectors + * @vport: virtual port + * + * We allocate one q_vector per queue interrupt. If allocation fails we + * return -ENOMEM. 
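For illustration, the splitq NAPI poll implied by the *_clean_all() helpers above could be shaped as below; the napi member of struct iecm_q_vector and the exact completion handling are assumptions, not the implementation added later in the series.

/* Hypothetical poll flow, for illustration only */
static int iecm_example_napi_poll(struct napi_struct *napi, int budget)
{
	struct iecm_q_vector *q_vector =
		container_of(napi, struct iecm_q_vector, napi);
	bool clean_complete;
	int work_done = 0;

	clean_complete = iecm_tx_splitq_clean_all(q_vector, budget);
	clean_complete &= iecm_rx_splitq_clean_all(q_vector, budget,
						   &work_done);

	/* keep polling if either Tx or Rx still has work left */
	if (!clean_complete)
		return budget;

	/* all done: report the work and re-arm the interrupt */
	if (napi_complete_done(napi, work_done))
		iecm_vport_intr_update_itr_ena_irq(q_vector);

	return min_t(int, work_done, budget - 1);
}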
+ */ +int iecm_vport_intr_alloc(struct iecm_vport *vport) +{ + /* stub */ +} + +/** + * iecm_vport_intr_init - Setup all vectors for the given vport + * @vport: virtual port + * + * Returns 0 on success or negative on failure + */ +int iecm_vport_intr_init(struct iecm_vport *vport) +{ + /* stub */ +} +EXPORT_SYMBOL(iecm_vport_calc_num_q_vec); + +/** + * iecm_config_rss - Prepare for RSS + * @vport: virtual port + * + * Return 0 on success, negative on failure + */ +int iecm_config_rss(struct iecm_vport *vport) +{ + /* stub */ +} + +/** + * iecm_get_rx_qid_list - Create a list of RX QIDs + * @vport: virtual port + * @qid_list: list of qids + * + * qid_list must be allocated for maximum entries to prevent buffer overflow. + */ +void iecm_get_rx_qid_list(struct iecm_vport *vport, u16 *qid_list) +{ + /* stub */ +} + +/** + * iecm_fill_dflt_rss_lut - Fill the indirection table with the default values + * @vport: virtual port structure + * @qid_list: List of the RX qid's + * + * qid_list is created and freed by the caller + */ +void iecm_fill_dflt_rss_lut(struct iecm_vport *vport, u16 *qid_list) +{ + /* stub */ +} + +/** + * iecm_init_rss - Prepare for RSS + * @vport: virtual port + * + * Return 0 on success, negative on failure + */ +int iecm_init_rss(struct iecm_vport *vport) +{ + /* stub */ +} + +/** + * iecm_deinit_rss - Prepare for RSS + * @vport: virtual port + */ +void iecm_deinit_rss(struct iecm_vport *vport) +{ + /* stub */ +} diff --git a/drivers/net/ethernet/intel/iecm/iecm_virtchnl.c b/drivers/net/ethernet/intel/iecm/iecm_virtchnl.c new file mode 100644 index 000000000000..7fa8c06c5d7d --- /dev/null +++ b/drivers/net/ethernet/intel/iecm/iecm_virtchnl.c @@ -0,0 +1,577 @@ +// SPDX-License-Identifier: GPL-2.0-only +/* Copyright (C) 2020 Intel Corporation */ + +#include + +/* Lookup table mapping the HW PTYPE to the bit field for decoding */ +static const +struct iecm_rx_ptype_decoded iecm_rx_ptype_lkup[IECM_RX_SUPP_PTYPE] = { + /* L2 Packet types */ + IECM_PTT_UNUSED_ENTRY(0), + IECM_PTT(1, L2, NONE, NOF, NONE, NONE, NOF, NONE, PAY2), + IECM_PTT(11, L2, NONE, NOF, NONE, NONE, NOF, NONE, NONE), + IECM_PTT_UNUSED_ENTRY(12), + + /* Non Tunneled IPv4 */ + IECM_PTT(22, IP, IPV4, FRG, NONE, NONE, NOF, NONE, PAY3), + IECM_PTT(23, IP, IPV4, NOF, NONE, NONE, NOF, NONE, PAY3), + IECM_PTT(24, IP, IPV4, NOF, NONE, NONE, NOF, UDP, PAY4), + IECM_PTT_UNUSED_ENTRY(25), + IECM_PTT(26, IP, IPV4, NOF, NONE, NONE, NOF, TCP, PAY4), + IECM_PTT(27, IP, IPV4, NOF, NONE, NONE, NOF, SCTP, PAY4), + IECM_PTT(28, IP, IPV4, NOF, NONE, NONE, NOF, ICMP, PAY4), + + /* Non Tunneled IPv6 */ + IECM_PTT(88, IP, IPV6, FRG, NONE, NONE, NOF, NONE, PAY3), + IECM_PTT(89, IP, IPV6, NOF, NONE, NONE, NOF, NONE, PAY3), + IECM_PTT(90, IP, IPV6, NOF, NONE, NONE, NOF, UDP, PAY3), + IECM_PTT_UNUSED_ENTRY(91), + IECM_PTT(92, IP, IPV6, NOF, NONE, NONE, NOF, TCP, PAY4), + IECM_PTT(93, IP, IPV6, NOF, NONE, NONE, NOF, SCTP, PAY4), + IECM_PTT(94, IP, IPV6, NOF, NONE, NONE, NOF, ICMP, PAY4), +}; + +/** + * iecm_recv_event_msg - Receive virtchnl event message + * @vport: virtual port structure + * + * Receive virtchnl event message + */ +static void iecm_recv_event_msg(struct iecm_vport *vport) +{ + /* stub */ +} + +/** + * iecm_mb_clean - Reclaim the send mailbox queue entries + * @adapter: Driver specific private structure + * + * Reclaim the send mailbox queue entries to be used to send further messages + * + * Returns 0 on success, negative on failure + */ +static int iecm_mb_clean(struct iecm_adapter *adapter) +{ + /* stub */ +} + +/** + 
* iecm_send_mb_msg - Send message over mailbox + * @adapter: Driver specific private structure + * @op: virtchnl opcode + * @msg_size: size of the payload + * @msg: pointer to buffer holding the payload + * + * Prepares the control queue message and initiates the send API + * + * Returns 0 on success, negative on failure + */ +int iecm_send_mb_msg(struct iecm_adapter *adapter, enum virtchnl_ops op, + u16 msg_size, u8 *msg) +{ + /* stub */ +} +EXPORT_SYMBOL(iecm_send_mb_msg); + +/** + * iecm_recv_mb_msg - Receive message over mailbox + * @adapter: Driver specific private structure + * @op: virtchnl operation code + * @msg: Received message holding buffer + * @msg_size: message size + * + * Receives the control queue message and posts the receive buffer. Returns 0 + * on success and negative on failure. + */ +int iecm_recv_mb_msg(struct iecm_adapter *adapter, enum virtchnl_ops op, + void *msg, int msg_size) +{ + /* stub */ +} +EXPORT_SYMBOL(iecm_recv_mb_msg); + +/** + * iecm_send_ver_msg - Send virtchnl version message + * @adapter: Driver specific private structure + * + * Send virtchnl version message. Returns 0 on success, negative on failure. + */ +static int iecm_send_ver_msg(struct iecm_adapter *adapter) +{ + /* stub */ +} + +/** + * iecm_recv_ver_msg - Receive virtchnl version message + * @adapter: Driver specific private structure + * + * Receive virtchnl version message. Returns 0 on success, negative on failure. + */ +static int iecm_recv_ver_msg(struct iecm_adapter *adapter) +{ + /* stub */ +} + +/** + * iecm_send_get_caps_msg - Send virtchnl get capabilities message + * @adapter: Driver specific private structure + * + * Send virtchnl get capabilities message. Returns 0 on success, negative on + * failure. + */ +int iecm_send_get_caps_msg(struct iecm_adapter *adapter) +{ + /* stub */ +} +EXPORT_SYMBOL(iecm_send_get_caps_msg); + +/** + * iecm_recv_get_caps_msg - Receive virtchnl get capabilities message + * @adapter: Driver specific private structure + * + * Receive virtchnl get capabilities message. Returns 0 on success, negative on + * failure. + */ +static int iecm_recv_get_caps_msg(struct iecm_adapter *adapter) +{ + /* stub */ +} + +/** + * iecm_send_create_vport_msg - Send virtchnl create vport message + * @adapter: Driver specific private structure + * + * Send virtchnl create vport message + * + * Returns 0 on success, negative on failure + */ +static int iecm_send_create_vport_msg(struct iecm_adapter *adapter) +{ + /* stub */ +} + +/** + * iecm_recv_create_vport_msg - Receive virtchnl create vport message + * @adapter: Driver specific private structure + * @vport_id: Virtual port identifier + * + * Receive virtchnl create vport message. Returns 0 on success, negative on + * failure. + */ +static int iecm_recv_create_vport_msg(struct iecm_adapter *adapter, + int *vport_id) +{ + /* stub */ +} + +/** + * iecm_wait_for_event - Wait for virtchnl response + * @adapter: Driver private data structure + * @state: state to check upon 500ms timeout + * @err_check: check if this specific error bit is set + * + * Checks if state is set upon expiry of timeout. Returns 0 on success, + * negative on failure. + */ +int iecm_wait_for_event(struct iecm_adapter *adapter, + enum iecm_vport_vc_state state, + enum iecm_vport_vc_state err_check) +{ + /* stub */ +} +EXPORT_SYMBOL(iecm_wait_for_event); + +/** + * iecm_send_destroy_vport_msg - Send virtchnl destroy vport message + * @vport: virtual port data structure + * + * Send virtchnl destroy vport message. 
Returns 0 on success, negative on + * failure. + */ +int iecm_send_destroy_vport_msg(struct iecm_vport *vport) +{ + /* stub */ +} + +/** + * iecm_send_enable_vport_msg - Send virtchnl enable vport message + * @vport: virtual port data structure + * + * Send enable vport virtchnl message. Returns 0 on success, negative on + * failure. + */ +int iecm_send_enable_vport_msg(struct iecm_vport *vport) +{ + /* stub */ +} + +/** + * iecm_send_disable_vport_msg - Send virtchnl disable vport message + * @vport: virtual port data structure + * + * Send disable vport virtchnl message. Returns 0 on success, negative on + * failure. + */ +int iecm_send_disable_vport_msg(struct iecm_vport *vport) +{ + /* stub */ +} + +/** + * iecm_send_config_tx_queues_msg - Send virtchnl config Tx queues message + * @vport: virtual port data structure + * + * Send config tx queues virtchnl message. Returns 0 on success, negative on + * failure. + */ +int iecm_send_config_tx_queues_msg(struct iecm_vport *vport) +{ + /* stub */ +} + +/** + * iecm_send_config_rx_queues_msg - Send virtchnl config Rx queues message + * @vport: virtual port data structure + * + * Send config rx queues virtchnl message. Returns 0 on success, negative on + * failure. + */ +int iecm_send_config_rx_queues_msg(struct iecm_vport *vport) +{ + /* stub */ +} + +/** + * iecm_send_ena_dis_queues_msg - Send virtchnl enable or disable + * queues message + * @vport: virtual port data structure + * @vc_op: virtchnl op code to send + * + * Send enable or disable queues virtchnl message. Returns 0 on success, + * negative on failure. + */ +static int iecm_send_ena_dis_queues_msg(struct iecm_vport *vport, + enum virtchnl_ops vc_op) +{ + /* stub */ +} + +/** + * iecm_send_map_unmap_queue_vector_msg - Send virtchnl map or unmap queue + * vector message + * @vport: virtual port data structure + * @map: true for map and false for unmap + * + * Send map or unmap queue vector virtchnl message. Returns 0 on success, + * negative on failure. + */ +static int +iecm_send_map_unmap_queue_vector_msg(struct iecm_vport *vport, bool map) +{ + /* stub */ +} + +/** + * iecm_send_enable_queues_msg - send enable queues virtchnl message + * @vport: Virtual port private data structure + * + * Will send enable queues virtchnl message. Returns 0 on success, negative on + * failure. + */ +static int iecm_send_enable_queues_msg(struct iecm_vport *vport) +{ + /* stub */ +} + +/** + * iecm_send_disable_queues_msg - send disable queues virtchnl message + * @vport: Virtual port private data structure + * + * Will send disable queues virtchnl message. Returns 0 on success, negative + * on failure. + */ +static int iecm_send_disable_queues_msg(struct iecm_vport *vport) +{ + /* stub */ +} + +/** + * iecm_send_delete_queues_msg - send delete queues virtchnl message + * @vport: Virtual port private data structure + * + * Will send delete queues virtchnl message. Return 0 on success, negative on + * failure. + */ +int iecm_send_delete_queues_msg(struct iecm_vport *vport) +{ + /* stub */ +} + +/** + * iecm_send_config_queues_msg - Send config queues virtchnl message + * @vport: Virtual port private data structure + * + * Will send config queues virtchnl message. Returns 0 on success, negative on + * failure. 
+ */ +static int iecm_send_config_queues_msg(struct iecm_vport *vport) +{ + /* stub */ +} + +/** + * iecm_send_add_queues_msg - Send virtchnl add queues message + * @vport: Virtual port private data structure + * @num_tx_q: number of transmit queues + * @num_complq: number of transmit completion queues + * @num_rx_q: number of receive queues + * @num_rx_bufq: number of receive buffer queues + * + * Returns 0 on success, negative on failure. + */ +int iecm_send_add_queues_msg(struct iecm_vport *vport, u16 num_tx_q, + u16 num_complq, u16 num_rx_q, u16 num_rx_bufq) +{ + /* stub */ +} + +/** + * iecm_send_get_stats_msg - Send virtchnl get statistics message + * @vport: vport to get stats for + * + * Returns 0 on success, negative on failure. + */ +int iecm_send_get_stats_msg(struct iecm_vport *vport) +{ + /* stub */ +} + +/** + * iecm_send_get_set_rss_hash_msg - Send virtchnl get or set RSS hash message + * @vport: virtual port data structure + * @get: flag to get or set RSS hash + * + * Returns 0 on success, negative on failure. + */ +int iecm_send_get_set_rss_hash_msg(struct iecm_vport *vport, bool get) +{ + /* stub */ +} + +/** + * iecm_send_get_set_rss_lut_msg - Send virtchnl get or set RSS LUT message + * @vport: virtual port data structure + * @get: flag to set or get RSS look up table + * + * Returns 0 on success, negative on failure. + */ +int iecm_send_get_set_rss_lut_msg(struct iecm_vport *vport, bool get) +{ + /* stub */ +} + +/** + * iecm_send_get_set_rss_key_msg - Send virtchnl get or set RSS key message + * @vport: virtual port data structure + * @get: flag to set or get RSS key + * + * Returns 0 on success, negative on failure. + */ +int iecm_send_get_set_rss_key_msg(struct iecm_vport *vport, bool get) +{ + /* stub */ +} + +/** + * iecm_send_get_rx_ptype_msg - Send virtchnl get RX packet type message + * @vport: virtual port data structure + * + * Returns 0 on success, negative on failure. + */ +int iecm_send_get_rx_ptype_msg(struct iecm_vport *vport) +{ + /* stub */ +} + +/** + * iecm_find_ctlq - Given a type and id, find ctlq info + * @hw: hardware struct + * @type: type of ctlq to find + * @id: ctlq id to find + * + * Returns pointer to found ctlq info struct, NULL otherwise. 
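+ * + * Illustrative sketch only (the body below is still a stub): control queues + * added by iecm_ctlq_add() are linked on hw->cq_list_head, so the lookup is + * expected to be a simple list walk such as: + * + *	list_for_each_entry(cq, &hw->cq_list_head, cq_list) + *		if (cq->cq_type == type && cq->q_id == id) + *			return cq;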
+ */ +static struct iecm_ctlq_info *iecm_find_ctlq(struct iecm_hw *hw, + enum iecm_ctlq_type type, int id) +{ + /* stub */ +} + +/** + * iecm_deinit_dflt_mbx - De-initialize the default mailbox + * @adapter: adapter info struct + */ +void iecm_deinit_dflt_mbx(struct iecm_adapter *adapter) +{ + /* stub */ +} + +/** + * iecm_init_dflt_mbx - Setup default mailbox parameters and make request + * @adapter: adapter info struct + * + * Returns 0 on success, negative otherwise + */ +int iecm_init_dflt_mbx(struct iecm_adapter *adapter) +{ + /* stub */ +} + +/** + * iecm_vport_params_buf_alloc - Allocate memory for mailbox resources + * @adapter: Driver specific private data structure + * + * Allocates memory to hold the vport parameters received over the mailbox + */ +int iecm_vport_params_buf_alloc(struct iecm_adapter *adapter) +{ + /* stub */ +} + +/** + * iecm_vport_params_buf_rel - Release memory for mailbox resources + * @adapter: Driver specific private data structure + * + * Releases the memory holding the vport parameters received over the mailbox + */ +void iecm_vport_params_buf_rel(struct iecm_adapter *adapter) +{ + /* stub */ +} + +/** + * iecm_vc_core_init - Initialize mailbox and get resources + * @adapter: Driver specific private structure + * @vport_id: Virtual port identifier + * + * Checks if the HW is ready (reset complete), initializes the mailbox and + * communicates with the master to get all the default vport parameters. + * Returns 0 on success, negative on failure. + */ +int iecm_vc_core_init(struct iecm_adapter *adapter, int *vport_id) +{ + /* stub */ +} +EXPORT_SYMBOL(iecm_vc_core_init); + +/** + * iecm_vport_init - Initialize virtual port + * @vport: virtual port to be initialized + * @vport_id: Unique identification number of vport + * + * Initializes the vport with the info received earlier through the mailbox + */ +static void iecm_vport_init(struct iecm_vport *vport, + __always_unused int vport_id) +{ + /* stub */ +} + +/** + * iecm_vport_get_vec_ids - Initialize vector ids from Mailbox parameters + * @vecids: Array of vector ids + * @num_vecids: number of vector ids + * @chunks: vector ids received over mailbox + * + * Will initialize all vector ids with ids received as mailbox parameters. + * Returns number of ids filled + */ +int +iecm_vport_get_vec_ids(u16 *vecids, int num_vecids, + struct virtchnl_vector_chunks *chunks) +{ + /* stub */ +} + +/** + * iecm_vport_get_queue_ids - Initialize queue ids from Mailbox parameters + * @qids: Array of queue ids + * @num_qids: number of queue ids + * @q_type: type of queue + * @chunks: queue ids received over mailbox + * + * Will initialize all queue ids with ids received as mailbox parameters. + * Returns number of ids filled + */ +static int +iecm_vport_get_queue_ids(u16 *qids, int num_qids, + enum virtchnl_queue_type q_type, + struct virtchnl_queue_chunks *chunks) +{ + /* stub */ +} + +/** + * __iecm_vport_queue_ids_init - Initialize queue ids from Mailbox parameters + * @vport: virtual port for which the queue ids are initialized + * @qids: queue ids + * @num_qids: number of queue ids + * @q_type: type of queue + * + * Will initialize all queue ids with ids received as mailbox + * parameters. Returns number of queue ids initialized. 
+ */ +static int +__iecm_vport_queue_ids_init(struct iecm_vport *vport, u16 *qids, + int num_qids, enum virtchnl_queue_type q_type) +{ + /* stub */ +} + +/** + * iecm_vport_queue_ids_init - Initialize queue ids from Mailbox parameters + * @vport: virtual port for which the queues ids are initialized + * + * Will initialize all queue ids with ids received as mailbox parameters. + * Returns 0 on success, negative if all the queues are not initialized. + */ +static int iecm_vport_queue_ids_init(struct iecm_vport *vport) +{ + /* stub */ +} + +/** + * iecm_vport_adjust_qs - Adjust to new requested queues + * @vport: virtual port data struct + * + * Renegotiate queues. Returns 0 on success, negative on failure. + */ +void iecm_vport_adjust_qs(struct iecm_vport *vport) +{ + /* stub */ +} + +/** + * iecm_is_capability_ena - Default implementation of capability checking + * @adapter: Private data struct + * @flag: flag to check + * + * Return true if capability is supported, false otherwise + */ +static bool iecm_is_capability_ena(struct iecm_adapter *adapter, u64 flag) +{ + /* stub */ +} + +/** + * iecm_vc_ops_init - Initialize virtchnl common API + * @adapter: Driver specific private structure + * + * Initialize the function pointers with the extended feature set functions + * as APF will deal only with new set of opcodes. + */ +void iecm_vc_ops_init(struct iecm_adapter *adapter) +{ + /* stub */ +} +EXPORT_SYMBOL(iecm_vc_ops_init); From patchwork Mon Aug 24 17:32:56 2020 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Tony Nguyen X-Patchwork-Id: 262000 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org X-Spam-Level: X-Spam-Status: No, score=-12.8 required=3.0 tests=BAYES_00, HEADER_FROM_DIFFERENT_DOMAINS, INCLUDES_PATCH, MAILING_LIST_MULTI, SIGNED_OFF_BY, SPF_HELO_NONE, SPF_PASS, USER_AGENT_GIT autolearn=ham autolearn_force=no version=3.4.0 Received: from mail.kernel.org (mail.kernel.org [198.145.29.99]) by smtp.lore.kernel.org (Postfix) with ESMTP id DD728C433DF for ; Mon, 24 Aug 2020 17:33:51 +0000 (UTC) Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by mail.kernel.org (Postfix) with ESMTP id BA2802067C for ; Mon, 24 Aug 2020 17:33:51 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S1727998AbgHXRdt (ORCPT ); Mon, 24 Aug 2020 13:33:49 -0400 Received: from mga09.intel.com ([134.134.136.24]:29869 "EHLO mga09.intel.com" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S1725780AbgHXRdh (ORCPT ); Mon, 24 Aug 2020 13:33:37 -0400 IronPort-SDR: b7bgaY3o4wwYfAaFAEX2+iLNNyujbKPNbF2aM6w26E/VoYpl/JubVeYw2nvStyzXhqRbaOIEm1 2hKoofrXqEiA== X-IronPort-AV: E=McAfee;i="6000,8403,9723"; a="157008560" X-IronPort-AV: E=Sophos;i="5.76,349,1592895600"; d="scan'208";a="157008560" X-Amp-Result: SKIPPED(no attachment in message) X-Amp-File-Uploaded: False Received: from fmsmga003.fm.intel.com ([10.253.24.29]) by orsmga102.jf.intel.com with ESMTP/TLS/ECDHE-RSA-AES256-GCM-SHA384; 24 Aug 2020 10:33:22 -0700 IronPort-SDR: GxMQhm8eL9z/EpLQXk3JPveniO9xlBan5q/Hm8Un6v/NGW/eTkYwQlkUpJJMlDsn7+xhqMFflF uiYhmJ+AK0MQ== X-IronPort-AV: E=Sophos;i="5.76,349,1592895600"; d="scan'208";a="336245343" Received: from jtkirshe-desk1.jf.intel.com ([134.134.177.86]) by fmsmga003-auth.fm.intel.com with ESMTP/TLS/ECDHE-RSA-AES256-GCM-SHA384; 24 Aug 2020 10:33:22 -0700 From: Tony Nguyen To: davem@davemloft.net Cc: Alice Michael , netdev@vger.kernel.org, 
nhorman@redhat.com, sassmann@redhat.com, jeffrey.t.kirsher@intel.com, anthony.l.nguyen@intel.com, Alan Brady , Phani Burra , Joshua Hay , Madhu Chittim , Pavan Kumar Linga , Donald Skidmore , Jesse Brandeburg , Sridhar Samudrala Subject: [net-next v5 05/15] iecm: Add basic netdevice functionality Date: Mon, 24 Aug 2020 10:32:56 -0700 Message-Id: <20200824173306.3178343-6-anthony.l.nguyen@intel.com> X-Mailer: git-send-email 2.26.2 In-Reply-To: <20200824173306.3178343-1-anthony.l.nguyen@intel.com> References: <20200824173306.3178343-1-anthony.l.nguyen@intel.com> MIME-Version: 1.0 Sender: netdev-owner@vger.kernel.org Precedence: bulk List-ID: X-Mailing-List: netdev@vger.kernel.org From: Alice Michael This implements probe, interface up/down, and netdev_ops. Signed-off-by: Alice Michael Signed-off-by: Alan Brady Signed-off-by: Phani Burra Signed-off-by: Joshua Hay Signed-off-by: Madhu Chittim Signed-off-by: Pavan Kumar Linga Reviewed-by: Donald Skidmore Reviewed-by: Jesse Brandeburg Reviewed-by: Sridhar Samudrala Signed-off-by: Jeff Kirsher Signed-off-by: Tony Nguyen --- drivers/net/ethernet/intel/iecm/iecm_lib.c | 420 +++++++++++++++++- drivers/net/ethernet/intel/iecm/iecm_main.c | 7 +- drivers/net/ethernet/intel/iecm/iecm_txrx.c | 6 +- .../net/ethernet/intel/iecm/iecm_virtchnl.c | 71 ++- 4 files changed, 479 insertions(+), 25 deletions(-) diff --git a/drivers/net/ethernet/intel/iecm/iecm_lib.c b/drivers/net/ethernet/intel/iecm/iecm_lib.c index a5ff9209d290..23eef4a9c7ca 100644 --- a/drivers/net/ethernet/intel/iecm/iecm_lib.c +++ b/drivers/net/ethernet/intel/iecm/iecm_lib.c @@ -23,7 +23,17 @@ static void iecm_mb_intr_rel_irq(struct iecm_adapter *adapter) */ static void iecm_intr_rel(struct iecm_adapter *adapter) { - /* stub */ + if (!adapter->msix_entries) + return; + clear_bit(__IECM_MB_INTR_MODE, adapter->flags); + clear_bit(__IECM_MB_INTR_TRIGGER, adapter->flags); + iecm_mb_intr_rel_irq(adapter); + + pci_free_irq_vectors(adapter->pdev); + kfree(adapter->msix_entries); + adapter->msix_entries = NULL; + kfree(adapter->req_vec_chunks); + adapter->req_vec_chunks = NULL; } /** @@ -95,7 +105,53 @@ static void iecm_intr_distribute(struct iecm_adapter *adapter) */ static int iecm_intr_req(struct iecm_adapter *adapter) { - /* stub */ + int min_vectors, max_vectors, err = 0; + unsigned int vector; + int num_vecs; + int v_actual; + + num_vecs = adapter->vports[0]->num_q_vectors + + IECM_MAX_NONQ_VEC + IECM_MAX_RDMA_VEC; + + min_vectors = IECM_MIN_VEC; +#define IECM_MAX_EVV_MAPPED_VEC 16 + max_vectors = min(num_vecs, IECM_MAX_EVV_MAPPED_VEC); + + v_actual = pci_alloc_irq_vectors(adapter->pdev, min_vectors, + max_vectors, PCI_IRQ_MSIX); + if (v_actual < 0) { + dev_err(&adapter->pdev->dev, "Failed to allocate MSIX vectors: %d\n", + v_actual); + return v_actual; + } + + adapter->msix_entries = kcalloc(v_actual, sizeof(struct msix_entry), + GFP_KERNEL); + + if (!adapter->msix_entries) { + pci_free_irq_vectors(adapter->pdev); + return -ENOMEM; + } + + for (vector = 0; vector < v_actual; vector++) { + adapter->msix_entries[vector].entry = vector; + adapter->msix_entries[vector].vector = + pci_irq_vector(adapter->pdev, vector); + } + adapter->num_msix_entries = v_actual; + adapter->num_req_msix = num_vecs; + + iecm_intr_distribute(adapter); + + err = iecm_mb_intr_init(adapter); + if (err) + goto intr_rel; + iecm_mb_irq_enable(adapter); + return err; + +intr_rel: + iecm_intr_rel(adapter); + return err; } /** @@ -117,7 +173,18 @@ static int iecm_cfg_netdev(struct iecm_vport *vport) */ static int 
iecm_cfg_hw(struct iecm_adapter *adapter) { - /* stub */ + struct pci_dev *pdev = adapter->pdev; + struct iecm_hw *hw = &adapter->hw; + + hw->hw_addr_len = pci_resource_len(pdev, 0); + hw->hw_addr = ioremap(pci_resource_start(pdev, 0), hw->hw_addr_len); + + if (!hw->hw_addr) + return -EIO; + + hw->back = adapter; + + return 0; } /** @@ -126,7 +193,15 @@ static int iecm_cfg_hw(struct iecm_adapter *adapter) */ static int iecm_vport_res_alloc(struct iecm_vport *vport) { - /* stub */ + if (iecm_vport_queues_alloc(vport)) + return -ENOMEM; + + if (iecm_vport_intr_alloc(vport)) { + iecm_vport_queues_rel(vport); + return -ENOMEM; + } + + return 0; } /** @@ -135,7 +210,8 @@ static int iecm_vport_res_alloc(struct iecm_vport *vport) */ static void iecm_vport_res_free(struct iecm_vport *vport) { - /* stub */ + iecm_vport_intr_rel(vport); + iecm_vport_queues_rel(vport); } /** @@ -149,7 +225,22 @@ static void iecm_vport_res_free(struct iecm_vport *vport) */ static int iecm_get_free_slot(void *array, int size, int curr) { - /* stub */ + int **tmp_array = (int **)array; + int next; + + if (curr < (size - 1) && !tmp_array[curr + 1]) { + next = curr + 1; + } else { + int i = 0; + + while ((i < size) && (tmp_array[i])) + i++; + if (i == size) + next = IECM_NO_FREE_SLOT; + else + next = i; + } + return next; } /** @@ -158,7 +249,9 @@ static int iecm_get_free_slot(void *array, int size, int curr) */ struct iecm_vport *iecm_netdev_to_vport(struct net_device *netdev) { - /* stub */ + struct iecm_netdev_priv *np = netdev_priv(netdev); + + return np->vport; } /** @@ -167,7 +260,9 @@ struct iecm_vport *iecm_netdev_to_vport(struct net_device *netdev) */ struct iecm_adapter *iecm_netdev_to_adapter(struct net_device *netdev) { - /* stub */ + struct iecm_netdev_priv *np = netdev_priv(netdev); + + return np->vport->adapter; } /** @@ -200,7 +295,17 @@ static int iecm_stop(struct net_device *netdev) */ static void iecm_vport_rel(struct iecm_vport *vport) { - /* stub */ + struct iecm_adapter *adapter = vport->adapter; + + iecm_vport_stop(vport); + iecm_vport_res_free(vport); + iecm_deinit_rss(vport); + unregister_netdev(vport->netdev); + free_netdev(vport->netdev); + vport->netdev = NULL; + if (adapter->dev_ops.vc_ops.destroy_vport) + adapter->dev_ops.vc_ops.destroy_vport(vport); + kfree(vport); } /** @@ -209,7 +314,19 @@ static void iecm_vport_rel(struct iecm_vport *vport) */ static void iecm_vport_rel_all(struct iecm_adapter *adapter) { - /* stub */ + int i; + + if (!adapter->vports) + return; + + for (i = 0; i < adapter->num_alloc_vport; i++) { + if (!adapter->vports[i]) + continue; + + iecm_vport_rel(adapter->vports[i]); + adapter->vports[i] = NULL; + } + adapter->num_alloc_vport = 0; } /** @@ -233,7 +350,47 @@ void iecm_vport_set_hsplit(struct iecm_vport *vport, static struct iecm_vport * iecm_vport_alloc(struct iecm_adapter *adapter, int vport_id) { - /* stub */ + struct iecm_vport *vport = NULL; + + if (adapter->next_vport == IECM_NO_FREE_SLOT) + return vport; + + /* Need to protect the allocation of the vports at the adapter level */ + mutex_lock(&adapter->sw_mutex); + + vport = kzalloc(sizeof(*vport), GFP_KERNEL); + if (!vport) + goto unlock_adapter; + + vport->adapter = adapter; + vport->idx = adapter->next_vport; + vport->compln_clean_budget = IECM_TX_COMPLQ_CLEAN_BUDGET; + adapter->num_alloc_vport++; + adapter->dev_ops.vc_ops.vport_init(vport, vport_id); + + /* Setup default MSIX irq handler for the vport */ + vport->irq_q_handler = iecm_vport_intr_clean_queues; + vport->q_vector_base = IECM_MAX_NONQ_VEC; + + 
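/* The slot is filled before iecm_cfg_netdev() so that a netdev setup + * failure can simply clear it again; next_vport is only advanced once the + * new vport is fully configured. + */ +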
/* fill vport slot in the adapter struct */ + adapter->vports[adapter->next_vport] = vport; + if (iecm_cfg_netdev(vport)) + goto cfg_netdev_fail; + + /* prepare adapter->next_vport for next use */ + adapter->next_vport = iecm_get_free_slot(adapter->vports, + adapter->num_alloc_vport, + adapter->next_vport); + + goto unlock_adapter; + +cfg_netdev_fail: + adapter->vports[adapter->next_vport] = NULL; + kfree(vport); + vport = NULL; +unlock_adapter: + mutex_unlock(&adapter->sw_mutex); + return vport; } /** @@ -243,7 +400,22 @@ iecm_vport_alloc(struct iecm_adapter *adapter, int vport_id) */ static void iecm_service_task(struct work_struct *work) { - /* stub */ + struct iecm_adapter *adapter = container_of(work, + struct iecm_adapter, + serv_task.work); + + if (test_bit(__IECM_MB_INTR_MODE, adapter->flags)) { + if (test_and_clear_bit(__IECM_MB_INTR_TRIGGER, + adapter->flags)) { + iecm_recv_mb_msg(adapter, VIRTCHNL_OP_UNKNOWN, NULL, 0); + iecm_mb_irq_enable(adapter); + } + } else { + iecm_recv_mb_msg(adapter, VIRTCHNL_OP_UNKNOWN, NULL, 0); + } + + queue_delayed_work(adapter->serv_wq, &adapter->serv_task, + msecs_to_jiffies(300)); } /** @@ -285,7 +457,50 @@ static int iecm_vport_open(struct iecm_vport *vport) */ static void iecm_init_task(struct work_struct *work) { - /* stub */ + struct iecm_adapter *adapter = container_of(work, + struct iecm_adapter, + init_task.work); + struct iecm_vport *vport; + struct pci_dev *pdev; + int vport_id, err; + + err = adapter->dev_ops.vc_ops.core_init(adapter, &vport_id); + if (err) + return; + + pdev = adapter->pdev; + vport = iecm_vport_alloc(adapter, vport_id); + if (!vport) { + err = -EFAULT; + dev_err(&pdev->dev, "probe failed on vport setup:%d\n", + err); + return; + } + /* Start the service task before requesting vectors. 
This will ensure + * vector information response from mailbox is handled + */ + queue_delayed_work(adapter->serv_wq, &adapter->serv_task, + msecs_to_jiffies(5 * (pdev->devfn & 0x07))); + err = iecm_intr_req(adapter); + if (err) { + dev_err(&pdev->dev, "failed to enable interrupt vectors: %d\n", + err); + iecm_vport_rel(vport); + return; + } + /* Deal with major memory allocations for vport resources */ + err = iecm_vport_res_alloc(vport); + if (err) { + dev_err(&pdev->dev, "failed to allocate resources: %d\n", + err); + iecm_vport_rel(vport); + return; + } + + /* Once state is put into DOWN, driver is ready for dev_open */ + adapter->state = __IECM_DOWN; + if (test_and_clear_bit(__IECM_UP_REQUESTED, adapter->flags)) + iecm_vport_open(vport); } /** @@ -296,7 +511,46 @@ static void iecm_init_task(struct work_struct *work) */ static int iecm_api_init(struct iecm_adapter *adapter) { - /* stub */ + struct iecm_reg_ops *reg_ops = &adapter->dev_ops.reg_ops; + struct pci_dev *pdev = adapter->pdev; + + if (!adapter->dev_ops.reg_ops_init) { + dev_err(&pdev->dev, "Invalid device, register API init not defined\n"); + return -EINVAL; + } + adapter->dev_ops.reg_ops_init(adapter); + if (!(reg_ops->ctlq_reg_init && reg_ops->vportq_reg_init && + reg_ops->intr_reg_init && reg_ops->mb_intr_reg_init && + reg_ops->reset_reg_init && reg_ops->trigger_reset)) { + dev_err(&pdev->dev, "Invalid device, missing one or more register functions\n"); + return -EINVAL; + } + + if (adapter->dev_ops.vc_ops_init) { + struct iecm_virtchnl_ops *vc_ops; + + adapter->dev_ops.vc_ops_init(adapter); + vc_ops = &adapter->dev_ops.vc_ops; + if (!(vc_ops->core_init && + vc_ops->vport_init && + vc_ops->vport_queue_ids_init && + vc_ops->get_caps && + vc_ops->config_queues && + vc_ops->enable_queues && + vc_ops->disable_queues && + vc_ops->irq_map_unmap && + vc_ops->get_set_rss_lut && + vc_ops->get_set_rss_hash && + vc_ops->adjust_qs && + vc_ops->get_ptype)) { + dev_err(&pdev->dev, "Invalid device, missing one or more virtchnl functions\n"); + return -EINVAL; + } + } else { + iecm_vc_ops_init(adapter); + } + + return 0; } /** @@ -308,7 +562,11 @@ static int iecm_api_init(struct iecm_adapter *adapter) */ static void iecm_deinit_task(struct iecm_adapter *adapter) { - /* stub */ + iecm_vport_rel_all(adapter); + cancel_delayed_work_sync(&adapter->serv_task); + iecm_deinit_dflt_mbx(adapter); + iecm_vport_params_buf_rel(adapter); + iecm_intr_rel(adapter); } /** @@ -330,7 +588,13 @@ static int iecm_init_hard_reset(struct iecm_adapter *adapter) */ static void iecm_vc_event_task(struct work_struct *work) { - /* stub */ + struct iecm_adapter *adapter = container_of(work, + struct iecm_adapter, + vc_event_task.work); + + if (test_bit(__IECM_HR_CORE_RESET, adapter->flags) || + test_bit(__IECM_HR_FUNC_RESET, adapter->flags)) + iecm_init_hard_reset(adapter); } /** @@ -359,7 +623,104 @@ int iecm_probe(struct pci_dev *pdev, const struct pci_device_id __always_unused *ent, struct iecm_adapter *adapter) { - /* stub */ + int err; + + adapter->pdev = pdev; + err = iecm_api_init(adapter); + if (err) { + dev_err(&pdev->dev, "Device API is incorrectly configured\n"); + return err; + } + + err = pcim_iomap_regions(pdev, BIT(IECM_BAR0), pci_name(pdev)); + if (err) { + dev_err(&pdev->dev, "BAR0 I/O map error %d\n", err); + return err; + } + + /* set up for high or low DMA */ + err = dma_set_mask_and_coherent(&pdev->dev, DMA_BIT_MASK(64)); + if (err) + err = dma_set_mask_and_coherent(&pdev->dev, DMA_BIT_MASK(32)); + if (err) { + dev_err(&pdev->dev, "DMA 
configuration failed: 0x%x\n", err); + return err; + } + + pci_enable_pcie_error_reporting(pdev); + pci_set_master(pdev); + pci_set_drvdata(pdev, adapter); + + adapter->init_wq = + alloc_workqueue("%s", WQ_MEM_RECLAIM, 0, KBUILD_MODNAME); + if (!adapter->init_wq) { + dev_err(&pdev->dev, "Failed to allocate workqueue\n"); + err = -ENOMEM; + goto err_wq_alloc; + } + + adapter->serv_wq = + alloc_workqueue("%s", WQ_MEM_RECLAIM, 0, KBUILD_MODNAME); + if (!adapter->serv_wq) { + dev_err(&pdev->dev, "Failed to allocate workqueue\n"); + err = -ENOMEM; + goto err_mbx_wq_alloc; + } + /* setup msglvl */ + adapter->msg_enable = netif_msg_init(adapter->debug_msk, + IECM_AVAIL_NETIF_M); + + adapter->vports = kcalloc(IECM_MAX_NUM_VPORTS, + sizeof(*adapter->vports), GFP_KERNEL); + if (!adapter->vports) { + err = -ENOMEM; + goto err_vport_alloc; + } + + err = iecm_vport_params_buf_alloc(adapter); + if (err) { + dev_err(&pdev->dev, "Failed to alloc vport params buffer: %d\n", + err); + goto err_mb_res; + } + + err = iecm_cfg_hw(adapter); + if (err) { + dev_err(&pdev->dev, "Failed to configure HW structure for adapter: %d\n", + err); + goto err_cfg_hw; + } + + mutex_init(&adapter->sw_mutex); + mutex_init(&adapter->vc_msg_lock); + mutex_init(&adapter->reset_lock); + init_waitqueue_head(&adapter->vchnl_wq); + + INIT_DELAYED_WORK(&adapter->serv_task, iecm_service_task); + INIT_DELAYED_WORK(&adapter->init_task, iecm_init_task); + INIT_DELAYED_WORK(&adapter->vc_event_task, iecm_vc_event_task); + + mutex_lock(&adapter->reset_lock); + set_bit(__IECM_HR_DRV_LOAD, adapter->flags); + err = iecm_init_hard_reset(adapter); + if (err) { + dev_err(&pdev->dev, "Failed to reset device: %d\n", err); + goto err_mb_init; + } + + return 0; +err_mb_init: +err_cfg_hw: + iecm_vport_params_buf_rel(adapter); +err_mb_res: + kfree(adapter->vports); +err_vport_alloc: + destroy_workqueue(adapter->serv_wq); +err_mbx_wq_alloc: + destroy_workqueue(adapter->init_wq); +err_wq_alloc: + pci_disable_pcie_error_reporting(pdev); + return err; } EXPORT_SYMBOL(iecm_probe); @@ -369,7 +730,22 @@ EXPORT_SYMBOL(iecm_probe); */ void iecm_remove(struct pci_dev *pdev) { - /* stub */ + struct iecm_adapter *adapter = pci_get_drvdata(pdev); + + if (!adapter) + return; + + iecm_deinit_task(adapter); + cancel_delayed_work_sync(&adapter->vc_event_task); + destroy_workqueue(adapter->serv_wq); + destroy_workqueue(adapter->init_wq); + kfree(adapter->vports); + kfree(adapter->vport_params_recvd); + kfree(adapter->vport_params_reqd); + mutex_destroy(&adapter->sw_mutex); + mutex_destroy(&adapter->vc_msg_lock); + mutex_destroy(&adapter->reset_lock); + pci_disable_pcie_error_reporting(pdev); } EXPORT_SYMBOL(iecm_remove); @@ -379,7 +755,13 @@ EXPORT_SYMBOL(iecm_remove); */ void iecm_shutdown(struct pci_dev *pdev) { - /* stub */ + struct iecm_adapter *adapter; + + adapter = pci_get_drvdata(pdev); + adapter->state = __IECM_REMOVE; + + if (system_state == SYSTEM_POWER_OFF) + pci_set_power_state(pdev, PCI_D3hot); } EXPORT_SYMBOL(iecm_shutdown); diff --git a/drivers/net/ethernet/intel/iecm/iecm_main.c b/drivers/net/ethernet/intel/iecm/iecm_main.c index 68d9f2e6445b..3f26fbdf96f6 100644 --- a/drivers/net/ethernet/intel/iecm/iecm_main.c +++ b/drivers/net/ethernet/intel/iecm/iecm_main.c @@ -21,7 +21,10 @@ MODULE_LICENSE("GPL v2"); */ static int __init iecm_module_init(void) { - /* stub */ + pr_info("%s\n", iecm_driver_string); + pr_info("%s\n", iecm_copyright); + + return 0; } module_init(iecm_module_init); @@ -33,6 +36,6 @@ module_init(iecm_module_init); */ static void __exit 
iecm_module_exit(void) { - /* stub */ + pr_info("module unloaded\n"); } module_exit(iecm_module_exit); diff --git a/drivers/net/ethernet/intel/iecm/iecm_txrx.c b/drivers/net/ethernet/intel/iecm/iecm_txrx.c index 6df39810264a..e2da7dbc2ced 100644 --- a/drivers/net/ethernet/intel/iecm/iecm_txrx.c +++ b/drivers/net/ethernet/intel/iecm/iecm_txrx.c @@ -998,7 +998,11 @@ static int iecm_rx_splitq_clean(struct iecm_queue *rxq, int budget) irqreturn_t iecm_vport_intr_clean_queues(int __always_unused irq, void *data) { - /* stub */ + struct iecm_q_vector *q_vector = (struct iecm_q_vector *)data; + + napi_schedule(&q_vector->napi); + + return IRQ_HANDLED; } /** diff --git a/drivers/net/ethernet/intel/iecm/iecm_virtchnl.c b/drivers/net/ethernet/intel/iecm/iecm_virtchnl.c index 7fa8c06c5d7d..838bdeba8e61 100644 --- a/drivers/net/ethernet/intel/iecm/iecm_virtchnl.c +++ b/drivers/net/ethernet/intel/iecm/iecm_virtchnl.c @@ -424,7 +424,46 @@ void iecm_deinit_dflt_mbx(struct iecm_adapter *adapter) */ int iecm_init_dflt_mbx(struct iecm_adapter *adapter) { - /* stub */ + struct iecm_ctlq_create_info ctlq_info[] = { + { + .type = IECM_CTLQ_TYPE_MAILBOX_TX, + .id = IECM_DFLT_MBX_ID, + .len = IECM_DFLT_MBX_Q_LEN, + .buf_size = IECM_DFLT_MBX_BUF_SIZE + }, + { + .type = IECM_CTLQ_TYPE_MAILBOX_RX, + .id = IECM_DFLT_MBX_ID, + .len = IECM_DFLT_MBX_Q_LEN, + .buf_size = IECM_DFLT_MBX_BUF_SIZE + } + }; + struct iecm_hw *hw = &adapter->hw; + enum iecm_status status; + + adapter->dev_ops.reg_ops.ctlq_reg_init(ctlq_info); + +#define NUM_Q 2 + status = iecm_ctlq_init(hw, NUM_Q, ctlq_info); + if (status) + return -EINVAL; + + hw->asq = iecm_find_ctlq(hw, IECM_CTLQ_TYPE_MAILBOX_TX, + IECM_DFLT_MBX_ID); + hw->arq = iecm_find_ctlq(hw, IECM_CTLQ_TYPE_MAILBOX_RX, + IECM_DFLT_MBX_ID); + + if (!hw->asq || !hw->arq) { + iecm_ctlq_deinit(hw); + return -ENOENT; + } + adapter->state = __IECM_STARTUP; + /* Skew the delay for init tasks for each function based on fn number + * to prevent every function from making the same call simultaneously. 
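+ * The skew below works out to 5 ms times the PCI function number + * (devfn & 0x07), giving each of up to eight functions a distinct delay.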
+ */ + queue_delayed_work(adapter->init_wq, &adapter->init_task, + msecs_to_jiffies(5 * (adapter->pdev->devfn & 0x07))); + return 0; } /** @@ -446,7 +485,15 @@ int iecm_vport_params_buf_alloc(struct iecm_adapter *adapter) */ void iecm_vport_params_buf_rel(struct iecm_adapter *adapter) { - /* stub */ + int i = 0; + + for (i = 0; i < IECM_MAX_NUM_VPORTS; i++) { + kfree(adapter->vport_params_recvd[i]); + kfree(adapter->vport_params_reqd[i]); + } + + kfree(adapter->caps); + kfree(adapter->config_data.req_qs_chunks); } /** @@ -572,6 +619,24 @@ static bool iecm_is_capability_ena(struct iecm_adapter *adapter, u64 flag) */ void iecm_vc_ops_init(struct iecm_adapter *adapter) { - /* stub */ + struct iecm_virtchnl_ops *vc_ops = &adapter->dev_ops.vc_ops; + + vc_ops->core_init = iecm_vc_core_init; + vc_ops->vport_init = iecm_vport_init; + vc_ops->vport_queue_ids_init = iecm_vport_queue_ids_init; + vc_ops->get_caps = iecm_send_get_caps_msg; + vc_ops->is_cap_ena = iecm_is_capability_ena; + vc_ops->config_queues = iecm_send_config_queues_msg; + vc_ops->enable_queues = iecm_send_enable_queues_msg; + vc_ops->disable_queues = iecm_send_disable_queues_msg; + vc_ops->irq_map_unmap = iecm_send_map_unmap_queue_vector_msg; + vc_ops->enable_vport = iecm_send_enable_vport_msg; + vc_ops->disable_vport = iecm_send_disable_vport_msg; + vc_ops->destroy_vport = iecm_send_destroy_vport_msg; + vc_ops->get_ptype = iecm_send_get_rx_ptype_msg; + vc_ops->get_set_rss_lut = iecm_send_get_set_rss_lut_msg; + vc_ops->get_set_rss_hash = iecm_send_get_set_rss_hash_msg; + vc_ops->adjust_qs = iecm_vport_adjust_qs; + vc_ops->recv_mbx_msg = NULL; } EXPORT_SYMBOL(iecm_vc_ops_init); From patchwork Mon Aug 24 17:32:57 2020 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Tony Nguyen X-Patchwork-Id: 262001 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org X-Spam-Level: X-Spam-Status: No, score=-12.8 required=3.0 tests=BAYES_00, HEADER_FROM_DIFFERENT_DOMAINS, INCLUDES_PATCH, MAILING_LIST_MULTI, SIGNED_OFF_BY, SPF_HELO_NONE, SPF_PASS, USER_AGENT_GIT autolearn=ham autolearn_force=no version=3.4.0 Received: from mail.kernel.org (mail.kernel.org [198.145.29.99]) by smtp.lore.kernel.org (Postfix) with ESMTP id 02054C433E1 for ; Mon, 24 Aug 2020 17:33:47 +0000 (UTC) Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by mail.kernel.org (Postfix) with ESMTP id C9A3F2074D for ; Mon, 24 Aug 2020 17:33:46 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S1728585AbgHXRdn (ORCPT ); Mon, 24 Aug 2020 13:33:43 -0400 Received: from mga09.intel.com ([134.134.136.24]:29863 "EHLO mga09.intel.com" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S1728191AbgHXRdi (ORCPT ); Mon, 24 Aug 2020 13:33:38 -0400 IronPort-SDR: F82myn6DJfXO3vFP/Z4uzJ/lOlbNnNMfFKjioP90K6Md0dn5AJnCG0Mn6yMqRReycdW6v8grDs 4PHT4LfihrMQ== X-IronPort-AV: E=McAfee;i="6000,8403,9723"; a="157008566" X-IronPort-AV: E=Sophos;i="5.76,349,1592895600"; d="scan'208";a="157008566" X-Amp-Result: SKIPPED(no attachment in message) X-Amp-File-Uploaded: False Received: from fmsmga003.fm.intel.com ([10.253.24.29]) by orsmga102.jf.intel.com with ESMTP/TLS/ECDHE-RSA-AES256-GCM-SHA384; 24 Aug 2020 10:33:23 -0700 IronPort-SDR: N2nNmzxNppKiSbdzCvn2GrRY4O6rEfQivNl+JAM8MMMRGZAjvwpvfjX7EQlrzOCnuF4GuxrwPB 2V8ygbdPYsKQ== X-IronPort-AV: E=Sophos;i="5.76,349,1592895600"; d="scan'208";a="336245347" Received: from 
jtkirshe-desk1.jf.intel.com ([134.134.177.86]) by fmsmga003-auth.fm.intel.com with ESMTP/TLS/ECDHE-RSA-AES256-GCM-SHA384; 24 Aug 2020 10:33:22 -0700 From: Tony Nguyen To: davem@davemloft.net Cc: Alice Michael , netdev@vger.kernel.org, nhorman@redhat.com, sassmann@redhat.com, jeffrey.t.kirsher@intel.com, anthony.l.nguyen@intel.com, Alan Brady , Phani Burra , Joshua Hay , Madhu Chittim , Pavan Kumar Linga , Donald Skidmore , Jesse Brandeburg , Sridhar Samudrala Subject: [net-next v5 06/15] iecm: Implement mailbox functionality Date: Mon, 24 Aug 2020 10:32:57 -0700 Message-Id: <20200824173306.3178343-7-anthony.l.nguyen@intel.com> X-Mailer: git-send-email 2.26.2 In-Reply-To: <20200824173306.3178343-1-anthony.l.nguyen@intel.com> References: <20200824173306.3178343-1-anthony.l.nguyen@intel.com> MIME-Version: 1.0 Sender: netdev-owner@vger.kernel.org Precedence: bulk List-ID: X-Mailing-List: netdev@vger.kernel.org From: Alice Michael Implement mailbox setup, take down, and commands. Signed-off-by: Alice Michael Signed-off-by: Alan Brady Signed-off-by: Phani Burra Signed-off-by: Joshua Hay Signed-off-by: Madhu Chittim Signed-off-by: Pavan Kumar Linga Reviewed-by: Donald Skidmore Reviewed-by: Jesse Brandeburg Reviewed-by: Sridhar Samudrala Signed-off-by: Jeff Kirsher Signed-off-by: Tony Nguyen --- .../net/ethernet/intel/iecm/iecm_controlq.c | 498 +++++++++++++++++- .../ethernet/intel/iecm/iecm_controlq_setup.c | 105 +++- drivers/net/ethernet/intel/iecm/iecm_lib.c | 94 +++- .../net/ethernet/intel/iecm/iecm_virtchnl.c | 414 ++++++++++++++- 4 files changed, 1082 insertions(+), 29 deletions(-) diff --git a/drivers/net/ethernet/intel/iecm/iecm_controlq.c b/drivers/net/ethernet/intel/iecm/iecm_controlq.c index 585392e4d72a..05451d96ee10 100644 --- a/drivers/net/ethernet/intel/iecm/iecm_controlq.c +++ b/drivers/net/ethernet/intel/iecm/iecm_controlq.c @@ -12,7 +12,15 @@ static void iecm_ctlq_setup_regs(struct iecm_ctlq_info *cq, struct iecm_ctlq_create_info *q_create_info) { - /* stub */ + /* set head and tail registers in our local struct */ + cq->reg.head = q_create_info->reg.head; + cq->reg.tail = q_create_info->reg.tail; + cq->reg.len = q_create_info->reg.len; + cq->reg.bah = q_create_info->reg.bah; + cq->reg.bal = q_create_info->reg.bal; + cq->reg.len_mask = q_create_info->reg.len_mask; + cq->reg.len_ena_mask = q_create_info->reg.len_ena_mask; + cq->reg.head_mask = q_create_info->reg.head_mask; } /** @@ -28,7 +36,32 @@ static enum iecm_status iecm_ctlq_init_regs(struct iecm_hw *hw, struct iecm_ctlq_info *cq, bool is_rxq) { - /* stub */ + u32 reg = 0; + + if (is_rxq) + /* Update tail to post pre-allocated buffers for Rx queues */ + wr32(hw, cq->reg.tail, (u32)(cq->ring_size - 1)); + else + wr32(hw, cq->reg.tail, 0); + + /* For non-Mailbox control queues only TAIL need to be set */ + if (cq->q_id != -1) + return 0; + + /* Clear Head for both send or receive */ + wr32(hw, cq->reg.head, 0); + + /* set starting point */ + wr32(hw, cq->reg.bal, lower_32_bits(cq->desc_ring.pa)); + wr32(hw, cq->reg.bah, upper_32_bits(cq->desc_ring.pa)); + wr32(hw, cq->reg.len, (cq->ring_size | cq->reg.len_ena_mask)); + + /* Check one register to verify that config was applied */ + reg = rd32(hw, cq->reg.bah); + if (reg != upper_32_bits(cq->desc_ring.pa)) + return IECM_ERR_CTLQ_ERROR; + + return 0; } /** @@ -40,7 +73,30 @@ static enum iecm_status iecm_ctlq_init_regs(struct iecm_hw *hw, */ static void iecm_ctlq_init_rxq_bufs(struct iecm_ctlq_info *cq) { - /* stub */ + int i = 0; + + for (i = 0; i < cq->ring_size; i++) { + struct 
iecm_ctlq_desc *desc = IECM_CTLQ_DESC(cq, i); + struct iecm_dma_mem *bi = cq->bi.rx_buff[i]; + + /* No buffer to post to descriptor, continue */ + if (!bi) + continue; + + desc->flags = + cpu_to_le16(IECM_CTLQ_FLAG_BUF | IECM_CTLQ_FLAG_RD); + desc->opcode = 0; + desc->datalen = (__le16)cpu_to_le16(bi->size); + desc->ret_val = 0; + desc->cookie_high = 0; + desc->cookie_low = 0; + desc->params.indirect.addr_high = + cpu_to_le32(upper_32_bits(bi->pa)); + desc->params.indirect.addr_low = + cpu_to_le32(lower_32_bits(bi->pa)); + desc->params.indirect.param0 = 0; + desc->params.indirect.param1 = 0; + } } /** @@ -52,7 +108,20 @@ static void iecm_ctlq_init_rxq_bufs(struct iecm_ctlq_info *cq) */ static void iecm_ctlq_shutdown(struct iecm_hw *hw, struct iecm_ctlq_info *cq) { - /* stub */ + mutex_lock(&cq->cq_lock); + + if (!cq->ring_size) + goto shutdown_sq_out; + + /* free ring buffers and the ring itself */ + iecm_ctlq_dealloc_ring_res(hw, cq); + + /* Set ring_size to 0 to indicate uninitialized queue */ + cq->ring_size = 0; + +shutdown_sq_out: + mutex_unlock(&cq->cq_lock); + mutex_destroy(&cq->cq_lock); } /** @@ -71,7 +140,74 @@ enum iecm_status iecm_ctlq_add(struct iecm_hw *hw, struct iecm_ctlq_create_info *qinfo, struct iecm_ctlq_info **cq_out) { - /* stub */ + enum iecm_status status = 0; + bool is_rxq = false; + + if (!qinfo->len || !qinfo->buf_size || + qinfo->len > IECM_CTLQ_MAX_RING_SIZE || + qinfo->buf_size > IECM_CTLQ_MAX_BUF_LEN) + return IECM_ERR_CFG; + + *cq_out = kcalloc(1, sizeof(struct iecm_ctlq_info), GFP_KERNEL); + if (!(*cq_out)) + return IECM_ERR_NO_MEMORY; + + (*cq_out)->cq_type = qinfo->type; + (*cq_out)->q_id = qinfo->id; + (*cq_out)->buf_size = qinfo->buf_size; + (*cq_out)->ring_size = qinfo->len; + + (*cq_out)->next_to_use = 0; + (*cq_out)->next_to_clean = 0; + (*cq_out)->next_to_post = (*cq_out)->ring_size - 1; + + switch (qinfo->type) { + case IECM_CTLQ_TYPE_MAILBOX_RX: + is_rxq = true; + fallthrough; + case IECM_CTLQ_TYPE_MAILBOX_TX: + status = iecm_ctlq_alloc_ring_res(hw, *cq_out); + break; + default: + status = IECM_ERR_PARAM; + break; + } + + if (status) + goto init_free_q; + + if (is_rxq) { + iecm_ctlq_init_rxq_bufs(*cq_out); + } else { + /* Allocate the array of msg pointers for TX queues */ + (*cq_out)->bi.tx_msg = kcalloc(qinfo->len, + sizeof(struct iecm_ctlq_msg *), + GFP_KERNEL); + if (!(*cq_out)->bi.tx_msg) { + status = IECM_ERR_NO_MEMORY; + goto init_dealloc_q_mem; + } + } + + iecm_ctlq_setup_regs(*cq_out, qinfo); + + status = iecm_ctlq_init_regs(hw, *cq_out, is_rxq); + if (status) + goto init_dealloc_q_mem; + + mutex_init(&(*cq_out)->cq_lock); + + list_add(&(*cq_out)->cq_list, &hw->cq_list_head); + + return status; + +init_dealloc_q_mem: + /* free ring buffers and the ring itself */ + iecm_ctlq_dealloc_ring_res(hw, *cq_out); +init_free_q: + kfree(*cq_out); + + return status; } /** @@ -82,7 +218,9 @@ enum iecm_status iecm_ctlq_add(struct iecm_hw *hw, void iecm_ctlq_remove(struct iecm_hw *hw, struct iecm_ctlq_info *cq) { - /* stub */ + list_del(&cq->cq_list); + iecm_ctlq_shutdown(hw, cq); + kfree(cq); } /** @@ -99,7 +237,27 @@ void iecm_ctlq_remove(struct iecm_hw *hw, enum iecm_status iecm_ctlq_init(struct iecm_hw *hw, u8 num_q, struct iecm_ctlq_create_info *q_info) { - /* stub */ + struct iecm_ctlq_info *cq = NULL, *tmp = NULL; + enum iecm_status ret_code = 0; + int i = 0; + + INIT_LIST_HEAD(&hw->cq_list_head); + + for (i = 0; i < num_q; i++) { + struct iecm_ctlq_create_info *qinfo = q_info + i; + + ret_code = iecm_ctlq_add(hw, qinfo, &cq); + if (ret_code) + 
goto init_destroy_qs; + } + + return ret_code; + +init_destroy_qs: + list_for_each_entry_safe(cq, tmp, &hw->cq_list_head, cq_list) + iecm_ctlq_remove(hw, cq); + + return ret_code; } /** @@ -108,7 +266,13 @@ enum iecm_status iecm_ctlq_init(struct iecm_hw *hw, u8 num_q, */ enum iecm_status iecm_ctlq_deinit(struct iecm_hw *hw) { - /* stub */ + struct iecm_ctlq_info *cq = NULL, *tmp = NULL; + enum iecm_status ret_code = 0; + + list_for_each_entry_safe(cq, tmp, &hw->cq_list_head, cq_list) + iecm_ctlq_remove(hw, cq); + + return ret_code; } /** @@ -128,7 +292,79 @@ enum iecm_status iecm_ctlq_send(struct iecm_hw *hw, u16 num_q_msg, struct iecm_ctlq_msg q_msg[]) { - /* stub */ + enum iecm_status status = 0; + struct iecm_ctlq_desc *desc; + int num_desc_avail = 0; + int i = 0; + + if (!cq || !cq->ring_size) + return IECM_ERR_CTLQ_EMPTY; + + mutex_lock(&cq->cq_lock); + + /* Ensure there are enough descriptors to send all messages */ + num_desc_avail = IECM_CTLQ_DESC_UNUSED(cq); + if (num_desc_avail == 0 || num_desc_avail < num_q_msg) { + status = IECM_ERR_CTLQ_FULL; + goto sq_send_command_out; + } + + for (i = 0; i < num_q_msg; i++) { + struct iecm_ctlq_msg *msg = &q_msg[i]; + u64 msg_cookie; + + desc = IECM_CTLQ_DESC(cq, cq->next_to_use); + + desc->opcode = cpu_to_le16(msg->opcode); + desc->pfid_vfid = cpu_to_le16(msg->func_id); + + msg_cookie = *(u64 *)&msg->cookie; + desc->cookie_high = + cpu_to_le32(upper_32_bits(msg_cookie)); + desc->cookie_low = + cpu_to_le32(lower_32_bits(msg_cookie)); + + if (msg->data_len) { + struct iecm_dma_mem *buff = msg->ctx.indirect.payload; + + desc->datalen = cpu_to_le16(msg->data_len); + desc->flags |= cpu_to_le16(IECM_CTLQ_FLAG_BUF); + desc->flags |= cpu_to_le16(IECM_CTLQ_FLAG_RD); + + /* Update the address values in the desc with the pa + * value for respective buffer + */ + desc->params.indirect.addr_high = + cpu_to_le32(upper_32_bits(buff->pa)); + desc->params.indirect.addr_low = + cpu_to_le32(lower_32_bits(buff->pa)); + + memcpy(&desc->params, msg->ctx.indirect.context, + IECM_INDIRECT_CTX_SIZE); + } else { + memcpy(&desc->params, msg->ctx.direct, + IECM_DIRECT_CTX_SIZE); + } + + /* Store buffer info */ + cq->bi.tx_msg[cq->next_to_use] = msg; + + (cq->next_to_use)++; + if (cq->next_to_use == cq->ring_size) + cq->next_to_use = 0; + } + + /* Force memory write to complete before letting hardware + * know that there are new descriptors to fetch. 
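+ * Without the barrier the device could observe the new tail value before + * the descriptor writes above are visible in memory and fetch stale + * descriptors.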
+ */ + dma_wmb(); + + wr32(hw, cq->reg.tail, cq->next_to_use); + +sq_send_command_out: + mutex_unlock(&cq->cq_lock); + + return status; } /** @@ -150,7 +386,58 @@ enum iecm_status iecm_ctlq_clean_sq(struct iecm_ctlq_info *cq, u16 *clean_count, struct iecm_ctlq_msg *msg_status[]) { - /* stub */ + enum iecm_status ret = 0; + struct iecm_ctlq_desc *desc; + u16 i = 0, num_to_clean; + u16 ntc, desc_err; + + if (!cq || !cq->ring_size) + return IECM_ERR_CTLQ_EMPTY; + + if (*clean_count == 0) + return 0; + if (*clean_count > cq->ring_size) + return IECM_ERR_PARAM; + + mutex_lock(&cq->cq_lock); + + ntc = cq->next_to_clean; + + num_to_clean = *clean_count; + + for (i = 0; i < num_to_clean; i++) { + /* Fetch next descriptor and check if marked as done */ + desc = IECM_CTLQ_DESC(cq, ntc); + if (!(le16_to_cpu(desc->flags) & IECM_CTLQ_FLAG_DD)) + break; + + desc_err = le16_to_cpu(desc->ret_val); + if (desc_err) { + /* strip off FW internal code */ + desc_err &= 0xff; + } + + msg_status[i] = cq->bi.tx_msg[ntc]; + msg_status[i]->status = desc_err; + + cq->bi.tx_msg[ntc] = NULL; + + /* Zero out any stale data */ + memset(desc, 0, sizeof(*desc)); + + ntc++; + if (ntc == cq->ring_size) + ntc = 0; + } + + cq->next_to_clean = ntc; + + mutex_unlock(&cq->cq_lock); + + /* Return number of descriptors actually cleaned */ + *clean_count = i; + + return ret; } /** @@ -173,7 +460,112 @@ enum iecm_status iecm_ctlq_post_rx_buffs(struct iecm_hw *hw, u16 *buff_count, struct iecm_dma_mem **buffs) { - /* stub */ + enum iecm_status status = 0; + struct iecm_ctlq_desc *desc; + u16 ntp = cq->next_to_post; + bool buffs_avail = false; + u16 tbp = ntp + 1; + int i = 0; + + if (*buff_count > cq->ring_size) + return IECM_ERR_PARAM; + + if (*buff_count > 0) + buffs_avail = true; + + mutex_lock(&cq->cq_lock); + + if (tbp >= cq->ring_size) + tbp = 0; + + if (tbp == cq->next_to_clean) + /* Nothing to do */ + goto post_buffs_out; + + /* Post buffers for as many as provided or up until the last one used */ + while (ntp != cq->next_to_clean) { + desc = IECM_CTLQ_DESC(cq, ntp); + + if (cq->bi.rx_buff[ntp]) + goto fill_desc; + if (!buffs_avail) { + /* If the caller hasn't given us any buffers or + * there are none left, search the ring itself + * for an available buffer to move to this + * entry starting at the next entry in the ring + */ + tbp = ntp + 1; + + /* Wrap ring if necessary */ + if (tbp >= cq->ring_size) + tbp = 0; + + while (tbp != cq->next_to_clean) { + if (cq->bi.rx_buff[tbp]) { + cq->bi.rx_buff[ntp] = + cq->bi.rx_buff[tbp]; + cq->bi.rx_buff[tbp] = NULL; + + /* Found a buffer, no need to + * search anymore + */ + break; + } + + /* Wrap ring if necessary */ + tbp++; + if (tbp >= cq->ring_size) + tbp = 0; + } + + if (tbp == cq->next_to_clean) + goto post_buffs_out; + } else { + /* Give back pointer to DMA buffer */ + cq->bi.rx_buff[ntp] = buffs[i]; + i++; + + if (i >= *buff_count) + buffs_avail = false; + } + +fill_desc: + desc->flags = + cpu_to_le16(IECM_CTLQ_FLAG_BUF | IECM_CTLQ_FLAG_RD); + + /* Post buffers to descriptor */ + desc->datalen = cpu_to_le16(cq->bi.rx_buff[ntp]->size); + desc->params.indirect.addr_high = + cpu_to_le32(upper_32_bits(cq->bi.rx_buff[ntp]->pa)); + desc->params.indirect.addr_low = + cpu_to_le32(lower_32_bits(cq->bi.rx_buff[ntp]->pa)); + + ntp++; + if (ntp == cq->ring_size) + ntp = 0; + } + +post_buffs_out: + /* Only update tail if buffers were actually posted */ + if (cq->next_to_post != ntp) { + if (ntp) + /* Update next_to_post to ntp - 1 since current ntp + * will not have a buffer + */ + 
cq->next_to_post = ntp - 1; + else + /* Wrap to end of end ring since current ntp is 0 */ + cq->next_to_post = cq->ring_size - 1; + + wr32(hw, cq->reg.tail, cq->next_to_post); + } + + mutex_unlock(&cq->cq_lock); + + /* return the number of buffers that were not posted */ + *buff_count = *buff_count - i; + + return status; } /** @@ -190,5 +582,87 @@ enum iecm_status iecm_ctlq_post_rx_buffs(struct iecm_hw *hw, enum iecm_status iecm_ctlq_recv(struct iecm_ctlq_info *cq, u16 *num_q_msg, struct iecm_ctlq_msg *q_msg) { - /* stub */ + u16 num_to_clean, ntc, ret_val, flags; + enum iecm_status ret_code = 0; + struct iecm_ctlq_desc *desc; + u16 i = 0; + + if (!cq || !cq->ring_size) + return IECM_ERR_CTLQ_EMPTY; + + if (*num_q_msg == 0) + return 0; + else if (*num_q_msg > cq->ring_size) + return IECM_ERR_PARAM; + + /* take the lock before we start messing with the ring */ + mutex_lock(&cq->cq_lock); + + ntc = cq->next_to_clean; + + num_to_clean = *num_q_msg; + + for (i = 0; i < num_to_clean; i++) { + u64 msg_cookie; + + /* Fetch next descriptor and check if marked as done */ + desc = IECM_CTLQ_DESC(cq, ntc); + flags = le16_to_cpu(desc->flags); + + if (!(flags & IECM_CTLQ_FLAG_DD)) + break; + + ret_val = le16_to_cpu(desc->ret_val); + + q_msg[i].vmvf_type = (flags & + (IECM_CTLQ_FLAG_FTYPE_VM | + IECM_CTLQ_FLAG_FTYPE_PF)) >> + IECM_CTLQ_FLAG_FTYPE_S; + + if (flags & IECM_CTLQ_FLAG_ERR) + ret_code = IECM_ERR_CTLQ_ERROR; + + msg_cookie = (u64)le32_to_cpu(desc->cookie_high) << 32; + msg_cookie |= (u64)le32_to_cpu(desc->cookie_low); + memcpy(&q_msg[i].cookie, &msg_cookie, sizeof(u64)); + + q_msg[i].opcode = le16_to_cpu(desc->opcode); + q_msg[i].data_len = le16_to_cpu(desc->datalen); + q_msg[i].status = ret_val; + + if (desc->datalen) { + memcpy(q_msg[i].ctx.indirect.context, + &desc->params.indirect, IECM_INDIRECT_CTX_SIZE); + + /* Assign pointer to DMA buffer to ctlq_msg array + * to be given to upper layer + */ + q_msg[i].ctx.indirect.payload = cq->bi.rx_buff[ntc]; + + /* Zero out pointer to DMA buffer info; + * will be repopulated by post buffers API + */ + cq->bi.rx_buff[ntc] = NULL; + } else { + memcpy(q_msg[i].ctx.direct, desc->params.raw, + IECM_DIRECT_CTX_SIZE); + } + + /* Zero out stale data in descriptor */ + memset(desc, 0, sizeof(struct iecm_ctlq_desc)); + + ntc++; + if (ntc == cq->ring_size) + ntc = 0; + }; + + cq->next_to_clean = ntc; + + mutex_unlock(&cq->cq_lock); + + *num_q_msg = i; + if (*num_q_msg == 0) + ret_code = IECM_ERR_CTLQ_NO_WORK; + + return ret_code; } diff --git a/drivers/net/ethernet/intel/iecm/iecm_controlq_setup.c b/drivers/net/ethernet/intel/iecm/iecm_controlq_setup.c index d4402be5aa7f..c6f5c935f803 100644 --- a/drivers/net/ethernet/intel/iecm/iecm_controlq_setup.c +++ b/drivers/net/ethernet/intel/iecm/iecm_controlq_setup.c @@ -12,7 +12,13 @@ static enum iecm_status iecm_ctlq_alloc_desc_ring(struct iecm_hw *hw, struct iecm_ctlq_info *cq) { - /* stub */ + size_t size = cq->ring_size * sizeof(struct iecm_ctlq_desc); + + cq->desc_ring.va = iecm_alloc_dma_mem(hw, &cq->desc_ring, size); + if (!cq->desc_ring.va) + return IECM_ERR_NO_MEMORY; + + return 0; } /** @@ -26,7 +32,52 @@ iecm_ctlq_alloc_desc_ring(struct iecm_hw *hw, static enum iecm_status iecm_ctlq_alloc_bufs(struct iecm_hw *hw, struct iecm_ctlq_info *cq) { - /* stub */ + int i = 0; + + /* Do not allocate DMA buffers for transmit queues */ + if (cq->cq_type == IECM_CTLQ_TYPE_MAILBOX_TX) + return 0; + + /* We'll be allocating the buffer info memory first, then we can + * allocate the mapped buffers for the event 
processing + */ + cq->bi.rx_buff = kcalloc(cq->ring_size, sizeof(struct iecm_dma_mem *), + GFP_KERNEL); + if (!cq->bi.rx_buff) + return IECM_ERR_NO_MEMORY; + + /* allocate the mapped buffers (except for the last one) */ + for (i = 0; i < cq->ring_size - 1; i++) { + struct iecm_dma_mem *bi; + int num = 1; /* number of iecm_dma_mem to be allocated */ + + cq->bi.rx_buff[i] = kcalloc(num, sizeof(struct iecm_dma_mem), + GFP_KERNEL); + if (!cq->bi.rx_buff[i]) + goto unwind_alloc_cq_bufs; + + bi = cq->bi.rx_buff[i]; + + bi->va = iecm_alloc_dma_mem(hw, bi, cq->buf_size); + if (!bi->va) { + /* unwind will not free the failed entry */ + kfree(cq->bi.rx_buff[i]); + goto unwind_alloc_cq_bufs; + } + } + + return 0; + +unwind_alloc_cq_bufs: + /* don't try to free the one that failed... */ + i--; + for (; i >= 0; i--) { + iecm_free_dma_mem(hw, cq->bi.rx_buff[i]); + kfree(cq->bi.rx_buff[i]); + } + kfree(cq->bi.rx_buff); + + return IECM_ERR_NO_MEMORY; } /** @@ -40,7 +91,7 @@ static enum iecm_status iecm_ctlq_alloc_bufs(struct iecm_hw *hw, static void iecm_ctlq_free_desc_ring(struct iecm_hw *hw, struct iecm_ctlq_info *cq) { - /* stub */ + iecm_free_dma_mem(hw, &cq->desc_ring); } /** @@ -53,7 +104,26 @@ static void iecm_ctlq_free_desc_ring(struct iecm_hw *hw, */ static void iecm_ctlq_free_bufs(struct iecm_hw *hw, struct iecm_ctlq_info *cq) { - /* stub */ + void *bi; + + if (cq->cq_type == IECM_CTLQ_TYPE_MAILBOX_RX) { + int i; + + /* free DMA buffers for Rx queues*/ + for (i = 0; i < cq->ring_size; i++) { + if (cq->bi.rx_buff[i]) { + iecm_free_dma_mem(hw, cq->bi.rx_buff[i]); + kfree(cq->bi.rx_buff[i]); + } + } + + bi = (void *)cq->bi.rx_buff; + } else { + bi = (void *)cq->bi.tx_msg; + } + + /* free the buffer header */ + kfree(bi); } /** @@ -65,7 +135,9 @@ static void iecm_ctlq_free_bufs(struct iecm_hw *hw, struct iecm_ctlq_info *cq) */ void iecm_ctlq_dealloc_ring_res(struct iecm_hw *hw, struct iecm_ctlq_info *cq) { - /* stub */ + /* free ring buffers and the ring itself */ + iecm_ctlq_free_bufs(hw, cq); + iecm_ctlq_free_desc_ring(hw, cq); } /** @@ -79,5 +151,26 @@ void iecm_ctlq_dealloc_ring_res(struct iecm_hw *hw, struct iecm_ctlq_info *cq) enum iecm_status iecm_ctlq_alloc_ring_res(struct iecm_hw *hw, struct iecm_ctlq_info *cq) { - /* stub */ + enum iecm_status ret_code; + + /* verify input for valid configuration */ + if (!cq->ring_size || !cq->buf_size) + return IECM_ERR_CFG; + + /* allocate the ring memory */ + ret_code = iecm_ctlq_alloc_desc_ring(hw, cq); + if (ret_code) + return ret_code; + + /* allocate buffers in the rings */ + ret_code = iecm_ctlq_alloc_bufs(hw, cq); + if (ret_code) + goto iecm_init_cq_free_ring; + + /* success! 
*/ + return 0; + +iecm_init_cq_free_ring: + iecm_free_dma_mem(hw, &cq->desc_ring); + return ret_code; } diff --git a/drivers/net/ethernet/intel/iecm/iecm_lib.c b/drivers/net/ethernet/intel/iecm/iecm_lib.c index 23eef4a9c7ca..7151aa5fa95c 100644 --- a/drivers/net/ethernet/intel/iecm/iecm_lib.c +++ b/drivers/net/ethernet/intel/iecm/iecm_lib.c @@ -162,7 +162,82 @@ static int iecm_intr_req(struct iecm_adapter *adapter) */ static int iecm_cfg_netdev(struct iecm_vport *vport) { - /* stub */ + struct iecm_adapter *adapter = vport->adapter; + netdev_features_t dflt_features; + netdev_features_t offloads = 0; + struct iecm_netdev_priv *np; + struct net_device *netdev; + int err; + + netdev = alloc_etherdev_mqs(sizeof(struct iecm_netdev_priv), + IECM_MAX_Q, IECM_MAX_Q); + if (!netdev) + return -ENOMEM; + vport->netdev = netdev; + np = netdev_priv(netdev); + np->vport = vport; + + if (!is_valid_ether_addr(vport->default_mac_addr)) { + eth_hw_addr_random(netdev); + ether_addr_copy(vport->default_mac_addr, netdev->dev_addr); + } else { + ether_addr_copy(netdev->dev_addr, vport->default_mac_addr); + ether_addr_copy(netdev->perm_addr, vport->default_mac_addr); + } + + /* assign netdev_ops */ + if (iecm_is_queue_model_split(vport->txq_model)) + netdev->netdev_ops = &iecm_netdev_ops_splitq; + else + netdev->netdev_ops = &iecm_netdev_ops_singleq; + + /* setup watchdog timeout value to be 5 second */ + netdev->watchdog_timeo = 5 * HZ; + + /* configure default MTU size */ + netdev->min_mtu = ETH_MIN_MTU; + netdev->max_mtu = vport->max_mtu; + + dflt_features = NETIF_F_SG | + NETIF_F_HIGHDMA | + NETIF_F_RXHASH; + + if (iecm_is_cap_ena(adapter, VIRTCHNL_CAP_STATELESS_OFFLOADS)) { + dflt_features |= + NETIF_F_RXCSUM | + NETIF_F_IP_CSUM | + NETIF_F_IPV6_CSUM | + 0; + offloads |= NETIF_F_TSO | + NETIF_F_TSO6; + } + if (iecm_is_cap_ena(adapter, VIRTCHNL_CAP_UDP_SEG_OFFLOAD)) + offloads |= NETIF_F_GSO_UDP_L4; + + netdev->features |= dflt_features; + netdev->hw_features |= dflt_features | offloads; + netdev->hw_enc_features |= dflt_features | offloads; + + iecm_set_ethtool_ops(netdev); + SET_NETDEV_DEV(netdev, &adapter->pdev->dev); + + /* carrier off on init to avoid Tx hangs */ + netif_carrier_off(netdev); + + /* make sure transmit queues start off as stopped */ + netif_tx_stop_all_queues(netdev); + + /* register last */ + err = register_netdev(netdev); + if (err) + goto err; + + return 0; +err: + free_netdev(vport->netdev); + vport->netdev = NULL; + + return err; } /** @@ -796,12 +871,25 @@ static int iecm_change_mtu(struct net_device *netdev, int new_mtu) void *iecm_alloc_dma_mem(struct iecm_hw *hw, struct iecm_dma_mem *mem, u64 size) { - /* stub */ + size_t sz = ALIGN(size, 4096); + struct iecm_adapter *pf = hw->back; + + mem->va = dma_alloc_coherent(&pf->pdev->dev, sz, + &mem->pa, GFP_KERNEL | __GFP_ZERO); + mem->size = size; + + return mem->va; } void iecm_free_dma_mem(struct iecm_hw *hw, struct iecm_dma_mem *mem) { - /* stub */ + struct iecm_adapter *pf = hw->back; + + dma_free_coherent(&pf->pdev->dev, mem->size, + mem->va, mem->pa); + mem->size = 0; + mem->va = NULL; + mem->pa = 0; } static const struct net_device_ops iecm_netdev_ops_splitq = { diff --git a/drivers/net/ethernet/intel/iecm/iecm_virtchnl.c b/drivers/net/ethernet/intel/iecm/iecm_virtchnl.c index 838bdeba8e61..86de33c308e3 100644 --- a/drivers/net/ethernet/intel/iecm/iecm_virtchnl.c +++ b/drivers/net/ethernet/intel/iecm/iecm_virtchnl.c @@ -39,7 +39,42 @@ struct iecm_rx_ptype_decoded iecm_rx_ptype_lkup[IECM_RX_SUPP_PTYPE] = { */ static void 
iecm_recv_event_msg(struct iecm_vport *vport) { - /* stub */ + struct iecm_adapter *adapter = vport->adapter; + enum virtchnl_link_speed link_speed; + struct virtchnl_pf_event *vpe; + + vpe = (struct virtchnl_pf_event *)vport->adapter->vc_msg; + + switch (vpe->event) { + case VIRTCHNL_EVENT_LINK_CHANGE: + link_speed = vpe->event_data.link_event.link_speed; + adapter->link_speed = link_speed; + if (adapter->link_up != + vpe->event_data.link_event.link_status) { + adapter->link_up = + vpe->event_data.link_event.link_status; + if (adapter->link_up) { + netif_tx_start_all_queues(vport->netdev); + netif_carrier_on(vport->netdev); + } else { + netif_tx_stop_all_queues(vport->netdev); + netif_carrier_off(vport->netdev); + } + } + break; + case VIRTCHNL_EVENT_RESET_IMPENDING: + mutex_lock(&adapter->reset_lock); + set_bit(__IECM_HR_CORE_RESET, adapter->flags); + queue_delayed_work(adapter->vc_event_wq, + &adapter->vc_event_task, + msecs_to_jiffies(10)); + break; + default: + dev_err(&vport->adapter->pdev->dev, + "Unknown event %d from PF\n", vpe->event); + break; + } + mutex_unlock(&vport->adapter->vc_msg_lock); } /** @@ -52,7 +87,29 @@ static void iecm_recv_event_msg(struct iecm_vport *vport) */ static int iecm_mb_clean(struct iecm_adapter *adapter) { - /* stub */ + u16 i, num_q_msg = IECM_DFLT_MBX_Q_LEN; + struct iecm_ctlq_msg **q_msg; + struct iecm_dma_mem *dma_mem; + int err = 0; + + q_msg = kcalloc(num_q_msg, sizeof(struct iecm_ctlq_msg *), GFP_KERNEL); + if (!q_msg) + return -ENOMEM; + + err = iecm_ctlq_clean_sq(adapter->hw.asq, &num_q_msg, q_msg); + if (err) + goto error; + + for (i = 0; i < num_q_msg; i++) { + dma_mem = q_msg[i]->ctx.indirect.payload; + if (dma_mem) + dmam_free_coherent(&adapter->pdev->dev, dma_mem->size, + dma_mem->va, dma_mem->pa); + kfree(q_msg[i]); + } +error: + kfree(q_msg); + return err; } /** @@ -69,7 +126,53 @@ static int iecm_mb_clean(struct iecm_adapter *adapter) int iecm_send_mb_msg(struct iecm_adapter *adapter, enum virtchnl_ops op, u16 msg_size, u8 *msg) { - /* stub */ + struct iecm_ctlq_msg *ctlq_msg; + struct iecm_dma_mem *dma_mem; + int err = 0; + + err = iecm_mb_clean(adapter); + if (err) + return err; + + ctlq_msg = kzalloc(sizeof(*ctlq_msg), GFP_KERNEL); + if (!ctlq_msg) + return -ENOMEM; + + dma_mem = kzalloc(sizeof(*dma_mem), GFP_KERNEL); + if (!dma_mem) { + err = -ENOMEM; + goto dma_mem_error; + } + + memset(ctlq_msg, 0, sizeof(struct iecm_ctlq_msg)); + ctlq_msg->opcode = iecm_mbq_opc_send_msg_to_cp; + ctlq_msg->func_id = 0; + ctlq_msg->data_len = msg_size; + ctlq_msg->cookie.mbx.chnl_opcode = op; + ctlq_msg->cookie.mbx.chnl_retval = VIRTCHNL_STATUS_SUCCESS; + dma_mem->size = IECM_DFLT_MBX_BUF_SIZE; + dma_mem->va = dmam_alloc_coherent(&adapter->pdev->dev, dma_mem->size, + &dma_mem->pa, GFP_KERNEL); + if (!dma_mem->va) { + err = -ENOMEM; + goto dma_alloc_error; + } + memcpy(dma_mem->va, msg, msg_size); + ctlq_msg->ctx.indirect.payload = dma_mem; + + err = iecm_ctlq_send(&adapter->hw, adapter->hw.asq, 1, ctlq_msg); + if (err) + goto send_error; + + return 0; +send_error: + dmam_free_coherent(&adapter->pdev->dev, dma_mem->size, dma_mem->va, + dma_mem->pa); +dma_alloc_error: + kfree(dma_mem); +dma_mem_error: + kfree(ctlq_msg); + return err; } EXPORT_SYMBOL(iecm_send_mb_msg); @@ -86,7 +189,264 @@ EXPORT_SYMBOL(iecm_send_mb_msg); int iecm_recv_mb_msg(struct iecm_adapter *adapter, enum virtchnl_ops op, void *msg, int msg_size) { - /* stub */ + struct iecm_ctlq_msg ctlq_msg; + struct iecm_dma_mem *dma_mem; + struct iecm_vport *vport; + bool work_done = false; 
+ int payload_size = 0; + int num_retry = 10; + u16 num_q_msg; + int err = 0; + + vport = adapter->vports[0]; + while (1) { + /* Try to get one message */ + num_q_msg = 1; + dma_mem = NULL; + err = iecm_ctlq_recv(adapter->hw.arq, &num_q_msg, &ctlq_msg); + /* If no message then decide if we have to retry based on + * opcode + */ + if (err || !num_q_msg) { + if (op && num_retry--) { + msleep(20); + continue; + } else { + break; + } + } + + /* If we are here a message is received. Check if we are looking + * for a specific message based on opcode. If it is different + * ignore and post buffers + */ + if (op && ctlq_msg.cookie.mbx.chnl_opcode != op) + goto post_buffs; + + if (ctlq_msg.data_len) + payload_size = ctlq_msg.ctx.indirect.payload->size; + + /* All conditions are met. Either a message requested is + * received or we received a message to be processed + */ + switch (ctlq_msg.cookie.mbx.chnl_opcode) { + case VIRTCHNL_OP_VERSION: + case VIRTCHNL_OP_GET_CAPS: + case VIRTCHNL_OP_CREATE_VPORT: + if (msg) + memcpy(msg, ctlq_msg.ctx.indirect.payload->va, + min_t(int, + payload_size, msg_size)); + work_done = true; + break; + case VIRTCHNL_OP_ENABLE_VPORT: + if (ctlq_msg.cookie.mbx.chnl_retval) + set_bit(IECM_VC_ENA_VPORT_ERR, + adapter->vc_state); + set_bit(IECM_VC_ENA_VPORT, adapter->vc_state); + wake_up(&adapter->vchnl_wq); + break; + case VIRTCHNL_OP_DISABLE_VPORT: + if (ctlq_msg.cookie.mbx.chnl_retval) + set_bit(IECM_VC_DIS_VPORT_ERR, + adapter->vc_state); + set_bit(IECM_VC_DIS_VPORT, adapter->vc_state); + wake_up(&adapter->vchnl_wq); + break; + case VIRTCHNL_OP_DESTROY_VPORT: + if (ctlq_msg.cookie.mbx.chnl_retval) + set_bit(IECM_VC_DESTROY_VPORT_ERR, + adapter->vc_state); + set_bit(IECM_VC_DESTROY_VPORT, adapter->vc_state); + wake_up(&adapter->vchnl_wq); + break; + case VIRTCHNL_OP_CONFIG_TX_QUEUES: + if (ctlq_msg.cookie.mbx.chnl_retval) + set_bit(IECM_VC_CONFIG_TXQ_ERR, + adapter->vc_state); + set_bit(IECM_VC_CONFIG_TXQ, adapter->vc_state); + wake_up(&adapter->vchnl_wq); + break; + case VIRTCHNL_OP_CONFIG_RX_QUEUES: + if (ctlq_msg.cookie.mbx.chnl_retval) + set_bit(IECM_VC_CONFIG_RXQ_ERR, + adapter->vc_state); + set_bit(IECM_VC_CONFIG_RXQ, adapter->vc_state); + wake_up(&adapter->vchnl_wq); + break; + case VIRTCHNL_OP_ENABLE_QUEUES_V2: + if (ctlq_msg.cookie.mbx.chnl_retval) + set_bit(IECM_VC_ENA_QUEUES_ERR, + adapter->vc_state); + set_bit(IECM_VC_ENA_QUEUES, adapter->vc_state); + wake_up(&adapter->vchnl_wq); + break; + case VIRTCHNL_OP_DISABLE_QUEUES_V2: + if (ctlq_msg.cookie.mbx.chnl_retval) + set_bit(IECM_VC_DIS_QUEUES_ERR, + adapter->vc_state); + set_bit(IECM_VC_DIS_QUEUES, adapter->vc_state); + wake_up(&adapter->vchnl_wq); + break; + case VIRTCHNL_OP_ADD_QUEUES: + if (ctlq_msg.cookie.mbx.chnl_retval) { + set_bit(IECM_VC_ADD_QUEUES_ERR, + adapter->vc_state); + } else { + mutex_lock(&adapter->vc_msg_lock); + memcpy(adapter->vc_msg, + ctlq_msg.ctx.indirect.payload->va, + min((int) + ctlq_msg.ctx.indirect.payload->size, + IECM_DFLT_MBX_BUF_SIZE)); + } + set_bit(IECM_VC_ADD_QUEUES, adapter->vc_state); + wake_up(&adapter->vchnl_wq); + break; + case VIRTCHNL_OP_DEL_QUEUES: + if (ctlq_msg.cookie.mbx.chnl_retval) + set_bit(IECM_VC_DEL_QUEUES_ERR, + adapter->vc_state); + set_bit(IECM_VC_DEL_QUEUES, adapter->vc_state); + wake_up(&adapter->vchnl_wq); + break; + case VIRTCHNL_OP_MAP_QUEUE_VECTOR: + if (ctlq_msg.cookie.mbx.chnl_retval) + set_bit(IECM_VC_MAP_IRQ_ERR, + adapter->vc_state); + set_bit(IECM_VC_MAP_IRQ, adapter->vc_state); + wake_up(&adapter->vchnl_wq); + break; + case 
VIRTCHNL_OP_UNMAP_QUEUE_VECTOR: + if (ctlq_msg.cookie.mbx.chnl_retval) + set_bit(IECM_VC_UNMAP_IRQ_ERR, + adapter->vc_state); + set_bit(IECM_VC_UNMAP_IRQ, adapter->vc_state); + wake_up(&adapter->vchnl_wq); + break; + case VIRTCHNL_OP_GET_STATS: + if (ctlq_msg.cookie.mbx.chnl_retval) { + set_bit(IECM_VC_GET_STATS_ERR, + adapter->vc_state); + } else { + mutex_lock(&adapter->vc_msg_lock); + memcpy(adapter->vc_msg, + ctlq_msg.ctx.indirect.payload->va, + min_t(int, + payload_size, + IECM_DFLT_MBX_BUF_SIZE)); + } + set_bit(IECM_VC_GET_STATS, adapter->vc_state); + wake_up(&adapter->vchnl_wq); + break; + case VIRTCHNL_OP_GET_RSS_HASH: + if (ctlq_msg.cookie.mbx.chnl_retval) { + set_bit(IECM_VC_GET_RSS_HASH_ERR, + adapter->vc_state); + } else { + mutex_lock(&adapter->vc_msg_lock); + memcpy(adapter->vc_msg, + ctlq_msg.ctx.indirect.payload->va, + min_t(int, + payload_size, + IECM_DFLT_MBX_BUF_SIZE)); + } + set_bit(IECM_VC_GET_RSS_HASH, adapter->vc_state); + wake_up(&adapter->vchnl_wq); + break; + case VIRTCHNL_OP_SET_RSS_HASH: + if (ctlq_msg.cookie.mbx.chnl_retval) + set_bit(IECM_VC_SET_RSS_HASH_ERR, + adapter->vc_state); + set_bit(IECM_VC_SET_RSS_HASH, adapter->vc_state); + wake_up(&adapter->vchnl_wq); + break; + case VIRTCHNL_OP_GET_RSS_LUT: + if (ctlq_msg.cookie.mbx.chnl_retval) { + set_bit(IECM_VC_GET_RSS_LUT_ERR, + adapter->vc_state); + } else { + mutex_lock(&adapter->vc_msg_lock); + memcpy(adapter->vc_msg, + ctlq_msg.ctx.indirect.payload->va, + min_t(int, + payload_size, + IECM_DFLT_MBX_BUF_SIZE)); + } + set_bit(IECM_VC_GET_RSS_LUT, adapter->vc_state); + wake_up(&adapter->vchnl_wq); + break; + case VIRTCHNL_OP_SET_RSS_LUT: + if (ctlq_msg.cookie.mbx.chnl_retval) + set_bit(IECM_VC_SET_RSS_LUT_ERR, + adapter->vc_state); + set_bit(IECM_VC_SET_RSS_LUT, adapter->vc_state); + wake_up(&adapter->vchnl_wq); + break; + case VIRTCHNL_OP_GET_RSS_KEY: + if (ctlq_msg.cookie.mbx.chnl_retval) { + set_bit(IECM_VC_GET_RSS_KEY_ERR, + adapter->vc_state); + } else { + mutex_lock(&adapter->vc_msg_lock); + memcpy(adapter->vc_msg, + ctlq_msg.ctx.indirect.payload->va, + min_t(int, + payload_size, + IECM_DFLT_MBX_BUF_SIZE)); + } + set_bit(IECM_VC_GET_RSS_KEY, adapter->vc_state); + wake_up(&adapter->vchnl_wq); + break; + case VIRTCHNL_OP_CONFIG_RSS_KEY: + if (ctlq_msg.cookie.mbx.chnl_retval) + set_bit(IECM_VC_CONFIG_RSS_KEY_ERR, + adapter->vc_state); + set_bit(IECM_VC_CONFIG_RSS_KEY, adapter->vc_state); + wake_up(&adapter->vchnl_wq); + break; + case VIRTCHNL_OP_EVENT: + mutex_lock(&adapter->vc_msg_lock); + memcpy(adapter->vc_msg, + ctlq_msg.ctx.indirect.payload->va, + min_t(int, + payload_size, + IECM_DFLT_MBX_BUF_SIZE)); + iecm_recv_event_msg(vport); + break; + default: + if (adapter->dev_ops.vc_ops.recv_mbx_msg) + err = + adapter->dev_ops.vc_ops.recv_mbx_msg(adapter, + msg, + msg_size, + &ctlq_msg, + &work_done); + else + dev_warn(&adapter->pdev->dev, + "Unhandled virtchnl response %d\n", + ctlq_msg.cookie.mbx.chnl_opcode); + break; + } /* switch v_opcode */ +post_buffs: + if (ctlq_msg.data_len) + dma_mem = ctlq_msg.ctx.indirect.payload; + else + num_q_msg = 0; + + err = iecm_ctlq_post_rx_buffs(&adapter->hw, adapter->hw.arq, + &num_q_msg, &dma_mem); + + /* If post failed clear the only buffer we supplied */ + if (err && dma_mem) + dmam_free_coherent(&adapter->pdev->dev, dma_mem->size, + dma_mem->va, dma_mem->pa); + /* Applies only if we are looking for a specific opcode */ + if (work_done) + break; + } + + return err; } EXPORT_SYMBOL(iecm_recv_mb_msg); @@ -177,7 +537,23 @@ int iecm_wait_for_event(struct iecm_adapter 
*adapter, enum iecm_vport_vc_state state, enum iecm_vport_vc_state err_check) { - /* stub */ + int event; + + event = wait_event_timeout(adapter->vchnl_wq, + test_and_clear_bit(state, adapter->vc_state), + msecs_to_jiffies(500)); + if (event) { + if (test_and_clear_bit(err_check, adapter->vc_state)) { + dev_err(&adapter->pdev->dev, + "VC response error %d\n", err_check); + return -EINVAL; + } + return 0; + } + + /* Timeout occurred */ + dev_err(&adapter->pdev->dev, "VC timeout, state = %u\n", state); + return -ETIMEDOUT; } EXPORT_SYMBOL(iecm_wait_for_event); @@ -404,7 +780,14 @@ int iecm_send_get_rx_ptype_msg(struct iecm_vport *vport) static struct iecm_ctlq_info *iecm_find_ctlq(struct iecm_hw *hw, enum iecm_ctlq_type type, int id) { - /* stub */ + struct iecm_ctlq_info *cq, *tmp; + + list_for_each_entry_safe(cq, tmp, &hw->cq_list_head, cq_list) { + if (cq->q_id == id && cq->cq_type == type) + return cq; + } + + return NULL; } /** @@ -413,7 +796,8 @@ static struct iecm_ctlq_info *iecm_find_ctlq(struct iecm_hw *hw, */ void iecm_deinit_dflt_mbx(struct iecm_adapter *adapter) { - /* stub */ + cancel_delayed_work_sync(&adapter->init_task); + iecm_ctlq_deinit(&adapter->hw); } /** @@ -474,7 +858,21 @@ int iecm_init_dflt_mbx(struct iecm_adapter *adapter) */ int iecm_vport_params_buf_alloc(struct iecm_adapter *adapter) { - /* stub */ + adapter->vport_params_reqd = kcalloc(IECM_MAX_NUM_VPORTS, + sizeof(*adapter->vport_params_reqd), + GFP_KERNEL); + if (!adapter->vport_params_reqd) + return -ENOMEM; + + adapter->vport_params_recvd = kcalloc(IECM_MAX_NUM_VPORTS, + sizeof(*adapter->vport_params_recvd), + GFP_KERNEL); + if (!adapter->vport_params_recvd) { + kfree(adapter->vport_params_reqd); + return -ENOMEM; + } + + return 0; } /** From patchwork Mon Aug 24 17:32:58 2020 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Tony Nguyen X-Patchwork-Id: 261996 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org X-Spam-Level: X-Spam-Status: No, score=-12.8 required=3.0 tests=BAYES_00, HEADER_FROM_DIFFERENT_DOMAINS, INCLUDES_PATCH, MAILING_LIST_MULTI, SIGNED_OFF_BY, SPF_HELO_NONE, SPF_PASS, USER_AGENT_GIT autolearn=ham autolearn_force=no version=3.4.0 Received: from mail.kernel.org (mail.kernel.org [198.145.29.99]) by smtp.lore.kernel.org (Postfix) with ESMTP id 4A403C433DF for ; Mon, 24 Aug 2020 17:35:29 +0000 (UTC) Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by mail.kernel.org (Postfix) with ESMTP id 181272067C for ; Mon, 24 Aug 2020 17:35:29 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S1728488AbgHXRfU (ORCPT ); Mon, 24 Aug 2020 13:35:20 -0400 Received: from mga09.intel.com ([134.134.136.24]:29865 "EHLO mga09.intel.com" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S1728574AbgHXRdt (ORCPT ); Mon, 24 Aug 2020 13:33:49 -0400 IronPort-SDR: kAlCv47aSlcN7OPyaIMKLIbYGSgj+SUjQ5g8NF40vCKfsxeLDH2wC/JsLA0z34lClPFUAIWSLs kPs20lIL+E+A== X-IronPort-AV: E=McAfee;i="6000,8403,9723"; a="157008572" X-IronPort-AV: E=Sophos;i="5.76,349,1592895600"; d="scan'208";a="157008572" X-Amp-Result: SKIPPED(no attachment in message) X-Amp-File-Uploaded: False Received: from fmsmga003.fm.intel.com ([10.253.24.29]) by orsmga102.jf.intel.com with ESMTP/TLS/ECDHE-RSA-AES256-GCM-SHA384; 24 Aug 2020 10:33:24 -0700 IronPort-SDR: OrXJvW2KxZkaET/qYEB3bQgQ3epvICapp84dlV0l9Nnzx6thbDUTht6InkmJ0BF8SOKf9Y7ogF 8IST3iek+U3g== X-IronPort-AV: 
E=Sophos;i="5.76,349,1592895600"; d="scan'208";a="336245350" Received: from jtkirshe-desk1.jf.intel.com ([134.134.177.86]) by fmsmga003-auth.fm.intel.com with ESMTP/TLS/ECDHE-RSA-AES256-GCM-SHA384; 24 Aug 2020 10:33:23 -0700 From: Tony Nguyen To: davem@davemloft.net Cc: Alice Michael , netdev@vger.kernel.org, nhorman@redhat.com, sassmann@redhat.com, jeffrey.t.kirsher@intel.com, anthony.l.nguyen@intel.com, Alan Brady , Phani Burra , Joshua Hay , Madhu Chittim , Pavan Kumar Linga , Donald Skidmore , Jesse Brandeburg , Sridhar Samudrala Subject: [net-next v5 07/15] iecm: Implement virtchnl commands Date: Mon, 24 Aug 2020 10:32:58 -0700 Message-Id: <20200824173306.3178343-8-anthony.l.nguyen@intel.com> X-Mailer: git-send-email 2.26.2 In-Reply-To: <20200824173306.3178343-1-anthony.l.nguyen@intel.com> References: <20200824173306.3178343-1-anthony.l.nguyen@intel.com> MIME-Version: 1.0 Sender: netdev-owner@vger.kernel.org Precedence: bulk List-ID: X-Mailing-List: netdev@vger.kernel.org From: Alice Michael Implement various virtchnl commands that enable communication with hardware. Signed-off-by: Alice Michael Signed-off-by: Alan Brady Signed-off-by: Phani Burra Signed-off-by: Joshua Hay Signed-off-by: Madhu Chittim Signed-off-by: Pavan Kumar Linga Reviewed-by: Donald Skidmore Reviewed-by: Jesse Brandeburg Reviewed-by: Sridhar Samudrala Signed-off-by: Jeff Kirsher Signed-off-by: Tony Nguyen --- .../net/ethernet/intel/iecm/iecm_virtchnl.c | 1151 ++++++++++++++++- 1 file changed, 1124 insertions(+), 27 deletions(-) diff --git a/drivers/net/ethernet/intel/iecm/iecm_virtchnl.c b/drivers/net/ethernet/intel/iecm/iecm_virtchnl.c index 86de33c308e3..2f716ec42cf2 100644 --- a/drivers/net/ethernet/intel/iecm/iecm_virtchnl.c +++ b/drivers/net/ethernet/intel/iecm/iecm_virtchnl.c @@ -458,7 +458,13 @@ EXPORT_SYMBOL(iecm_recv_mb_msg); */ static int iecm_send_ver_msg(struct iecm_adapter *adapter) { - /* stub */ + struct virtchnl_version_info vvi; + + vvi.major = VIRTCHNL_VERSION_MAJOR; + vvi.minor = VIRTCHNL_VERSION_MINOR; + + return iecm_send_mb_msg(adapter, VIRTCHNL_OP_VERSION, sizeof(vvi), + (u8 *)&vvi); } /** @@ -469,7 +475,19 @@ static int iecm_send_ver_msg(struct iecm_adapter *adapter) */ static int iecm_recv_ver_msg(struct iecm_adapter *adapter) { - /* stub */ + struct virtchnl_version_info vvi; + int err = 0; + + err = iecm_recv_mb_msg(adapter, VIRTCHNL_OP_VERSION, &vvi, sizeof(vvi)); + if (err) + return err; + + if (vvi.major > VIRTCHNL_VERSION_MAJOR || + (vvi.major == VIRTCHNL_VERSION_MAJOR && + vvi.minor > VIRTCHNL_VERSION_MINOR)) + dev_warn(&adapter->pdev->dev, "Virtchnl version not matched\n"); + + return 0; } /** @@ -481,7 +499,25 @@ static int iecm_recv_ver_msg(struct iecm_adapter *adapter) */ int iecm_send_get_caps_msg(struct iecm_adapter *adapter) { - /* stub */ + struct virtchnl_get_capabilities caps = {0}; + int buf_size; + + buf_size = sizeof(struct virtchnl_get_capabilities); + adapter->caps = kzalloc(buf_size, GFP_KERNEL); + if (!adapter->caps) + return -ENOMEM; + + caps.cap_flags = VIRTCHNL_CAP_STATELESS_OFFLOADS | + VIRTCHNL_CAP_UDP_SEG_OFFLOAD | + VIRTCHNL_CAP_RSS | + VIRTCHNL_CAP_TCP_RSC | + VIRTCHNL_CAP_HEADER_SPLIT | + VIRTCHNL_CAP_RDMA | + VIRTCHNL_CAP_SRIOV | + VIRTCHNL_CAP_EDT; + + return iecm_send_mb_msg(adapter, VIRTCHNL_OP_GET_CAPS, sizeof(caps), + (u8 *)&caps); } EXPORT_SYMBOL(iecm_send_get_caps_msg); @@ -494,7 +530,8 @@ EXPORT_SYMBOL(iecm_send_get_caps_msg); */ static int iecm_recv_get_caps_msg(struct iecm_adapter *adapter) { - /* stub */ + return iecm_recv_mb_msg(adapter, 
VIRTCHNL_OP_GET_CAPS, adapter->caps, + sizeof(struct virtchnl_get_capabilities)); } /** @@ -507,7 +544,25 @@ static int iecm_recv_get_caps_msg(struct iecm_adapter *adapter) */ static int iecm_send_create_vport_msg(struct iecm_adapter *adapter) { - /* stub */ + struct virtchnl_create_vport *vport_msg; + int buf_size; + + buf_size = sizeof(struct virtchnl_create_vport); + if (!adapter->vport_params_reqd[0]) { + adapter->vport_params_reqd[0] = kzalloc(buf_size, GFP_KERNEL); + if (!adapter->vport_params_reqd[0]) + return -ENOMEM; + } + + vport_msg = (struct virtchnl_create_vport *) + adapter->vport_params_reqd[0]; + vport_msg->vport_type = VIRTCHNL_VPORT_TYPE_DEFAULT; + vport_msg->txq_model = VIRTCHNL_QUEUE_MODEL_SPLIT; + vport_msg->rxq_model = VIRTCHNL_QUEUE_MODEL_SPLIT; + iecm_vport_calc_total_qs(vport_msg, 0); + + return iecm_send_mb_msg(adapter, VIRTCHNL_OP_CREATE_VPORT, buf_size, + (u8 *)vport_msg); } /** @@ -521,7 +576,25 @@ static int iecm_send_create_vport_msg(struct iecm_adapter *adapter) static int iecm_recv_create_vport_msg(struct iecm_adapter *adapter, int *vport_id) { - /* stub */ + struct virtchnl_create_vport *vport_msg; + int err; + + if (!adapter->vport_params_recvd[0]) { + adapter->vport_params_recvd[0] = kzalloc(IECM_DFLT_MBX_BUF_SIZE, + GFP_KERNEL); + if (!adapter->vport_params_recvd[0]) + return -ENOMEM; + } + + vport_msg = (struct virtchnl_create_vport *) + adapter->vport_params_recvd[0]; + + err = iecm_recv_mb_msg(adapter, VIRTCHNL_OP_CREATE_VPORT, vport_msg, + IECM_DFLT_MBX_BUF_SIZE); + if (err) + return err; + *vport_id = vport_msg->vport_id; + return 0; } /** @@ -566,7 +639,19 @@ EXPORT_SYMBOL(iecm_wait_for_event); */ int iecm_send_destroy_vport_msg(struct iecm_vport *vport) { - /* stub */ + struct iecm_adapter *adapter = vport->adapter; + struct virtchnl_vport v_id; + int err; + + v_id.vport_id = vport->vport_id; + + err = iecm_send_mb_msg(adapter, VIRTCHNL_OP_DESTROY_VPORT, + sizeof(v_id), (u8 *)&v_id); + + if (err) + return err; + return iecm_wait_for_event(adapter, IECM_VC_DESTROY_VPORT, + IECM_VC_DESTROY_VPORT_ERR); } /** @@ -602,7 +687,119 @@ int iecm_send_disable_vport_msg(struct iecm_vport *vport) */ int iecm_send_config_tx_queues_msg(struct iecm_vport *vport) { - /* stub */ + struct virtchnl_config_tx_queues *ctq = NULL; + struct virtchnl_txq_info_v2 *qi; + int totqs, num_msgs; + int num_qs, err = 0; + int i, k = 0; + + totqs = vport->num_txq + vport->num_complq; + qi = kcalloc(totqs, sizeof(struct virtchnl_txq_info_v2), GFP_KERNEL); + if (!qi) + return -ENOMEM; + + /* Populate the queue info buffer with all queue context info */ + for (i = 0; i < vport->num_txq_grp; i++) { + struct iecm_txq_group *tx_qgrp = &vport->txq_grps[i]; + int j; + + for (j = 0; j < tx_qgrp->num_txq; j++, k++) { + qi[k].queue_id = tx_qgrp->txqs[j].q_id; + qi[k].model = vport->txq_model; + qi[k].type = tx_qgrp->txqs[j].q_type; + qi[k].ring_len = tx_qgrp->txqs[j].desc_count; + qi[k].dma_ring_addr = tx_qgrp->txqs[j].dma; + if (iecm_is_queue_model_split(vport->txq_model)) { + struct iecm_queue *q = &tx_qgrp->txqs[j]; + + qi[k].tx_compl_queue_id = tx_qgrp->complq->q_id; + qi[k].desc_profile = + VIRTCHNL_TXQ_DESC_PROFILE_NATIVE; + + if (test_bit(__IECM_Q_FLOW_SCH_EN, q->flags)) + qi[k].sched_mode = + VIRTCHNL_TXQ_SCHED_MODE_FLOW; + else + qi[k].sched_mode = + VIRTCHNL_TXQ_SCHED_MODE_QUEUE; + } else { + qi[k].sched_mode = + VIRTCHNL_TXQ_SCHED_MODE_QUEUE; + qi[k].desc_profile = + VIRTCHNL_TXQ_DESC_PROFILE_BASE; + } + } + + if (iecm_is_queue_model_split(vport->txq_model)) { + qi[k].queue_id = 
tx_qgrp->complq->q_id; + qi[k].model = vport->txq_model; + qi[k].type = tx_qgrp->complq->q_type; + qi[k].desc_profile = VIRTCHNL_TXQ_DESC_PROFILE_NATIVE; + qi[k].ring_len = tx_qgrp->complq->desc_count; + qi[k].dma_ring_addr = tx_qgrp->complq->dma; + k++; + } + } + + /* Make sure accounting agrees */ + if (k != totqs) { + err = -EINVAL; + goto error; + } + + /* Chunk up the queue contexts into multiple messages to avoid + * sending a control queue message buffer that is too large + */ + if (totqs < IECM_NUM_QCTX_PER_MSG) + num_qs = totqs; + else + num_qs = IECM_NUM_QCTX_PER_MSG; + + num_msgs = totqs / IECM_NUM_QCTX_PER_MSG; + if (totqs % IECM_NUM_QCTX_PER_MSG) + num_msgs++; + + for (i = 0, k = 0; i < num_msgs || num_qs; i++) { + int buf_size = sizeof(struct virtchnl_config_tx_queues) + + (sizeof(struct virtchnl_txq_info_v2) * (num_qs - 1)); + if (!ctq || num_qs != IECM_NUM_QCTX_PER_MSG) { + kfree(ctq); + ctq = kzalloc(buf_size, GFP_KERNEL); + if (!ctq) { + err = -ENOMEM; + goto error; + } + } else { + memset(ctq, 0, buf_size); + } + + ctq->vport_id = vport->vport_id; + ctq->num_qinfo = num_qs; + memcpy(ctq->qinfo, &qi[k], + sizeof(struct virtchnl_txq_info_v2) * num_qs); + + err = iecm_send_mb_msg(vport->adapter, + VIRTCHNL_OP_CONFIG_TX_QUEUES, + buf_size, (u8 *)ctq); + if (err) + goto mbx_error; + + err = iecm_wait_for_event(vport->adapter, IECM_VC_CONFIG_TXQ, + IECM_VC_CONFIG_TXQ_ERR); + if (err) + goto mbx_error; + + k += num_qs; + totqs -= num_qs; + if (totqs < IECM_NUM_QCTX_PER_MSG) + num_qs = totqs; + } + +mbx_error: + kfree(ctq); +error: + kfree(qi); + return err; } /** @@ -614,7 +811,145 @@ int iecm_send_config_tx_queues_msg(struct iecm_vport *vport) */ int iecm_send_config_rx_queues_msg(struct iecm_vport *vport) { - /* stub */ + struct virtchnl_config_rx_queues *crq = NULL; + int totqs, num_msgs, num_qs, err = 0; + struct virtchnl_rxq_info_v2 *qi; + int i, k = 0; + + totqs = vport->num_rxq + vport->num_bufq; + qi = kcalloc(totqs, sizeof(struct virtchnl_rxq_info_v2), GFP_KERNEL); + if (!qi) + return -ENOMEM; + + /* Populate the queue info buffer with all queue context info */ + for (i = 0; i < vport->num_rxq_grp; i++) { + struct iecm_rxq_group *rx_qgrp = &vport->rxq_grps[i]; + int num_rxq; + int j; + + if (iecm_is_queue_model_split(vport->rxq_model)) + num_rxq = rx_qgrp->splitq.num_rxq_sets; + else + num_rxq = rx_qgrp->singleq.num_rxq; + + for (j = 0; j < num_rxq; j++, k++) { + struct iecm_queue *rxq; + + if (iecm_is_queue_model_split(vport->rxq_model)) { + rxq = &rx_qgrp->splitq.rxq_sets[j].rxq; + qi[k].rx_bufq1_id = + rxq->rxq_grp->splitq.bufq_sets[0].bufq.q_id; + qi[k].rx_bufq2_id = + rxq->rxq_grp->splitq.bufq_sets[1].bufq.q_id; + qi[k].hdr_buffer_size = rxq->rx_hbuf_size; + qi[k].rsc_low_watermark = + rxq->rsc_low_watermark; + + if (rxq->rx_hsplit_en) { + qi[k].queue_flags = + VIRTCHNL_RXQ_HDR_SPLIT; + qi[k].hdr_buffer_size = + rxq->rx_hbuf_size; + } + if (iecm_is_feature_ena(vport, NETIF_F_GRO_HW)) + qi[k].queue_flags |= VIRTCHNL_RXQ_RSC; + } else { + rxq = &rx_qgrp->singleq.rxqs[j]; + } + + qi[k].queue_id = rxq->q_id; + qi[k].model = vport->rxq_model; + qi[k].type = rxq->q_type; + qi[k].desc_profile = VIRTCHNL_TXQ_DESC_PROFILE_BASE; + qi[k].desc_size = VIRTCHNL_RXQ_DESC_SIZE_32BYTE; + qi[k].ring_len = rxq->desc_count; + qi[k].dma_ring_addr = rxq->dma; + qi[k].max_pkt_size = rxq->rx_max_pkt_size; + qi[k].data_buffer_size = rxq->rx_buf_size; + } + + if (iecm_is_queue_model_split(vport->rxq_model)) { + for (j = 0; j < IECM_BUFQS_PER_RXQ_SET; j++, k++) { + struct iecm_queue *bufq = 
+ &rx_qgrp->splitq.bufq_sets[j].bufq; + + qi[k].queue_id = bufq->q_id; + qi[k].model = vport->rxq_model; + qi[k].type = bufq->q_type; + qi[k].desc_profile = + VIRTCHNL_TXQ_DESC_PROFILE_NATIVE; + qi[k].desc_size = + VIRTCHNL_RXQ_DESC_SIZE_32BYTE; + qi[k].ring_len = bufq->desc_count; + qi[k].dma_ring_addr = bufq->dma; + qi[k].data_buffer_size = bufq->rx_buf_size; + qi[k].buffer_notif_stride = + bufq->rx_buf_stride; + qi[k].rsc_low_watermark = + bufq->rsc_low_watermark; + } + } + } + + /* Make sure accounting agrees */ + if (k != totqs) { + err = -EINVAL; + goto error; + } + + /* Chunk up the queue contexts into multiple messages to avoid + * sending a control queue message buffer that is too large + */ + if (totqs < IECM_NUM_QCTX_PER_MSG) + num_qs = totqs; + else + num_qs = IECM_NUM_QCTX_PER_MSG; + + num_msgs = totqs / IECM_NUM_QCTX_PER_MSG; + if (totqs % IECM_NUM_QCTX_PER_MSG) + num_msgs++; + + for (i = 0, k = 0; i < num_msgs || num_qs; i++) { + int buf_size = sizeof(struct virtchnl_config_rx_queues) + + (sizeof(struct virtchnl_rxq_info_v2) * (num_qs - 1)); + if (!crq || num_qs != IECM_NUM_QCTX_PER_MSG) { + kfree(crq); + crq = kzalloc(buf_size, GFP_KERNEL); + if (!crq) { + err = -ENOMEM; + goto error; + } + } else { + memset(crq, 0, buf_size); + } + + crq->vport_id = vport->vport_id; + crq->num_qinfo = num_qs; + memcpy(crq->qinfo, &qi[k], + sizeof(struct virtchnl_rxq_info_v2) * num_qs); + + err = iecm_send_mb_msg(vport->adapter, + VIRTCHNL_OP_CONFIG_RX_QUEUES, + buf_size, (u8 *)crq); + if (err) + goto mbx_error; + + err = iecm_wait_for_event(vport->adapter, IECM_VC_CONFIG_RXQ, + IECM_VC_CONFIG_RXQ_ERR); + if (err) + goto mbx_error; + + k += num_qs; + totqs -= num_qs; + if (totqs < IECM_NUM_QCTX_PER_MSG) + num_qs = totqs; + } + +mbx_error: + kfree(crq); +error: + kfree(qi); + return err; } /** @@ -629,7 +964,113 @@ int iecm_send_config_rx_queues_msg(struct iecm_vport *vport) static int iecm_send_ena_dis_queues_msg(struct iecm_vport *vport, enum virtchnl_ops vc_op) { - /* stub */ + int num_txq, num_rxq, num_q, buf_size, err = 0; + struct virtchnl_del_ena_dis_queues *eq; + struct virtchnl_queue_chunk *qc; + int i, j, k = 0; + + /* validate virtchnl op */ + switch (vc_op) { + case VIRTCHNL_OP_ENABLE_QUEUES_V2: + case VIRTCHNL_OP_DISABLE_QUEUES_V2: + break; + default: + return -EINVAL; + } + + num_txq = vport->num_txq + vport->num_complq; + num_rxq = vport->num_rxq + vport->num_bufq; + num_q = num_txq + num_rxq; + buf_size = sizeof(struct virtchnl_del_ena_dis_queues) + + (sizeof(struct virtchnl_queue_chunk) * (num_q - 1)); + eq = kzalloc(buf_size, GFP_KERNEL); + if (!eq) + return -ENOMEM; + + eq->vport_id = vport->vport_id; + eq->chunks.num_chunks = num_q; + qc = eq->chunks.chunks; + + for (i = 0; i < vport->num_txq_grp; i++) { + struct iecm_txq_group *tx_qgrp = &vport->txq_grps[i]; + + for (j = 0; j < tx_qgrp->num_txq; j++, k++) { + qc[k].type = tx_qgrp->txqs[j].q_type; + qc[k].start_queue_id = tx_qgrp->txqs[j].q_id; + qc[k].num_queues = 1; + } + } + if (vport->num_txq != k) { + err = -EINVAL; + goto error; + } + + if (iecm_is_queue_model_split(vport->txq_model)) { + for (i = 0; i < vport->num_txq_grp; i++, k++) { + struct iecm_txq_group *tx_qgrp = &vport->txq_grps[i]; + + qc[k].type = tx_qgrp->complq->q_type; + qc[k].start_queue_id = tx_qgrp->complq->q_id; + qc[k].num_queues = 1; + } + if (vport->num_complq != (k - vport->num_txq)) { + err = -EINVAL; + goto error; + } + } + + for (i = 0; i < vport->num_rxq_grp; i++) { + struct iecm_rxq_group *rx_qgrp = &vport->rxq_grps[i]; + + if 
(iecm_is_queue_model_split(vport->rxq_model)) + num_rxq = rx_qgrp->splitq.num_rxq_sets; + else + num_rxq = rx_qgrp->singleq.num_rxq; + + for (j = 0; j < num_rxq; j++, k++) { + if (iecm_is_queue_model_split(vport->rxq_model)) { + qc[k].start_queue_id = + rx_qgrp->splitq.rxq_sets[j].rxq.q_id; + qc[k].type = + rx_qgrp->splitq.rxq_sets[j].rxq.q_type; + } else { + qc[k].start_queue_id = + rx_qgrp->singleq.rxqs[j].q_id; + qc[k].type = + rx_qgrp->singleq.rxqs[j].q_type; + } + qc[k].num_queues = 1; + } + } + if (vport->num_rxq != k - (vport->num_txq + vport->num_complq)) { + err = -EINVAL; + goto error; + } + + if (iecm_is_queue_model_split(vport->rxq_model)) { + for (i = 0; i < vport->num_rxq_grp; i++) { + struct iecm_rxq_group *rx_qgrp = &vport->rxq_grps[i]; + struct iecm_queue *q; + + for (j = 0; j < IECM_BUFQS_PER_RXQ_SET; j++, k++) { + q = &rx_qgrp->splitq.bufq_sets[j].bufq; + qc[k].type = q->q_type; + qc[k].start_queue_id = q->q_id; + qc[k].num_queues = 1; + } + } + if (vport->num_bufq != k - (vport->num_txq + + vport->num_complq + + vport->num_rxq)) { + err = -EINVAL; + goto error; + } + } + + err = iecm_send_mb_msg(vport->adapter, vc_op, buf_size, (u8 *)eq); +error: + kfree(eq); + return err; } /** @@ -644,7 +1085,104 @@ static int iecm_send_ena_dis_queues_msg(struct iecm_vport *vport, static int iecm_send_map_unmap_queue_vector_msg(struct iecm_vport *vport, bool map) { - /* stub */ + struct iecm_adapter *adapter = vport->adapter; + struct virtchnl_queue_vector_maps *vqvm; + struct virtchnl_queue_vector *vqv; + int buf_size, num_q, err = 0; + int i, j, k = 0; + + num_q = vport->num_txq + vport->num_rxq; + + buf_size = sizeof(struct virtchnl_queue_vector_maps) + + (sizeof(struct virtchnl_queue_vector) * (num_q - 1)); + vqvm = kzalloc(buf_size, GFP_KERNEL); + if (!vqvm) + return -ENOMEM; + + vqvm->vport_id = vport->vport_id; + vqvm->num_maps = num_q; + vqv = vqvm->qv_maps; + + for (i = 0; i < vport->num_txq_grp; i++) { + struct iecm_txq_group *tx_qgrp = &vport->txq_grps[i]; + + for (j = 0; j < tx_qgrp->num_txq; j++, k++) { + vqv[k].queue_type = tx_qgrp->txqs[j].q_type; + vqv[k].queue_id = tx_qgrp->txqs[j].q_id; + + if (iecm_is_queue_model_split(vport->txq_model)) { + vqv[k].vector_id = + tx_qgrp->complq->q_vector->v_idx; + vqv[k].itr_idx = + tx_qgrp->complq->itr.itr_idx; + } else { + vqv[k].vector_id = + tx_qgrp->txqs[j].q_vector->v_idx; + vqv[k].itr_idx = + tx_qgrp->txqs[j].itr.itr_idx; + } + } + } + + if (vport->num_txq != k) { + err = -EINVAL; + goto error; + } + + for (i = 0; i < vport->num_rxq_grp; i++) { + struct iecm_rxq_group *rx_qgrp = &vport->rxq_grps[i]; + int num_rxq; + + if (iecm_is_queue_model_split(vport->rxq_model)) + num_rxq = rx_qgrp->splitq.num_rxq_sets; + else + num_rxq = rx_qgrp->singleq.num_rxq; + + for (j = 0; j < num_rxq; j++, k++) { + struct iecm_queue *rxq; + + if (iecm_is_queue_model_split(vport->rxq_model)) + rxq = &rx_qgrp->splitq.rxq_sets[j].rxq; + else + rxq = &rx_qgrp->singleq.rxqs[j]; + + vqv[k].queue_type = rxq->q_type; + vqv[k].queue_id = rxq->q_id; + vqv[k].vector_id = rxq->q_vector->v_idx; + vqv[k].itr_idx = rxq->itr.itr_idx; + } + } + + if (iecm_is_queue_model_split(vport->txq_model)) { + if (vport->num_rxq != k - vport->num_complq) { + err = -EINVAL; + goto error; + } + } else { + if (vport->num_rxq != k - vport->num_txq) { + err = -EINVAL; + goto error; + } + } + + if (map) { + err = iecm_send_mb_msg(adapter, + VIRTCHNL_OP_MAP_QUEUE_VECTOR, + buf_size, (u8 *)vqvm); + if (!err) + err = iecm_wait_for_event(adapter, IECM_VC_MAP_IRQ, + 
IECM_VC_MAP_IRQ_ERR); + } else { + err = iecm_send_mb_msg(adapter, + VIRTCHNL_OP_UNMAP_QUEUE_VECTOR, + buf_size, (u8 *)vqvm); + if (!err) + err = iecm_wait_for_event(adapter, IECM_VC_UNMAP_IRQ, + IECM_VC_UNMAP_IRQ_ERR); + } +error: + kfree(vqvm); + return err; } /** @@ -656,7 +1194,16 @@ iecm_send_map_unmap_queue_vector_msg(struct iecm_vport *vport, bool map) */ static int iecm_send_enable_queues_msg(struct iecm_vport *vport) { - /* stub */ + struct iecm_adapter *adapter = vport->adapter; + int err; + + err = iecm_send_ena_dis_queues_msg(vport, + VIRTCHNL_OP_ENABLE_QUEUES_V2); + if (err) + return err; + + return iecm_wait_for_event(adapter, IECM_VC_ENA_QUEUES, + IECM_VC_ENA_QUEUES_ERR); } /** @@ -668,7 +1215,16 @@ static int iecm_send_enable_queues_msg(struct iecm_vport *vport) */ static int iecm_send_disable_queues_msg(struct iecm_vport *vport) { - /* stub */ + struct iecm_adapter *adapter = vport->adapter; + int err; + + err = iecm_send_ena_dis_queues_msg(vport, + VIRTCHNL_OP_DISABLE_QUEUES_V2); + if (err) + return err; + + return iecm_wait_for_event(adapter, IECM_VC_DIS_QUEUES, + IECM_VC_DIS_QUEUES_ERR); } /** @@ -680,7 +1236,48 @@ static int iecm_send_disable_queues_msg(struct iecm_vport *vport) */ int iecm_send_delete_queues_msg(struct iecm_vport *vport) { - /* stub */ + struct iecm_adapter *adapter = vport->adapter; + struct virtchnl_create_vport *vport_params; + struct virtchnl_del_ena_dis_queues *eq; + struct virtchnl_queue_chunks *chunks; + int buf_size, num_chunks, err; + + if (vport->adapter->config_data.req_qs_chunks) { + struct virtchnl_add_queues *vc_aq = + (struct virtchnl_add_queues *) + vport->adapter->config_data.req_qs_chunks; + chunks = &vc_aq->chunks; + } else { + vport_params = (struct virtchnl_create_vport *) + vport->adapter->vport_params_recvd[0]; + chunks = &vport_params->chunks; + } + + num_chunks = chunks->num_chunks; + buf_size = sizeof(struct virtchnl_del_ena_dis_queues) + + (sizeof(struct virtchnl_queue_chunk) * + (num_chunks - 1)); + + eq = kzalloc(buf_size, GFP_KERNEL); + if (!eq) + return -ENOMEM; + + eq->vport_id = vport->vport_id; + eq->chunks.num_chunks = num_chunks; + + memcpy(eq->chunks.chunks, chunks->chunks, num_chunks * + sizeof(struct virtchnl_queue_chunk)); + + err = iecm_send_mb_msg(vport->adapter, VIRTCHNL_OP_DEL_QUEUES, + buf_size, (u8 *)eq); + if (err) + goto error; + + err = iecm_wait_for_event(adapter, IECM_VC_DEL_QUEUES, + IECM_VC_DEL_QUEUES_ERR); +error: + kfree(eq); + return err; } /** @@ -692,7 +1289,13 @@ int iecm_send_delete_queues_msg(struct iecm_vport *vport) */ static int iecm_send_config_queues_msg(struct iecm_vport *vport) { - /* stub */ + int err; + + err = iecm_send_config_tx_queues_msg(vport); + if (err) + return err; + + return iecm_send_config_rx_queues_msg(vport); } /** @@ -708,7 +1311,56 @@ static int iecm_send_config_queues_msg(struct iecm_vport *vport) int iecm_send_add_queues_msg(struct iecm_vport *vport, u16 num_tx_q, u16 num_complq, u16 num_rx_q, u16 num_rx_bufq) { - /* stub */ + struct iecm_adapter *adapter = vport->adapter; + struct virtchnl_add_queues aq = {0}; + struct virtchnl_add_queues *vc_msg; + int size, err; + + vc_msg = (struct virtchnl_add_queues *)adapter->vc_msg; + + aq.vport_id = vport->vport_id; + aq.num_tx_q = num_tx_q; + aq.num_tx_complq = num_complq; + aq.num_rx_q = num_rx_q; + aq.num_rx_bufq = num_rx_bufq; + + err = iecm_send_mb_msg(adapter, + VIRTCHNL_OP_ADD_QUEUES, + sizeof(struct virtchnl_add_queues), (u8 *)&aq); + if (err) + return err; + + err = iecm_wait_for_event(adapter, IECM_VC_ADD_QUEUES, 
+ IECM_VC_ADD_QUEUES_ERR); + if (err) + return err; + + kfree(adapter->config_data.req_qs_chunks); + adapter->config_data.req_qs_chunks = NULL; + + /* compare vc_msg num queues with vport num queues */ + if (vc_msg->num_tx_q != num_tx_q || + vc_msg->num_rx_q != num_rx_q || + vc_msg->num_tx_complq != num_complq || + vc_msg->num_rx_bufq != num_rx_bufq) { + err = -EINVAL; + goto error; + } + + size = sizeof(struct virtchnl_add_queues) + + ((vc_msg->chunks.num_chunks - 1) * + sizeof(struct virtchnl_queue_chunk)); + adapter->config_data.req_qs_chunks = + kzalloc(size, GFP_KERNEL); + if (!adapter->config_data.req_qs_chunks) { + err = -ENOMEM; + goto error; + } + memcpy(adapter->config_data.req_qs_chunks, + adapter->vc_msg, size); +error: + mutex_unlock(&adapter->vc_msg_lock); + return err; } /** @@ -719,7 +1371,47 @@ int iecm_send_add_queues_msg(struct iecm_vport *vport, u16 num_tx_q, */ int iecm_send_get_stats_msg(struct iecm_vport *vport) { - /* stub */ + struct iecm_adapter *adapter = vport->adapter; + struct virtchnl_eth_stats *stats; + struct virtchnl_queue_select vqs; + int err; + + stats = (struct virtchnl_eth_stats *)adapter->vc_msg; + + /* Don't send get_stats message if one is pending or the + * link is down + */ + if (test_bit(IECM_VC_GET_STATS, adapter->vc_state) || + adapter->state <= __IECM_DOWN) + return 0; + + vqs.vsi_id = vport->vport_id; + + err = iecm_send_mb_msg(adapter, VIRTCHNL_OP_GET_STATS, + sizeof(vqs), (u8 *)&vqs); + + if (err) + return err; + + err = iecm_wait_for_event(adapter, IECM_VC_GET_STATS, + IECM_VC_GET_STATS_ERR); + if (err) + return err; + + vport->netstats.rx_packets = stats->rx_unicast + + stats->rx_multicast + + stats->rx_broadcast; + vport->netstats.tx_packets = stats->tx_unicast + + stats->tx_multicast + + stats->tx_broadcast; + vport->netstats.rx_bytes = stats->rx_bytes; + vport->netstats.tx_bytes = stats->tx_bytes; + vport->netstats.tx_errors = stats->tx_errors; + vport->netstats.rx_dropped = stats->rx_discards; + vport->netstats.tx_dropped = stats->tx_discards; + mutex_unlock(&adapter->vc_msg_lock); + + return 0; } /** @@ -731,7 +1423,40 @@ int iecm_send_get_stats_msg(struct iecm_vport *vport) */ int iecm_send_get_set_rss_hash_msg(struct iecm_vport *vport, bool get) { - /* stub */ + struct iecm_adapter *adapter = vport->adapter; + struct virtchnl_rss_hash rh = {0}; + int err; + + rh.vport_id = vport->vport_id; + rh.hash = adapter->rss_data.rss_hash; + + if (get) { + err = iecm_send_mb_msg(adapter, VIRTCHNL_OP_GET_RSS_HASH, + sizeof(rh), (u8 *)&rh); + if (err) + return err; + + err = iecm_wait_for_event(adapter, IECM_VC_GET_RSS_HASH, + IECM_VC_GET_RSS_HASH_ERR); + if (err) + return err; + + memcpy(&rh, adapter->vc_msg, sizeof(rh)); + adapter->rss_data.rss_hash = rh.hash; + /* Leave the buffer clean for next message */ + memset(adapter->vc_msg, 0, IECM_DFLT_MBX_BUF_SIZE); + mutex_unlock(&adapter->vc_msg_lock); + + return 0; + } + + err = iecm_send_mb_msg(adapter, VIRTCHNL_OP_SET_RSS_HASH, + sizeof(rh), (u8 *)&rh); + if (err) + return err; + + return iecm_wait_for_event(adapter, IECM_VC_SET_RSS_HASH, + IECM_VC_SET_RSS_HASH_ERR); } /** @@ -743,7 +1468,74 @@ int iecm_send_get_set_rss_hash_msg(struct iecm_vport *vport, bool get) */ int iecm_send_get_set_rss_lut_msg(struct iecm_vport *vport, bool get) { - /* stub */ + struct iecm_adapter *adapter = vport->adapter; + struct virtchnl_rss_lut_v2 *recv_rl; + struct virtchnl_rss_lut_v2 *rl; + int buf_size, lut_buf_size; + int i, err = 0; + + buf_size = sizeof(struct virtchnl_rss_lut_v2) + + (sizeof(u16) * 
(adapter->rss_data.rss_lut_size - 1)); + rl = kzalloc(buf_size, GFP_KERNEL); + if (!rl) + return -ENOMEM; + + if (!get) { + rl->lut_entries = adapter->rss_data.rss_lut_size; + for (i = 0; i < adapter->rss_data.rss_lut_size; i++) + rl->lut[i] = adapter->rss_data.rss_lut[i]; + } + rl->vport_id = vport->vport_id; + + if (get) { + err = iecm_send_mb_msg(vport->adapter, VIRTCHNL_OP_GET_RSS_LUT, + buf_size, (u8 *)rl); + if (err) + goto error; + + err = iecm_wait_for_event(adapter, IECM_VC_GET_RSS_LUT, + IECM_VC_GET_RSS_LUT_ERR); + if (err) + goto error; + + recv_rl = (struct virtchnl_rss_lut_v2 *)adapter->vc_msg; + if (adapter->rss_data.rss_lut_size != + recv_rl->lut_entries) { + adapter->rss_data.rss_lut_size = + recv_rl->lut_entries; + kfree(adapter->rss_data.rss_lut); + + lut_buf_size = adapter->rss_data.rss_lut_size * + sizeof(u16); + adapter->rss_data.rss_lut = kzalloc(lut_buf_size, + GFP_KERNEL); + if (!adapter->rss_data.rss_lut) { + adapter->rss_data.rss_lut_size = 0; + /* Leave the buffer clean */ + memset(adapter->vc_msg, 0, + IECM_DFLT_MBX_BUF_SIZE); + mutex_unlock(&adapter->vc_msg_lock); + err = -ENOMEM; + goto error; + } + } + memcpy(adapter->rss_data.rss_lut, adapter->vc_msg, + adapter->rss_data.rss_lut_size); + /* Leave the buffer clean for next message */ + memset(adapter->vc_msg, 0, IECM_DFLT_MBX_BUF_SIZE); + mutex_unlock(&adapter->vc_msg_lock); + } else { + err = iecm_send_mb_msg(adapter, VIRTCHNL_OP_SET_RSS_LUT, + buf_size, (u8 *)rl); + if (err) + goto error; + + err = iecm_wait_for_event(adapter, IECM_VC_SET_RSS_LUT, + IECM_VC_SET_RSS_LUT_ERR); + } +error: + kfree(rl); + return err; } /** @@ -755,7 +1547,70 @@ int iecm_send_get_set_rss_lut_msg(struct iecm_vport *vport, bool get) */ int iecm_send_get_set_rss_key_msg(struct iecm_vport *vport, bool get) { - /* stub */ + struct iecm_adapter *adapter = vport->adapter; + struct virtchnl_rss_key *recv_rk; + struct virtchnl_rss_key *rk; + int i, buf_size, err = 0; + + buf_size = sizeof(struct virtchnl_rss_key) + + (sizeof(u8) * (adapter->rss_data.rss_key_size - 1)); + rk = kzalloc(buf_size, GFP_KERNEL); + if (!rk) + return -ENOMEM; + rk->vsi_id = vport->vport_id; + + if (get) { + err = iecm_send_mb_msg(adapter, VIRTCHNL_OP_GET_RSS_KEY, + buf_size, (u8 *)rk); + if (err) + goto error; + + err = iecm_wait_for_event(adapter, IECM_VC_GET_RSS_KEY, + IECM_VC_GET_RSS_KEY_ERR); + if (err) + goto error; + + recv_rk = (struct virtchnl_rss_key *)adapter->vc_msg; + if (adapter->rss_data.rss_key_size != + recv_rk->key_len) { + adapter->rss_data.rss_key_size = + min_t(u16, NETDEV_RSS_KEY_LEN, + recv_rk->key_len); + kfree(adapter->rss_data.rss_key); + adapter->rss_data.rss_key = (u8 *) + kzalloc(adapter->rss_data.rss_key_size, + GFP_KERNEL); + if (!adapter->rss_data.rss_key) { + adapter->rss_data.rss_key_size = 0; + /* Leave the buffer clean */ + memset(adapter->vc_msg, 0, + IECM_DFLT_MBX_BUF_SIZE); + mutex_unlock(&adapter->vc_msg_lock); + err = -ENOMEM; + goto error; + } + } + memcpy(adapter->rss_data.rss_key, adapter->vc_msg, + adapter->rss_data.rss_key_size); + /* Leave the buffer clean for next message */ + memset(adapter->vc_msg, 0, IECM_DFLT_MBX_BUF_SIZE); + mutex_unlock(&adapter->vc_msg_lock); + } else { + rk->key_len = adapter->rss_data.rss_key_size; + for (i = 0; i < adapter->rss_data.rss_key_size; i++) + rk->key[i] = adapter->rss_data.rss_key[i]; + + err = iecm_send_mb_msg(adapter, VIRTCHNL_OP_CONFIG_RSS_KEY, + buf_size, (u8 *)rk); + if (err) + goto error; + + err = iecm_wait_for_event(adapter, IECM_VC_CONFIG_RSS_KEY, + 
IECM_VC_CONFIG_RSS_KEY_ERR); + } +error: + kfree(rk); + return err; } /** @@ -766,7 +1621,26 @@ int iecm_send_get_set_rss_key_msg(struct iecm_vport *vport, bool get) */ int iecm_send_get_rx_ptype_msg(struct iecm_vport *vport) { - /* stub */ + struct iecm_rx_ptype_decoded *rx_ptype_lkup = vport->rx_ptype_lkup; + static const int ptype_list[IECM_RX_SUPP_PTYPE] = { + 0, 1, + 11, 12, + 22, 23, 24, 25, 26, 27, 28, + 88, 89, 90, 91, 92, 93, 94 + }; + int i; + + for (i = 0; i < IECM_RX_MAX_PTYPE; i++) + rx_ptype_lkup[i] = iecm_rx_ptype_lkup[0]; + + for (i = 0; i < IECM_RX_SUPP_PTYPE; i++) { + int j = ptype_list[i]; + + rx_ptype_lkup[j] = iecm_rx_ptype_lkup[i]; + rx_ptype_lkup[j].ptype = ptype_list[i]; + }; + + return 0; } /** @@ -905,7 +1779,53 @@ void iecm_vport_params_buf_rel(struct iecm_adapter *adapter) */ int iecm_vc_core_init(struct iecm_adapter *adapter, int *vport_id) { - /* stub */ + switch (adapter->state) { + case __IECM_STARTUP: + if (iecm_send_ver_msg(adapter)) + goto init_failed; + adapter->state = __IECM_VER_CHECK; + goto restart; + case __IECM_VER_CHECK: + if (iecm_recv_ver_msg(adapter)) + goto init_failed; + adapter->state = __IECM_GET_CAPS; + if (adapter->dev_ops.vc_ops.get_caps(adapter)) + goto init_failed; + goto restart; + case __IECM_GET_CAPS: + if (iecm_recv_get_caps_msg(adapter)) + goto init_failed; + if (iecm_send_create_vport_msg(adapter)) + goto init_failed; + adapter->state = __IECM_GET_DFLT_VPORT_PARAMS; + goto restart; + case __IECM_GET_DFLT_VPORT_PARAMS: + if (iecm_recv_create_vport_msg(adapter, vport_id)) + goto init_failed; + adapter->state = __IECM_INIT_SW; + break; + case __IECM_INIT_SW: + break; + default: + dev_err(&adapter->pdev->dev, "Device is in bad state: %d\n", + adapter->state); + goto init_failed; + } + + return 0; +restart: + queue_delayed_work(adapter->init_wq, &adapter->init_task, + msecs_to_jiffies(30)); + /* Not an error. 
Using try again to continue with state machine */ + return -EAGAIN; +init_failed: + if (++adapter->mb_wait_count > IECM_MB_MAX_ERR) { + dev_err(&adapter->pdev->dev, "Failed to establish mailbox communications with hardware\n"); + return -EFAULT; + } + adapter->state = __IECM_STARTUP; + queue_delayed_work(adapter->init_wq, &adapter->init_task, HZ); + return -EAGAIN; } EXPORT_SYMBOL(iecm_vc_core_init); @@ -953,7 +1873,32 @@ iecm_vport_get_queue_ids(u16 *qids, int num_qids, enum virtchnl_queue_type q_type, struct virtchnl_queue_chunks *chunks) { - /* stub */ + int num_chunks = chunks->num_chunks; + struct virtchnl_queue_chunk *chunk; + int num_q_id_filled = 0; + int start_q_id; + int num_q; + int i; + + while (num_chunks) { + chunk = &chunks->chunks[num_chunks - 1]; + if (chunk->type == q_type) { + num_q = chunk->num_queues; + start_q_id = chunk->start_queue_id; + for (i = 0; i < num_q; i++) { + if ((num_q_id_filled + i) < num_qids) { + qids[num_q_id_filled + i] = start_q_id; + start_q_id++; + } else { + break; + } + } + num_q_id_filled = num_q_id_filled + i; + } + num_chunks--; + } + + return num_q_id_filled; } /** @@ -970,7 +1915,80 @@ static int __iecm_vport_queue_ids_init(struct iecm_vport *vport, u16 *qids, int num_qids, enum virtchnl_queue_type q_type) { - /* stub */ + struct iecm_queue *q; + int i, j, k = 0; + + switch (q_type) { + case VIRTCHNL_QUEUE_TYPE_TX: + for (i = 0; i < vport->num_txq_grp; i++) { + struct iecm_txq_group *tx_qgrp = &vport->txq_grps[i]; + + for (j = 0; j < tx_qgrp->num_txq; j++) { + if (k < num_qids) { + tx_qgrp->txqs[j].q_id = qids[k]; + tx_qgrp->txqs[j].q_type = + VIRTCHNL_QUEUE_TYPE_TX; + k++; + } else { + break; + } + } + } + break; + case VIRTCHNL_QUEUE_TYPE_RX: + for (i = 0; i < vport->num_rxq_grp; i++) { + struct iecm_rxq_group *rx_qgrp = &vport->rxq_grps[i]; + int num_rxq; + + if (iecm_is_queue_model_split(vport->rxq_model)) + num_rxq = rx_qgrp->splitq.num_rxq_sets; + else + num_rxq = rx_qgrp->singleq.num_rxq; + + for (j = 0; j < num_rxq && k < num_qids; j++, k++) { + if (iecm_is_queue_model_split(vport->rxq_model)) + q = &rx_qgrp->splitq.rxq_sets[j].rxq; + else + q = &rx_qgrp->singleq.rxqs[j]; + q->q_id = qids[k]; + q->q_type = VIRTCHNL_QUEUE_TYPE_RX; + } + } + break; + case VIRTCHNL_QUEUE_TYPE_TX_COMPLETION: + for (i = 0; i < vport->num_txq_grp; i++) { + struct iecm_txq_group *tx_qgrp = &vport->txq_grps[i]; + + if (k < num_qids) { + tx_qgrp->complq->q_id = qids[k]; + tx_qgrp->complq->q_type = + VIRTCHNL_QUEUE_TYPE_TX_COMPLETION; + k++; + } else { + break; + } + } + break; + case VIRTCHNL_QUEUE_TYPE_RX_BUFFER: + for (i = 0; i < vport->num_rxq_grp; i++) { + struct iecm_rxq_group *rx_qgrp = &vport->rxq_grps[i]; + + for (j = 0; j < IECM_BUFQS_PER_RXQ_SET; j++) { + if (k < num_qids) { + q = &rx_qgrp->splitq.bufq_sets[j].bufq; + q->q_id = qids[k]; + q->q_type = + VIRTCHNL_QUEUE_TYPE_RX_BUFFER; + k++; + } else { + break; + } + } + } + break; + } + + return k; } /** @@ -982,7 +2000,76 @@ __iecm_vport_queue_ids_init(struct iecm_vport *vport, u16 *qids, */ static int iecm_vport_queue_ids_init(struct iecm_vport *vport) { - /* stub */ + struct virtchnl_create_vport *vport_params; + struct virtchnl_queue_chunks *chunks; + enum virtchnl_queue_type q_type; + /* We may never deal with more that 256 same type of queues */ +#define IECM_MAX_QIDS 256 + u16 qids[IECM_MAX_QIDS]; + int num_ids; + + if (vport->adapter->config_data.num_req_qs) { + struct virtchnl_add_queues *vc_aq = + (struct virtchnl_add_queues *) + vport->adapter->config_data.req_qs_chunks; + chunks = 
&vc_aq->chunks; + } else { + vport_params = (struct virtchnl_create_vport *) + vport->adapter->vport_params_recvd[0]; + chunks = &vport_params->chunks; + /* compare vport_params num queues with vport num queues */ + if (vport_params->num_tx_q != vport->num_txq || + vport_params->num_rx_q != vport->num_rxq || + vport_params->num_tx_complq != vport->num_complq || + vport_params->num_rx_bufq != vport->num_bufq) + return -EINVAL; + } + + num_ids = iecm_vport_get_queue_ids(qids, IECM_MAX_QIDS, + VIRTCHNL_QUEUE_TYPE_TX, + chunks); + if (num_ids != vport->num_txq) + return -EINVAL; + num_ids = __iecm_vport_queue_ids_init(vport, qids, num_ids, + VIRTCHNL_QUEUE_TYPE_TX); + if (num_ids != vport->num_txq) + return -EINVAL; + num_ids = iecm_vport_get_queue_ids(qids, IECM_MAX_QIDS, + VIRTCHNL_QUEUE_TYPE_RX, + chunks); + if (num_ids != vport->num_rxq) + return -EINVAL; + num_ids = __iecm_vport_queue_ids_init(vport, qids, num_ids, + VIRTCHNL_QUEUE_TYPE_RX); + if (num_ids != vport->num_rxq) + return -EINVAL; + + if (iecm_is_queue_model_split(vport->txq_model)) { + q_type = VIRTCHNL_QUEUE_TYPE_TX_COMPLETION; + num_ids = iecm_vport_get_queue_ids(qids, IECM_MAX_QIDS, q_type, + chunks); + if (num_ids != vport->num_complq) + return -EINVAL; + num_ids = __iecm_vport_queue_ids_init(vport, qids, + num_ids, + q_type); + if (num_ids != vport->num_complq) + return -EINVAL; + } + + if (iecm_is_queue_model_split(vport->rxq_model)) { + q_type = VIRTCHNL_QUEUE_TYPE_RX_BUFFER; + num_ids = iecm_vport_get_queue_ids(qids, IECM_MAX_QIDS, q_type, + chunks); + if (num_ids != vport->num_bufq) + return -EINVAL; + num_ids = __iecm_vport_queue_ids_init(vport, qids, num_ids, + q_type); + if (num_ids != vport->num_bufq) + return -EINVAL; + } + + return 0; } /** @@ -993,7 +2080,16 @@ static int iecm_vport_queue_ids_init(struct iecm_vport *vport) */ void iecm_vport_adjust_qs(struct iecm_vport *vport) { - /* stub */ + struct virtchnl_create_vport vport_msg; + + vport_msg.txq_model = vport->txq_model; + vport_msg.rxq_model = vport->rxq_model; + iecm_vport_calc_total_qs(&vport_msg, + vport->adapter->config_data.num_req_qs); + + iecm_vport_init_num_qs(vport, &vport_msg); + iecm_vport_calc_num_q_groups(vport); + iecm_vport_calc_num_q_vec(vport); } /** @@ -1005,7 +2101,8 @@ void iecm_vport_adjust_qs(struct iecm_vport *vport) */ static bool iecm_is_capability_ena(struct iecm_adapter *adapter, u64 flag) { - /* stub */ + return ((struct virtchnl_get_capabilities *)adapter->caps)->cap_flags & + flag; } /** From patchwork Mon Aug 24 17:32:59 2020 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Tony Nguyen X-Patchwork-Id: 261994 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org X-Spam-Level: X-Spam-Status: No, score=-12.8 required=3.0 tests=BAYES_00, HEADER_FROM_DIFFERENT_DOMAINS, INCLUDES_PATCH, MAILING_LIST_MULTI, SIGNED_OFF_BY, SPF_HELO_NONE, SPF_PASS, USER_AGENT_GIT autolearn=ham autolearn_force=no version=3.4.0 Received: from mail.kernel.org (mail.kernel.org [198.145.29.99]) by smtp.lore.kernel.org (Postfix) with ESMTP id 1F52FC433E1 for ; Mon, 24 Aug 2020 17:37:47 +0000 (UTC) Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by mail.kernel.org (Postfix) with ESMTP id E19F92067C for ; Mon, 24 Aug 2020 17:37:46 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S1726843AbgHXRhq (ORCPT ); Mon, 24 Aug 2020 13:37:46 -0400 Received: from mga09.intel.com 
([134.134.136.24]:29863 "EHLO mga09.intel.com" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S1728575AbgHXRhm (ORCPT ); Mon, 24 Aug 2020 13:37:42 -0400 IronPort-SDR: k47FZy3ZCmGUpavEr7cIbhSojjb3NE4COhVzq2RPy/c+1S3GWgserliN9JkQlomKD7FMEWkDkA AT1IjG37pXQw== X-IronPort-AV: E=McAfee;i="6000,8403,9723"; a="157008578" X-IronPort-AV: E=Sophos;i="5.76,349,1592895600"; d="scan'208";a="157008578" X-Amp-Result: SKIPPED(no attachment in message) X-Amp-File-Uploaded: False Received: from fmsmga003.fm.intel.com ([10.253.24.29]) by orsmga102.jf.intel.com with ESMTP/TLS/ECDHE-RSA-AES256-GCM-SHA384; 24 Aug 2020 10:33:24 -0700 IronPort-SDR: Matq5tfz5UhSkREDhrgJa4A6Jh5GmFF7Ku7s/CikwsMelObO0VIykDYVY1NsIHDalNYhF5kmkx xbBLwiL1Znmw== X-IronPort-AV: E=Sophos;i="5.76,349,1592895600"; d="scan'208";a="336245355" Received: from jtkirshe-desk1.jf.intel.com ([134.134.177.86]) by fmsmga003-auth.fm.intel.com with ESMTP/TLS/ECDHE-RSA-AES256-GCM-SHA384; 24 Aug 2020 10:33:23 -0700 From: Tony Nguyen To: davem@davemloft.net Cc: Alice Michael , netdev@vger.kernel.org, nhorman@redhat.com, sassmann@redhat.com, jeffrey.t.kirsher@intel.com, anthony.l.nguyen@intel.com, Alan Brady , Phani Burra , Joshua Hay , Madhu Chittim , Pavan Kumar Linga , Donald Skidmore , Jesse Brandeburg , Sridhar Samudrala Subject: [net-next v5 08/15] iecm: Implement vector allocation Date: Mon, 24 Aug 2020 10:32:59 -0700 Message-Id: <20200824173306.3178343-9-anthony.l.nguyen@intel.com> X-Mailer: git-send-email 2.26.2 In-Reply-To: <20200824173306.3178343-1-anthony.l.nguyen@intel.com> References: <20200824173306.3178343-1-anthony.l.nguyen@intel.com> MIME-Version: 1.0 Sender: netdev-owner@vger.kernel.org Precedence: bulk List-ID: X-Mailing-List: netdev@vger.kernel.org From: Alice Michael This allocates PCI vectors and maps to interrupt routines. 
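For readers unfamiliar with the MSI-X flow the general pattern is: reserve a range of vectors from the PCI core, then bind each vector to a handler with request_irq(). The sketch below is only an illustration of that generic pattern under assumed names (my_adapter, my_mb_handler, "my-mailbox" are hypothetical placeholders), not code taken from this patch; the iecm-specific pieces (adapter->msix_entries, the mailbox handler, the workqueue scheduling) appear in the diff itself.

	/* Illustrative sketch only: generic MSI-X setup, not iecm code.
	 * Assumes a hypothetical my_adapter that carries the PCI device.
	 */
	#include <linux/pci.h>
	#include <linux/interrupt.h>

	struct my_adapter {
		struct pci_dev *pdev;
	};

	static irqreturn_t my_mb_handler(int irq, void *data)
	{
		/* acknowledge the interrupt and kick deferred work here */
		return IRQ_HANDLED;
	}

	static int my_alloc_vectors(struct my_adapter *adapter, int want)
	{
		int got, err;

		/* reserve between 1 and 'want' MSI-X vectors from the PCI core */
		got = pci_alloc_irq_vectors(adapter->pdev, 1, want, PCI_IRQ_MSIX);
		if (got < 0)
			return got;

		/* vector 0 is commonly dedicated to the mailbox/control path */
		err = request_irq(pci_irq_vector(adapter->pdev, 0),
				  my_mb_handler, 0, "my-mailbox", adapter);
		if (err)
			pci_free_irq_vectors(adapter->pdev);

		return err;
	}

In the patch itself the vector budget is not a simple 'want' count: the non-queue and RDMA vectors are carved out first and only the remainder is handed to the queue vectors, as iecm_intr_distribute() in the diff below shows.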
Signed-off-by: Alice Michael Signed-off-by: Alan Brady Signed-off-by: Phani Burra Signed-off-by: Joshua Hay Signed-off-by: Madhu Chittim Signed-off-by: Pavan Kumar Linga Reviewed-by: Donald Skidmore Reviewed-by: Jesse Brandeburg Reviewed-by: Sridhar Samudrala Signed-off-by: Jeff Kirsher Signed-off-by: Tony Nguyen --- drivers/net/ethernet/intel/iecm/iecm_lib.c | 63 +- drivers/net/ethernet/intel/iecm/iecm_txrx.c | 605 +++++++++++++++++- .../net/ethernet/intel/iecm/iecm_virtchnl.c | 24 +- 3 files changed, 668 insertions(+), 24 deletions(-) diff --git a/drivers/net/ethernet/intel/iecm/iecm_lib.c b/drivers/net/ethernet/intel/iecm/iecm_lib.c index 7151aa5fa95c..4037d13e4512 100644 --- a/drivers/net/ethernet/intel/iecm/iecm_lib.c +++ b/drivers/net/ethernet/intel/iecm/iecm_lib.c @@ -14,7 +14,11 @@ static const struct net_device_ops iecm_netdev_ops_singleq; */ static void iecm_mb_intr_rel_irq(struct iecm_adapter *adapter) { - /* stub */ + int irq_num; + + irq_num = adapter->msix_entries[0].vector; + synchronize_irq(irq_num); + free_irq(irq_num, adapter); } /** @@ -43,7 +47,12 @@ static void iecm_intr_rel(struct iecm_adapter *adapter) */ static irqreturn_t iecm_mb_intr_clean(int __always_unused irq, void *data) { - /* stub */ + struct iecm_adapter *adapter = (struct iecm_adapter *)data; + + set_bit(__IECM_MB_INTR_TRIGGER, adapter->flags); + queue_delayed_work(adapter->serv_wq, &adapter->serv_task, + msecs_to_jiffies(0)); + return IRQ_HANDLED; } /** @@ -52,7 +61,12 @@ static irqreturn_t iecm_mb_intr_clean(int __always_unused irq, void *data) */ static void iecm_mb_irq_enable(struct iecm_adapter *adapter) { - /* stub */ + struct iecm_hw *hw = &adapter->hw; + struct iecm_intr_reg *intr = &adapter->mb_vector.intr_reg; + u32 val; + + val = intr->dyn_ctl_intena_m | intr->dyn_ctl_itridx_m; + writel_relaxed(val, hw->hw_addr + intr->dyn_ctl); } /** @@ -61,7 +75,22 @@ static void iecm_mb_irq_enable(struct iecm_adapter *adapter) */ static int iecm_mb_intr_req_irq(struct iecm_adapter *adapter) { - /* stub */ + struct iecm_q_vector *mb_vector = &adapter->mb_vector; + int irq_num, mb_vidx = 0, err; + + irq_num = adapter->msix_entries[mb_vidx].vector; + snprintf(mb_vector->name, sizeof(mb_vector->name) - 1, + "%s-%s-%d", dev_driver_string(&adapter->pdev->dev), + "Mailbox", mb_vidx); + err = request_irq(irq_num, adapter->irq_mb_handler, 0, + mb_vector->name, adapter); + if (err) { + dev_err(&adapter->pdev->dev, + "Request_irq for mailbox failed, error: %d\n", err); + return err; + } + set_bit(__IECM_MB_INTR_MODE, adapter->flags); + return 0; } /** @@ -73,7 +102,16 @@ static int iecm_mb_intr_req_irq(struct iecm_adapter *adapter) */ static void iecm_get_mb_vec_id(struct iecm_adapter *adapter) { - /* stub */ + struct virtchnl_vector_chunks *vchunks; + struct virtchnl_vector_chunk *chunk; + + if (adapter->req_vec_chunks) { + vchunks = &adapter->req_vec_chunks->vchunks; + chunk = &vchunks->num_vchunk[0]; + adapter->mb_vector.v_idx = chunk->start_vector_id; + } else { + adapter->mb_vector.v_idx = 0; + } } /** @@ -82,7 +120,13 @@ static void iecm_get_mb_vec_id(struct iecm_adapter *adapter) */ static int iecm_mb_intr_init(struct iecm_adapter *adapter) { - /* stub */ + int err = 0; + + iecm_get_mb_vec_id(adapter); + adapter->dev_ops.reg_ops.mb_intr_reg_init(adapter); + adapter->irq_mb_handler = iecm_mb_intr_clean; + err = iecm_mb_intr_req_irq(adapter); + return err; } /** @@ -94,7 +138,12 @@ static int iecm_mb_intr_init(struct iecm_adapter *adapter) */ static void iecm_intr_distribute(struct iecm_adapter *adapter) { - /* 
stub */ + struct iecm_vport *vport; + + vport = adapter->vports[0]; + if (adapter->num_msix_entries != adapter->num_req_msix) + vport->num_q_vectors = adapter->num_msix_entries - + IECM_MAX_NONQ_VEC - IECM_MIN_RDMA_VEC; } /** diff --git a/drivers/net/ethernet/intel/iecm/iecm_txrx.c b/drivers/net/ethernet/intel/iecm/iecm_txrx.c index e2da7dbc2ced..8214de2506af 100644 --- a/drivers/net/ethernet/intel/iecm/iecm_txrx.c +++ b/drivers/net/ethernet/intel/iecm/iecm_txrx.c @@ -1011,7 +1011,16 @@ iecm_vport_intr_clean_queues(int __always_unused irq, void *data) */ static void iecm_vport_intr_napi_dis_all(struct iecm_vport *vport) { - /* stub */ + int q_idx; + + if (!vport->netdev) + return; + + for (q_idx = 0; q_idx < vport->num_q_vectors; q_idx++) { + struct iecm_q_vector *q_vector = &vport->q_vectors[q_idx]; + + napi_disable(&q_vector->napi); + } } /** @@ -1022,7 +1031,44 @@ static void iecm_vport_intr_napi_dis_all(struct iecm_vport *vport) */ void iecm_vport_intr_rel(struct iecm_vport *vport) { - /* stub */ + int i, j, v_idx; + + if (!vport->netdev) + return; + + for (v_idx = 0; v_idx < vport->num_q_vectors; v_idx++) { + struct iecm_q_vector *q_vector = &vport->q_vectors[v_idx]; + + if (q_vector) + netif_napi_del(&q_vector->napi); + } + + /* Clean up the mapping of queues to vectors */ + for (i = 0; i < vport->num_rxq_grp; i++) { + struct iecm_rxq_group *rx_qgrp = &vport->rxq_grps[i]; + + if (iecm_is_queue_model_split(vport->rxq_model)) { + for (j = 0; j < rx_qgrp->splitq.num_rxq_sets; j++) + rx_qgrp->splitq.rxq_sets[j].rxq.q_vector = + NULL; + } else { + for (j = 0; j < rx_qgrp->singleq.num_rxq; j++) + rx_qgrp->singleq.rxqs[j].q_vector = NULL; + } + } + + if (iecm_is_queue_model_split(vport->txq_model)) { + for (i = 0; i < vport->num_txq_grp; i++) + vport->txq_grps[i].complq->q_vector = NULL; + } else { + for (i = 0; i < vport->num_txq_grp; i++) { + for (j = 0; j < vport->txq_grps[i].num_txq; j++) + vport->txq_grps[i].txqs[j].q_vector = NULL; + } + } + + kfree(vport->q_vectors); + vport->q_vectors = NULL; } /** @@ -1031,7 +1077,25 @@ void iecm_vport_intr_rel(struct iecm_vport *vport) */ static void iecm_vport_intr_rel_irq(struct iecm_vport *vport) { - /* stub */ + struct iecm_adapter *adapter = vport->adapter; + int vector; + + for (vector = 0; vector < vport->num_q_vectors; vector++) { + struct iecm_q_vector *q_vector = &vport->q_vectors[vector]; + int irq_num, vidx; + + /* free only the IRQs that were actually requested */ + if (!q_vector) + continue; + + vidx = vector + vport->q_vector_base; + irq_num = adapter->msix_entries[vidx].vector; + + /* clear the affinity_mask in the IRQ descriptor */ + irq_set_affinity_hint(irq_num, NULL); + synchronize_irq(irq_num); + free_irq(irq_num, q_vector); + } } /** @@ -1040,7 +1104,13 @@ static void iecm_vport_intr_rel_irq(struct iecm_vport *vport) */ void iecm_vport_intr_dis_irq_all(struct iecm_vport *vport) { - /* stub */ + struct iecm_q_vector *q_vector = vport->q_vectors; + struct iecm_hw *hw = &vport->adapter->hw; + int q_idx; + + for (q_idx = 0; q_idx < vport->num_q_vectors; q_idx++) + writel_relaxed(0, hw->hw_addr + + q_vector[q_idx].intr_reg.dyn_ctl); } /** @@ -1052,12 +1122,42 @@ void iecm_vport_intr_dis_irq_all(struct iecm_vport *vport) static u32 iecm_vport_intr_buildreg_itr(struct iecm_q_vector *q_vector, const int type, u16 itr) { - /* stub */ + u32 itr_val; + + itr &= IECM_ITR_MASK; + /* Don't clear PBA because that can cause lost interrupts that + * came in while we were cleaning/polling + */ + itr_val = q_vector->intr_reg.dyn_ctl_intena_m | 
+ (type << q_vector->intr_reg.dyn_ctl_itridx_s) | + (itr << (q_vector->intr_reg.dyn_ctl_intrvl_s - 1)); + + return itr_val; } static inline unsigned int iecm_itr_divisor(struct iecm_q_vector *q_vector) { - /* stub */ + unsigned int divisor; + + switch (q_vector->vport->adapter->link_speed) { + case VIRTCHNL_LINK_SPEED_40GB: + divisor = IECM_ITR_ADAPTIVE_MIN_INC * 1024; + break; + case VIRTCHNL_LINK_SPEED_25GB: + case VIRTCHNL_LINK_SPEED_20GB: + divisor = IECM_ITR_ADAPTIVE_MIN_INC * 512; + break; + default: + case VIRTCHNL_LINK_SPEED_10GB: + divisor = IECM_ITR_ADAPTIVE_MIN_INC * 256; + break; + case VIRTCHNL_LINK_SPEED_1GB: + case VIRTCHNL_LINK_SPEED_100MB: + divisor = IECM_ITR_ADAPTIVE_MIN_INC * 32; + break; + } + + return divisor; } /** @@ -1078,7 +1178,206 @@ static void iecm_vport_intr_set_new_itr(struct iecm_q_vector *q_vector, struct iecm_itr *itr, enum virtchnl_queue_type q_type) { - /* stub */ + unsigned int avg_wire_size, packets = 0, bytes = 0, new_itr; + unsigned long next_update = jiffies; + + /* If we don't have any queues just leave ourselves set for maximum + * possible latency so we take ourselves out of the equation. + */ + if (!IECM_ITR_IS_DYNAMIC(itr->target_itr)) + return; + + /* For Rx we want to push the delay up and default to low latency. + * for Tx we want to pull the delay down and default to high latency. + */ + new_itr = q_type == VIRTCHNL_QUEUE_TYPE_RX ? + IECM_ITR_ADAPTIVE_MIN_USECS | IECM_ITR_ADAPTIVE_LATENCY : + IECM_ITR_ADAPTIVE_MAX_USECS | IECM_ITR_ADAPTIVE_LATENCY; + + /* If we didn't update within up to 1 - 2 jiffies we can assume + * that either packets are coming in so slow there hasn't been + * any work, or that there is so much work that NAPI is dealing + * with interrupt moderation and we don't need to do anything. + */ + if (time_after(next_update, itr->next_update)) + goto clear_counts; + + /* If itr_countdown is set it means we programmed an ITR within + * the last 4 interrupt cycles. This has a side effect of us + * potentially firing an early interrupt. In order to work around + * this we need to throw out any data received for a few + * interrupts following the update. + */ + if (q_vector->itr_countdown) { + new_itr = itr->target_itr; + goto clear_counts; + } + + if (q_type == VIRTCHNL_QUEUE_TYPE_TX) { + packets = itr->stats.tx.packets; + bytes = itr->stats.tx.bytes; + } + + if (q_type == VIRTCHNL_QUEUE_TYPE_RX) { + packets = itr->stats.rx.packets; + bytes = itr->stats.rx.bytes; + + /* If there are 1 to 4 RX packets and bytes are less than + * 9000 assume insufficient data to use bulk rate limiting + * approach unless Tx is already in bulk rate limiting. We + * are likely latency driven. + */ + if (packets && packets < 4 && bytes < 9000 && + (q_vector->tx[0]->itr.target_itr & + IECM_ITR_ADAPTIVE_LATENCY)) { + new_itr = IECM_ITR_ADAPTIVE_LATENCY; + goto adjust_by_size; + } + } else if (packets < 4) { + /* If we have Tx and Rx ITR maxed and Tx ITR is running in + * bulk mode and we are receiving 4 or fewer packets just + * reset the ITR_ADAPTIVE_LATENCY bit for latency mode so + * that the Rx can relax. + */ + if (itr->target_itr == IECM_ITR_ADAPTIVE_MAX_USECS && + ((q_vector->rx[0]->itr.target_itr & IECM_ITR_MASK) == + IECM_ITR_ADAPTIVE_MAX_USECS)) + goto clear_counts; + } else if (packets > 32) { + /* If we have processed over 32 packets in a single interrupt + * for Tx assume we need to switch over to "bulk" mode. + */ + itr->target_itr &= ~IECM_ITR_ADAPTIVE_LATENCY; + } + + /* We have no packets to actually measure against. 
This means + * either one of the other queues on this vector is active or + * we are a Tx queue doing TSO with too high of an interrupt rate. + * + * Between 4 and 56 we can assume that our current interrupt delay + * is only slightly too low. As such we should increase it by a small + * fixed amount. + */ + if (packets < 56) { + new_itr = itr->target_itr + IECM_ITR_ADAPTIVE_MIN_INC; + if ((new_itr & IECM_ITR_MASK) > IECM_ITR_ADAPTIVE_MAX_USECS) { + new_itr &= IECM_ITR_ADAPTIVE_LATENCY; + new_itr += IECM_ITR_ADAPTIVE_MAX_USECS; + } + goto clear_counts; + } + + if (packets <= 256) { + new_itr = min(q_vector->tx[0]->itr.current_itr, + q_vector->rx[0]->itr.current_itr); + new_itr &= IECM_ITR_MASK; + + /* Between 56 and 112 is our "goldilocks" zone where we are + * working out "just right". Just report that our current + * ITR is good for us. + */ + if (packets <= 112) + goto clear_counts; + + /* If packet count is 128 or greater we are likely looking + * at a slight overrun of the delay we want. Try halving + * our delay to see if that will cut the number of packets + * in half per interrupt. + */ + new_itr /= 2; + new_itr &= IECM_ITR_MASK; + if (new_itr < IECM_ITR_ADAPTIVE_MIN_USECS) + new_itr = IECM_ITR_ADAPTIVE_MIN_USECS; + + goto clear_counts; + } + + /* The paths below assume we are dealing with a bulk ITR since + * number of packets is greater than 256. We are just going to have + * to compute a value and try to bring the count under control, + * though for smaller packet sizes there isn't much we can do as + * NAPI polling will likely be kicking in sooner rather than later. + */ + new_itr = IECM_ITR_ADAPTIVE_BULK; + +adjust_by_size: + /* If packet counts are 256 or greater we can assume we have a gross + * overestimation of what the rate should be. Instead of trying to fine + * tune it just use the formula below to try and dial in an exact value + * give the current packet size of the frame. + */ + avg_wire_size = bytes / packets; + + /* The following is a crude approximation of: + * wmem_default / (size + overhead) = desired_pkts_per_int + * rate / bits_per_byte / (size + Ethernet overhead) = pkt_rate + * (desired_pkt_rate / pkt_rate) * usecs_per_sec = ITR value + * + * Assuming wmem_default is 212992 and overhead is 640 bytes per + * packet, (256 skb, 64 headroom, 320 shared info), we can reduce the + * formula down to + * + * (170 * (size + 24)) / (size + 640) = ITR + * + * We first do some math on the packet size and then finally bit shift + * by 8 after rounding up. We also have to account for PCIe link speed + * difference as ITR scales based on this. + */ + if (avg_wire_size <= 60) { + /* Start at 250k ints/sec */ + avg_wire_size = 4096; + } else if (avg_wire_size <= 380) { + /* 250K ints/sec to 60K ints/sec */ + avg_wire_size *= 40; + avg_wire_size += 1696; + } else if (avg_wire_size <= 1084) { + /* 60K ints/sec to 36K ints/sec */ + avg_wire_size *= 15; + avg_wire_size += 11452; + } else if (avg_wire_size <= 1980) { + /* 36K ints/sec to 30K ints/sec */ + avg_wire_size *= 5; + avg_wire_size += 22420; + } else { + /* plateau at a limit of 30K ints/sec */ + avg_wire_size = 32256; + } + + /* If we are in low latency mode halve our delay which doubles the + * rate to somewhere between 100K to 16K ints/sec + */ + if (new_itr & IECM_ITR_ADAPTIVE_LATENCY) + avg_wire_size /= 2; + + /* Resultant value is 256 times larger than it needs to be. This + * gives us room to adjust the value as needed to either increase + * or decrease the value based on link speeds of 10G, 2.5G, 1G, etc. 
+ * + * Use addition as we have already recorded the new latency flag + * for the ITR value. + */ + new_itr += DIV_ROUND_UP(avg_wire_size, iecm_itr_divisor(q_vector)) * + IECM_ITR_ADAPTIVE_MIN_INC; + + if ((new_itr & IECM_ITR_MASK) > IECM_ITR_ADAPTIVE_MAX_USECS) { + new_itr &= IECM_ITR_ADAPTIVE_LATENCY; + new_itr += IECM_ITR_ADAPTIVE_MAX_USECS; + } + +clear_counts: + /* write back value */ + itr->target_itr = new_itr; + + /* next update should occur within next jiffy */ + itr->next_update = next_update + 1; + + if (q_type == VIRTCHNL_QUEUE_TYPE_RX) { + itr->stats.rx.bytes = 0; + itr->stats.rx.packets = 0; + } else if (q_type == VIRTCHNL_QUEUE_TYPE_TX) { + itr->stats.tx.bytes = 0; + itr->stats.tx.packets = 0; + } } /** @@ -1087,7 +1386,59 @@ static void iecm_vport_intr_set_new_itr(struct iecm_q_vector *q_vector, */ void iecm_vport_intr_update_itr_ena_irq(struct iecm_q_vector *q_vector) { - /* stub */ + struct iecm_hw *hw = &q_vector->vport->adapter->hw; + struct iecm_itr *tx_itr = &q_vector->tx[0]->itr; + struct iecm_itr *rx_itr = &q_vector->rx[0]->itr; + u32 intval; + + /* These will do nothing if dynamic updates are not enabled */ + iecm_vport_intr_set_new_itr(q_vector, tx_itr, q_vector->tx[0]->q_type); + iecm_vport_intr_set_new_itr(q_vector, rx_itr, q_vector->rx[0]->q_type); + + /* This block of logic allows us to get away with only updating + * one ITR value with each interrupt. The idea is to perform a + * pseudo-lazy update with the following criteria. + * + * 1. Rx is given higher priority than Tx if both are in same state + * 2. If we must reduce an ITR that is given highest priority. + * 3. We then give priority to increasing ITR based on amount. + */ + if (rx_itr->target_itr < rx_itr->current_itr) { + /* Rx ITR needs to be reduced, this is highest priority */ + intval = iecm_vport_intr_buildreg_itr(q_vector, + rx_itr->itr_idx, + rx_itr->target_itr); + rx_itr->current_itr = rx_itr->target_itr; + q_vector->itr_countdown = ITR_COUNTDOWN_START; + } else if ((tx_itr->target_itr < tx_itr->current_itr) || + ((rx_itr->target_itr - rx_itr->current_itr) < + (tx_itr->target_itr - tx_itr->current_itr))) { + /* Tx ITR needs to be reduced, this is second priority + * Tx ITR needs to be increased more than Rx, fourth priority + */ + intval = iecm_vport_intr_buildreg_itr(q_vector, + tx_itr->itr_idx, + tx_itr->target_itr); + tx_itr->current_itr = tx_itr->target_itr; + q_vector->itr_countdown = ITR_COUNTDOWN_START; + } else if (rx_itr->current_itr != rx_itr->target_itr) { + /* Rx ITR needs to be increased, third priority */ + intval = iecm_vport_intr_buildreg_itr(q_vector, + rx_itr->itr_idx, + rx_itr->target_itr); + rx_itr->current_itr = rx_itr->target_itr; + q_vector->itr_countdown = ITR_COUNTDOWN_START; + } else { + /* No ITR update, lowest priority */ + intval = iecm_vport_intr_buildreg_itr(q_vector, + VIRTCHNL_ITR_IDX_NO_ITR, + 0); + if (q_vector->itr_countdown) + q_vector->itr_countdown--; + } + + writel_relaxed(intval, hw->hw_addr + + q_vector->intr_reg.dyn_ctl); } /** @@ -1098,7 +1449,40 @@ void iecm_vport_intr_update_itr_ena_irq(struct iecm_q_vector *q_vector) static int iecm_vport_intr_req_irq(struct iecm_vport *vport, char *basename) { - /* stub */ + struct iecm_adapter *adapter = vport->adapter; + int vector, err, irq_num, vidx; + + for (vector = 0; vector < vport->num_q_vectors; vector++) { + struct iecm_q_vector *q_vector = &vport->q_vectors[vector]; + + vidx = vector + vport->q_vector_base; + irq_num = adapter->msix_entries[vidx].vector; + + snprintf(q_vector->name, 
sizeof(q_vector->name) - 1, + "%s-%s-%d", basename, "TxRx", vidx); + + err = request_irq(irq_num, vport->irq_q_handler, 0, + q_vector->name, q_vector); + if (err) { + netdev_err(vport->netdev, + "Request_irq failed, error: %d\n", err); + goto free_q_irqs; + } + /* assign the mask for this IRQ */ + irq_set_affinity_hint(irq_num, &q_vector->affinity_mask); + } + + return 0; + +free_q_irqs: + while (vector) { + vector--; + vidx = vector + vport->q_vector_base; + irq_num = adapter->msix_entries[vidx].vector, + free_irq(irq_num, + &vport->q_vectors[vector]); + } + return err; } /** @@ -1107,7 +1491,14 @@ iecm_vport_intr_req_irq(struct iecm_vport *vport, char *basename) */ void iecm_vport_intr_ena_irq_all(struct iecm_vport *vport) { - /* stub */ + int q_idx; + + for (q_idx = 0; q_idx < vport->num_q_vectors; q_idx++) { + struct iecm_q_vector *q_vector = &vport->q_vectors[q_idx]; + + if (q_vector->num_txq || q_vector->num_rxq) + iecm_vport_intr_update_itr_ena_irq(q_vector); + } } /** @@ -1116,7 +1507,9 @@ void iecm_vport_intr_ena_irq_all(struct iecm_vport *vport) */ void iecm_vport_intr_deinit(struct iecm_vport *vport) { - /* stub */ + iecm_vport_intr_napi_dis_all(vport); + iecm_vport_intr_dis_irq_all(vport); + iecm_vport_intr_rel_irq(vport); } /** @@ -1126,7 +1519,16 @@ void iecm_vport_intr_deinit(struct iecm_vport *vport) static void iecm_vport_intr_napi_ena_all(struct iecm_vport *vport) { - /* stub */ + int q_idx; + + if (!vport->netdev) + return; + + for (q_idx = 0; q_idx < vport->num_q_vectors; q_idx++) { + struct iecm_q_vector *q_vector = &vport->q_vectors[q_idx]; + + napi_enable(&q_vector->napi); + } } /** @@ -1175,7 +1577,65 @@ static int iecm_vport_splitq_napi_poll(struct napi_struct *napi, int budget) */ static void iecm_vport_intr_map_vector_to_qs(struct iecm_vport *vport) { - /* stub */ + int i, j, k = 0, num_rxq, num_txq; + struct iecm_rxq_group *rx_qgrp; + struct iecm_txq_group *tx_qgrp; + struct iecm_queue *q; + int q_index; + + for (i = 0; i < vport->num_rxq_grp; i++) { + rx_qgrp = &vport->rxq_grps[i]; + if (iecm_is_queue_model_split(vport->rxq_model)) + num_rxq = rx_qgrp->splitq.num_rxq_sets; + else + num_rxq = rx_qgrp->singleq.num_rxq; + + for (j = 0; j < num_rxq; j++) { + if (k >= vport->num_q_vectors) + k = k % vport->num_q_vectors; + + if (iecm_is_queue_model_split(vport->rxq_model)) + q = &rx_qgrp->splitq.rxq_sets[j].rxq; + else + q = &rx_qgrp->singleq.rxqs[j]; + q->q_vector = &vport->q_vectors[k]; + q_index = q->q_vector->num_rxq; + q->q_vector->rx[q_index] = q; + q->q_vector->num_rxq++; + + k++; + } + } + k = 0; + for (i = 0; i < vport->num_txq_grp; i++) { + tx_qgrp = &vport->txq_grps[i]; + num_txq = tx_qgrp->num_txq; + + if (iecm_is_queue_model_split(vport->txq_model)) { + if (k >= vport->num_q_vectors) + k = k % vport->num_q_vectors; + + q = tx_qgrp->complq; + q->q_vector = &vport->q_vectors[k]; + q_index = q->q_vector->num_txq; + q->q_vector->tx[q_index] = q; + q->q_vector->num_txq++; + k++; + } else { + for (j = 0; j < num_txq; j++) { + if (k >= vport->num_q_vectors) + k = k % vport->num_q_vectors; + + q = &tx_qgrp->txqs[j]; + q->q_vector = &vport->q_vectors[k]; + q_index = q->q_vector->num_txq; + q->q_vector->tx[q_index] = q; + q->q_vector->num_txq++; + + k++; + } + } + } } /** @@ -1186,7 +1646,38 @@ static void iecm_vport_intr_map_vector_to_qs(struct iecm_vport *vport) */ static int iecm_vport_intr_init_vec_idx(struct iecm_vport *vport) { - /* stub */ + struct iecm_adapter *adapter = vport->adapter; + struct iecm_q_vector *q_vector; + int i; + + if 
(adapter->req_vec_chunks) { + struct virtchnl_vector_chunks *vchunks; + struct virtchnl_alloc_vectors *ac; + /* We may never deal with more that 256 same type of vectors */ +#define IECM_MAX_VECIDS 256 + u16 vecids[IECM_MAX_VECIDS]; + int num_ids; + + ac = adapter->req_vec_chunks; + vchunks = &ac->vchunks; + + num_ids = iecm_vport_get_vec_ids(vecids, IECM_MAX_VECIDS, + vchunks); + if (num_ids != adapter->num_msix_entries) + return -EFAULT; + + for (i = 0; i < vport->num_q_vectors; i++) { + q_vector = &vport->q_vectors[i]; + q_vector->v_idx = vecids[i + vport->q_vector_base]; + } + } else { + for (i = 0; i < vport->num_q_vectors; i++) { + q_vector = &vport->q_vectors[i]; + q_vector->v_idx = i + vport->q_vector_base; + } + } + + return 0; } /** @@ -1198,7 +1689,64 @@ static int iecm_vport_intr_init_vec_idx(struct iecm_vport *vport) */ int iecm_vport_intr_alloc(struct iecm_vport *vport) { - /* stub */ + int txqs_per_vector, rxqs_per_vector; + struct iecm_q_vector *q_vector; + int v_idx, err = 0; + + vport->q_vectors = kcalloc(vport->num_q_vectors, + sizeof(struct iecm_q_vector), GFP_KERNEL); + + if (!vport->q_vectors) + return -ENOMEM; + + txqs_per_vector = DIV_ROUND_UP(vport->num_txq, vport->num_q_vectors); + rxqs_per_vector = DIV_ROUND_UP(vport->num_rxq, vport->num_q_vectors); + + for (v_idx = 0; v_idx < vport->num_q_vectors; v_idx++) { + q_vector = &vport->q_vectors[v_idx]; + q_vector->vport = vport; + q_vector->itr_countdown = ITR_COUNTDOWN_START; + + q_vector->tx = kcalloc(txqs_per_vector, + sizeof(struct iecm_queue *), + GFP_KERNEL); + if (!q_vector->tx) { + err = -ENOMEM; + goto free_vport_q_vec; + } + + q_vector->rx = kcalloc(rxqs_per_vector, + sizeof(struct iecm_queue *), + GFP_KERNEL); + if (!q_vector->rx) { + err = -ENOMEM; + goto free_vport_q_vec_tx; + } + + /* only set affinity_mask if the CPU is online */ + if (cpu_online(v_idx)) + cpumask_set_cpu(v_idx, &q_vector->affinity_mask); + + /* Register the NAPI handler */ + if (vport->netdev) { + if (iecm_is_queue_model_split(vport->txq_model)) + netif_napi_add(vport->netdev, &q_vector->napi, + iecm_vport_splitq_napi_poll, + NAPI_POLL_WEIGHT); + else + netif_napi_add(vport->netdev, &q_vector->napi, + iecm_vport_singleq_napi_poll, + NAPI_POLL_WEIGHT); + } + } + + return 0; +free_vport_q_vec_tx: + kfree(q_vector->tx); +free_vport_q_vec: + kfree(vport->q_vectors); + + return err; } /** @@ -1209,7 +1757,32 @@ int iecm_vport_intr_alloc(struct iecm_vport *vport) */ int iecm_vport_intr_init(struct iecm_vport *vport) { - /* stub */ + char int_name[IECM_INT_NAME_STR_LEN]; + int err = 0; + + err = iecm_vport_intr_init_vec_idx(vport); + if (err) + goto handle_err; + + iecm_vport_intr_map_vector_to_qs(vport); + iecm_vport_intr_napi_ena_all(vport); + + vport->adapter->dev_ops.reg_ops.intr_reg_init(vport); + + snprintf(int_name, sizeof(int_name) - 1, "%s-%s", + dev_driver_string(&vport->adapter->pdev->dev), + vport->netdev->name); + + err = iecm_vport_intr_req_irq(vport, int_name); + if (err) + goto unroll_vectors_alloc; + + iecm_vport_intr_ena_irq_all(vport); + goto handle_err; +unroll_vectors_alloc: + iecm_vport_intr_rel(vport); +handle_err: + return err; } EXPORT_SYMBOL(iecm_vport_calc_num_q_vec); diff --git a/drivers/net/ethernet/intel/iecm/iecm_virtchnl.c b/drivers/net/ethernet/intel/iecm/iecm_virtchnl.c index 2f716ec42cf2..b8cf8eb15052 100644 --- a/drivers/net/ethernet/intel/iecm/iecm_virtchnl.c +++ b/drivers/net/ethernet/intel/iecm/iecm_virtchnl.c @@ -1855,7 +1855,29 @@ int iecm_vport_get_vec_ids(u16 *vecids, int num_vecids, struct 
virtchnl_vector_chunks *chunks) { - /* stub */ + int num_chunks = chunks->num_vector_chunks; + struct virtchnl_vector_chunk *chunk; + int num_vecid_filled = 0; + int start_vecid; + int num_vec; + int i, j; + + for (j = 0; j < num_chunks; j++) { + chunk = &chunks->num_vchunk[j]; + num_vec = chunk->num_vectors; + start_vecid = chunk->start_vector_id; + for (i = 0; i < num_vec; i++) { + if ((num_vecid_filled + i) < num_vecids) { + vecids[num_vecid_filled + i] = start_vecid; + start_vecid++; + } else { + break; + } + } + num_vecid_filled = num_vecid_filled + i; + } + + return num_vecid_filled; } /** From patchwork Mon Aug 24 17:33:04 2020 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Tony Nguyen X-Patchwork-Id: 261995 From: Tony Nguyen To: davem@davemloft.net Cc: Alice Michael , netdev@vger.kernel.org, nhorman@redhat.com, sassmann@redhat.com, jeffrey.t.kirsher@intel.com, anthony.l.nguyen@intel.com, Alan Brady , Phani Burra , Joshua Hay , Madhu Chittim , Pavan Kumar Linga , Donald Skidmore , Jesse Brandeburg , Sridhar Samudrala Subject: [net-next v5 13/15] iecm: Add ethtool Date: Mon, 24 Aug 2020 10:33:04 -0700 Message-Id: <20200824173306.3178343-14-anthony.l.nguyen@intel.com> X-Mailer: git-send-email 2.26.2 In-Reply-To: <20200824173306.3178343-1-anthony.l.nguyen@intel.com> References: <20200824173306.3178343-1-anthony.l.nguyen@intel.com> MIME-Version: 1.0 Sender: netdev-owner@vger.kernel.org Precedence: bulk List-ID: X-Mailing-List: netdev@vger.kernel.org From: Alice Michael Implement ethtool interface for the common module.
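Most of the weight of the patch is in the individual ethtool callbacks; the wiring itself comes down to a const struct ethtool_ops table attached to the netdev before registration. A stripped-down sketch of that wiring follows; it is illustrative only, with placeholder my_* names and a single .get_drvinfo callback rather than this patch's full table.

#include <linux/ethtool.h>
#include <linux/netdevice.h>

/* Report identification strings for 'ethtool -i' (sketch only). */
static void my_get_drvinfo(struct net_device *netdev,
			   struct ethtool_drvinfo *drvinfo)
{
	strlcpy(drvinfo->driver, "my_drv", sizeof(drvinfo->driver));
	strlcpy(drvinfo->bus_info, "0000:00:00.0", sizeof(drvinfo->bus_info));
}

static const struct ethtool_ops my_ethtool_ops = {
	.get_drvinfo	= my_get_drvinfo,
};

/* Attach the ops table after netdev allocation, before register_netdev(). */
static void my_set_ethtool_ops(struct net_device *netdev)
{
	netdev->ethtool_ops = &my_ethtool_ops;
}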
Signed-off-by: Alice Michael Signed-off-by: Alan Brady Signed-off-by: Phani Burra Signed-off-by: Joshua Hay Signed-off-by: Madhu Chittim Signed-off-by: Pavan Kumar Linga Reviewed-by: Donald Skidmore Reviewed-by: Jesse Brandeburg Reviewed-by: Sridhar Samudrala Signed-off-by: Jeff Kirsher Signed-off-by: Tony Nguyen --- .../net/ethernet/intel/iecm/iecm_ethtool.c | 1034 ++++++++++++++++- drivers/net/ethernet/intel/iecm/iecm_lib.c | 143 ++- 2 files changed, 1173 insertions(+), 4 deletions(-) diff --git a/drivers/net/ethernet/intel/iecm/iecm_ethtool.c b/drivers/net/ethernet/intel/iecm/iecm_ethtool.c index a6532592f2f4..9b23541f81c4 100644 --- a/drivers/net/ethernet/intel/iecm/iecm_ethtool.c +++ b/drivers/net/ethernet/intel/iecm/iecm_ethtool.c @@ -3,6 +3,1038 @@ #include +/** + * iecm_get_rxnfc - command to get RX flow classification rules + * @netdev: network interface device structure + * @cmd: ethtool rxnfc command + * @rule_locs: pointer to store rule locations + * + * Returns Success if the command is supported. + */ +static int iecm_get_rxnfc(struct net_device *netdev, struct ethtool_rxnfc *cmd, + u32 __always_unused *rule_locs) +{ + struct iecm_vport *vport = iecm_netdev_to_vport(netdev); + int ret = -EOPNOTSUPP; + + switch (cmd->cmd) { + case ETHTOOL_GRXRINGS: + cmd->data = vport->num_rxq; + ret = 0; + break; + case ETHTOOL_GRXFH: + netdev_info(netdev, "RSS hash info is not available\n"); + break; + default: + break; + } + + return ret; +} + +/** + * iecm_get_rxfh_key_size - get the RSS hash key size + * @netdev: network interface device structure + * + * Returns the table size. + */ +static u32 iecm_get_rxfh_key_size(struct net_device *netdev) +{ + struct iecm_vport *vport = iecm_netdev_to_vport(netdev); + + if (!iecm_is_cap_ena(vport->adapter, VIRTCHNL_CAP_RSS)) { + dev_info(&vport->adapter->pdev->dev, "RSS is not supported on this device\n"); + return 0; + } + + return vport->adapter->rss_data.rss_key_size; +} + +/** + * iecm_get_rxfh_indir_size - get the Rx flow hash indirection table size + * @netdev: network interface device structure + * + * Returns the table size. + */ +static u32 iecm_get_rxfh_indir_size(struct net_device *netdev) +{ + struct iecm_vport *vport = iecm_netdev_to_vport(netdev); + + if (!iecm_is_cap_ena(vport->adapter, VIRTCHNL_CAP_RSS)) { + dev_info(&vport->adapter->pdev->dev, "RSS is not supported on this device\n"); + return 0; + } + + return vport->adapter->rss_data.rss_lut_size; +} + +/** + * iecm_find_virtual_qid - Finds the virtual RX qid from the absolute RX qid + * @vport: virtual port structure + * @qid_list: List of the RX qid's + * @abs_rx_qid: absolute RX qid + * + * Returns the virtual RX QID. + */ +static u32 iecm_find_virtual_qid(struct iecm_vport *vport, u16 *qid_list, + u32 abs_rx_qid) +{ + u32 i; + + for (i = 0; i < vport->num_rxq; i++) + if ((u32)qid_list[i] == abs_rx_qid) + break; + return i; +} + +/** + * iecm_get_rxfh - get the Rx flow hash indirection table + * @netdev: network interface device structure + * @indir: indirection table + * @key: hash key + * @hfunc: hash function in use + * + * Reads the indirection table directly from the hardware. Always returns 0. 
+ */ +static int iecm_get_rxfh(struct net_device *netdev, u32 *indir, u8 *key, + u8 *hfunc) +{ + struct iecm_vport *vport = iecm_netdev_to_vport(netdev); + struct iecm_adapter *adapter; + u16 i, *qid_list; + u32 abs_qid; + + adapter = vport->adapter; + + if (!iecm_is_cap_ena(adapter, VIRTCHNL_CAP_RSS)) { + dev_info(&vport->adapter->pdev->dev, "RSS is not supported on this device\n"); + return 0; + } + + if (adapter->state != __IECM_UP) + return 0; + + if (hfunc) + *hfunc = ETH_RSS_HASH_TOP; + + if (key) + memcpy(key, adapter->rss_data.rss_key, + adapter->rss_data.rss_key_size); + + qid_list = kcalloc(vport->num_rxq, sizeof(u16), GFP_KERNEL); + if (!qid_list) + return -ENOMEM; + + iecm_get_rx_qid_list(vport, qid_list); + + if (indir) + /* Each 32 bits pointed by 'indir' is stored with a lut entry */ + for (i = 0; i < adapter->rss_data.rss_lut_size; i++) { + abs_qid = (u32)adapter->rss_data.rss_lut[i]; + indir[i] = iecm_find_virtual_qid(vport, qid_list, + abs_qid); + } + + kfree(qid_list); + + return 0; +} + +/** + * iecm_set_rxfh - set the Rx flow hash indirection table + * @netdev: network interface device structure + * @indir: indirection table + * @key: hash key + * @hfunc: hash function to use + * + * Returns -EINVAL if the table specifies an invalid queue id, otherwise + * returns 0 after programming the table. + */ +static int iecm_set_rxfh(struct net_device *netdev, const u32 *indir, + const u8 *key, const u8 hfunc) +{ + struct iecm_vport *vport = iecm_netdev_to_vport(netdev); + struct iecm_adapter *adapter; + u16 *qid_list; + u16 lut; + + adapter = vport->adapter; + + if (!iecm_is_cap_ena(adapter, VIRTCHNL_CAP_RSS)) { + dev_info(&adapter->pdev->dev, "RSS is not supported on this device\n"); + return 0; + } + if (adapter->state != __IECM_UP) + return 0; + + if (hfunc != ETH_RSS_HASH_NO_CHANGE && hfunc != ETH_RSS_HASH_TOP) + return -EOPNOTSUPP; + + if (key) + memcpy(adapter->rss_data.rss_key, key, + adapter->rss_data.rss_key_size); + + qid_list = kcalloc(vport->num_rxq, sizeof(u16), GFP_KERNEL); + if (!qid_list) + return -ENOMEM; + + iecm_get_rx_qid_list(vport, qid_list); + + if (indir) { + for (lut = 0; lut < adapter->rss_data.rss_lut_size; lut++) { + int index = indir[lut]; + + adapter->rss_data.rss_lut[lut] = qid_list[index]; + } + } + + kfree(qid_list); + + return iecm_config_rss(vport); +} + +/** + * iecm_get_channels: get the number of channels supported by the device + * @netdev: network interface device structure + * @ch: channel information structure + * + * Report maximum of TX and RX. Report one extra channel to match our mailbox + * Queue. + */ +static void iecm_get_channels(struct net_device *netdev, + struct ethtool_channels *ch) +{ + struct iecm_vport *vport = iecm_netdev_to_vport(netdev); + unsigned int combined; + + combined = min(vport->num_txq, vport->num_rxq); + + /* Report maximum channels */ + ch->max_combined = IECM_MAX_Q; + + ch->max_other = IECM_MAX_NONQ; + ch->other_count = IECM_MAX_NONQ; + + ch->combined_count = combined; + ch->rx_count = vport->num_rxq - combined; + ch->tx_count = vport->num_txq - combined; +} + +/** + * iecm_set_channels: set the new channel count + * @netdev: network interface device structure + * @ch: channel information structure + * + * Negotiate a new number of channels with CP. Returns 0 on success, negative + * on failure. 
+ */ +static int iecm_set_channels(struct net_device *netdev, + struct ethtool_channels *ch) +{ + struct iecm_vport *vport = iecm_netdev_to_vport(netdev); + int num_req_q = ch->combined_count; + + if (num_req_q == max(vport->num_txq, vport->num_rxq)) + return 0; + + vport->adapter->config_data.num_req_qs = num_req_q; + + return iecm_initiate_soft_reset(vport, __IECM_SR_Q_CHANGE); +} + +/** + * iecm_get_ringparam - Get ring parameters + * @netdev: network interface device structure + * @ring: ethtool ringparam structure + * + * Returns current ring parameters. TX and RX rings are reported separately, + * but the number of rings is not reported. + */ +static void iecm_get_ringparam(struct net_device *netdev, + struct ethtool_ringparam *ring) +{ + struct iecm_vport *vport = iecm_netdev_to_vport(netdev); + + ring->rx_max_pending = IECM_MAX_RXQ_DESC; + ring->tx_max_pending = IECM_MAX_TXQ_DESC; + ring->rx_pending = vport->rxq_desc_count; + ring->tx_pending = vport->txq_desc_count; +} + +/** + * iecm_set_ringparam - Set ring parameters + * @netdev: network interface device structure + * @ring: ethtool ringparam structure + * + * Sets ring parameters. TX and RX rings are controlled separately, but the + * number of rings is not specified, so all rings get the same settings. + */ +static int iecm_set_ringparam(struct net_device *netdev, + struct ethtool_ringparam *ring) +{ + struct iecm_vport *vport = iecm_netdev_to_vport(netdev); + u32 new_rx_count, new_tx_count; + + if (ring->rx_mini_pending || ring->rx_jumbo_pending) + return -EINVAL; + + new_tx_count = ALIGN(ring->tx_pending, IECM_REQ_DESC_MULTIPLE); + new_rx_count = ALIGN(ring->rx_pending, IECM_REQ_DESC_MULTIPLE); + + /* if nothing to do return success */ + if (new_tx_count == vport->txq_desc_count && + new_rx_count == vport->rxq_desc_count) + return 0; + + vport->adapter->config_data.num_req_txq_desc = new_tx_count; + vport->adapter->config_data.num_req_rxq_desc = new_rx_count; + + return iecm_initiate_soft_reset(vport, __IECM_SR_Q_DESC_CHANGE); +} + +/** + * struct iecm_stats - definition for an ethtool statistic + * @stat_string: statistic name to display in ethtool -S output + * @sizeof_stat: the sizeof() the stat, must be no greater than sizeof(u64) + * @stat_offset: offsetof() the stat from a base pointer + * + * This structure defines a statistic to be added to the ethtool stats buffer. + * It defines a statistic as offset from a common base pointer. Stats should + * be defined in constant arrays using the IECM_STAT macro, with every element + * of the array using the same _type for calculating the sizeof_stat and + * stat_offset. + * + * The @sizeof_stat is expected to be sizeof(u8), sizeof(u16), sizeof(u32) or + * sizeof(u64). Other sizes are not expected and will produce a WARN_ONCE from + * the iecm_add_ethtool_stat() helper function. + * + * The @stat_string is interpreted as a format string, allowing formatted + * values to be inserted while looping over multiple structures for a given + * statistics array. Thus, every statistic string in an array should have the + * same type and number of format specifiers, to be formatted by variadic + * arguments to the iecm_add_stat_string() helper function. + */ +struct iecm_stats { + char stat_string[ETH_GSTRING_LEN]; + int sizeof_stat; + int stat_offset; +}; + +/* Helper macro to define an iecm_stat structure with proper size and type. + * Use this when defining constant statistics arrays. Note that @_type expects + * only a type name and is used multiple times. 
+ */ +#define IECM_STAT(_type, _name, _stat) { \ + .stat_string = _name, \ + .sizeof_stat = sizeof_field(_type, _stat), \ + .stat_offset = offsetof(_type, _stat) \ +} + +/* Helper macro for defining some statistics related to queues */ +#define IECM_QUEUE_STAT(_name, _stat) \ + IECM_STAT(struct iecm_queue, _name, _stat) + +/* Stats associated with a Tx queue */ +static const struct iecm_stats iecm_gstrings_tx_queue_stats[] = { + IECM_QUEUE_STAT("packets", q_stats.tx.packets), + IECM_QUEUE_STAT("bytes", q_stats.tx.bytes), +}; + +/* Stats associated with an Rx queue */ +static const struct iecm_stats iecm_gstrings_rx_queue_stats[] = { + IECM_QUEUE_STAT("packets", q_stats.rx.packets), + IECM_QUEUE_STAT("bytes", q_stats.rx.bytes), + IECM_QUEUE_STAT("csum_complete", q_stats.rx.csum_complete), + IECM_QUEUE_STAT("csum_unnecessary", q_stats.rx.csum_unnecessary), + IECM_QUEUE_STAT("csum_err", q_stats.rx.csum_err), + IECM_QUEUE_STAT("hsplit", q_stats.rx.hsplit), + IECM_QUEUE_STAT("hsplit_buf_overflow", q_stats.rx.hsplit_hbo), +}; + +#define IECM_TX_QUEUE_STATS_LEN ARRAY_SIZE(iecm_gstrings_tx_queue_stats) +#define IECM_RX_QUEUE_STATS_LEN ARRAY_SIZE(iecm_gstrings_rx_queue_stats) + +/** + * __iecm_add_stat_strings - copy stat strings into ethtool buffer + * @p: ethtool supplied buffer + * @stats: stat definitions array + * @size: size of the stats array + * @type: stat type + * @idx: stat index + * + * Format and copy the strings described by stats into the buffer pointed at + * by p. + */ +static void __iecm_add_stat_strings(u8 **p, const struct iecm_stats stats[], + const unsigned int size, const char *type, + unsigned int idx) +{ + unsigned int i; + + for (i = 0; i < size; i++) { + snprintf((char *)*p, ETH_GSTRING_LEN, + "%.2s-%10u.%.17s", type, idx, stats[i].stat_string); + *p += ETH_GSTRING_LEN; + } +} + +/** + * iecm_add_stat_strings - copy stat strings into ethtool buffer + * @p: ethtool supplied buffer + * @stats: stat definitions array + * @type: stat type + * @idx: stat idx + * + * Format and copy the strings described by the const static stats value into + * the buffer pointed at by p. + * + * The parameter @stats is evaluated twice, so parameters with side effects + * should be avoided. Additionally, stats must be an array such that + * ARRAY_SIZE can be called on it. + */ +#define iecm_add_stat_strings(p, stats, type, idx) \ + __iecm_add_stat_strings(p, stats, ARRAY_SIZE(stats), type, idx) + +/** + * iecm_get_stat_strings - Get stat strings + * @netdev: network interface device structure + * @data: buffer for string data + * + * Builds the statistics string table + */ +static void iecm_get_stat_strings(struct net_device __always_unused *netdev, + u8 *data) +{ + unsigned int i; + + /* It's critical that we always report a constant number of strings and + * that the strings are reported in the same order regardless of how + * many queues are actually in use. 
+ */ + for (i = 0; i < IECM_MAX_Q; i++) + iecm_add_stat_strings(&data, iecm_gstrings_tx_queue_stats, + "tx", i); + for (i = 0; i < IECM_MAX_Q; i++) + iecm_add_stat_strings(&data, iecm_gstrings_rx_queue_stats, + "rx", i); +} + +/** + * iecm_get_strings - Get string set + * @netdev: network interface device structure + * @sset: id of string set + * @data: buffer for string data + * + * Builds string tables for various string sets + */ +static void iecm_get_strings(struct net_device *netdev, u32 sset, u8 *data) +{ + switch (sset) { + case ETH_SS_STATS: + iecm_get_stat_strings(netdev, data); + break; + default: + break; + } +} + +/** + * iecm_get_sset_count - Get length of string set + * @netdev: network interface device structure + * @sset: id of string set + * + * Reports size of various string tables. + */ +static int iecm_get_sset_count(struct net_device __always_unused *netdev, int sset) +{ + if (sset == ETH_SS_STATS) + /* This size reported back here *must* be constant throughout + * the lifecycle of the netdevice, i.e. we must report the + * maximum length even for queues that don't technically exist. + * This is due to the fact that this userspace API uses three + * separate ioctl calls to get stats data but has no way to + * communicate back to userspace when that size has changed, + * which can typically happen as a result of changing number of + * queues. If the number/order of stats change in the middle of + * this call chain it will lead to userspace crashing/accessing + * bad data through buffer under/overflow. + */ + return (IECM_TX_QUEUE_STATS_LEN * IECM_MAX_Q) + + (IECM_RX_QUEUE_STATS_LEN * IECM_MAX_Q); + else + return -EINVAL; +} + +/** + * iecm_add_one_ethtool_stat - copy the stat into the supplied buffer + * @data: location to store the stat value + * @pstat: old stat pointer to copy from + * @stat: the stat definition + * + * Copies the stat data defined by the pointer and stat structure pair into + * the memory supplied as data. Used to implement iecm_add_ethtool_stats and + * iecm_add_queue_stats. If the pointer is null, data will be zero'd. + */ +static void +iecm_add_one_ethtool_stat(u64 *data, void *pstat, + const struct iecm_stats *stat) +{ + char *p; + + if (!pstat) { + /* ensure that the ethtool data buffer is zero'd for any stats + * which don't have a valid pointer. + */ + *data = 0; + return; + } + + p = (char *)pstat + stat->stat_offset; + switch (stat->sizeof_stat) { + case sizeof(u64): + *data = *((u64 *)p); + break; + case sizeof(u32): + *data = *((u32 *)p); + break; + case sizeof(u16): + *data = *((u16 *)p); + break; + case sizeof(u8): + *data = *((u8 *)p); + break; + default: + WARN_ONCE(1, "unexpected stat size for %s", + stat->stat_string); + *data = 0; + } +} + +/** + * iecm_add_queue_stats - copy queue statistics into supplied buffer + * @data: ethtool stats buffer + * @q: the queue to copy + * + * Queue statistics must be copied while protected by + * u64_stats_fetch_begin_irq, so we can't directly use iecm_add_ethtool_stats. + * Assumes that queue stats are defined in iecm_gstrings_queue_stats. If the + * queue pointer is null, zero out the queue stat values and update the data + * pointer. Otherwise safely copy the stats from the queue into the supplied + * buffer and update the data pointer when finished. + * + * This function expects to be called while under rcu_read_lock(). 
+ */ +static void +iecm_add_queue_stats(u64 **data, struct iecm_queue *q) +{ + const struct iecm_stats *stats; + unsigned int start; + unsigned int size; + unsigned int i; + + if (q->q_type == VIRTCHNL_QUEUE_TYPE_RX) { + size = IECM_RX_QUEUE_STATS_LEN; + stats = iecm_gstrings_rx_queue_stats; + } else { + size = IECM_TX_QUEUE_STATS_LEN; + stats = iecm_gstrings_tx_queue_stats; + } + + /* To avoid invalid statistics values, ensure that we keep retrying + * the copy until we get a consistent value according to + * u64_stats_fetch_retry_irq. But first, make sure our queue is + * non-null before attempting to access its syncp. + */ + do { + start = u64_stats_fetch_begin_irq(&q->stats_sync); + for (i = 0; i < size; i++) + iecm_add_one_ethtool_stat(&(*data)[i], q, &stats[i]); + } while (u64_stats_fetch_retry_irq(&q->stats_sync, start)); + + /* Once we successfully copy the stats in, update the data pointer */ + *data += size; +} + +/** + * iecm_add_empty_queue_stats - Add stats for a non-existent queue + * @data: pointer to data buffer + * @qtype: type of data queue + * + * We must report a constant length of stats back to userspace regardless of + * how many queues are actually in use because stats collection happens over + * three separate ioctls and there's no way to notify userspace the size + * changed between those calls. This adds empty to data to the stats since we + * don't have a real queue to refer to for this stats slot. + */ +static void +iecm_add_empty_queue_stats(u64 **data, enum virtchnl_queue_type qtype) +{ + unsigned int i; + int stats_len; + + if (qtype == VIRTCHNL_QUEUE_TYPE_RX) + stats_len = IECM_RX_QUEUE_STATS_LEN; + else + stats_len = IECM_TX_QUEUE_STATS_LEN; + + for (i = 0; i < stats_len; i++) + (*data)[i] = 0; + *data += stats_len; +} + +/** + * iecm_get_ethtool_stats - report device statistics + * @netdev: network interface device structure + * @stats: ethtool statistics structure + * @data: pointer to data buffer + * + * All statistics are added to the data buffer as an array of u64. + */ +static void iecm_get_ethtool_stats(struct net_device *netdev, + struct ethtool_stats __always_unused *stats, + u64 *data) +{ + struct iecm_vport *vport = iecm_netdev_to_vport(netdev); + enum virtchnl_queue_type qtype; + unsigned int total = 0; + unsigned int i, j; + + if (vport->adapter->state != __IECM_UP) + return; + + rcu_read_lock(); + for (i = 0; i < vport->num_txq_grp; i++) { + struct iecm_txq_group *txq_grp = &vport->txq_grps[i]; + + qtype = VIRTCHNL_QUEUE_TYPE_TX; + + for (j = 0; j < txq_grp->num_txq; j++, total++) { + struct iecm_queue *txq = &txq_grp->txqs[j]; + + if (!txq) + iecm_add_empty_queue_stats(&data, qtype); + else + iecm_add_queue_stats(&data, txq); + } + } + /* It is critical we provide a constant number of stats back to + * userspace regardless of how many queues are actually in use because + * there is no way to inform userspace the size has changed between + * ioctl calls. This will fill in any missing stats with zero. 
+ */ + for (; total < IECM_MAX_Q; total++) + iecm_add_empty_queue_stats(&data, VIRTCHNL_QUEUE_TYPE_TX); + total = 0; + + for (i = 0; i < vport->num_rxq_grp; i++) { + struct iecm_rxq_group *rxq_grp = &vport->rxq_grps[i]; + int num_rxq; + + qtype = VIRTCHNL_QUEUE_TYPE_RX; + + if (iecm_is_queue_model_split(vport->rxq_model)) + num_rxq = rxq_grp->splitq.num_rxq_sets; + else + num_rxq = rxq_grp->singleq.num_rxq; + + for (j = 0; j < num_rxq; j++, total++) { + struct iecm_queue *rxq; + + if (iecm_is_queue_model_split(vport->rxq_model)) + rxq = &rxq_grp->splitq.rxq_sets[j].rxq; + else + rxq = &rxq_grp->singleq.rxqs[j]; + if (!rxq) + iecm_add_empty_queue_stats(&data, qtype); + else + iecm_add_queue_stats(&data, rxq); + } + } + for (; total < IECM_MAX_Q; total++) + iecm_add_empty_queue_stats(&data, VIRTCHNL_QUEUE_TYPE_RX); + rcu_read_unlock(); +} + +/** + * iecm_find_rxq - find rxq from q index + * @vport: virtual port associated to queue + * @q_num: q index used to find queue + * + * returns pointer to Rx queue + */ +static struct iecm_queue * +iecm_find_rxq(struct iecm_vport *vport, int q_num) +{ + struct iecm_queue *rxq; + int q_grp, q_idx; + + if (iecm_is_queue_model_split(vport->rxq_model)) { + q_grp = q_num / IECM_DFLT_SPLITQ_RXQ_PER_GROUP; + q_idx = q_num % IECM_DFLT_SPLITQ_RXQ_PER_GROUP; + + rxq = &vport->rxq_grps[q_grp].splitq.rxq_sets[q_idx].rxq; + } else { + q_grp = q_num / IECM_DFLT_SINGLEQ_RXQ_PER_GROUP; + q_idx = q_num % IECM_DFLT_SINGLEQ_RXQ_PER_GROUP; + + rxq = &vport->rxq_grps[q_grp].singleq.rxqs[q_idx]; + } + + return rxq; +} + +/** + * iecm_find_txq - find txq from q index + * @vport: virtual port associated to queue + * @q_num: q index used to find queue + * + * returns pointer to Tx queue + */ +static struct iecm_queue * +iecm_find_txq(struct iecm_vport *vport, int q_num) +{ + struct iecm_queue *txq; + + if (iecm_is_queue_model_split(vport->txq_model)) { + int q_grp = q_num / IECM_DFLT_SPLITQ_TXQ_PER_GROUP; + + txq = vport->txq_grps[q_grp].complq; + } else { + txq = vport->txqs[q_num]; + } + + return txq; +} + +/** + * __iecm_get_q_coalesce - get ITR values for specific queue + * @ec: ethtool structure to fill with driver's coalesce settings + * @q: queue of Rx or Tx + */ +static void +__iecm_get_q_coalesce(struct ethtool_coalesce *ec, struct iecm_queue *q) +{ + u16 itr_setting; + bool dyn_ena; + + itr_setting = IECM_ITR_SETTING(q->itr.target_itr); + dyn_ena = IECM_ITR_IS_DYNAMIC(q->itr.target_itr); + if (q->q_type == VIRTCHNL_QUEUE_TYPE_RX) { + ec->use_adaptive_rx_coalesce = dyn_ena; + ec->rx_coalesce_usecs = itr_setting; + } else { + ec->use_adaptive_tx_coalesce = dyn_ena; + ec->tx_coalesce_usecs = itr_setting; + } +} + +/** + * iecm_get_q_coalesce - get ITR values for specific queue + * @netdev: pointer to the netdev associated with this query + * @ec: coalesce settings to program the device with + * @q_num: update ITR/INTRL (coalesce) settings for this queue number/index + * + * Return 0 on success, and negative on failure + */ +static int +iecm_get_q_coalesce(struct net_device *netdev, struct ethtool_coalesce *ec, + u32 q_num) +{ + struct iecm_vport *vport = iecm_netdev_to_vport(netdev); + + if (vport->adapter->state != __IECM_UP) + return 0; + + if (q_num >= vport->num_rxq && q_num >= vport->num_txq) + return -EINVAL; + + if (q_num < vport->num_rxq) { + struct iecm_queue *rxq = iecm_find_rxq(vport, q_num); + + __iecm_get_q_coalesce(ec, rxq); + } + + if (q_num < vport->num_txq) { + struct iecm_queue *txq = iecm_find_txq(vport, q_num); + + __iecm_get_q_coalesce(ec, txq); + } 
+ + return 0; +} + +/** + * iecm_get_coalesce - get ITR values as requested by user + * @netdev: pointer to the netdev associated with this query + * @ec: coalesce settings to be filled + * + * Return 0 on success, and negative on failure + */ +static int +iecm_get_coalesce(struct net_device *netdev, struct ethtool_coalesce *ec) +{ + /* Return coalesce based on queue number zero */ + return iecm_get_q_coalesce(netdev, ec, 0); +} + +/** + * iecm_get_per_q_coalesce - get ITR values as requested by user + * @netdev: pointer to the netdev associated with this query + * @q_num: queue for which the ITR values has to retrieved + * @ec: coalesce settings to be filled + * + * Return 0 on success, and negative on failure + */ + +static int +iecm_get_per_q_coalesce(struct net_device *netdev, u32 q_num, + struct ethtool_coalesce *ec) +{ + return iecm_get_q_coalesce(netdev, ec, q_num); +} + +/** + * __iecm_set_q_coalesce - set ITR values for specific queue + * @ec: ethtool structure from user to update ITR settings + * @q: queue for which ITR values has to be set + * + * Returns 0 on success, negative otherwise. + */ +static int +__iecm_set_q_coalesce(struct ethtool_coalesce *ec, struct iecm_queue *q) +{ + const char *q_type_str = (q->q_type == VIRTCHNL_QUEUE_TYPE_RX) + ? "Rx" : "Tx"; + u32 use_adaptive_coalesce, coalesce_usecs; + struct iecm_vport *vport; + u16 itr_setting; + + itr_setting = IECM_ITR_SETTING(q->itr.target_itr); + vport = q->vport; + if (q->q_type == VIRTCHNL_QUEUE_TYPE_RX) { + use_adaptive_coalesce = ec->use_adaptive_rx_coalesce; + coalesce_usecs = ec->rx_coalesce_usecs; + } else { + use_adaptive_coalesce = ec->use_adaptive_tx_coalesce; + coalesce_usecs = ec->tx_coalesce_usecs; + } + + if (itr_setting != coalesce_usecs && use_adaptive_coalesce) { + netdev_info(vport->netdev, "%s ITR cannot be changed if adaptive-%s is enabled\n", + q_type_str, q_type_str); + return -EINVAL; + } + + if (coalesce_usecs > IECM_ITR_MAX) { + netdev_info(vport->netdev, + "Invalid value, %d-usecs range is 0-%d\n", + coalesce_usecs, IECM_ITR_MAX); + return -EINVAL; + } + + if (coalesce_usecs % 2 != 0) { + coalesce_usecs = coalesce_usecs & 0xFFFFFFFE; + netdev_info(vport->netdev, "HW only supports even ITR values, ITR rounded to %d\n", + coalesce_usecs); + } + + q->itr.target_itr = coalesce_usecs; + if (use_adaptive_coalesce) + q->itr.target_itr |= IECM_ITR_DYNAMIC; + /* Update of static/dynamic ITR will be taken care when interrupt is + * fired + */ + return 0; +} + +/** + * iecm_set_q_coalesce - set ITR values for specific queue + * @vport: vport associated to the queue that need updating + * @ec: coalesce settings to program the device with + * @q_num: update ITR/INTRL (coalesce) settings for this queue number/index + * @is_rxq: is queue type Rx + * + * Return 0 on success, and negative on failure + */ +static int +iecm_set_q_coalesce(struct iecm_vport *vport, struct ethtool_coalesce *ec, + int q_num, bool is_rxq) +{ + struct iecm_queue *q; + + if (is_rxq) + q = iecm_find_rxq(vport, q_num); + else + q = iecm_find_txq(vport, q_num); + + if (q && __iecm_set_q_coalesce(ec, q)) + return -EINVAL; + + return 0; +} + +/** + * iecm_set_coalesce - set ITR values as requested by user + * @netdev: pointer to the netdev associated with this query + * @ec: coalesce settings to program the device with + * + * Return 0 on success, and negative on failure + */ +static int +iecm_set_coalesce(struct net_device *netdev, struct ethtool_coalesce *ec) +{ + struct iecm_vport *vport = iecm_netdev_to_vport(netdev); + int i, err = 
0; + + for (i = 0; i < vport->num_txq; i++) { + err = iecm_set_q_coalesce(vport, ec, i, false); + if (err) + return err; + } + + for (i = 0; i < vport->num_rxq; i++) { + err = iecm_set_q_coalesce(vport, ec, i, true); + if (err) + return err; + } + return 0; +} + +/** + * iecm_set_per_q_coalesce - set ITR values as requested by user + * @netdev: pointer to the netdev associated with this query + * @q_num: queue for which the ITR values has to be set + * @ec: coalesce settings to program the device with + * + * Return 0 on success, and negative on failure + */ +static int +iecm_set_per_q_coalesce(struct net_device *netdev, u32 q_num, + struct ethtool_coalesce *ec) +{ + struct iecm_vport *vport = iecm_netdev_to_vport(netdev); + int err; + + err = iecm_set_q_coalesce(vport, ec, q_num, false); + if (!err) + err = iecm_set_q_coalesce(vport, ec, q_num, true); + + return err; +} + +/** + * iecm_get_msglevel - Get debug message level + * @netdev: network interface device structure + * + * Returns current debug message level. + */ +static u32 iecm_get_msglevel(struct net_device *netdev) +{ + struct iecm_adapter *adapter = iecm_netdev_to_adapter(netdev); + + return adapter->msg_enable; +} + +/** + * iecm_set_msglevel - Set debug message level + * @netdev: network interface device structure + * @data: message level + * + * Set current debug message level. Higher values cause the driver to + * be noisier. + */ +static void iecm_set_msglevel(struct net_device *netdev, u32 data) +{ + struct iecm_adapter *adapter = iecm_netdev_to_adapter(netdev); + + adapter->msg_enable = data; +} + +/** + * iecm_get_link_ksettings - Get Link Speed and Duplex settings + * @netdev: network interface device structure + * @cmd: ethtool command + * + * Reports speed/duplex settings. + **/ +static int iecm_get_link_ksettings(struct net_device *netdev, + struct ethtool_link_ksettings *cmd) +{ + struct iecm_netdev_priv *np = netdev_priv(netdev); + struct iecm_adapter *adapter = np->vport->adapter; + + ethtool_link_ksettings_zero_link_mode(cmd, supported); + cmd->base.autoneg = AUTONEG_DISABLE; + cmd->base.port = PORT_NONE; + /* Set speed and duplex */ + switch (adapter->link_speed) { + case VIRTCHNL_LINK_SPEED_40GB: + cmd->base.speed = SPEED_40000; + break; + case VIRTCHNL_LINK_SPEED_25GB: + cmd->base.speed = SPEED_25000; + break; + case VIRTCHNL_LINK_SPEED_20GB: + cmd->base.speed = SPEED_20000; + break; + case VIRTCHNL_LINK_SPEED_10GB: + cmd->base.speed = SPEED_10000; + break; + case VIRTCHNL_LINK_SPEED_1GB: + cmd->base.speed = SPEED_1000; + break; + case VIRTCHNL_LINK_SPEED_100MB: + cmd->base.speed = SPEED_100; + break; + default: + break; + } + cmd->base.duplex = DUPLEX_FULL; + + return 0; +} + +/** + * iecm_get_drvinfo - Get driver info + * @netdev: network interface device structure + * @drvinfo: ethtool driver info structure + * + * Returns information about the driver and device for display to the user. 
+ */ +static void iecm_get_drvinfo(struct net_device *netdev, + struct ethtool_drvinfo *drvinfo) +{ + struct iecm_adapter *adapter = iecm_netdev_to_adapter(netdev); + + strlcpy(drvinfo->driver, iecm_drv_name, 32); + strlcpy(drvinfo->bus_info, pci_name(adapter->pdev), 32); +} + +static const struct ethtool_ops iecm_ethtool_ops = { + .supported_coalesce_params = ETHTOOL_COALESCE_USECS | + ETHTOOL_COALESCE_USE_ADAPTIVE, + .get_drvinfo = iecm_get_drvinfo, + .get_msglevel = iecm_get_msglevel, + .set_msglevel = iecm_set_msglevel, + .get_coalesce = iecm_get_coalesce, + .set_coalesce = iecm_set_coalesce, + .get_per_queue_coalesce = iecm_get_per_q_coalesce, + .set_per_queue_coalesce = iecm_set_per_q_coalesce, + .get_ethtool_stats = iecm_get_ethtool_stats, + .get_strings = iecm_get_strings, + .get_sset_count = iecm_get_sset_count, + .get_rxnfc = iecm_get_rxnfc, + .get_rxfh_key_size = iecm_get_rxfh_key_size, + .get_rxfh_indir_size = iecm_get_rxfh_indir_size, + .get_rxfh = iecm_get_rxfh, + .set_rxfh = iecm_set_rxfh, + .get_channels = iecm_get_channels, + .set_channels = iecm_set_channels, + .get_ringparam = iecm_get_ringparam, + .set_ringparam = iecm_set_ringparam, + .get_link_ksettings = iecm_get_link_ksettings, +}; + /** * iecm_set_ethtool_ops - Initialize ethtool ops struct * @netdev: network interface device structure @@ -12,5 +1044,5 @@ */ void iecm_set_ethtool_ops(struct net_device *netdev) { - /* stub */ + netdev->ethtool_ops = &iecm_ethtool_ops; } diff --git a/drivers/net/ethernet/intel/iecm/iecm_lib.c b/drivers/net/ethernet/intel/iecm/iecm_lib.c index 16f92dc4c77a..1a8968c4070c 100644 --- a/drivers/net/ethernet/intel/iecm/iecm_lib.c +++ b/drivers/net/ethernet/intel/iecm/iecm_lib.c @@ -840,7 +840,37 @@ static void iecm_deinit_task(struct iecm_adapter *adapter) */ static int iecm_init_hard_reset(struct iecm_adapter *adapter) { - /* stub */ + int err = 0; + + /* Prepare for reset */ + if (test_bit(__IECM_HR_FUNC_RESET, adapter->flags)) { + iecm_deinit_task(adapter); + adapter->dev_ops.reg_ops.trigger_reset(adapter, + __IECM_HR_FUNC_RESET); + set_bit(__IECM_UP_REQUESTED, adapter->flags); + clear_bit(__IECM_HR_FUNC_RESET, adapter->flags); + } else if (test_bit(__IECM_HR_CORE_RESET, adapter->flags)) { + if (adapter->state == __IECM_UP) + set_bit(__IECM_UP_REQUESTED, adapter->flags); + iecm_deinit_task(adapter); + clear_bit(__IECM_HR_CORE_RESET, adapter->flags); + } else if (test_and_clear_bit(__IECM_HR_DRV_LOAD, adapter->flags)) { + /* Trigger reset */ + } else { + dev_err(&adapter->pdev->dev, "Unhandled hard reset cause\n"); + err = -EBADRQC; + goto handle_err; + } + + /* Reset is complete and so start building the driver resources again */ + err = iecm_init_dflt_mbx(adapter); + if (err) { + dev_err(&adapter->pdev->dev, "Failed to initialize default mailbox: %d\n", + err); + } +handle_err: + mutex_unlock(&adapter->reset_lock); + return err; } /** @@ -869,7 +899,109 @@ static void iecm_vc_event_task(struct work_struct *work) int iecm_initiate_soft_reset(struct iecm_vport *vport, enum iecm_flags reset_cause) { - /* stub */ + enum iecm_state current_state = vport->adapter->state; + struct iecm_adapter *adapter = vport->adapter; + struct iecm_vport *old_vport; + int err = 0; + + /* make sure we do not end up in initiating multiple resets */ + mutex_lock(&adapter->reset_lock); + + /* If the system is low on memory, we can end up in bad state if we + * free all the memory for queue resources and try to allocate them + * again. 
Instead, we can pre-allocate the new resources before doing + * anything and bailing if the alloc fails. + * + * Here we make a clone to act as a handle to old resources, then do a + * new alloc. If successful then we'll stop the clone, free the old + * resources, and continue with reset on new vport resources. On error + * copy clone back to vport to get back to a good state and return + * error. + * + * We also want to be careful we don't invalidate any pre-existing + * pointers to vports prior to calling this. + */ + old_vport = kzalloc(sizeof(*vport), GFP_KERNEL); + if (!old_vport) { + mutex_unlock(&adapter->reset_lock); + return -ENOMEM; + } + memcpy(old_vport, vport, sizeof(*vport)); + + /* Adjust resource parameters prior to reallocating resources */ + switch (reset_cause) { + case __IECM_SR_Q_CHANGE: + adapter->dev_ops.vc_ops.adjust_qs(vport); + break; + case __IECM_SR_Q_DESC_CHANGE: + /* Update queue parameters before allocating resources */ + iecm_vport_calc_num_q_desc(vport); + break; + case __IECM_SR_Q_SCH_CHANGE: + case __IECM_SR_MTU_CHANGE: + break; + default: + dev_err(&adapter->pdev->dev, "Unhandled soft reset cause\n"); + err = -EINVAL; + goto err_reset; + } + + /* It's important we pass in the vport pointer so that when + * back-pointers are setup in queue allocs, they get the right pointer. + */ + err = iecm_vport_res_alloc(vport); + if (err) + goto err_mem_alloc; + + if (!test_bit(__IECM_NO_EXTENDED_CAPS, adapter->flags)) { + if (current_state <= __IECM_DOWN) { + iecm_send_delete_queues_msg(old_vport); + } else { + set_bit(__IECM_DEL_QUEUES, adapter->flags); + iecm_vport_stop(old_vport); + } + + iecm_deinit_rss(old_vport); + err = iecm_send_add_queues_msg(vport, vport->num_txq, + vport->num_complq, + vport->num_rxq, + vport->num_bufq); + if (err) + goto err_reset; + } + + /* Post resource allocation reset */ + switch (reset_cause) { + case __IECM_SR_Q_CHANGE: + iecm_intr_rel(adapter); + iecm_intr_req(adapter); + break; + case __IECM_SR_Q_DESC_CHANGE: + case __IECM_SR_Q_SCH_CHANGE: + case __IECM_SR_MTU_CHANGE: + iecm_vport_stop(old_vport); + break; + default: + dev_err(&adapter->pdev->dev, "Unhandled soft reset cause\n"); + err = -EINVAL; + goto err_reset; + } + + /* free the old resources */ + iecm_vport_res_free(old_vport); + kfree(old_vport); + + if (current_state == __IECM_UP) + err = iecm_vport_open(vport); + mutex_unlock(&adapter->reset_lock); + return err; +err_reset: + iecm_vport_res_free(vport); +err_mem_alloc: + memcpy(vport, old_vport, sizeof(*vport)); + kfree(old_vport); + mutex_unlock(&adapter->reset_lock); + return err; } /** @@ -961,6 +1093,7 @@ int iecm_probe(struct pci_dev *pdev, INIT_DELAYED_WORK(&adapter->init_task, iecm_init_task); INIT_DELAYED_WORK(&adapter->vc_event_task, iecm_vc_event_task); + adapter->dev_ops.reg_ops.reset_reg_init(&adapter->reset_reg); mutex_lock(&adapter->reset_lock); set_bit(__IECM_HR_DRV_LOAD, adapter->flags); err = iecm_init_hard_reset(adapter); @@ -1054,7 +1187,11 @@ static int iecm_open(struct net_device *netdev) */ static int iecm_change_mtu(struct net_device *netdev, int new_mtu) { - /* stub */ + struct iecm_vport *vport = iecm_netdev_to_vport(netdev); + + netdev->mtu = new_mtu; + + return iecm_initiate_soft_reset(vport, __IECM_SR_MTU_CHANGE); } void *iecm_alloc_dma_mem(struct iecm_hw *hw, struct iecm_dma_mem *mem, u64 size) From patchwork Mon Aug 24 17:33:05 2020 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Tony Nguyen X-Patchwork-Id: 261997 
From: Tony Nguyen To: davem@davemloft.net Cc: Alice Michael , netdev@vger.kernel.org, nhorman@redhat.com, sassmann@redhat.com, jeffrey.t.kirsher@intel.com, anthony.l.nguyen@intel.com, Alan Brady , Phani Burra , Joshua Hay , Madhu Chittim , Pavan Kumar Linga , Donald Skidmore , Jesse Brandeburg , Sridhar Samudrala Subject: [net-next v5 14/15] iecm: Add iecm to the kernel build system Date: Mon, 24 Aug 2020 10:33:05 -0700 Message-Id: <20200824173306.3178343-15-anthony.l.nguyen@intel.com> X-Mailer: git-send-email 2.26.2 In-Reply-To: <20200824173306.3178343-1-anthony.l.nguyen@intel.com> References: <20200824173306.3178343-1-anthony.l.nguyen@intel.com> MIME-Version: 1.0 Sender: netdev-owner@vger.kernel.org Precedence: bulk List-ID: X-Mailing-List: netdev@vger.kernel.org From: Alice Michael This introduces iecm as a module to the kernel, and adds relevant documentation.
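To make the intended consumption model concrete, the sketch below shows how a dependent device driver would typically hook into this library as described in the documentation added by this patch: allocate an iecm_adapter, assign at least a reg_ops_init callback through its dev_ops, and hand the adapter to iecm_probe from its own PCI probe routine, calling iecm_remove on teardown. Only iecm_probe, iecm_remove, iecm_adapter, dev_ops, reg_ops_init, and vc_ops_init are taken from this series; the my_* names, the header path, the exact argument order of iecm_probe, and the callback prototypes are illustrative assumptions.

/* Illustrative sketch only; prototypes, field layout and argument order
 * are assumed rather than taken from this series.
 */
#include <linux/module.h>
#include <linux/pci.h>
#include <linux/net/intel/iecm.h>	/* assumed header name */

struct my_priv {
	struct iecm_adapter adapter;	/* library state embedded in driver state */
};

/* Required hook: assign the device specific register init callbacks
 * (control queue, vport queue and interrupt register setup).
 */
static void my_reg_ops_init(struct iecm_adapter *adapter)
{
	/* e.g. adapter->dev_ops.reg_ops.ctlq_reg_init = my_ctlq_reg_init; */
}

static int my_probe(struct pci_dev *pdev, const struct pci_device_id *ent)
{
	struct my_priv *priv;

	priv = devm_kzalloc(&pdev->dev, sizeof(*priv), GFP_KERNEL);
	if (!priv)
		return -ENOMEM;

	priv->adapter.dev_ops.reg_ops_init = my_reg_ops_init;
	/* Optional: priv->adapter.dev_ops.vc_ops_init for drivers that
	 * provide their own mailbox/virtchnl handling.
	 */

	return iecm_probe(pdev, ent, &priv->adapter);
}

static void my_remove(struct pci_dev *pdev)
{
	iecm_remove(pdev);
}

static struct pci_driver my_driver = {
	.name	= "my_iecm_consumer",
	.probe	= my_probe,
	.remove	= my_remove,
	/* .id_table would list the device IDs this driver binds to */
};
module_pci_driver(my_driver);
MODULE_LICENSE("GPL v2");

This mirrors the split the documentation describes: the register layout is always device specific and must be supplied, while the virtchnl path has a usable default.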
Signed-off-by: Alice Michael Signed-off-by: Alan Brady Signed-off-by: Phani Burra Signed-off-by: Joshua Hay Signed-off-by: Madhu Chittim Signed-off-by: Pavan Kumar Linga Reviewed-by: Donald Skidmore Reviewed-by: Jesse Brandeburg Reviewed-by: Sridhar Samudrala Signed-off-by: Jeff Kirsher Signed-off-by: Tony Nguyen --- .../device_drivers/ethernet/intel/iecm.rst | 93 +++++++++++++++++++ MAINTAINERS | 1 + drivers/net/ethernet/intel/Kconfig | 10 ++ drivers/net/ethernet/intel/Makefile | 1 + drivers/net/ethernet/intel/iecm/Makefile | 18 ++++ 5 files changed, 123 insertions(+) create mode 100644 Documentation/networking/device_drivers/ethernet/intel/iecm.rst create mode 100644 drivers/net/ethernet/intel/iecm/Makefile diff --git a/Documentation/networking/device_drivers/ethernet/intel/iecm.rst b/Documentation/networking/device_drivers/ethernet/intel/iecm.rst new file mode 100644 index 000000000000..aefa6bd1094d --- /dev/null +++ b/Documentation/networking/device_drivers/ethernet/intel/iecm.rst @@ -0,0 +1,93 @@ +.. SPDX-License-Identifier: GPL-2.0 + +============================ +Intel Ethernet Common Module +============================ + +The Intel Ethernet Common Module is meant to serve as an abstraction layer +between device specific implementation details and common net device driver +flows. This library provides several function hooks which allow a device driver +to specify register addresses, control queue communications, and other device +specific functionality. Some functions are required to be implemented while +others have a default implementation that is used when none is supplied by the +device driver. Doing this, a device driver can be written to take advantage +of existing code while also giving the flexibility to implement device specific +features. + +The common use case for this library is for a network device driver that wants +to specify its own device specific details but also leverage the more common +code flows found in network device drivers. + +Sections in this document: + Entry Point + Exit Point + Register Operations API + Virtchnl Operations API + +Entry Point +~~~~~~~~~~~ +The primary entry point to the library is the iecm_probe function. Prior to +calling this, device drivers must have allocated an iecm_adapter struct and +initialized it with the required API functions. The adapter struct, along with +the pci_dev struct and the pci_device_id struct, is provided to iecm_probe +which finalizes device initialization and prepares the device for open. + +The iecm_dev_ops struct within the iecm_adapter struct is the primary vehicle +for passing information from device drivers to the common module. A dependent +module must define and assign a reg_ops_init function which will assign the +respective function pointers to initialize register values (see iecm_reg_ops +struct). These are required to be provided by the dependent device driver as +no suitable default can be assumed for register addresses. + +The vc_ops_init function pointer and the related iecm_virtchnl_ops struct are +optional and should only be necessary for device drivers which use a different +method/timing for communicating across a mailbox to the hardware. Within iecm +is a default interface provided in the case where one is not provided by the +device driver. + +Exit Point +~~~~~~~~~~ +When the device driver is being prepared to be removed through the pci_driver +remove callback, it is required for the device driver to call iecm_remove with +the pci_dev struct provided. 
This is required to ensure all resources are +properly freed and returned to the operating system. + +Register Operations API +~~~~~~~~~~~~~~~~~~~~~~~ +iecm_reg_ops contains three different function pointers relating to initializing +registers for the specific net device using the library. + +ctlq_reg_init relates specifically to setting up registers related to control +queue/mailbox communications. Registers that should be defined include: head, +tail, len, bah, bal, len_mask, len_ena_mask, and head_mask. + +vportq_reg_init relates to setting up queue registers. The tail registers to +be assigned to the iecm_queue struct for each RX/TX queue. + +intr_reg_init relates to any registers needed to setup interrupts. These +include registers needed to enable the interrupt and change ITR settings. + +If the initialization function finds that one or more required function +pointers were not provided, an error will be issued and the device will be +inoperable. + + +Virtchnl Operations API +~~~~~~~~~~~~~~~~~~~~~~~ +The virtchnl is a conduit between driver and hardware that allows device +drivers to send and receive control messages to/from hardware. This is +optional to be specified as there is a general interface that can be assumed +when using this library. However, if a device deviates in some way to +communicate across the mailbox correctly, this interface is provided to allow +that. + +If vc_ops_init is set in the dev_ops field of the iecm_adapter struct, then it +is assumed the device driver is providing its own interface to do virtchnl +communications. While providing vc_ops_init is optional, if it is provided, it +is required that the device driver provide function pointers for those functions +in vc_ops, with exception for the enable_vport, disable_vport, and destroy_vport +functions which may not be required for all devices. + +If the initialization function finds that vc_ops_init was defined but one or +more required function pointers were not provided, an error will be issued and +the device will be inoperable. diff --git a/MAINTAINERS b/MAINTAINERS index 36ec0bd50a8f..88a52a276eee 100644 --- a/MAINTAINERS +++ b/MAINTAINERS @@ -8753,6 +8753,7 @@ F: Documentation/networking/device_drivers/ethernet/intel/ F: drivers/net/ethernet/intel/ F: drivers/net/ethernet/intel/*/ F: include/linux/avf/virtchnl.h +F: include/linux/net/intel/ INTEL FRAMEBUFFER DRIVER (excluding 810 and 815) M: Maik Broemme diff --git a/drivers/net/ethernet/intel/Kconfig b/drivers/net/ethernet/intel/Kconfig index 5aa86318ed3e..04de64ecb06d 100644 --- a/drivers/net/ethernet/intel/Kconfig +++ b/drivers/net/ethernet/intel/Kconfig @@ -343,4 +343,14 @@ config IGC To compile this driver as a module, choose M here. The module will be called igc. +config IECM + tristate "Intel(R) Ethernet Common Module Support" + default n + depends on PCI + help + To compile this driver as a module, choose M here. The module + will be called iecm. MSI-X interrupt support is required + + More specific information on configuring the driver is in + . 
endif # NET_VENDOR_INTEL diff --git a/drivers/net/ethernet/intel/Makefile b/drivers/net/ethernet/intel/Makefile index 3075290063f6..c9eba9cc5087 100644 --- a/drivers/net/ethernet/intel/Makefile +++ b/drivers/net/ethernet/intel/Makefile @@ -16,3 +16,4 @@ obj-$(CONFIG_IXGB) += ixgb/ obj-$(CONFIG_IAVF) += iavf/ obj-$(CONFIG_FM10K) += fm10k/ obj-$(CONFIG_ICE) += ice/ +obj-$(CONFIG_IECM) += iecm/ diff --git a/drivers/net/ethernet/intel/iecm/Makefile b/drivers/net/ethernet/intel/iecm/Makefile new file mode 100644 index 000000000000..ace32cb7472c --- /dev/null +++ b/drivers/net/ethernet/intel/iecm/Makefile @@ -0,0 +1,18 @@ +# SPDX-License-Identifier: GPL-2.0-only +# Copyright (C) 2020 Intel Corporation + +# +# Makefile for the Intel(R) Ethernet Common Module +# + +obj-$(CONFIG_IECM) += iecm.o + +iecm-y := \ + iecm_lib.o \ + iecm_virtchnl.o \ + iecm_txrx.o \ + iecm_singleq_txrx.o \ + iecm_ethtool.o \ + iecm_controlq.o \ + iecm_controlq_setup.o \ + iecm_main.o
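As a closing illustration for the series, the clone-then-reallocate strategy spelled out in the comment block of iecm_initiate_soft_reset (in the iecm_lib.c changes earlier in the series) is a reusable error-handling shape: keep a copy of the old object as a fallback, build the new resources, and only release the old ones once the new allocation has succeeded. The sketch below is a generic distillation of that pattern, not the driver's code; my_object, my_resource_alloc, and my_resource_free are hypothetical.

#include <linux/slab.h>
#include <linux/string.h>

struct my_object {
	void *resources;	/* stands in for queue/vport resources */
};

/* Hypothetical helpers standing in for iecm_vport_res_alloc()/_free() */
int my_resource_alloc(struct my_object *obj);
void my_resource_free(struct my_object *obj);

int my_object_reconfigure(struct my_object *obj)
{
	struct my_object *old;
	int err;

	/* Snapshot the known-good state before touching anything */
	old = kmemdup(obj, sizeof(*obj), GFP_KERNEL);
	if (!old)
		return -ENOMEM;

	err = my_resource_alloc(obj);
	if (err)
		goto revert;

	/* New resources are in place; now it is safe to drop the old ones */
	my_resource_free(old);
	kfree(old);
	return 0;

revert:
	memcpy(obj, old, sizeof(*obj));	/* fall back to the saved state */
	kfree(old);
	return err;
}

The same shape appears in iecm_initiate_soft_reset, which additionally adjusts queue parameters before the new allocation and restores the saved vport on any failure path.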