From patchwork Tue Jun 23 22:40:29 2020
X-Patchwork-Submitter: Jeff Kirsher
X-Patchwork-Id: 217252
From: Jeff Kirsher
To: davem@davemloft.net
Cc: Alice Michael, netdev@vger.kernel.org, nhorman@redhat.com,
 sassmann@redhat.com, Alan Brady, Phani Burra, Joshua Hay, Madhu Chittim,
 Pavan Kumar Linga, Donald Skidmore, Jesse Brandeburg, Sridhar Samudrala,
 Jeff Kirsher
Subject: [net-next v2 01/15] virtchnl: Extend AVF ops
Date: Tue, 23 Jun 2020 15:40:29 -0700
Message-Id: <20200623224043.801728-2-jeffrey.t.kirsher@intel.com>
In-Reply-To: <20200623224043.801728-1-jeffrey.t.kirsher@intel.com>
References: <20200623224043.801728-1-jeffrey.t.kirsher@intel.com>

From: Alice Michael

This implements the next generation of virtchnl operations, which enable
greater functionality and capabilities.

Signed-off-by: Alice Michael
Signed-off-by: Alan Brady
Signed-off-by: Phani Burra
Signed-off-by: Joshua Hay
Signed-off-by: Madhu Chittim
Signed-off-by: Pavan Kumar Linga
Reviewed-by: Donald Skidmore
Reviewed-by: Jesse Brandeburg
Reviewed-by: Sridhar Samudrala
Signed-off-by: Jeff Kirsher
---
 include/linux/avf/virtchnl.h | 592 +++++++++++++++++++++++++++++++++++
 1 file changed, 592 insertions(+)

diff --git a/include/linux/avf/virtchnl.h b/include/linux/avf/virtchnl.h
index 40bad71865ea..a967ea2c1248 100644
--- a/include/linux/avf/virtchnl.h
+++ b/include/linux/avf/virtchnl.h
@@ -136,6 +136,34 @@ enum virtchnl_ops {
 	VIRTCHNL_OP_DISABLE_CHANNELS = 31,
 	VIRTCHNL_OP_ADD_CLOUD_FILTER = 32,
 	VIRTCHNL_OP_DEL_CLOUD_FILTER = 33,
+	/* A new major set of opcodes starts at 100, leaving room for
+	 * additional legacy opcodes to be added in the future. These opcodes
+	 * may only be used if both the PF and VF have successfully negotiated
+	 * the VIRTCHNL_VF_CAP_EXT_FEATURES capability during the initial
+	 * capabilities exchange.
+	 */
+	VIRTCHNL_OP_GET_CAPS = 100,
+	VIRTCHNL_OP_CREATE_VPORT = 101,
+	VIRTCHNL_OP_DESTROY_VPORT = 102,
+	VIRTCHNL_OP_ENABLE_VPORT = 103,
+	VIRTCHNL_OP_DISABLE_VPORT = 104,
+	VIRTCHNL_OP_CONFIG_TX_QUEUES = 105,
+	VIRTCHNL_OP_CONFIG_RX_QUEUES = 106,
+	VIRTCHNL_OP_ENABLE_QUEUES_V2 = 107,
+	VIRTCHNL_OP_DISABLE_QUEUES_V2 = 108,
+	VIRTCHNL_OP_ADD_QUEUES = 109,
+	VIRTCHNL_OP_DEL_QUEUES = 110,
+	VIRTCHNL_OP_MAP_QUEUE_VECTOR = 111,
+	VIRTCHNL_OP_UNMAP_QUEUE_VECTOR = 112,
+	VIRTCHNL_OP_GET_RSS_KEY = 113,
+	VIRTCHNL_OP_GET_RSS_LUT = 114,
+	VIRTCHNL_OP_SET_RSS_LUT = 115,
+	VIRTCHNL_OP_GET_RSS_HASH = 116,
+	VIRTCHNL_OP_SET_RSS_HASH = 117,
+	VIRTCHNL_OP_CREATE_VFS = 118,
+	VIRTCHNL_OP_DESTROY_VFS = 119,
+	VIRTCHNL_OP_ALLOC_VECTORS = 120,
+	VIRTCHNL_OP_DEALLOC_VECTORS = 121,
 };

 /* These macros are used to generate compilation errors if a structure/union
@@ -463,6 +491,21 @@ VIRTCHNL_CHECK_STRUCT_LEN(4, virtchnl_promisc_info);
  * PF replies with struct eth_stats in an external buffer.
  */
+struct virtchnl_eth_stats {
+	u64 rx_bytes;			/* received bytes */
+	u64 rx_unicast;			/* received unicast pkts */
+	u64 rx_multicast;		/* received multicast pkts */
+	u64 rx_broadcast;		/* received broadcast pkts */
+	u64 rx_discards;
+	u64 rx_unknown_protocol;
+	u64 tx_bytes;			/* transmitted bytes */
+	u64 tx_unicast;			/* transmitted unicast pkts */
+	u64 tx_multicast;		/* transmitted multicast pkts */
+	u64 tx_broadcast;		/* transmitted broadcast pkts */
+	u64 tx_discards;
+	u64 tx_errors;
+};
+
 /* VIRTCHNL_OP_CONFIG_RSS_KEY
  * VIRTCHNL_OP_CONFIG_RSS_LUT
  * VF sends these messages to configure RSS. Only supported if both PF
@@ -668,6 +711,397 @@ enum virtchnl_vfr_states {
 	VIRTCHNL_VFR_VFACTIVE,
 };

+/* PF capability flags
+ * VIRTCHNL_CAP_STATELESS_OFFLOADS flag indicates stateless offloads
+ * such as TX/RX Checksum offloading and TSO for non-tunneled packets.
Please + * note that old and new capabilities are exclusive and not supposed to be + * mixed + */ +#define VIRTCHNL_CAP_STATELESS_OFFLOADS BIT(1) +#define VIRTCHNL_CAP_UDP_SEG_OFFLOAD BIT(2) +#define VIRTCHNL_CAP_RSS BIT(3) +#define VIRTCHNL_CAP_TCP_RSC BIT(4) +#define VIRTCHNL_CAP_HEADER_SPLIT BIT(5) +#define VIRTCHNL_CAP_RDMA BIT(6) +#define VIRTCHNL_CAP_SRIOV BIT(7) +/* Earliest Departure Time capability used for Timing Wheel */ +#define VIRTCHNL_CAP_EDT BIT(8) + +/* Type of virtual port */ +enum virtchnl_vport_type { + VIRTCHNL_VPORT_TYPE_DEFAULT = 0, +}; + +/* Type of queue model */ +enum virtchnl_queue_model { + VIRTCHNL_QUEUE_MODEL_SINGLE = 0, + VIRTCHNL_QUEUE_MODEL_SPLIT = 1, +}; + +/* TX and RX queue types are valid in legacy as well as split queue models. + * With Split Queue model, 2 additional types are introduced - TX_COMPLETION + * and RX_BUFFER. In split queue model, RX corresponds to the queue where HW + * posts completions. + */ +enum virtchnl_queue_type { + VIRTCHNL_QUEUE_TYPE_TX = 0, + VIRTCHNL_QUEUE_TYPE_RX = 1, + VIRTCHNL_QUEUE_TYPE_TX_COMPLETION = 2, + VIRTCHNL_QUEUE_TYPE_RX_BUFFER = 3, +}; + +/* RX Queue Feature bits */ +#define VIRTCHNL_RXQ_RSC BIT(1) +#define VIRTCHNL_RXQ_HDR_SPLIT BIT(2) +#define VIRTCHNL_RXQ_IMMEDIATE_WRITE_BACK BIT(4) + +/* RX Queue Descriptor Types */ +enum virtchnl_rxq_desc_size { + VIRTCHNL_RXQ_DESC_SIZE_16BYTE = 0, + VIRTCHNL_RXQ_DESC_SIZE_32BYTE = 1, +}; + +/* TX Queue Scheduling Modes Queue mode is the legacy type i.e. 
inorder + * and Flow mode is out of order packet processing + */ +enum virtchnl_txq_sched_mode { + VIRTCHNL_TXQ_SCHED_MODE_QUEUE = 0, + VIRTCHNL_TXQ_SCHED_MODE_FLOW = 1, +}; + +/* Queue Descriptor Profiles Base mode is the legacy and Native is the + * flex descriptors + */ +enum virtchnl_desc_profile { + VIRTCHNL_TXQ_DESC_PROFILE_BASE = 0, + VIRTCHNL_TXQ_DESC_PROFILE_NATIVE = 1, +}; + +/* Type of RSS algorithm */ +enum virtchnl_rss_algorithm { + VIRTCHNL_RSS_ALG_TOEPLITZ_ASYMMETRIC = 0, + VIRTCHNL_RSS_ALG_R_ASYMMETRIC = 1, + VIRTCHNL_RSS_ALG_TOEPLITZ_SYMMETRIC = 2, + VIRTCHNL_RSS_ALG_XOR_SYMMETRIC = 3, +}; + +/* VIRTCHNL_OP_GET_CAPS + * PF sends this message to CP to negotiate capabilities by filling + * in the u64 bitmap of its desired capabilities, max_num_vfs and + * num_allocated_vectors. + * CP responds with an updated virtchnl_get_capabilities structure + * with allowed capabilities and the other fields as below. + * If PF sets max_num_vfs as 0, CP will respond with max number of VFs + * that can be created by this PF. For any other value 'n', CP responds + * with max_num_vfs set to max(n, x) where x is the max number of VFs + * allowed by CP's policy. + * If PF sets num_allocated_vectors as 0, CP will respond with 1 which + * is default vector associated with the default mailbox. For any other + * value 'n', CP responds with a value <= n based on the CP's policy of + * max number of vectors for a PF. 
+ */ +struct virtchnl_get_capabilities { + u64 cap_flags; + u16 max_num_vfs; + u16 num_allocated_vectors; +}; + +VIRTCHNL_CHECK_STRUCT_LEN(16, virtchnl_get_capabilities); + +/* structure to specify a chunk of contiguous queues */ +struct virtchnl_queue_chunk { + enum virtchnl_queue_type type; + u16 start_queue_id; + u16 num_queues; +}; + +VIRTCHNL_CHECK_STRUCT_LEN(8, virtchnl_queue_chunk); + +/* structure to specify several chunks of contiguous queues */ +struct virtchnl_queue_chunks { + u16 num_chunks; + u16 rsvd; + struct virtchnl_queue_chunk chunks[1]; +}; + +VIRTCHNL_CHECK_STRUCT_LEN(12, virtchnl_queue_chunks); + +/* VIRTCHNL_OP_CREATE_VPORT + * PF sends this message to CP to create a vport by filling in the first 8 + * fields of virtchnl_create_vport structure (vport type, Tx, Rx queue models + * and desired number of queues and vectors). CP responds with the updated + * virtchnl_create_vport structure containing the number of assigned queues, + * vectors, vport id, max mtu, default mac addr followed by chunks which in turn + * will have an array of num_chunks entries of virtchnl_queue_chunk structures. + */ +struct virtchnl_create_vport { + enum virtchnl_vport_type vport_type; + /* single or split */ + enum virtchnl_queue_model txq_model; + /* single or split */ + enum virtchnl_queue_model rxq_model; + u16 num_tx_q; + /* valid only if txq_model is split Q */ + u16 num_tx_complq; + u16 num_rx_q; + /* valid only if rxq_model is split Q */ + u16 num_rx_bufq; + u16 vport_id; + u16 max_mtu; + u8 default_mac_addr[ETH_ALEN]; + enum virtchnl_rss_algorithm rss_algorithm; + u16 rss_key_size; + u16 rss_lut_size; + u16 qset_handle; + struct virtchnl_queue_chunks chunks; +}; + +VIRTCHNL_CHECK_STRUCT_LEN(56, virtchnl_create_vport); + +/* VIRTCHNL_OP_DESTROY_VPORT + * VIRTCHNL_OP_ENABLE_VPORT + * VIRTCHNL_OP_DISABLE_VPORT + * PF sends this message to CP to destroy, enable or disable a vport by filling + * in the vport_id in virtchnl_vport structure. 
+ * CP responds with the status of the requested operation. + */ +struct virtchnl_vport { + u16 vport_id; +}; + +VIRTCHNL_CHECK_STRUCT_LEN(2, virtchnl_vport); + +/* Tx queue config info */ +struct virtchnl_txq_info_v2 { + u16 queue_id; + /* single or split */ + enum virtchnl_queue_model model; + /* Tx or tx_completion */ + enum virtchnl_queue_type type; + /* queue or flow based */ + enum virtchnl_txq_sched_mode sched_mode; + /* base or native */ + enum virtchnl_desc_profile desc_profile; + u16 ring_len; + u64 dma_ring_addr; + /* valid only if queue model is split and type is Tx */ + u16 tx_compl_queue_id; +}; + +VIRTCHNL_CHECK_STRUCT_LEN(40, virtchnl_txq_info_v2); + +/* VIRTCHNL_OP_CONFIG_TX_QUEUES + * PF sends this message to set up parameters for one or more TX queues. + * This message contains an array of num_qinfo instances of virtchnl_txq_info_v2 + * structures. CP configures requested queues and returns a status code. If + * num_qinfo specified is greater than the number of queues associated with the + * vport, an error is returned and no queues are configured. 
+ */ +struct virtchnl_config_tx_queues { + u16 vport_id; + u16 num_qinfo; + u32 rsvd; + struct virtchnl_txq_info_v2 qinfo[1]; +}; + +VIRTCHNL_CHECK_STRUCT_LEN(48, virtchnl_config_tx_queues); + +/* Rx queue config info */ +struct virtchnl_rxq_info_v2 { + u16 queue_id; + /* single or split */ + enum virtchnl_queue_model model; + /* Rx or Rx buffer */ + enum virtchnl_queue_type type; + /* base or native */ + enum virtchnl_desc_profile desc_profile; + /* rsc, header-split, immediate write back */ + u16 queue_flags; + /* 16 or 32 byte */ + enum virtchnl_rxq_desc_size desc_size; + u16 ring_len; + u16 hdr_buffer_size; + u32 data_buffer_size; + u32 max_pkt_size; + u64 dma_ring_addr; + u64 dma_head_wb_addr; + u16 rsc_low_watermark; + u8 buffer_notif_stride; + enum virtchnl_rx_hsplit rx_split_pos; + /* valid only if queue model is split and type is Rx buffer*/ + u16 rx_bufq1_id; + /* valid only if queue model is split and type is Rx buffer*/ + u16 rx_bufq2_id; +}; + +VIRTCHNL_CHECK_STRUCT_LEN(72, virtchnl_rxq_info_v2); + +/* VIRTCHNL_OP_CONFIG_RX_QUEUES + * PF sends this message to set up parameters for one or more RX queues. + * This message contains an array of num_qinfo instances of virtchnl_rxq_info_v2 + * structures. CP configures requested queues and returns a status code. + * If the number of queues specified is greater than the number of queues + * associated with the vport, an error is returned and no queues are configured. + */ +struct virtchnl_config_rx_queues { + u16 vport_id; + u16 num_qinfo; + struct virtchnl_rxq_info_v2 qinfo[1]; +}; + +VIRTCHNL_CHECK_STRUCT_LEN(80, virtchnl_config_rx_queues); + +/* VIRTCHNL_OP_ADD_QUEUES + * PF sends this message to request additional TX/RX queues beyond the ones + * that were assigned via CREATE_VPORT request. virtchnl_add_queues structure is + * used to specify the number of each type of queues. 
+ * CP responds with the same structure, with the actual number of queues
+ * assigned, followed by num_chunks of virtchnl_queue_chunk structures.
+ */
+struct virtchnl_add_queues {
+	u16 vport_id;
+	u16 num_tx_q;
+	u16 num_tx_complq;
+	u16 num_rx_q;
+	u16 num_rx_bufq;
+	struct virtchnl_queue_chunks chunks;
+};
+
+VIRTCHNL_CHECK_STRUCT_LEN(24, virtchnl_add_queues);
+
+/* VIRTCHNL_OP_ENABLE_QUEUES_V2
+ * VIRTCHNL_OP_DISABLE_QUEUES_V2
+ * VIRTCHNL_OP_DEL_QUEUES
+ * PF sends these messages to enable, disable or delete the queues specified
+ * in chunks. PF sends the virtchnl_del_ena_dis_queues struct to specify the
+ * queues to be enabled/disabled/deleted. Also applicable to single-queue RX
+ * or TX. CP performs the requested action and returns status.
+ */
+struct virtchnl_del_ena_dis_queues {
+	u16 vport_id;
+	struct virtchnl_queue_chunks chunks;
+};
+
+VIRTCHNL_CHECK_STRUCT_LEN(16, virtchnl_del_ena_dis_queues);
+
+/* Virtchannel interrupt throttling rate index */
+enum virtchnl_itr_idx {
+	VIRTCHNL_ITR_IDX_0 = 0,
+	VIRTCHNL_ITR_IDX_1 = 1,
+	VIRTCHNL_ITR_IDX_NO_ITR = 3,
+};
+
+/* Queue to vector mapping */
+struct virtchnl_queue_vector {
+	u16 queue_id;
+	u16 vector_id;
+	enum virtchnl_itr_idx itr_idx;
+	enum virtchnl_queue_type queue_type;
+};
+
+VIRTCHNL_CHECK_STRUCT_LEN(12, virtchnl_queue_vector);
+
+/* VIRTCHNL_OP_MAP_QUEUE_VECTOR
+ * VIRTCHNL_OP_UNMAP_QUEUE_VECTOR
+ * PF sends this message to map or unmap queues to vectors and ITR index
+ * registers. The external data buffer contains a virtchnl_queue_vector_maps
+ * structure that contains num_maps of virtchnl_queue_vector structures.
+ * CP maps the requested queue vector maps after validating the queue and
+ * vector ids and returns a status code.
+ */ +struct virtchnl_queue_vector_maps { + u16 vport_id; + u16 num_maps; + struct virtchnl_queue_vector qv_maps[1]; +}; + +VIRTCHNL_CHECK_STRUCT_LEN(16, virtchnl_queue_vector_maps); + +/* Structure to specify a chunk of contiguous interrupt vectors */ +struct virtchnl_vector_chunk { + u16 start_vector_id; + u16 num_vectors; +}; + +VIRTCHNL_CHECK_STRUCT_LEN(4, virtchnl_vector_chunk); + +/* Structure to specify several chunks of contiguous interrupt vectors */ +struct virtchnl_vector_chunks { + u16 num_vector_chunks; + struct virtchnl_vector_chunk num_vchunk[1]; +}; + +VIRTCHNL_CHECK_STRUCT_LEN(6, virtchnl_vector_chunks); + +/* VIRTCHNL_OP_ALLOC_VECTORS + * PF sends this message to request additional interrupt vectors beyond the + * ones that were assigned via GET_CAPS request. virtchnl_alloc_vectors + * structure is used to specify the number of vectors requested. CP responds + * with the same structure with the actual number of vectors assigned followed + * by virtchnl_vector_chunks structure identifying the vector ids. + */ +struct virtchnl_alloc_vectors { + u16 num_vectors; + struct virtchnl_vector_chunks vchunks; +}; + +VIRTCHNL_CHECK_STRUCT_LEN(8, virtchnl_alloc_vectors); + +/* VIRTCHNL_OP_DEALLOC_VECTORS + * PF sends this message to release the vectors. + * PF sends virtchnl_vector_chunks struct to specify the vectors it is giving + * away. CP performs requested action and returns status. + */ + +/* VIRTCHNL_OP_GET_RSS_LUT + * VIRTCHNL_OP_SET_RSS_LUT + * PF sends this message to get or set RSS lookup table. Only supported if + * both PF and CP drivers set the VIRTCHNL_CAP_RSS bit during configuration + * negotiation. Uses the virtchnl_rss_lut_v2 structure + */ +struct virtchnl_rss_lut_v2 { + u16 vport_id; + u16 lut_entries; + u16 lut[1]; /* RSS lookup table */ +}; + +VIRTCHNL_CHECK_STRUCT_LEN(6, virtchnl_rss_lut_v2); + +/* VIRTCHNL_OP_GET_RSS_KEY + * PF sends this message to get RSS key. 
Only supported if
+ * both PF and CP drivers set the VIRTCHNL_CAP_RSS bit during configuration
+ * negotiation. Uses the virtchnl_rss_key structure.
+ */
+
+/* VIRTCHNL_OP_GET_RSS_HASH
+ * VIRTCHNL_OP_SET_RSS_HASH
+ * PF sends these messages to get and set the hash filter enable bits for RSS.
+ * By default, the CP sets these to all possible traffic types that the
+ * hardware supports. The PF can query this value if it wants to change the
+ * traffic types that are hashed by the hardware.
+ * Only supported if both PF and CP drivers set the VIRTCHNL_CAP_RSS bit
+ * during configuration negotiation.
+ */
+struct virtchnl_rss_hash {
+	u64 hash;
+	u16 vport_id;
+};
+
+VIRTCHNL_CHECK_STRUCT_LEN(16, virtchnl_rss_hash);
+
+/* VIRTCHNL_OP_CREATE_VFS
+ * VIRTCHNL_OP_DESTROY_VFS
+ * This message is used to let the CP know how many SRIOV VFs need to be
+ * created. The actual allocation of resources for the VFs in terms of VSI,
+ * queues and interrupts is done by CP. When this call completes, the PF
+ * driver calls pci_enable_sriov to let the OS instantiate the SRIOV PCIE
+ * devices.
+ */ +struct virtchnl_sriov_vfs_info { + u16 num_vfs; +}; + +VIRTCHNL_CHECK_STRUCT_LEN(2, virtchnl_sriov_vfs_info); + /** * virtchnl_vc_validate_vf_msg * @ver: Virtchnl version info @@ -828,6 +1262,164 @@ virtchnl_vc_validate_vf_msg(struct virtchnl_version_info *ver, u32 v_opcode, case VIRTCHNL_OP_DEL_CLOUD_FILTER: valid_len = sizeof(struct virtchnl_filter); break; + case VIRTCHNL_OP_GET_CAPS: + valid_len = sizeof(struct virtchnl_get_capabilities); + break; + case VIRTCHNL_OP_CREATE_VPORT: + valid_len = sizeof(struct virtchnl_create_vport); + if (msglen >= valid_len) { + struct virtchnl_create_vport *cvport = + (struct virtchnl_create_vport *)msg; + + if (cvport->chunks.num_chunks == 0) { + /* zero chunks is allowed as input */ + break; + } + + valid_len += (cvport->chunks.num_chunks - 1) * + sizeof(struct virtchnl_queue_chunk); + } + break; + case VIRTCHNL_OP_DESTROY_VPORT: + case VIRTCHNL_OP_ENABLE_VPORT: + case VIRTCHNL_OP_DISABLE_VPORT: + valid_len = sizeof(struct virtchnl_vport); + break; + case VIRTCHNL_OP_CONFIG_TX_QUEUES: + valid_len = sizeof(struct virtchnl_config_tx_queues); + if (msglen >= valid_len) { + struct virtchnl_config_tx_queues *ctq = + (struct virtchnl_config_tx_queues *)msg; + if (ctq->num_qinfo == 0) { + err_msg_format = true; + break; + } + valid_len += (ctq->num_qinfo - 1) * + sizeof(struct virtchnl_txq_info_v2); + } + break; + case VIRTCHNL_OP_CONFIG_RX_QUEUES: + valid_len = sizeof(struct virtchnl_config_rx_queues); + if (msglen >= valid_len) { + struct virtchnl_config_rx_queues *crq = + (struct virtchnl_config_rx_queues *)msg; + if (crq->num_qinfo == 0) { + err_msg_format = true; + break; + } + valid_len += (crq->num_qinfo - 1) * + sizeof(struct virtchnl_rxq_info_v2); + } + break; + case VIRTCHNL_OP_ADD_QUEUES: + valid_len = sizeof(struct virtchnl_add_queues); + if (msglen >= valid_len) { + struct virtchnl_add_queues *add_q = + (struct virtchnl_add_queues *)msg; + + if (add_q->chunks.num_chunks == 0) { + /* zero chunks is allowed as input 
*/ + break; + } + + valid_len += (add_q->chunks.num_chunks - 1) * + sizeof(struct virtchnl_queue_chunk); + } + break; + case VIRTCHNL_OP_ENABLE_QUEUES_V2: + case VIRTCHNL_OP_DISABLE_QUEUES_V2: + case VIRTCHNL_OP_DEL_QUEUES: + valid_len = sizeof(struct virtchnl_del_ena_dis_queues); + if (msglen >= valid_len) { + struct virtchnl_del_ena_dis_queues *qs = + (struct virtchnl_del_ena_dis_queues *)msg; + if (qs->chunks.num_chunks == 0) { + err_msg_format = true; + break; + } + valid_len += (qs->chunks.num_chunks - 1) * + sizeof(struct virtchnl_queue_chunk); + } + break; + case VIRTCHNL_OP_MAP_QUEUE_VECTOR: + case VIRTCHNL_OP_UNMAP_QUEUE_VECTOR: + valid_len = sizeof(struct virtchnl_queue_vector_maps); + if (msglen >= valid_len) { + struct virtchnl_queue_vector_maps *v_qp = + (struct virtchnl_queue_vector_maps *)msg; + if (v_qp->num_maps == 0) { + err_msg_format = true; + break; + } + valid_len += (v_qp->num_maps - 1) * + sizeof(struct virtchnl_queue_vector); + } + break; + case VIRTCHNL_OP_ALLOC_VECTORS: + valid_len = sizeof(struct virtchnl_alloc_vectors); + if (msglen >= valid_len) { + struct virtchnl_alloc_vectors *v_av = + (struct virtchnl_alloc_vectors *)msg; + + if (v_av->vchunks.num_vector_chunks == 0) { + /* zero chunks is allowed as input */ + break; + } + + valid_len += (v_av->vchunks.num_vector_chunks - 1) * + sizeof(struct virtchnl_vector_chunk); + } + break; + case VIRTCHNL_OP_DEALLOC_VECTORS: + valid_len = sizeof(struct virtchnl_vector_chunks); + if (msglen >= valid_len) { + struct virtchnl_vector_chunks *v_chunks = + (struct virtchnl_vector_chunks *)msg; + if (v_chunks->num_vector_chunks == 0) { + err_msg_format = true; + break; + } + valid_len += (v_chunks->num_vector_chunks - 1) * + sizeof(struct virtchnl_vector_chunk); + } + break; + case VIRTCHNL_OP_GET_RSS_KEY: + valid_len = sizeof(struct virtchnl_rss_key); + if (msglen >= valid_len) { + struct virtchnl_rss_key *vrk = + (struct virtchnl_rss_key *)msg; + + if (vrk->key_len == 0) { + /* zero length is 
allowed as input */
+				break;
+			}
+
+			valid_len += vrk->key_len - 1;
+		}
+		break;
+	case VIRTCHNL_OP_GET_RSS_LUT:
+	case VIRTCHNL_OP_SET_RSS_LUT:
+		valid_len = sizeof(struct virtchnl_rss_lut_v2);
+		if (msglen >= valid_len) {
+			struct virtchnl_rss_lut_v2 *vrl =
+				(struct virtchnl_rss_lut_v2 *)msg;
+
+			if (vrl->lut_entries == 0) {
+				/* zero entries is allowed as input */
+				break;
+			}
+
+			valid_len += (vrl->lut_entries - 1) * sizeof(u16);
+		}
+		break;
+	case VIRTCHNL_OP_GET_RSS_HASH:
+	case VIRTCHNL_OP_SET_RSS_HASH:
+		valid_len = sizeof(struct virtchnl_rss_hash);
+		break;
+	case VIRTCHNL_OP_CREATE_VFS:
+	case VIRTCHNL_OP_DESTROY_VFS:
+		valid_len = sizeof(struct virtchnl_sriov_vfs_info);
+		break;
 	/* These are always errors coming from the VF. */
 	case VIRTCHNL_OP_EVENT:
 	case VIRTCHNL_OP_UNKNOWN:

From patchwork Tue Jun 23 22:40:31 2020
X-Patchwork-Submitter: Jeff Kirsher
X-Patchwork-Id: 217262

From: Jeff Kirsher
To: davem@davemloft.net
Cc: Alice Michael, netdev@vger.kernel.org, nhorman@redhat.com,
 sassmann@redhat.com, Alan Brady, Phani Burra, Joshua Hay, Madhu Chittim,
 Pavan Kumar Linga, Donald Skidmore, Jesse Brandeburg, Sridhar Samudrala,
 Jeff Kirsher
Subject: [net-next v2 03/15] iecm: Add TX/RX header files
Date: Tue, 23 Jun 2020 15:40:31 -0700
Message-Id: <20200623224043.801728-4-jeffrey.t.kirsher@intel.com>
In-Reply-To: <20200623224043.801728-1-jeffrey.t.kirsher@intel.com>
References: <20200623224043.801728-1-jeffrey.t.kirsher@intel.com>

From: Alice Michael

Introduces the data structures for the TX/RX paths for use by the common
module.
Signed-off-by: Alice Michael Signed-off-by: Alan Brady Signed-off-by: Phani Burra Signed-off-by: Joshua Hay Signed-off-by: Madhu Chittim Signed-off-by: Pavan Kumar Linga Reviewed-by: Donald Skidmore Reviewed-by: Jesse Brandeburg Reviewed-by: Sridhar Samudrala Signed-off-by: Jeff Kirsher --- include/linux/net/intel/iecm_lan_pf_regs.h | 120 ++++ include/linux/net/intel/iecm_lan_txrx.h | 636 +++++++++++++++++++++ include/linux/net/intel/iecm_txrx.h | 581 +++++++++++++++++++ 3 files changed, 1337 insertions(+) create mode 100644 include/linux/net/intel/iecm_lan_pf_regs.h create mode 100644 include/linux/net/intel/iecm_lan_txrx.h create mode 100644 include/linux/net/intel/iecm_txrx.h diff --git a/include/linux/net/intel/iecm_lan_pf_regs.h b/include/linux/net/intel/iecm_lan_pf_regs.h new file mode 100644 index 000000000000..6690a2645608 --- /dev/null +++ b/include/linux/net/intel/iecm_lan_pf_regs.h @@ -0,0 +1,120 @@ +/* SPDX-License-Identifier: GPL-2.0 */ +/* Copyright (c) 2020, Intel Corporation. 
*/ + +#ifndef _IECM_LAN_PF_REGS_H_ +#define _IECM_LAN_PF_REGS_H_ + +/* Receive queues */ +#define PF_QRX_BASE 0x00000000 +#define PF_QRX_TAIL(_QRX) (PF_QRX_BASE + (((_QRX) * 0x1000))) +#define PF_QRX_BUFFQ_BASE 0x03000000 +#define PF_QRX_BUFFQ_TAIL(_QRX) \ + (PF_QRX_BUFFQ_BASE + (((_QRX) * 0x1000))) + +/* Transmit queues */ +#define PF_QTX_BASE 0x05000000 +#define PF_QTX_COMM_DBELL(_DBQM) (PF_QTX_BASE + ((_DBQM) * 0x1000)) + +/* Control(PF Mailbox) Queue */ +#define PF_FW_BASE 0x08400000 + +#define PF_FW_ARQBAL (PF_FW_BASE) +#define PF_FW_ARQBAH (PF_FW_BASE + 0x4) +#define PF_FW_ARQLEN (PF_FW_BASE + 0x8) +#define PF_FW_ARQLEN_ARQLEN_S 0 +#define PF_FW_ARQLEN_ARQLEN_M MAKEMASK(0x1FFF, PF_FW_ARQLEN_ARQLEN_S) +#define PF_FW_ARQLEN_ARQVFE_S 28 +#define PF_FW_ARQLEN_ARQVFE_M BIT(PF_FW_ARQLEN_ARQVFE_S) +#define PF_FW_ARQLEN_ARQOVFL_S 29 +#define PF_FW_ARQLEN_ARQOVFL_M BIT(PF_FW_ARQLEN_ARQOVFL_S) +#define PF_FW_ARQLEN_ARQCRIT_S 30 +#define PF_FW_ARQLEN_ARQCRIT_M BIT(PF_FW_ARQLEN_ARQCRIT_S) +#define PF_FW_ARQLEN_ARQENABLE_S 31 +#define PF_FW_ARQLEN_ARQENABLE_M BIT(PF_FW_ARQLEN_ARQENABLE_S) +#define PF_FW_ARQH (PF_FW_BASE + 0xC) +#define PF_FW_ARQH_ARQH_S 0 +#define PF_FW_ARQH_ARQH_M MAKEMASK(0x1FFF, PF_FW_ARQH_ARQH_S) +#define PF_FW_ARQT (PF_FW_BASE + 0x10) + +#define PF_FW_ATQBAL (PF_FW_BASE + 0x14) +#define PF_FW_ATQBAH (PF_FW_BASE + 0x18) +#define PF_FW_ATQLEN (PF_FW_BASE + 0x1C) +#define PF_FW_ATQLEN_ATQLEN_S 0 +#define PF_FW_ATQLEN_ATQLEN_M MAKEMASK(0x3FF, PF_FW_ATQLEN_ATQLEN_S) +#define PF_FW_ATQLEN_ATQVFE_S 28 +#define PF_FW_ATQLEN_ATQVFE_M BIT(PF_FW_ATQLEN_ATQVFE_S) +#define PF_FW_ATQLEN_ATQOVFL_S 29 +#define PF_FW_ATQLEN_ATQOVFL_M BIT(PF_FW_ATQLEN_ATQOVFL_S) +#define PF_FW_ATQLEN_ATQCRIT_S 30 +#define PF_FW_ATQLEN_ATQCRIT_M BIT(PF_FW_ATQLEN_ATQCRIT_S) +#define PF_FW_ATQLEN_ATQENABLE_S 31 +#define PF_FW_ATQLEN_ATQENABLE_M BIT(PF_FW_ATQLEN_ATQENABLE_S) +#define PF_FW_ATQH (PF_FW_BASE + 0x20) +#define PF_FW_ATQH_ATQH_S 0 +#define PF_FW_ATQH_ATQH_M MAKEMASK(0x3FF, 
PF_FW_ATQH_ATQH_S) +#define PF_FW_ATQT (PF_FW_BASE + 0x24) + +/* Interrupts */ +#define PF_GLINT_BASE 0x08900000 +#define PF_GLINT_DYN_CTL(_INT) (PF_GLINT_BASE + ((_INT) * 0x1000)) +#define PF_GLINT_DYN_CTL_INTENA_S 0 +#define PF_GLINT_DYN_CTL_INTENA_M BIT(PF_GLINT_DYN_CTL_INTENA_S) +#define PF_GLINT_DYN_CTL_CLEARPBA_S 1 +#define PF_GLINT_DYN_CTL_CLEARPBA_M BIT(PF_GLINT_DYN_CTL_CLEARPBA_S) +#define PF_GLINT_DYN_CTL_SWINT_TRIG_S 2 +#define PF_GLINT_DYN_CTL_SWINT_TRIG_M BIT(PF_GLINT_DYN_CTL_SWINT_TRIG_S) +#define PF_GLINT_DYN_CTL_ITR_INDX_S 3 +#define PF_GLINT_DYN_CTL_INTERVAL_S 5 +#define PF_GLINT_DYN_CTL_INTERVAL_M BIT(PF_GLINT_DYN_CTL_INTERVAL_S) +#define PF_GLINT_DYN_CTL_SW_ITR_INDX_ENA_S 24 +#define PF_GLINT_DYN_CTL_SW_ITR_INDX_ENA_M \ + BIT(PF_GLINT_DYN_CTL_SW_ITR_INDX_ENA_S) +#define PF_GLINT_DYN_CTL_SW_ITR_INDX_S 25 +#define PF_GLINT_DYN_CTL_SW_ITR_INDX_M BIT(PF_GLINT_DYN_CTL_SW_ITR_INDX_S) +#define PF_GLINT_DYN_CTL_INTENA_MSK_S 31 +#define PF_GLINT_DYN_CTL_INTENA_MSK_M BIT(PF_GLINT_DYN_CTL_INTENA_MSK_S) +#define PF_GLINT_ITR(_i, _INT) (PF_GLINT_BASE + (((_INT) + \ + (((_i) + 1) * 4)) * 0x1000)) +#define PF_GLINT_ITR_MAX_INDEX 2 +#define PF_GLINT_ITR_INTERVAL_S 0 +#define PF_GLINT_ITR_INTERVAL_M MAKEMASK(0xFFF, PF_GLINT_ITR_INTERVAL_S) + +/* Generic registers */ +#define PF_INT_DIR_OICR_ENA 0x08406000 +#define PF_INT_DIR_OICR_ENA_S 0 +#define PF_INT_DIR_OICR_ENA_M MAKEMASK(0xFFFFFFFF, PF_INT_DIR_OICR_ENA_S) +#define PF_INT_DIR_OICR 0x08406004 +#define PF_INT_DIR_OICR_TSYN_EVNT 0 +#define PF_INT_DIR_OICR_PHY_TS_0 BIT(1) +#define PF_INT_DIR_OICR_PHY_TS_1 BIT(2) +#define PF_INT_DIR_OICR_CAUSE 0x08406008 +#define PF_INT_DIR_OICR_CAUSE_CAUSE_S 0 +#define PF_INT_DIR_OICR_CAUSE_CAUSE_M \ + MAKEMASK(0xFFFFFFFF, PF_INT_DIR_OICR_CAUSE_CAUSE_S) +#define PF_INT_PBA_CLEAR 0x0840600C + +#define PF_FUNC_RID 0x08406010 +#define PF_FUNC_RID_FUNCTION_NUMBER_S 0 +#define PF_FUNC_RID_FUNCTION_NUMBER_M \ + MAKEMASK(0x7, PF_FUNC_RID_FUNCTION_NUMBER_S) +#define 
PF_FUNC_RID_DEVICE_NUMBER_S 3 +#define PF_FUNC_RID_DEVICE_NUMBER_M \ + MAKEMASK(0x1F, PF_FUNC_RID_DEVICE_NUMBER_S) +#define PF_FUNC_RID_BUS_NUMBER_S 8 +#define PF_FUNC_RID_BUS_NUMBER_M MAKEMASK(0xFF, PF_FUNC_RID_BUS_NUMBER_S) + +/* Reset registers */ +#define PFGEN_RTRIG 0x08407000 +#define PFGEN_RTRIG_CORER_S 0 +#define PFGEN_RTRIG_CORER_M BIT(0) +#define PFGEN_RTRIG_LINKR_S 1 +#define PFGEN_RTRIG_LINKR_M BIT(1) +#define PFGEN_RTRIG_IMCR_S 2 +#define PFGEN_RTRIG_IMCR_M BIT(2) +#define PFGEN_RSTAT 0x08407008 /* PFR Status */ +#define PFGEN_RSTAT_PFR_STATE_S 0 +#define PFGEN_RSTAT_PFR_STATE_M MAKEMASK(0x3, PFGEN_RSTAT_PFR_STATE_S) +#define PFGEN_CTRL 0x0840700C +#define PFGEN_CTRL_PFSWR BIT(0) + +#endif diff --git a/include/linux/net/intel/iecm_lan_txrx.h b/include/linux/net/intel/iecm_lan_txrx.h new file mode 100644 index 000000000000..6dc210925c7a --- /dev/null +++ b/include/linux/net/intel/iecm_lan_txrx.h @@ -0,0 +1,636 @@ +/* SPDX-License-Identifier: GPL-2.0 */ +/* Copyright (c) 2020, Intel Corporation. 
*/ + +#ifndef _IECM_LAN_TXRX_H_ +#define _IECM_LAN_TXRX_H_ +#include + +enum iecm_rss_hash { + /* Values 0 - 28 are reserved for future use */ + IECM_HASH_INVALID = 0, + IECM_HASH_NONF_UNICAST_IPV4_UDP = 29, + IECM_HASH_NONF_MULTICAST_IPV4_UDP, + IECM_HASH_NONF_IPV4_UDP, + IECM_HASH_NONF_IPV4_TCP_SYN_NO_ACK, + IECM_HASH_NONF_IPV4_TCP, + IECM_HASH_NONF_IPV4_SCTP, + IECM_HASH_NONF_IPV4_OTHER, + IECM_HASH_FRAG_IPV4, + /* Values 37-38 are reserved */ + IECM_HASH_NONF_UNICAST_IPV6_UDP = 39, + IECM_HASH_NONF_MULTICAST_IPV6_UDP, + IECM_HASH_NONF_IPV6_UDP, + IECM_HASH_NONF_IPV6_TCP_SYN_NO_ACK, + IECM_HASH_NONF_IPV6_TCP, + IECM_HASH_NONF_IPV6_SCTP, + IECM_HASH_NONF_IPV6_OTHER, + IECM_HASH_FRAG_IPV6, + IECM_HASH_NONF_RSVD47, + IECM_HASH_NONF_FCOE_OX, + IECM_HASH_NONF_FCOE_RX, + IECM_HASH_NONF_FCOE_OTHER, + /* Values 51-62 are reserved */ + IECM_HASH_L2_PAYLOAD = 63, + IECM_HASH_MAX +}; + +/* Supported RSS offloads */ +#define IECM_DEFAULT_RSS_HASH ( \ + BIT_ULL(IECM_HASH_NONF_IPV4_UDP) | \ + BIT_ULL(IECM_HASH_NONF_IPV4_SCTP) | \ + BIT_ULL(IECM_HASH_NONF_IPV4_TCP) | \ + BIT_ULL(IECM_HASH_NONF_IPV4_OTHER) | \ + BIT_ULL(IECM_HASH_FRAG_IPV4) | \ + BIT_ULL(IECM_HASH_NONF_IPV6_UDP) | \ + BIT_ULL(IECM_HASH_NONF_IPV6_TCP) | \ + BIT_ULL(IECM_HASH_NONF_IPV6_SCTP) | \ + BIT_ULL(IECM_HASH_NONF_IPV6_OTHER) | \ + BIT_ULL(IECM_HASH_FRAG_IPV6) | \ + BIT_ULL(IECM_HASH_L2_PAYLOAD)) + +#define IECM_DEFAULT_RSS_HASH_EXPANDED (IECM_DEFAULT_RSS_HASH | \ + BIT_ULL(IECM_HASH_NONF_IPV4_TCP_SYN_NO_ACK) | \ + BIT_ULL(IECM_HASH_NONF_UNICAST_IPV4_UDP) | \ + BIT_ULL(IECM_HASH_NONF_MULTICAST_IPV4_UDP) | \ + BIT_ULL(IECM_HASH_NONF_IPV6_TCP_SYN_NO_ACK) | \ + BIT_ULL(IECM_HASH_NONF_UNICAST_IPV6_UDP) | \ + BIT_ULL(IECM_HASH_NONF_MULTICAST_IPV6_UDP)) + +/* For iecm_splitq_base_tx_compl_desc */ +#define IECM_TXD_COMPLQ_GEN_S 15 +#define IECM_TXD_COMPLQ_GEN_M BIT_ULL(IECM_TXD_COMPLQ_GEN_S) +#define IECM_TXD_COMPLQ_COMPL_TYPE_S 11 +#define IECM_TXD_COMPLQ_COMPL_TYPE_M \ + MAKEMASK(0x7UL, 
IECM_TXD_COMPLQ_COMPL_TYPE_S) +#define IECM_TXD_COMPLQ_QID_S 0 +#define IECM_TXD_COMPLQ_QID_M MAKEMASK(0x3FFUL, IECM_TXD_COMPLQ_QID_S) + +/* For base mode TX descriptors */ +#define IECM_TXD_CTX_QW1_MSS_S 50 +#define IECM_TXD_CTX_QW1_MSS_M \ + MAKEMASK(0x3FFFULL, IECM_TXD_CTX_QW1_MSS_S) +#define IECM_TXD_CTX_QW1_TSO_LEN_S 30 +#define IECM_TXD_CTX_QW1_TSO_LEN_M \ + MAKEMASK(0x3FFFFULL, IECM_TXD_CTX_QW1_TSO_LEN_S) +#define IECM_TXD_CTX_QW1_CMD_S 4 +#define IECM_TXD_CTX_QW1_CMD_M \ + MAKEMASK(0xFFFUL, IECM_TXD_CTX_QW1_CMD_S) +#define IECM_TXD_CTX_QW1_DTYPE_S 0 +#define IECM_TXD_CTX_QW1_DTYPE_M \ + MAKEMASK(0xFUL, IECM_TXD_CTX_QW1_DTYPE_S) +#define IECM_TXD_QW1_L2TAG1_S 48 +#define IECM_TXD_QW1_L2TAG1_M \ + MAKEMASK(0xFFFFULL, IECM_TXD_QW1_L2TAG1_S) +#define IECM_TXD_QW1_TX_BUF_SZ_S 34 +#define IECM_TXD_QW1_TX_BUF_SZ_M \ + MAKEMASK(0x3FFFULL, IECM_TXD_QW1_TX_BUF_SZ_S) +#define IECM_TXD_QW1_OFFSET_S 16 +#define IECM_TXD_QW1_OFFSET_M \ + MAKEMASK(0x3FFFFULL, IECM_TXD_QW1_OFFSET_S) +#define IECM_TXD_QW1_CMD_S 4 +#define IECM_TXD_QW1_CMD_M MAKEMASK(0xFFFUL, IECM_TXD_QW1_CMD_S) +#define IECM_TXD_QW1_DTYPE_S 0 +#define IECM_TXD_QW1_DTYPE_M MAKEMASK(0xFUL, IECM_TXD_QW1_DTYPE_S) + +/* TX Completion Descriptor Completion Types */ +#define IECM_TXD_COMPLT_ITR_FLUSH 0 +#define IECM_TXD_COMPLT_RULE_MISS 1 +#define IECM_TXD_COMPLT_RS 2 +#define IECM_TXD_COMPLT_REINJECTED 3 +#define IECM_TXD_COMPLT_RE 4 +#define IECM_TXD_COMPLT_SW_MARKER 5 + +enum iecm_tx_desc_dtype_value { + IECM_TX_DESC_DTYPE_DATA = 0, + IECM_TX_DESC_DTYPE_CTX = 1, + IECM_TX_DESC_DTYPE_REINJECT_CTX = 2, + IECM_TX_DESC_DTYPE_FLEX_DATA = 3, + IECM_TX_DESC_DTYPE_FLEX_CTX = 4, + IECM_TX_DESC_DTYPE_FLEX_TSO_CTX = 5, + IECM_TX_DESC_DTYPE_FLEX_TSYN_L2TAG1 = 6, + IECM_TX_DESC_DTYPE_FLEX_L2TAG1_L2TAG2 = 7, + IECM_TX_DESC_DTYPE_FLEX_TSO_L2TAG2_PARSTAG_CTX = 8, + IECM_TX_DESC_DTYPE_FLEX_HOSTSPLIT_SA_TSO_CTX = 9, + IECM_TX_DESC_DTYPE_FLEX_HOSTSPLIT_SA_CTX = 10, + IECM_TX_DESC_DTYPE_FLEX_L2TAG2_CTX = 11, + 
IECM_TX_DESC_DTYPE_FLEX_FLOW_SCHE = 12, + IECM_TX_DESC_DTYPE_FLEX_HOSTSPLIT_TSO_CTX = 13, + IECM_TX_DESC_DTYPE_FLEX_HOSTSPLIT_CTX = 14, + /* DESC_DONE - HW has completed write-back of descriptor */ + IECM_TX_DESC_DTYPE_DESC_DONE = 15, +}; + +enum iecm_tx_ctx_desc_cmd_bits { + IECM_TX_CTX_DESC_TSO = 0x01, + IECM_TX_CTX_DESC_TSYN = 0x02, + IECM_TX_CTX_DESC_IL2TAG2 = 0x04, + IECM_TX_CTX_DESC_RSVD = 0x08, + IECM_TX_CTX_DESC_SWTCH_NOTAG = 0x00, + IECM_TX_CTX_DESC_SWTCH_UPLINK = 0x10, + IECM_TX_CTX_DESC_SWTCH_LOCAL = 0x20, + IECM_TX_CTX_DESC_SWTCH_VSI = 0x30, + IECM_TX_CTX_DESC_FILT_AU_EN = 0x40, + IECM_TX_CTX_DESC_FILT_AU_EVICT = 0x80, + IECM_TX_CTX_DESC_RSVD1 = 0xF00 +}; + +enum iecm_tx_desc_len_fields { + /* Note: These are predefined bit offsets */ + IECM_TX_DESC_LEN_MACLEN_S = 0, /* 7 BITS */ + IECM_TX_DESC_LEN_IPLEN_S = 7, /* 7 BITS */ + IECM_TX_DESC_LEN_L4_LEN_S = 14 /* 4 BITS */ +}; + +enum iecm_tx_base_desc_cmd_bits { + IECM_TX_DESC_CMD_EOP = 0x0001, + IECM_TX_DESC_CMD_RS = 0x0002, + /* only on VFs else RSVD */ + IECM_TX_DESC_CMD_ICRC = 0x0004, + IECM_TX_DESC_CMD_IL2TAG1 = 0x0008, + IECM_TX_DESC_CMD_RSVD1 = 0x0010, + IECM_TX_DESC_CMD_IIPT_NONIP = 0x0000, /* 2 BITS */ + IECM_TX_DESC_CMD_IIPT_IPV6 = 0x0020, /* 2 BITS */ + IECM_TX_DESC_CMD_IIPT_IPV4 = 0x0040, /* 2 BITS */ + IECM_TX_DESC_CMD_IIPT_IPV4_CSUM = 0x0060, /* 2 BITS */ + IECM_TX_DESC_CMD_RSVD2 = 0x0080, + IECM_TX_DESC_CMD_L4T_EOFT_UNK = 0x0000, /* 2 BITS */ + IECM_TX_DESC_CMD_L4T_EOFT_TCP = 0x0100, /* 2 BITS */ + IECM_TX_DESC_CMD_L4T_EOFT_SCTP = 0x0200, /* 2 BITS */ + IECM_TX_DESC_CMD_L4T_EOFT_UDP = 0x0300, /* 2 BITS */ + IECM_TX_DESC_CMD_RSVD3 = 0x0400, + IECM_TX_DESC_CMD_RSVD4 = 0x0800, +}; + +/* Transmit descriptors */ +/* splitq Tx buf, singleq Tx buf and singleq compl desc */ +struct iecm_base_tx_desc { + __le64 buf_addr; /* Address of descriptor's data buf */ + __le64 qw1; /* type_cmd_offset_bsz_l2tag1 */ +};/* read used with buffer queues*/ + +struct iecm_splitq_tx_compl_desc { + /* qid=[10:0] 
comptype=[13:11] rsvd=[14] gen=[15] */ + __le16 qid_comptype_gen; + union { + __le16 q_head; /* Queue head */ + __le16 compl_tag; /* Completion tag */ + } q_head_compl_tag; + u8 ts[3]; + u8 rsvd; /* Reserved */ +};/* writeback used with completion queues*/ + +/* Context descriptors */ +struct iecm_base_tx_ctx_desc { + struct { + __le32 rsvd0; + __le16 l2tag2; + __le16 rsvd1; + } qw0; + __le64 qw1; /* type_cmd_tlen_mss/rt_hint */ +}; + +/* Common cmd field defines for all desc except Flex Flow Scheduler (0x0C) */ +enum iecm_tx_flex_desc_cmd_bits { + IECM_TX_FLEX_DESC_CMD_EOP = 0x01, + IECM_TX_FLEX_DESC_CMD_RS = 0x02, + IECM_TX_FLEX_DESC_CMD_RE = 0x04, + IECM_TX_FLEX_DESC_CMD_IL2TAG1 = 0x08, + IECM_TX_FLEX_DESC_CMD_DUMMY = 0x10, + IECM_TX_FLEX_DESC_CMD_CS_EN = 0x20, + IECM_TX_FLEX_DESC_CMD_FILT_AU_EN = 0x40, + IECM_TX_FLEX_DESC_CMD_FILT_AU_EVICT = 0x80, +}; + +struct iecm_flex_tx_desc { + __le64 buf_addr; /* Packet buffer address */ + struct { + __le16 cmd_dtype; +#define IECM_FLEX_TXD_QW1_DTYPE_S 0 +#define IECM_FLEX_TXD_QW1_DTYPE_M \ + (0x1FUL << IECM_FLEX_TXD_QW1_DTYPE_S) +#define IECM_FLEX_TXD_QW1_CMD_S 5 +#define IECM_FLEX_TXD_QW1_CMD_M MAKEMASK(0x7FFUL, IECM_FLEX_TXD_QW1_CMD_S) + union { + /* DTYPE = IECM_TX_DESC_DTYPE_FLEX_DATA_(0x03) */ + u8 raw[4]; + + /* DTYPE = IECM_TX_DESC_DTYPE_FLEX_TSYN_L2TAG1 (0x06) */ + struct { + __le16 l2tag1; + u8 flex; + u8 tsync; + } tsync; + + /* DTYPE=IECM_TX_DESC_DTYPE_FLEX_L2TAG1_L2TAG2 (0x07) */ + struct { + __le16 l2tag1; + __le16 l2tag2; + } l2tags; + } flex; + __le16 buf_size; + } qw1; +}; + +struct iecm_flex_tx_sched_desc { + __le64 buf_addr; /* Packet buffer address */ + + /* DTYPE = IECM_TX_DESC_DTYPE_FLEX_FLOW_SCHE_16B (0x0C) */ + struct { + u8 cmd_dtype; +#define IECM_TXD_FLEX_FLOW_DTYPE_M 0x1F +#define IECM_TXD_FLEX_FLOW_CMD_EOP 0x20 +#define IECM_TXD_FLEX_FLOW_CMD_CS_EN 0x40 +#define IECM_TXD_FLEX_FLOW_CMD_RE 0x80 + + /* [23:23] Horizon Overflow bit, [22:0] timestamp */ + u8 ts[3]; +#define
IECM_TXD_FLOW_SCH_HORIZON_OVERFLOW_M 0x80 + + __le16 compl_tag; + __le16 rxr_bufsize; +#define IECM_TXD_FLEX_FLOW_RXR 0x4000 +#define IECM_TXD_FLEX_FLOW_BUFSIZE_M 0x3FFF + } qw1; +}; + +/* Common cmd fields for all flex context descriptors + * Note: these defines already account for the 5 bit dtype in the cmd_dtype + * field + */ +enum iecm_tx_flex_ctx_desc_cmd_bits { + IECM_TX_FLEX_CTX_DESC_CMD_TSO = 0x0020, + IECM_TX_FLEX_CTX_DESC_CMD_TSYN_EN = 0x0040, + IECM_TX_FLEX_CTX_DESC_CMD_L2TAG2 = 0x0080, + IECM_TX_FLEX_CTX_DESC_CMD_SWTCH_UPLNK = 0x0200, /* 2 bits */ + IECM_TX_FLEX_CTX_DESC_CMD_SWTCH_LOCAL = 0x0400, /* 2 bits */ + IECM_TX_FLEX_CTX_DESC_CMD_SWTCH_TARGETVSI = 0x0600, /* 2 bits */ +}; + +/* Standard flex descriptor TSO context quad word */ +struct iecm_flex_tx_tso_ctx_qw { + __le32 flex_tlen; +#define IECM_TXD_FLEX_CTX_TLEN_M 0x1FFFF +#define IECM_TXD_FLEX_TSO_CTX_FLEX_S 24 + __le16 mss_rt; +#define IECM_TXD_FLEX_CTX_MSS_RT_M 0x3FFF + u8 hdr_len; + u8 flex; +}; + +union iecm_flex_tx_ctx_desc { + /* DTYPE = IECM_TX_DESC_DTYPE_FLEX_CTX (0x04) */ + struct { + u8 qw0_flex[8]; + struct { + __le16 cmd_dtype; + __le16 l2tag1; + u8 qw1_flex[4]; + } qw1; + } gen; + + /* DTYPE = IECM_TX_DESC_DTYPE_FLEX_TSO_CTX (0x05) */ + struct { + struct iecm_flex_tx_tso_ctx_qw qw0; + struct { + __le16 cmd_dtype; + u8 flex[6]; + } qw1; + } tso; + + /* DTYPE = IECM_TX_DESC_DTYPE_FLEX_TSO_L2TAG2_PARSTAG_CTX (0x08) */ + struct { + struct iecm_flex_tx_tso_ctx_qw qw0; + struct { + __le16 cmd_dtype; + __le16 l2tag2; + u8 flex0; + u8 ptag; + u8 flex1[2]; + } qw1; + } tso_l2tag2_ptag; + + /* DTYPE = IECM_TX_DESC_DTYPE_FLEX_L2TAG2_CTX (0x0B) */ + struct { + u8 qw0_flex[8]; + struct { + __le16 cmd_dtype; + __le16 l2tag2; + u8 flex[4]; + } qw1; + } l2tag2; + + /* DTYPE = IECM_TX_DESC_DTYPE_REINJECT_CTX (0x02) */ + struct { + struct { + __le32 sa_domain; +#define IECM_TXD_FLEX_CTX_SA_DOM_M 0xFFFF +#define IECM_TXD_FLEX_CTX_SA_DOM_VAL 0x10000 + __le32 sa_idx; +#define IECM_TXD_FLEX_CTX_SAIDX_M 
0x1FFFFF + } qw0; + struct { + __le16 cmd_dtype; + __le16 txr2comp; +#define IECM_TXD_FLEX_CTX_TXR2COMP 0x1 + __le16 miss_txq_comp_tag; + __le16 miss_txq_id; + } qw1; + } reinjection_pkt; +}; + +/* Host Split Context Descriptors */ +struct iecm_flex_tx_hs_ctx_desc { + union { + struct { + __le32 host_fnum_tlen; +#define IECM_TXD_FLEX_CTX_TLEN_S 0 +#define IECM_TXD_FLEX_CTX_TLEN_M 0x1FFFF +#define IECM_TXD_FLEX_CTX_FNUM_S 18 +#define IECM_TXD_FLEX_CTX_FNUM_M 0x7FF +#define IECM_TXD_FLEX_CTX_HOST_S 29 +#define IECM_TXD_FLEX_CTX_HOST_M 0x7 + __le16 ftype_mss_rt; +#define IECM_TXD_FLEX_CTX_MSS_RT_0 0 +#define IECM_TXD_FLEX_CTX_MSS_RT_M 0x3FFF +#define IECM_TXD_FLEX_CTX_FTYPE_S 14 +#define IECM_TXD_FLEX_CTX_FTYPE_VF MAKEMASK(0x0, IECM_TXD_FLEX_CTX_FTYPE_S) +#define IECM_TXD_FLEX_CTX_FTYPE_VDEV MAKEMASK(0x1, IECM_TXD_FLEX_CTX_FTYPE_S) +#define IECM_TXD_FLEX_CTX_FTYPE_PF MAKEMASK(0x2, IECM_TXD_FLEX_CTX_FTYPE_S) + u8 hdr_len; + u8 ptag; + } tso; + struct { + u8 flex0[2]; + __le16 host_fnum_ftype; + u8 flex1[3]; + u8 ptag; + } no_tso; + } qw0; + + __le64 qw1_cmd_dtype; +#define IECM_TXD_FLEX_CTX_QW1_PASID_S 16 +#define IECM_TXD_FLEX_CTX_QW1_PASID_M 0xFFFFF +#define IECM_TXD_FLEX_CTX_QW1_PASID_VALID_S 36 +#define IECM_TXD_FLEX_CTX_QW1_PASID_VALID \ + MAKEMASK(0x1, IECM_TXD_FLEX_CTX_QW1_PASID_VALID_S) +#define IECM_TXD_FLEX_CTX_QW1_TPH_S 37 +#define IECM_TXD_FLEX_CTX_QW1_TPH \ + MAKEMASK(0x1, IECM_TXD_FLEX_CTX_QW1_TPH_S) +#define IECM_TXD_FLEX_CTX_QW1_PFNUM_S 38 +#define IECM_TXD_FLEX_CTX_QW1_PFNUM_M 0xF +/* The following are only valid for DTYPE = 0x09 and DTYPE = 0x0A */ +#define IECM_TXD_FLEX_CTX_QW1_SAIDX_S 42 +#define IECM_TXD_FLEX_CTX_QW1_SAIDX_M 0x1FFFFF +#define IECM_TXD_FLEX_CTX_QW1_SAIDX_VAL_S 63 +#define IECM_TXD_FLEX_CTX_QW1_SAIDX_VALID \ + MAKEMASK(0x1, IECM_TXD_FLEX_CTX_QW1_SAIDX_VAL_S) +/* The following are only valid for DTYPE = 0x0D and DTYPE = 0x0E */ +#define IECM_TXD_FLEX_CTX_QW1_FLEX0_S 48 +#define IECM_TXD_FLEX_CTX_QW1_FLEX0_M 0xFF +#define
IECM_TXD_FLEX_CTX_QW1_FLEX1_S 56 +#define IECM_TXD_FLEX_CTX_QW1_FLEX1_M 0xFF +}; + +/* Rx */ +/* For iecm_splitq_base_rx_flex desc members */ +#define IECM_RXD_FLEX_PTYPE_S 0 +#define IECM_RXD_FLEX_PTYPE_M MAKEMASK(0x3FFUL, IECM_RXD_FLEX_PTYPE_S) +#define IECM_RXD_FLEX_UMBCAST_S 10 +#define IECM_RXD_FLEX_UMBCAST_M MAKEMASK(0x3UL, IECM_RXD_FLEX_UMBCAST_S) +#define IECM_RXD_FLEX_FF0_S 12 +#define IECM_RXD_FLEX_FF0_M MAKEMASK(0xFUL, IECM_RXD_FLEX_FF0_S) +#define IECM_RXD_FLEX_LEN_PBUF_S 0 +#define IECM_RXD_FLEX_LEN_PBUF_M \ + MAKEMASK(0x3FFFUL, IECM_RXD_FLEX_LEN_PBUF_S) +#define IECM_RXD_FLEX_GEN_S 14 +#define IECM_RXD_FLEX_GEN_M BIT_ULL(IECM_RXD_FLEX_GEN_S) +#define IECM_RXD_FLEX_BUFQ_ID_S 15 +#define IECM_RXD_FLEX_BUFQ_ID_M BIT_ULL(IECM_RXD_FLEX_BUFQ_ID_S) +#define IECM_RXD_FLEX_LEN_HDR_S 0 +#define IECM_RXD_FLEX_LEN_HDR_M \ + MAKEMASK(0x3FFUL, IECM_RXD_FLEX_LEN_HDR_S) +#define IECM_RXD_FLEX_RSC_S 10 +#define IECM_RXD_FLEX_RSC_M BIT_ULL(IECM_RXD_FLEX_RSC_S) +#define IECM_RXD_FLEX_SPH_S 11 +#define IECM_RXD_FLEX_SPH_M BIT_ULL(IECM_RXD_FLEX_SPH_S) +#define IECM_RXD_FLEX_MISS_S 12 +#define IECM_RXD_FLEX_MISS_M BIT_ULL(IECM_RXD_FLEX_MISS_S) +#define IECM_RXD_FLEX_FF1_S 13 +#define IECM_RXD_FLEX_FF1_M MAKEMASK(0x7UL, IECM_RXD_FLEX_FF1_S) + +/* For iecm_singleq_base_rx_legacy desc members */ +#define IECM_RXD_QW1_LEN_SPH_S 63 +#define IECM_RXD_QW1_LEN_SPH_M BIT_ULL(IECM_RXD_QW1_LEN_SPH_S) +#define IECM_RXD_QW1_LEN_HBUF_S 52 +#define IECM_RXD_QW1_LEN_HBUF_M MAKEMASK(0x7FFULL, IECM_RXD_QW1_LEN_HBUF_S) +#define IECM_RXD_QW1_LEN_PBUF_S 38 +#define IECM_RXD_QW1_LEN_PBUF_M MAKEMASK(0x3FFFULL, IECM_RXD_QW1_LEN_PBUF_S) +#define IECM_RXD_QW1_PTYPE_S 30 +#define IECM_RXD_QW1_PTYPE_M MAKEMASK(0xFFULL, IECM_RXD_QW1_PTYPE_S) +#define IECM_RXD_QW1_ERROR_S 19 +#define IECM_RXD_QW1_ERROR_M MAKEMASK(0xFFUL, IECM_RXD_QW1_ERROR_S) +#define IECM_RXD_QW1_STATUS_S 0 +#define IECM_RXD_QW1_STATUS_M MAKEMASK(0x7FFFFUL, IECM_RXD_QW1_STATUS_S) + +enum iecm_rx_flex_desc_status_error_0_qw1_bits { +
/* Note: These are predefined bit offsets */ + IECM_RX_FLEX_DESC_STATUS0_DD_S = 0, + IECM_RX_FLEX_DESC_STATUS0_EOF_S, + IECM_RX_FLEX_DESC_STATUS0_HBO_S, + IECM_RX_FLEX_DESC_STATUS0_L3L4P_S, + IECM_RX_FLEX_DESC_STATUS0_XSUM_IPE_S, + IECM_RX_FLEX_DESC_STATUS0_XSUM_L4E_S, + IECM_RX_FLEX_DESC_STATUS0_XSUM_EIPE_S, + IECM_RX_FLEX_DESC_STATUS0_XSUM_EUDPE_S, +}; + +enum iecm_rx_flex_desc_status_error_0_qw0_bits { + IECM_RX_FLEX_DESC_STATUS0_LPBK_S = 0, + IECM_RX_FLEX_DESC_STATUS0_IPV6EXADD_S, + IECM_RX_FLEX_DESC_STATUS0_RXE_S, + IECM_RX_FLEX_DESC_STATUS0_CRCP_S, + IECM_RX_FLEX_DESC_STATUS0_RSS_VALID_S, + IECM_RX_FLEX_DESC_STATUS0_L2TAG1P_S, + IECM_RX_FLEX_DESC_STATUS0_XTRMD0_VALID_S, + IECM_RX_FLEX_DESC_STATUS0_XTRMD1_VALID_S, + IECM_RX_FLEX_DESC_STATUS0_LAST /* this entry must be last!!! */ +}; + +enum iecm_rx_flex_desc_status_error_1_bits { + /* Note: These are predefined bit offsets */ + IECM_RX_FLEX_DESC_STATUS1_RSVD_S = 0, /* 2 bits */ + IECM_RX_FLEX_DESC_STATUS1_ATRAEFAIL_S = 2, + IECM_RX_FLEX_DESC_STATUS1_L2TAG2P_S = 3, + IECM_RX_FLEX_DESC_STATUS1_XTRMD2_VALID_S = 4, + IECM_RX_FLEX_DESC_STATUS1_XTRMD3_VALID_S = 5, + IECM_RX_FLEX_DESC_STATUS1_XTRMD4_VALID_S = 6, + IECM_RX_FLEX_DESC_STATUS1_XTRMD5_VALID_S = 7, + IECM_RX_FLEX_DESC_STATUS1_LAST /* this entry must be last!!! 
*/ +}; + +enum iecm_rx_base_desc_status_bits { + /* Note: These are predefined bit offsets */ + IECM_RX_BASE_DESC_STATUS_DD_S = 0, + IECM_RX_BASE_DESC_STATUS_EOF_S = 1, + IECM_RX_BASE_DESC_STATUS_L2TAG1P_S = 2, + IECM_RX_BASE_DESC_STATUS_L3L4P_S = 3, + IECM_RX_BASE_DESC_STATUS_CRCP_S = 4, + IECM_RX_BASE_DESC_STATUS_RSVD_S = 5, /* 3 BITS */ + IECM_RX_BASE_DESC_STATUS_EXT_UDP_0_S = 8, + IECM_RX_BASE_DESC_STATUS_UMBCAST_S = 9, /* 2 BITS */ + IECM_RX_BASE_DESC_STATUS_FLM_S = 11, + IECM_RX_BASE_DESC_STATUS_FLTSTAT_S = 12, /* 2 BITS */ + IECM_RX_BASE_DESC_STATUS_LPBK_S = 14, + IECM_RX_BASE_DESC_STATUS_IPV6EXADD_S = 15, + IECM_RX_BASE_DESC_STATUS_RSVD1_S = 16, /* 2 BITS */ + IECM_RX_BASE_DESC_STATUS_INT_UDP_0_S = 18, + IECM_RX_BASE_DESC_STATUS_LAST /* this entry must be last!!! */ +}; + +enum iecm_rx_desc_fltstat_values { + IECM_RX_DESC_FLTSTAT_NO_DATA = 0, + IECM_RX_DESC_FLTSTAT_RSV_FD_ID = 1, /* 16byte desc? FD_ID : RSV */ + IECM_RX_DESC_FLTSTAT_RSV = 2, + IECM_RX_DESC_FLTSTAT_RSS_HASH = 3, +}; + +enum iecm_rx_base_desc_error_bits { + /* Note: These are predefined bit offsets */ + IECM_RX_BASE_DESC_ERROR_RXE_S = 0, + IECM_RX_BASE_DESC_ERROR_ATRAEFAIL_S = 1, + IECM_RX_BASE_DESC_ERROR_HBO_S = 2, + IECM_RX_BASE_DESC_ERROR_L3L4E_S = 3, /* 3 BITS */ + IECM_RX_BASE_DESC_ERROR_IPE_S = 3, + IECM_RX_BASE_DESC_ERROR_L4E_S = 4, + IECM_RX_BASE_DESC_ERROR_EIPE_S = 5, + IECM_RX_BASE_DESC_ERROR_OVERSIZE_S = 6, + IECM_RX_BASE_DESC_ERROR_RSVD_S = 7 +}; + +/* Receive Descriptors */ +/* splitq buf*/ +struct iecm_splitq_rx_buf_desc { + struct { + __le16 buf_id; /* Buffer Identifier */ + __le16 rsvd0; + __le32 rsvd1; + } qword0; + __le64 pkt_addr; /* Packet buffer address */ + __le64 hdr_addr; /* Header buffer address */ + __le64 rsvd2; +}; /* read used with buffer queues*/ + +/* singleq buf */ +struct iecm_singleq_rx_buf_desc { + __le64 pkt_addr; /* Packet buffer address */ + __le64 hdr_addr; /* Header buffer address */ + __le64 rsvd1; + __le64 rsvd2; +}; /* read used with buffer queues*/ 
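As a rough illustration of how the base Rx descriptor status bits above would typically be consumed, the sketch below extracts the 19-bit status field from the descriptor's qword1 and then tests the DD (descriptor done) bit. The `SK_`-prefixed names are hypothetical stand-ins defined locally, since `MAKEMASK()`/`BIT_ULL()` in the patch are kernel-side helpers; this is not the driver's own clean-path code.

```c
#include <assert.h>
#include <stdint.h>

/* Hypothetical stand-ins for the kernel's BIT_ULL()/MAKEMASK() helpers */
#define SK_BIT_ULL(n)		(1ULL << (n))
#define SK_MAKEMASK(m, s)	((uint64_t)(m) << (s))

/* Mirrors IECM_RXD_QW1_STATUS_S/_M: status occupies bits [18:0] of qword1 */
#define SK_RXD_QW1_STATUS_S	0
#define SK_RXD_QW1_STATUS_M	SK_MAKEMASK(0x7FFFFULL, SK_RXD_QW1_STATUS_S)

/* Bit offsets within the status field, as in iecm_rx_base_desc_status_bits */
enum {
	SK_RX_BASE_DESC_STATUS_DD_S  = 0,
	SK_RX_BASE_DESC_STATUS_EOF_S = 1,
};

/* Extract the status field from status_error_ptype_len, then test DD */
static int rx_desc_done(uint64_t status_error_ptype_len)
{
	uint64_t status = (status_error_ptype_len & SK_RXD_QW1_STATUS_M) >>
			  SK_RXD_QW1_STATUS_S;

	return !!(status & SK_BIT_ULL(SK_RX_BASE_DESC_STATUS_DD_S));
}
```

Masking before shifting matters here: the same qword also carries ptype (bits [37:30]), error (bits [26:19]), and length fields, so a raw bit test without the status mask could alias against those.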
+ +union iecm_rx_buf_desc { + struct iecm_singleq_rx_buf_desc read; + struct iecm_splitq_rx_buf_desc split_rd; +}; + +/* splitq compl */ +struct iecm_flex_rx_desc { + /* Qword 0 */ + u8 rxdid_ucast; /* profile_id=[3:0] */ + /* rsvd=[5:4] */ + /* ucast=[7:6] */ + u8 status_err0_qw0; + __le16 ptype_err_fflags0; /* ptype=[9:0] */ + /* ip_hdr_err=[10:10] */ + /* udp_len_err=[11:11] */ + /* ff0=[15:12] */ + __le16 pktlen_gen_bufq_id; /* plen=[13:0] */ + /* gen=[14:14] only in splitq */ + /* bufq_id=[15:15] only in splitq */ + __le16 hdrlen_flags; /* header=[9:0] */ + /* rsc=[10:10] only in splitq */ + /* sph=[11:11] only in splitq */ + /* ext_udp_0=[12:12] */ + /* int_udp_0=[13:13] */ + /* trunc_mirr=[14:14] */ + /* miss_prepend=[15:15] */ + /* Qword 1 */ + u8 status_err0_qw1; + u8 status_err1; + u8 fflags1; + u8 ts_low; + union { + __le16 fmd0; + __le16 buf_id; /* only in splitq */ + } fmd0_bufid; + union { + __le16 fmd1; + __le16 raw_cs; + __le16 l2tag1; + __le16 rscseglen; + } fmd1_misc; + /* Qword 2 */ + union { + __le16 fmd2; + __le16 hash1; + } fmd2_hash1; + union { + u8 fflags2; + u8 mirrorid; + u8 hash2; + } ff2_mirrid_hash2; + u8 hash3; + union { + __le16 fmd3; + __le16 l2tag2; + } fmd3_l2tag2; + __le16 fmd4; + /* Qword 3 */ + union { + __le16 fmd5; + __le16 l2tag1; + } fmd5_l2tag1; + __le16 fmd6; + union { + struct { + __le16 fmd7_0; + __le16 fmd7_1; + } fmd7; + __le32 ts_high; + } flex_ts; +}; /* writeback */ + +/* singleq wb(compl) */ +struct iecm_singleq_base_rx_desc { + struct { + struct { + __le16 mirroring_status; + __le16 l2tag1; + } lo_dword; + union { + __le32 rss; /* RSS Hash */ + __le32 fd_id; /* Flow Director filter id */ + } hi_dword; + } qword0; + struct { + /* status/error/PTYPE/length */ + __le64 status_error_ptype_len; + } qword1; + struct { + __le16 ext_status; /* extended status */ + __le16 rsvd; + __le16 l2tag2_1; + __le16 l2tag2_2; + } qword2; + struct { + __le32 reserved; + __le32 fd_id; + } qword3; +}; /* writeback */ + +union 
iecm_rx_desc { + struct iecm_singleq_rx_buf_desc read; + struct iecm_singleq_base_rx_desc base_wb; + struct iecm_flex_rx_desc flex_wb; +}; +#endif /* _IECM_LAN_TXRX_H_ */ diff --git a/include/linux/net/intel/iecm_txrx.h b/include/linux/net/intel/iecm_txrx.h new file mode 100644 index 000000000000..08b911a0fb2f --- /dev/null +++ b/include/linux/net/intel/iecm_txrx.h @@ -0,0 +1,581 @@ +/* SPDX-License-Identifier: GPL-2.0-only */ +/* Copyright (C) 2020 Intel Corporation */ + +#ifndef _IECM_TXRX_H_ +#define _IECM_TXRX_H_ + +#define IECM_MAX_Q 16 +/* Mailbox Queue */ +#define IECM_MAX_NONQ 1 +#define IECM_MAX_TXQ_DESC 512 +#define IECM_MAX_RXQ_DESC 512 +#define IECM_MIN_TXQ_DESC 128 +#define IECM_MIN_RXQ_DESC 128 +#define IECM_REQ_DESC_MULTIPLE 32 + +#define IECM_DFLT_SINGLEQ_TX_Q_GROUPS 1 +#define IECM_DFLT_SINGLEQ_RX_Q_GROUPS 1 +#define IECM_DFLT_SINGLEQ_TXQ_PER_GROUP 4 +#define IECM_DFLT_SINGLEQ_RXQ_PER_GROUP 4 + +#define IECM_COMPLQ_PER_GROUP 1 +#define IECM_BUFQS_PER_RXQ_SET 2 + +#define IECM_DFLT_SPLITQ_TX_Q_GROUPS 4 +#define IECM_DFLT_SPLITQ_RX_Q_GROUPS 4 +#define IECM_DFLT_SPLITQ_TXQ_PER_GROUP 1 +#define IECM_DFLT_SPLITQ_RXQ_PER_GROUP 1 + +/* Default vector sharing */ +#define IECM_MAX_NONQ_VEC 1 +#define IECM_MAX_Q_VEC 4 /* For Tx Completion queue and Rx queue */ +#define IECM_MAX_RDMA_VEC 2 /* To share with RDMA */ +#define IECM_MIN_RDMA_VEC 1 /* Minimum vectors to be shared with RDMA */ +#define IECM_MIN_VEC 3 /* One for mailbox, one for data queues, one + * for RDMA + */ + +#define IECM_DFLT_TX_Q_DESC_COUNT 512 +#define IECM_DFLT_TX_COMPLQ_DESC_COUNT 512 +#define IECM_DFLT_RX_Q_DESC_COUNT 512 +#define IECM_DFLT_RX_BUFQ_DESC_COUNT 512 + +#define IECM_RX_BUF_WRITE 16 /* Must be power of 2 */ +#define IECM_RX_HDR_SIZE 256 +#define IECM_RX_BUF_2048 2048 +#define IECM_RX_BUF_STRIDE 64 +#define IECM_LOW_WATERMARK 64 +#define IECM_HDR_BUF_SIZE 256 +#define IECM_PACKET_HDR_PAD \ + (ETH_HLEN + ETH_FCS_LEN + (VLAN_HLEN * 2)) +#define IECM_MAX_RXBUFFER 9728 +#define 
IECM_MAX_MTU \ + (IECM_MAX_RXBUFFER - IECM_PACKET_HDR_PAD) + +#define IECM_SINGLEQ_RX_BUF_DESC(R, i) \ + (&(((struct iecm_singleq_rx_buf_desc *)((R)->desc_ring))[i])) +#define IECM_SPLITQ_RX_BUF_DESC(R, i) \ + (&(((struct iecm_splitq_rx_buf_desc *)((R)->desc_ring))[i])) + +#define IECM_BASE_TX_DESC(R, i) \ + (&(((struct iecm_base_tx_desc *)((R)->desc_ring))[i])) +#define IECM_SPLITQ_TX_COMPLQ_DESC(R, i) \ + (&(((struct iecm_splitq_tx_compl_desc *)((R)->desc_ring))[i])) + +#define IECM_FLEX_TX_DESC(R, i) \ + (&(((union iecm_tx_flex_desc *)((R)->desc_ring))[i])) +#define IECM_FLEX_TX_CTX_DESC(R, i) \ + (&(((union iecm_flex_tx_ctx_desc *)((R)->desc_ring))[i])) + +#define IECM_DESC_UNUSED(R) \ + ((((R)->next_to_clean > (R)->next_to_use) ? 0 : (R)->desc_count) + \ + (R)->next_to_clean - (R)->next_to_use - 1) + +union iecm_tx_flex_desc { + struct iecm_flex_tx_desc q; /* queue based scheduling */ + struct iecm_flex_tx_sched_desc flow; /* flow based scheduling */ +}; + +struct iecm_tx_buf { + struct hlist_node hlist; + void *next_to_watch; + struct sk_buff *skb; + unsigned int bytecount; + unsigned short gso_segs; +#define IECM_TX_FLAGS_TSO BIT(0) + u32 tx_flags; + DEFINE_DMA_UNMAP_ADDR(dma); + DEFINE_DMA_UNMAP_LEN(len); + u16 compl_tag; /* Unique identifier for buffer; used to + * compare with completion tag returned + * in buffer completion event + */ +}; + +struct iecm_buf_lifo { + u16 top; + u16 size; + struct iecm_tx_buf **bufs; +}; + +struct iecm_tx_offload_params { + u16 td_cmd; /* command field to be inserted into descriptor */ + u32 tso_len; /* total length of payload to segment */ + u16 mss; + u8 tso_hdr_len; /* length of headers to be duplicated */ + + /* Flow scheduling offload timestamp, formatting as hw expects it */ +#define IECM_TW_TIME_STAMP_GRAN_512_DIV_S 9 +#define IECM_TW_TIME_STAMP_GRAN_1024_DIV_S 10 +#define IECM_TW_TIME_STAMP_GRAN_2048_DIV_S 11 +#define IECM_TW_TIME_STAMP_GRAN_4096_DIV_S 12 + u64 desc_ts; + + /* For legacy offloads */ + u32 
hdr_offsets; +}; + +struct iecm_tx_splitq_params { + /* Descriptor build function pointer */ + void (*splitq_build_ctb)(union iecm_tx_flex_desc *desc, + struct iecm_tx_splitq_params *params, + u16 td_cmd, u16 size); + + /* General descriptor info */ + enum iecm_tx_desc_dtype_value dtype; + u16 eop_cmd; + u16 compl_tag; /* only relevant for flow scheduling */ + + struct iecm_tx_offload_params offload; +}; + +#define IECM_TX_COMPLQ_CLEAN_BUDGET 256 +#define IECM_TX_MIN_LEN 17 +#define IECM_TX_DESCS_FOR_SKB_DATA_PTR 1 +#define IECM_TX_MAX_BUF 8 +#define IECM_TX_DESCS_PER_CACHE_LINE 4 +#define IECM_TX_DESCS_FOR_CTX 1 +/* TX descriptors needed, worst case */ +#define IECM_TX_DESC_NEEDED (MAX_SKB_FRAGS + IECM_TX_DESCS_FOR_CTX + \ + IECM_TX_DESCS_PER_CACHE_LINE + \ + IECM_TX_DESCS_FOR_SKB_DATA_PTR) + +/* The size limit for a transmit buffer in a descriptor is (16K - 1). + * In order to align with the read requests we will align the value to + * the nearest 4K which represents our maximum read request size. 
+ */ +#define IECM_TX_MAX_READ_REQ_SIZE 4096 +#define IECM_TX_MAX_DESC_DATA (16 * 1024 - 1) +#define IECM_TX_MAX_DESC_DATA_ALIGNED \ + (~(IECM_TX_MAX_READ_REQ_SIZE - 1) & IECM_TX_MAX_DESC_DATA) + +#define IECM_RX_DMA_ATTR \ + (DMA_ATTR_SKIP_CPU_SYNC | DMA_ATTR_WEAK_ORDERING) +#define IECM_RX_DESC(R, i) \ + (&(((union iecm_rx_desc *)((R)->desc_ring))[i])) + +struct iecm_rx_buf { + struct sk_buff *skb; + dma_addr_t dma; + struct page *page; + unsigned int page_offset; + u16 pagecnt_bias; + u16 buf_id; +}; + +/* Packet type non-ip values */ +enum iecm_rx_ptype_l2 { + IECM_RX_PTYPE_L2_RESERVED = 0, + IECM_RX_PTYPE_L2_MAC_PAY2 = 1, + IECM_RX_PTYPE_L2_TIMESYNC_PAY2 = 2, + IECM_RX_PTYPE_L2_FIP_PAY2 = 3, + IECM_RX_PTYPE_L2_OUI_PAY2 = 4, + IECM_RX_PTYPE_L2_MACCNTRL_PAY2 = 5, + IECM_RX_PTYPE_L2_LLDP_PAY2 = 6, + IECM_RX_PTYPE_L2_ECP_PAY2 = 7, + IECM_RX_PTYPE_L2_EVB_PAY2 = 8, + IECM_RX_PTYPE_L2_QCN_PAY2 = 9, + IECM_RX_PTYPE_L2_EAPOL_PAY2 = 10, + IECM_RX_PTYPE_L2_ARP = 11, +}; + +enum iecm_rx_ptype_outer_ip { + IECM_RX_PTYPE_OUTER_L2 = 0, + IECM_RX_PTYPE_OUTER_IP = 1, +}; + +enum iecm_rx_ptype_outer_ip_ver { + IECM_RX_PTYPE_OUTER_NONE = 0, + IECM_RX_PTYPE_OUTER_IPV4 = 1, + IECM_RX_PTYPE_OUTER_IPV6 = 2, +}; + +enum iecm_rx_ptype_outer_fragmented { + IECM_RX_PTYPE_NOT_FRAG = 0, + IECM_RX_PTYPE_FRAG = 1, +}; + +enum iecm_rx_ptype_tunnel_type { + IECM_RX_PTYPE_TUNNEL_NONE = 0, + IECM_RX_PTYPE_TUNNEL_IP_IP = 1, + IECM_RX_PTYPE_TUNNEL_IP_GRENAT = 2, + IECM_RX_PTYPE_TUNNEL_IP_GRENAT_MAC = 3, + IECM_RX_PTYPE_TUNNEL_IP_GRENAT_MAC_VLAN = 4, +}; + +enum iecm_rx_ptype_tunnel_end_prot { + IECM_RX_PTYPE_TUNNEL_END_NONE = 0, + IECM_RX_PTYPE_TUNNEL_END_IPV4 = 1, + IECM_RX_PTYPE_TUNNEL_END_IPV6 = 2, +}; + +enum iecm_rx_ptype_inner_prot { + IECM_RX_PTYPE_INNER_PROT_NONE = 0, + IECM_RX_PTYPE_INNER_PROT_UDP = 1, + IECM_RX_PTYPE_INNER_PROT_TCP = 2, + IECM_RX_PTYPE_INNER_PROT_SCTP = 3, + IECM_RX_PTYPE_INNER_PROT_ICMP = 4, + IECM_RX_PTYPE_INNER_PROT_TIMESYNC = 5, +}; + +enum 
iecm_rx_ptype_payload_layer { + IECM_RX_PTYPE_PAYLOAD_LAYER_NONE = 0, + IECM_RX_PTYPE_PAYLOAD_LAYER_PAY2 = 1, + IECM_RX_PTYPE_PAYLOAD_LAYER_PAY3 = 2, + IECM_RX_PTYPE_PAYLOAD_LAYER_PAY4 = 3, +}; + +struct iecm_rx_ptype_decoded { + u32 ptype:10; + u32 known:1; + u32 outer_ip:1; + u32 outer_ip_ver:2; + u32 outer_frag:1; + u32 tunnel_type:3; + u32 tunnel_end_prot:2; + u32 tunnel_end_frag:1; + u32 inner_prot:4; + u32 payload_layer:3; +}; + +enum iecm_rx_hsplit { + IECM_RX_NO_HDR_SPLIT = 0, + IECM_RX_HDR_SPLIT = 1, + IECM_RX_HDR_SPLIT_PERF = 2, +}; + +/* The iecm_ptype_lkup table is used to convert from the 10-bit ptype in the + * hardware to a bit-field that can be used by SW to more easily determine the + * packet type. + * + * Macros are used to shorten the table lines and make this table human + * readable. + * + * We store the PTYPE in the top byte of the bit field - this is just so that + * we can check that the table doesn't have a row missing, as the index into + * the table should be the PTYPE. 
+ * + * Typical work flow: + * + * IF NOT iecm_ptype_lkup[ptype].known + * THEN + * Packet is unknown + * ELSE IF iecm_ptype_lkup[ptype].outer_ip == IECM_RX_PTYPE_OUTER_IP + * Use the rest of the fields to look at the tunnels, inner protocols, etc + * ELSE + * Use the enum iecm_rx_ptype_l2 to decode the packet type + * ENDIF + */ +/* macro to make the table lines short */ +#define IECM_PTT(PTYPE, OUTER_IP, OUTER_IP_VER, OUTER_FRAG, T, TE, TEF, I, PL)\ + { PTYPE, \ + 1, \ + IECM_RX_PTYPE_OUTER_##OUTER_IP, \ + IECM_RX_PTYPE_OUTER_##OUTER_IP_VER, \ + IECM_RX_PTYPE_##OUTER_FRAG, \ + IECM_RX_PTYPE_TUNNEL_##T, \ + IECM_RX_PTYPE_TUNNEL_END_##TE, \ + IECM_RX_PTYPE_##TEF, \ + IECM_RX_PTYPE_INNER_PROT_##I, \ + IECM_RX_PTYPE_PAYLOAD_LAYER_##PL } + +#define IECM_PTT_UNUSED_ENTRY(PTYPE) { PTYPE, 0, 0, 0, 0, 0, 0, 0, 0, 0 } + +/* shorter macros makes the table fit but are terse */ +#define IECM_RX_PTYPE_NOF IECM_RX_PTYPE_NOT_FRAG +#define IECM_RX_PTYPE_FRG IECM_RX_PTYPE_FRAG +#define IECM_RX_PTYPE_INNER_PROT_TS IECM_RX_PTYPE_INNER_PROT_TIMESYNC +#define IECM_RX_SUPP_PTYPE 18 +#define IECM_RX_MAX_PTYPE 1024 + +#define IECM_INT_NAME_STR_LEN (IFNAMSIZ + 16) + +enum iecm_queue_flags_t { + __IECM_Q_GEN_CHK, + __IECM_Q_FLOW_SCH_EN, + __IECM_Q_SW_MARKER, + __IECM_Q_FLAGS_NBITS, +}; + +struct iecm_intr_reg { + u32 dyn_ctl; + u32 dyn_ctl_intena_m; + u32 dyn_ctl_clrpba_m; + u32 dyn_ctl_itridx_s; + u32 dyn_ctl_itridx_m; + u32 dyn_ctl_intrvl_s; + u32 itr; +}; + +struct iecm_q_vector { + struct iecm_vport *vport; + cpumask_t affinity_mask; + struct napi_struct napi; + u16 v_idx; /* index in the vport->q_vector array */ + u8 itr_countdown; /* when 0 should adjust ITR */ + struct iecm_intr_reg intr_reg; + int num_txq; + struct iecm_queue **tx; + int num_rxq; + struct iecm_queue **rx; + char name[IECM_INT_NAME_STR_LEN]; +}; + +struct iecm_rx_queue_stats { + u64 packets; + u64 bytes; + u64 generic_csum; + u64 basic_csum; + u64 csum_err; + u64 hsplit_hbo; +}; + +struct iecm_tx_queue_stats { + u64 
packets; + u64 bytes; +}; + +union iecm_queue_stats { + struct iecm_rx_queue_stats rx; + struct iecm_tx_queue_stats tx; +}; + +enum iecm_latency_range { + IECM_LOWEST_LATENCY = 0, + IECM_LOW_LATENCY = 1, + IECM_BULK_LATENCY = 2, +}; + +struct iecm_itr { + u16 current_itr; + u16 target_itr; + enum virtchnl_itr_idx itr_idx; + union iecm_queue_stats stats; /* will reset to 0 when adjusting ITR */ + enum iecm_latency_range latency_range; + unsigned long next_update; /* jiffies of last ITR update */ +}; + +/* indices into GLINT_ITR registers */ +#define IECM_ITR_ADAPTIVE_MIN_INC 0x0002 +#define IECM_ITR_ADAPTIVE_MIN_USECS 0x0002 +#define IECM_ITR_ADAPTIVE_MAX_USECS 0x007e +#define IECM_ITR_ADAPTIVE_LATENCY 0x8000 +#define IECM_ITR_ADAPTIVE_BULK 0x0000 +#define ITR_IS_BULK(x) (!((x) & IECM_ITR_ADAPTIVE_LATENCY)) + +#define IECM_ITR_DYNAMIC 0x8000 /* use top bit as a flag */ +#define IECM_ITR_MAX 0x1FE0 +#define IECM_ITR_100K 0x000A +#define IECM_ITR_50K 0x0014 +#define IECM_ITR_20K 0x0032 +#define IECM_ITR_18K 0x003C +#define IECM_ITR_GRAN_S 1 /* Assume ITR granularity is 2us */ +#define IECM_ITR_MASK 0x1FFE /* ITR register value alignment mask */ +#define ITR_REG_ALIGN(setting) __ALIGN_MASK(setting, ~IECM_ITR_MASK) +#define IECM_ITR_IS_DYNAMIC(setting) (!!((setting) & IECM_ITR_DYNAMIC)) +#define IECM_ITR_SETTING(setting) ((setting) & ~IECM_ITR_DYNAMIC) +#define ITR_COUNTDOWN_START 100 +#define IECM_ITR_TX_DEF IECM_ITR_20K +#define IECM_ITR_RX_DEF IECM_ITR_50K + +/* queue associated with a vport */ +struct iecm_queue { + struct device *dev; /* Used for DMA mapping */ + struct iecm_vport *vport; /* Back reference to associated vport */ + union { + struct iecm_txq_group *txq_grp; + struct iecm_rxq_group *rxq_grp; + }; + /* bufq: Used as group id, either 0 or 1; on clean, the Buf Q uses this + * index to determine which group of refill queues to clean. + * Bufqs are used in splitq only. + * txq: Index to map between Tx Q group and hot path Tx ptrs stored in + * vport.
Used in both single Q/split Q. + */ + u16 idx; + /* Used for both Q models single and split. In split Q model relevant + * only to Tx Q and Rx Q + */ + u8 __iomem *tail; + /* Used in both single and split Q. In single Q, Tx Q uses tx_buf and + * Rx Q uses rx_buf. In split Q, Tx Q uses tx_buf, Rx Q uses skb, and + * Buf Q uses rx_buf. + */ + union { + struct iecm_tx_buf *tx_buf; + struct { + struct iecm_rx_buf *buf; + struct iecm_rx_buf *hdr_buf; + } rx_buf; + struct sk_buff *skb; + }; + enum virtchnl_queue_type q_type; + /* Queue id(Tx/Tx compl/Rx/Bufq) */ + u16 q_id; + u16 desc_count; /* Number of descriptors */ + + /* Relevant in both split & single Tx Q & Buf Q*/ + u16 next_to_use; + /* In split q model only relevant for Tx Compl Q and Rx Q */ + u16 next_to_clean; /* used in interrupt processing */ + /* Used only for Rx. In split Q model only relevant to Rx Q */ + u16 next_to_alloc; + /* Generation bit check stored, as HW flips the bit at Queue end */ + DECLARE_BITMAP(flags, __IECM_Q_FLAGS_NBITS); + + union iecm_queue_stats q_stats; + struct u64_stats_sync stats_sync; + + enum iecm_rx_hsplit rx_hsplit_en; + + u16 rx_hbuf_size; /* Header buffer size */ + u16 rx_buf_size; + u16 rx_max_pkt_size; + u16 rx_buf_stride; + u8 rsc_low_watermark; + /* Used for both Q models single and split. 
In split Q model relevant + * only to Tx compl Q and Rx compl Q + */ + struct iecm_q_vector *q_vector; /* Back reference to associated vector */ + struct iecm_itr itr; + unsigned int size; /* length of descriptor ring in bytes */ + dma_addr_t dma; /* physical address of ring */ + void *desc_ring; /* Descriptor ring memory */ + + struct iecm_buf_lifo buf_stack; /* Stack of empty buffers to store + * buffer info for out of order + * buffer completions + */ + u16 tx_buf_key; /* 16 bit unique "identifier" (index) + * to be used as the completion tag when + * queue is using flow based scheduling + */ + DECLARE_HASHTABLE(sched_buf_hash, 12); +} ____cacheline_internodealigned_in_smp; + +/* Software queues are used in splitq mode to manage buffers between rxq + * producer and the bufq consumer. These are required in order to maintain a + * lockless buffer management system and are strictly software only constructs. + */ +struct iecm_sw_queue { + u16 next_to_clean ____cacheline_aligned_in_smp; + u16 next_to_alloc ____cacheline_aligned_in_smp; + DECLARE_BITMAP(flags, __IECM_Q_FLAGS_NBITS) + ____cacheline_aligned_in_smp; + u16 *ring ____cacheline_aligned_in_smp; + u16 q_entries; +} ____cacheline_internodealigned_in_smp; + +/* Splitq only. iecm_rxq_set associates an rxq with at most two refillqs. + * Each rxq needs a refillq to return used buffers back to the respective bufq. + * Bufqs then clean these refillqs for buffers to give to hardware. + */ +struct iecm_rxq_set { + struct iecm_queue rxq; + /* refillqs assoc with bufqX mapped to this rxq */ + struct iecm_sw_queue *refillq0; + struct iecm_sw_queue *refillq1; +}; + +/* Splitq only. iecm_bufq_set associates a bufq to an overflow and array of + * refillqs. In this bufq_set, there will be one refillq for each rxq in this + * rxq_group. Used buffers received by rxqs will be put on refillqs which + * bufqs will clean to return new buffers back to hardware. 
+ *
+ * Buffers needed by some number of rxqs associated in this rxq_group are
+ * managed by at most two bufqs (depending on performance configuration).
+ */
+struct iecm_bufq_set {
+	struct iecm_queue bufq;
+	struct iecm_sw_queue overflowq;
+	/* This is always equal to num_rxq_sets in iecm_rxq_group */
+	int num_refillqs;
+	struct iecm_sw_queue *refillqs;
+};
+
+/* In singleq mode, an rxq_group is simply an array of rxqs. In splitq, a
+ * rxq_group contains all the rxqs, bufqs, refillqs, and overflowqs needed to
+ * manage buffers in splitq mode.
+ */
+struct iecm_rxq_group {
+	struct iecm_vport *vport; /* back pointer */
+
+	union {
+		struct {
+			int num_rxq;
+			struct iecm_queue *rxqs;
+		} singleq;
+		struct {
+			int num_rxq_sets;
+			struct iecm_rxq_set *rxq_sets;
+			struct iecm_bufq_set *bufq_sets;
+		} splitq;
+	};
+};
+
+/* Between singleq and splitq, a txq_group is largely the same except for the
+ * complq. In splitq a single complq is responsible for handling completions
+ * for some number of txqs associated in this txq_group.
+ */
+struct iecm_txq_group {
+	struct iecm_vport *vport; /* back pointer */
+
+	int num_txq;
+	struct iecm_queue *txqs;
+
+	/* splitq only */
+	struct iecm_queue *complq;
+};
+
+int iecm_vport_singleq_napi_poll(struct napi_struct *napi, int budget);
+void iecm_vport_init_num_qs(struct iecm_vport *vport,
+			    struct virtchnl_create_vport *vport_msg);
+void iecm_vport_calc_num_q_desc(struct iecm_vport *vport);
+void iecm_vport_calc_total_qs(struct virtchnl_create_vport *vport_msg,
+			      int num_req_qs);
+void iecm_vport_calc_num_q_groups(struct iecm_vport *vport);
+int iecm_vport_queues_alloc(struct iecm_vport *vport);
+void iecm_vport_queues_rel(struct iecm_vport *vport);
+void iecm_vport_calc_num_q_vec(struct iecm_vport *vport);
+void iecm_vport_intr_dis_irq_all(struct iecm_vport *vport);
+void iecm_vport_intr_clear_dflt_itr(struct iecm_vport *vport);
+void iecm_vport_intr_update_itr_ena_irq(struct iecm_q_vector *q_vector);
+void iecm_vport_intr_deinit(struct iecm_vport *vport);
+int iecm_vport_intr_init(struct iecm_vport *vport);
+irqreturn_t
+iecm_vport_intr_clean_queues(int __always_unused irq, void *data);
+void iecm_vport_intr_ena_irq_all(struct iecm_vport *vport);
+int iecm_config_rss(struct iecm_vport *vport);
+void iecm_get_rx_qid_list(struct iecm_vport *vport, u16 *qid_list);
+void iecm_fill_dflt_rss_lut(struct iecm_vport *vport, u16 *qid_list);
+int iecm_init_rss(struct iecm_vport *vport);
+void iecm_deinit_rss(struct iecm_vport *vport);
+void iecm_rx_reuse_page(struct iecm_queue *rx_bufq, bool hsplit,
+			struct iecm_rx_buf *old_buf);
+void iecm_rx_add_frag(struct iecm_rx_buf *rx_buf, struct sk_buff *skb,
+		      unsigned int size);
+struct sk_buff *iecm_rx_construct_skb(struct iecm_queue *rxq,
+				      struct iecm_rx_buf *rx_buf,
+				      unsigned int size);
+bool iecm_rx_cleanup_headers(struct sk_buff *skb);
+bool iecm_rx_recycle_buf(struct iecm_queue *rx_bufq, bool hsplit,
+			 struct iecm_rx_buf *rx_buf);
+void iecm_rx_skb(struct
iecm_queue *rxq, struct sk_buff *skb);
+bool iecm_rx_buf_hw_alloc(struct iecm_queue *rxq, struct iecm_rx_buf *buf);
+void iecm_rx_buf_hw_update(struct iecm_queue *rxq, u32 val);
+void iecm_tx_buf_hw_update(struct iecm_queue *tx_q, u32 val);
+void iecm_tx_buf_rel(struct iecm_queue *tx_q, struct iecm_tx_buf *tx_buf);
+unsigned int iecm_tx_desc_count_required(struct sk_buff *skb);
+int iecm_tx_maybe_stop(struct iecm_queue *tx_q, unsigned int size);
+void iecm_tx_timeout(struct net_device *netdev,
+		     unsigned int __always_unused txqueue);
+netdev_tx_t iecm_tx_splitq_start(struct sk_buff *skb,
+				 struct net_device *netdev);
+netdev_tx_t iecm_tx_singleq_start(struct sk_buff *skb,
+				  struct net_device *netdev);
+bool iecm_rx_singleq_buf_hw_alloc_all(struct iecm_queue *rxq,
+				      u16 cleaned_count);
+void iecm_get_stats64(struct net_device *netdev,
+		      struct rtnl_link_stats64 *stats);
+#endif /* !_IECM_TXRX_H_ */

From patchwork Tue Jun 23 22:40:32 2020
X-Patchwork-Submitter: Jeff Kirsher
X-Patchwork-Id: 217260
From: Jeff Kirsher
To: davem@davemloft.net
Cc: Alice Michael, netdev@vger.kernel.org, nhorman@redhat.com, sassmann@redhat.com, Alan Brady, Phani Burra, Joshua Hay, Madhu Chittim, Pavan Kumar Linga, Donald Skidmore, Jesse Brandeburg, Sridhar Samudrala, Jeff Kirsher
Subject: [net-next v2 04/15] iecm: Common module introduction and function stubs
Date: Tue, 23 Jun 2020 15:40:32 -0700
Message-Id: <20200623224043.801728-5-jeffrey.t.kirsher@intel.com>
In-Reply-To: <20200623224043.801728-1-jeffrey.t.kirsher@intel.com>
References: <20200623224043.801728-1-jeffrey.t.kirsher@intel.com>
X-Mailing-List: netdev@vger.kernel.org

From: Alice Michael

This introduces function stubs for the framework of the common module.
Signed-off-by: Alice Michael
Signed-off-by: Alan Brady
Signed-off-by: Phani Burra
Signed-off-by: Joshua Hay
Signed-off-by: Madhu Chittim
Signed-off-by: Pavan Kumar Linga
Reviewed-by: Donald Skidmore
Reviewed-by: Jesse Brandeburg
Reviewed-by: Sridhar Samudrala
Signed-off-by: Jeff Kirsher
---
 .../net/ethernet/intel/iecm/iecm_controlq.c   |  196 +++
 .../ethernet/intel/iecm/iecm_controlq_setup.c |   84 ++
 .../net/ethernet/intel/iecm/iecm_ethtool.c    |   16 +
 drivers/net/ethernet/intel/iecm/iecm_lib.c    |  407 ++++++
 drivers/net/ethernet/intel/iecm/iecm_main.c   |   47 +
 drivers/net/ethernet/intel/iecm/iecm_osdep.c  |   15 +
 .../ethernet/intel/iecm/iecm_singleq_txrx.c   |  258 ++++
 drivers/net/ethernet/intel/iecm/iecm_txrx.c   | 1257 +++++++++++++++++
 .../net/ethernet/intel/iecm/iecm_virtchnl.c   |  599 ++++++++
 9 files changed, 2879 insertions(+)
 create mode 100644 drivers/net/ethernet/intel/iecm/iecm_controlq.c
 create mode 100644 drivers/net/ethernet/intel/iecm/iecm_controlq_setup.c
 create mode 100644 drivers/net/ethernet/intel/iecm/iecm_ethtool.c
 create mode 100644 drivers/net/ethernet/intel/iecm/iecm_lib.c
 create mode 100644 drivers/net/ethernet/intel/iecm/iecm_main.c
 create mode 100644 drivers/net/ethernet/intel/iecm/iecm_osdep.c
 create mode 100644 drivers/net/ethernet/intel/iecm/iecm_singleq_txrx.c
 create mode 100644 drivers/net/ethernet/intel/iecm/iecm_txrx.c
 create mode 100644 drivers/net/ethernet/intel/iecm/iecm_virtchnl.c

diff --git a/drivers/net/ethernet/intel/iecm/iecm_controlq.c b/drivers/net/ethernet/intel/iecm/iecm_controlq.c
new file mode 100644
index 000000000000..85ae216e8350
--- /dev/null
+++ b/drivers/net/ethernet/intel/iecm/iecm_controlq.c
@@ -0,0 +1,196 @@
+// SPDX-License-Identifier: GPL-2.0-only
+/* Copyright (c) 2020, Intel Corporation.
*/
+
+#include
+#include
+#include
+
+/**
+ * iecm_ctlq_setup_regs - initialize control queue registers
+ * @cq: pointer to the specific control queue
+ * @q_create_info: structs containing info for each queue to be initialized
+ */
+static void
+iecm_ctlq_setup_regs(struct iecm_ctlq_info *cq,
+		     struct iecm_ctlq_create_info *q_create_info)
+{
+	/* stub */
+}
+
+/**
+ * iecm_ctlq_init_regs - Initialize control queue registers
+ * @hw: pointer to hw struct
+ * @cq: pointer to the specific Control queue
+ * @is_rxq: true if receive control queue, false otherwise
+ *
+ * Initialize registers. The caller is expected to have already initialized the
+ * descriptor ring memory and buffer memory
+ */
+static enum iecm_status iecm_ctlq_init_regs(struct iecm_hw *hw,
+					    struct iecm_ctlq_info *cq,
+					    bool is_rxq)
+{
+	/* stub */
+}
+
+/**
+ * iecm_ctlq_init_rxq_bufs - populate receive queue descriptors with buf
+ * @cq: pointer to the specific Control queue
+ *
+ * Record the address of the receive queue DMA buffers in the descriptors.
+ * The buffers must have been previously allocated.
+ */
+static void iecm_ctlq_init_rxq_bufs(struct iecm_ctlq_info *cq)
+{
+	/* stub */
+}
+
+/**
+ * iecm_ctlq_shutdown - shutdown the CQ
+ * @hw: pointer to hw struct
+ * @cq: pointer to the specific Control queue
+ *
+ * The main shutdown routine for any control queue
+ */
+static void iecm_ctlq_shutdown(struct iecm_hw *hw, struct iecm_ctlq_info *cq)
+{
+	/* stub */
+}
+
+/**
+ * iecm_ctlq_add - add one control queue
+ * @hw: pointer to hardware struct
+ * @qinfo: info for queue to be created
+ * @cq: (output) double pointer to control queue to be created
+ *
+ * Allocate and initialize a control queue and add it to the control queue list.
+ * The cq parameter will be allocated/initialized and passed back to the caller
+ * if no errors occur.
+ * + * Note: iecm_ctlq_init must be called prior to any calls to iecm_ctlq_add + */ +enum iecm_status iecm_ctlq_add(struct iecm_hw *hw, + struct iecm_ctlq_create_info *qinfo, + struct iecm_ctlq_info **cq) +{ + /* stub */ +} + +/** + * iecm_ctlq_remove - deallocate and remove specified control queue + * @hw: pointer to hardware struct + * @cq: pointer to control queue to be removed + */ +void iecm_ctlq_remove(struct iecm_hw *hw, + struct iecm_ctlq_info *cq) +{ + /* stub */ +} + +/** + * iecm_ctlq_init - main initialization routine for all control queues + * @hw: pointer to hardware struct + * @num_q: number of queues to initialize + * @q_info: array of structs containing info for each queue to be initialized + * + * This initializes any number and any type of control queues. This is an all + * or nothing routine; if one fails, all previously allocated queues will be + * destroyed. This must be called prior to using the individual add/remove + * APIs. + */ +enum iecm_status iecm_ctlq_init(struct iecm_hw *hw, u8 num_q, + struct iecm_ctlq_create_info *q_info) +{ + /* stub */ +} + +/** + * iecm_ctlq_deinit - destroy all control queues + * @hw: pointer to hw struct + */ +enum iecm_status iecm_ctlq_deinit(struct iecm_hw *hw) +{ + /* stub */ +} + +/** + * iecm_ctlq_send - send command to Control Queue (CTQ) + * @hw: pointer to hw struct + * @cq: handle to control queue struct to send on + * @num_q_msg: number of messages to send on control queue + * @q_msg: pointer to array of queue messages to be sent + * + * The caller is expected to allocate DMAable buffers and pass them to the + * send routine via the q_msg struct / control queue specific data struct. + * The control queue will hold a reference to each send message until + * the completion for that message has been cleaned. 
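The send/clean contract just described (the control queue holds a reference to each sent message until its completion is cleaned, then hands the original message pointers back so the caller can free them) can be modeled in miniature. Everything below is a hypothetical sketch under that contract; `toy_ctlq_send`/`toy_ctlq_clean` are invented names, and the real `iecm_ctlq_send`/`iecm_ctlq_clean_sq` operate on hardware descriptor rings, not a flat array:

```c
#include <assert.h>
#include <stddef.h>

/* Toy model of the control queue send/clean contract: send parks each
 * message pointer in a pending list; clean hands the pointers back so the
 * caller can free the original messages and their DMA buffers. All names
 * here are illustrative only.
 */
#define PENDING_MAX 4

struct toy_ctlq {
	void *pending[PENDING_MAX];
	int num_pending;
};

/* The queue holds a reference to each message until it is cleaned. */
int toy_ctlq_send(struct toy_ctlq *cq, void *msg)
{
	if (cq->num_pending == PENDING_MAX)
		return -1; /* no free descriptors */
	cq->pending[cq->num_pending++] = msg;
	return 0;
}

/* clean_count is input (max entries to clean) and output (entries actually
 * cleaned), mirroring the in/out parameter of iecm_ctlq_clean_sq.
 */
void toy_ctlq_clean(struct toy_ctlq *cq, int *clean_count, void *msg_status[])
{
	int i;
	int n = *clean_count < cq->num_pending ? *clean_count
					       : cq->num_pending;

	for (i = 0; i < n; i++)
		msg_status[i] = cq->pending[i]; /* return original pointers */
	for (i = n; i < cq->num_pending; i++)
		cq->pending[i - n] = cq->pending[i]; /* shift still-pending */
	cq->num_pending -= n;
	*clean_count = n;
}
```

The point of the model is the ownership rule: the caller allocates the messages and DMA buffers, the queue only borrows them, and cleaning is the moment ownership transfers back.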
+ */
+enum iecm_status iecm_ctlq_send(struct iecm_hw *hw,
+				struct iecm_ctlq_info *cq,
+				u16 num_q_msg,
+				struct iecm_ctlq_msg q_msg[])
+{
+	/* stub */
+}
+
+/**
+ * iecm_ctlq_clean_sq - reclaim send descriptors on HW write back for the
+ * requested queue
+ * @cq: pointer to the specific Control queue
+ * @clean_count: (input|output) number of descriptors to clean as input, and
+ * number of descriptors actually cleaned as output
+ * @msg_status: (output) pointer to msg pointer array to be populated; needs
+ * to be allocated by caller
+ *
+ * Returns an array of message pointers associated with the cleaned
+ * descriptors. The pointers are to the original ctlq_msgs sent on the cleaned
+ * descriptors. The status will be returned for each; any messages that failed
+ * to send will have a non-zero status. The caller is expected to free original
+ * ctlq_msgs and free or reuse the DMA buffers.
+ */
+enum iecm_status iecm_ctlq_clean_sq(struct iecm_ctlq_info *cq,
+				    u16 *clean_count,
+				    struct iecm_ctlq_msg *msg_status[])
+{
+	/* stub */
+}
+
+/**
+ * iecm_ctlq_post_rx_buffs - post buffers to descriptor ring
+ * @hw: pointer to hw struct
+ * @cq: pointer to control queue handle
+ * @buff_count: (input|output) input is number of buffers caller is trying to
+ * return; output is number of buffers that were not posted
+ * @buffs: array of pointers to DMA mem structs to be given to hardware
+ *
+ * Caller uses this function to return DMA buffers to the descriptor ring after
+ * consuming them; buff_count will be the number of buffers.
+ *
+ * Note: this function needs to be called after a receive call even
+ * if there are no DMA buffers to be returned, i.e.
buff_count = 0, + * buffs = NULL to support direct commands + */ +enum iecm_status iecm_ctlq_post_rx_buffs(struct iecm_hw *hw, + struct iecm_ctlq_info *cq, + u16 *buff_count, + struct iecm_dma_mem **buffs) +{ + /* stub */ +} + +/** + * iecm_ctlq_recv - receive control queue message call back + * @cq: pointer to control queue handle to receive on + * @num_q_msg: (input|output) input number of messages that should be received; + * output number of messages actually received + * @q_msg: (output) array of received control queue messages on this q; + * needs to be pre-allocated by caller for as many messages as requested + * + * Called by interrupt handler or polling mechanism. Caller is expected + * to free buffers + */ +enum iecm_status iecm_ctlq_recv(struct iecm_ctlq_info *cq, + u16 *num_q_msg, struct iecm_ctlq_msg *q_msg) +{ + /* stub */ +} diff --git a/drivers/net/ethernet/intel/iecm/iecm_controlq_setup.c b/drivers/net/ethernet/intel/iecm/iecm_controlq_setup.c new file mode 100644 index 000000000000..2fd6e3d15a1a --- /dev/null +++ b/drivers/net/ethernet/intel/iecm/iecm_controlq_setup.c @@ -0,0 +1,84 @@ +// SPDX-License-Identifier: GPL-2.0-only +/* Copyright (c) 2020, Intel Corporation. 
*/ + +#include +#include + +/** + * iecm_ctlq_alloc_desc_ring - Allocate Control Queue (CQ) rings + * @hw: pointer to hw struct + * @cq: pointer to the specific Control queue + */ +static enum iecm_status +iecm_ctlq_alloc_desc_ring(struct iecm_hw *hw, + struct iecm_ctlq_info *cq) +{ + /* stub */ +} + +/** + * iecm_ctlq_alloc_bufs - Allocate Control Queue (CQ) buffers + * @hw: pointer to hw struct + * @cq: pointer to the specific Control queue + * + * Allocate the buffer head for all control queues, and if it's a receive + * queue, allocate DMA buffers + */ +static enum iecm_status iecm_ctlq_alloc_bufs(struct iecm_hw *hw, + struct iecm_ctlq_info *cq) +{ + /* stub */ +} + +/** + * iecm_ctlq_free_desc_ring - Free Control Queue (CQ) rings + * @hw: pointer to hw struct + * @cq: pointer to the specific Control queue + * + * This assumes the posted send buffers have already been cleaned + * and de-allocated + */ +static void iecm_ctlq_free_desc_ring(struct iecm_hw *hw, + struct iecm_ctlq_info *cq) +{ + /* stub */ +} + +/** + * iecm_ctlq_free_bufs - Free CQ buffer info elements + * @hw: pointer to hw struct + * @cq: pointer to the specific Control queue + * + * Free the DMA buffers for RX queues, and DMA buffer header for both RX and TX + * queues. 
The upper layers are expected to manage freeing of TX DMA buffers + */ +static void iecm_ctlq_free_bufs(struct iecm_hw *hw, struct iecm_ctlq_info *cq) +{ + /* stub */ +} + +/** + * iecm_ctlq_dealloc_ring_res - Free memory allocated for control queue + * @hw: pointer to hw struct + * @cq: pointer to the specific Control queue + * + * Free the memory used by the ring, buffers and other related structures + */ +void iecm_ctlq_dealloc_ring_res(struct iecm_hw *hw, struct iecm_ctlq_info *cq) +{ + /* stub */ +} + +/** + * iecm_ctlq_alloc_ring_res - allocate memory for descriptor ring and bufs + * @hw: pointer to hw struct + * @cq: pointer to control queue struct + * + * Do *NOT* hold the lock when calling this as the memory allocation routines + * called are not going to be atomic context safe + */ +enum iecm_status iecm_ctlq_alloc_ring_res(struct iecm_hw *hw, + struct iecm_ctlq_info *cq) +{ + /* stub */ +} diff --git a/drivers/net/ethernet/intel/iecm/iecm_ethtool.c b/drivers/net/ethernet/intel/iecm/iecm_ethtool.c new file mode 100644 index 000000000000..a6532592f2f4 --- /dev/null +++ b/drivers/net/ethernet/intel/iecm/iecm_ethtool.c @@ -0,0 +1,16 @@ +// SPDX-License-Identifier: GPL-2.0-only +/* Copyright (C) 2020 Intel Corporation */ + +#include + +/** + * iecm_set_ethtool_ops - Initialize ethtool ops struct + * @netdev: network interface device structure + * + * Sets ethtool ops struct in our netdev so that ethtool can call + * our functions. 
+ */ +void iecm_set_ethtool_ops(struct net_device *netdev) +{ + /* stub */ +} diff --git a/drivers/net/ethernet/intel/iecm/iecm_lib.c b/drivers/net/ethernet/intel/iecm/iecm_lib.c new file mode 100644 index 000000000000..be944f655045 --- /dev/null +++ b/drivers/net/ethernet/intel/iecm/iecm_lib.c @@ -0,0 +1,407 @@ +// SPDX-License-Identifier: GPL-2.0-only +/* Copyright (C) 2020 Intel Corporation */ + +#define pr_fmt(fmt) KBUILD_MODNAME ": " fmt + +#include + +static const struct net_device_ops iecm_netdev_ops_splitq; +static const struct net_device_ops iecm_netdev_ops_singleq; +extern int debug; + +/** + * iecm_mb_intr_rel_irq - Free the IRQ association with the OS + * @adapter: adapter structure + */ +static void iecm_mb_intr_rel_irq(struct iecm_adapter *adapter) +{ + /* stub */ +} + +/** + * iecm_intr_rel - Release interrupt capabilities and free memory + * @adapter: adapter to disable interrupts on + */ +static void iecm_intr_rel(struct iecm_adapter *adapter) +{ + /* stub */ +} + +/** + * iecm_mb_intr_clean - Interrupt handler for the mailbox + * @irq: interrupt number + * @data: pointer to the adapter structure + */ +static irqreturn_t iecm_mb_intr_clean(int __always_unused irq, void *data) +{ + /* stub */ +} + +/** + * iecm_mb_irq_enable - Enable MSIX interrupt for the mailbox + * @adapter: adapter to get the hardware address for register write + */ +static void iecm_mb_irq_enable(struct iecm_adapter *adapter) +{ + /* stub */ +} + +/** + * iecm_mb_intr_req_irq - Request IRQ for the mailbox interrupt + * @adapter: adapter structure to pass to the mailbox IRQ handler + */ +static int iecm_mb_intr_req_irq(struct iecm_adapter *adapter) +{ + /* stub */ +} + +/** + * iecm_get_mb_vec_id - Get vector index for mailbox + * @adapter: adapter structure to access the vector chunks + * + * The first vector id in the requested vector chunks from the CP is for + * the mailbox + */ +static void iecm_get_mb_vec_id(struct iecm_adapter *adapter) +{ + /* stub */ +} + +/** + * 
iecm_mb_intr_init - Initialize the mailbox interrupt + * @adapter: adapter structure to store the mailbox vector + */ +static int iecm_mb_intr_init(struct iecm_adapter *adapter) +{ + /* stub */ +} + +/** + * iecm_intr_distribute - Distribute MSIX vectors + * @adapter: adapter structure to get the vports + * + * Distribute the MSIX vectors acquired from the OS to the vports based on the + * num of vectors requested by each vport + */ +static void iecm_intr_distribute(struct iecm_adapter *adapter) +{ + /* stub */ +} + +/** + * iecm_intr_req - Request interrupt capabilities + * @adapter: adapter to enable interrupts on + * + * Returns 0 on success, negative on failure + */ +static int iecm_intr_req(struct iecm_adapter *adapter) +{ + /* stub */ +} + +/** + * iecm_cfg_netdev - Allocate, configure and register a netdev + * @vport: main vport structure + * + * Returns 0 on success, negative value on failure + */ +static int iecm_cfg_netdev(struct iecm_vport *vport) +{ + /* stub */ +} + +/** + * iecm_cfg_hw - Initialize HW struct + * @adapter: adapter to setup hw struct for + * + * Returns 0 on success, negative on failure + */ +static int iecm_cfg_hw(struct iecm_adapter *adapter) +{ + /* stub */ +} + +/** + * iecm_get_free_slot - get the next non-NULL location index in array + * @array: array to search + * @size: size of the array + * @curr: last known occupied index to be used as a search hint + * + * void * is being used to keep the functionality generic. This lets us use this + * function on any array of pointers. 
+ */ +static int iecm_get_free_slot(void *array, int size, int curr) +{ + /* stub */ +} + +/** + * iecm_netdev_to_vport - get a vport handle from a netdev + * @netdev: network interface device structure + */ +struct iecm_vport *iecm_netdev_to_vport(struct net_device *netdev) +{ + /* stub */ +} + +/** + * iecm_netdev_to_adapter - get an adapter handle from a netdev + * @netdev: network interface device structure + */ +struct iecm_adapter *iecm_netdev_to_adapter(struct net_device *netdev) +{ + /* stub */ +} + +/** + * iecm_vport_stop - Disable a vport + * @vport: vport to disable + */ +static void iecm_vport_stop(struct iecm_vport *vport) +{ + /* stub */ +} + +/** + * iecm_stop - Disables a network interface + * @netdev: network interface device structure + * + * The stop entry point is called when an interface is de-activated by the OS, + * and the netdevice enters the DOWN state. The hardware is still under the + * driver's control, but the netdev interface is disabled. + * + * Returns success only - not allowed to fail + */ +static int iecm_stop(struct net_device *netdev) +{ + /* stub */ +} + +/** + * iecm_vport_rel - Delete a vport and free its resources + * @vport: the vport being removed + * + * Returns 0 on success or < 0 on error + */ +static int iecm_vport_rel(struct iecm_vport *vport) +{ + /* stub */ +} + +/** + * iecm_vport_rel_all - Delete all vports + * @adapter: adapter from which all vports are being removed + */ +static void iecm_vport_rel_all(struct iecm_adapter *adapter) +{ + /* stub */ +} + +/** + * iecm_vport_set_hsplit - enable or disable header split on a given vport + * @vport: virtual port + * @prog: bpf_program attached to an interface or NULL + */ +void iecm_vport_set_hsplit(struct iecm_vport *vport, + struct bpf_prog __always_unused *prog) +{ + /* stub */ +} + +/** + * iecm_vport_alloc - Allocates the next available struct vport in the adapter + * @adapter: board private structure + * @vport_id: vport identifier + * + * returns a pointer to 
a vport on success, NULL on failure. + */ +static struct iecm_vport * +iecm_vport_alloc(struct iecm_adapter *adapter, int vport_id) +{ + /* stub */ +} + +/** + * iecm_service_task - Delayed task for handling mailbox responses + * @work: work_struct handle to our data + * + */ +static void iecm_service_task(struct work_struct *work) +{ + /* stub */ +} + +/** + * iecm_up_complete - Complete interface up sequence + * @vport: virtual port structure + * + */ +static void iecm_up_complete(struct iecm_vport *vport) +{ + /* stub */ +} + +/** + * iecm_vport_open - Bring up a vport + * @vport: vport to bring up + */ +static int iecm_vport_open(struct iecm_vport *vport) +{ + /* stub */ +} + +/** + * iecm_init_task - Delayed initialization task + * @work: work_struct handle to our data + * + * Init task finishes up pending work started in probe. Due to the asynchronous + * nature in which the device communicates with hardware, we may have to wait + * several milliseconds to get a response. Instead of busy polling in probe, + * pulling it out into a delayed work task prevents us from bogging down the + * whole system waiting for a response from hardware. + */ +static void iecm_init_task(struct work_struct *work) +{ + /* stub */ +} + +/** + * iecm_api_init - Initialize and verify device API + * @adapter: driver specific private structure + * + * Returns 0 on success, negative on failure + */ +static int iecm_api_init(struct iecm_adapter *adapter) +{ + /* stub */ +} + +/** + * iecm_deinit_task - Device deinit routine + * @adapter: Driver specific private structure + * + * Extended remove logic which will be used for + * hard reset as well + */ +static void iecm_deinit_task(struct iecm_adapter *adapter) +{ + /* stub */ +} + +/** + * iecm_init_hard_reset - Initiate a hardware reset + * @adapter: Driver specific private structure + * + * Deallocate the vports and all the resources associated with them and + * reallocate. 
Also reinitialize the mailbox + */ +static enum iecm_status +iecm_init_hard_reset(struct iecm_adapter *adapter) +{ + /* stub */ +} + +/** + * iecm_vc_event_task - Handle virtchannel event logic + * @work: work queue struct + */ +static void iecm_vc_event_task(struct work_struct *work) +{ + /* stub */ +} + +/** + * iecm_initiate_soft_reset - Initiate a software reset + * @vport: virtual port data struct + * @reset_cause: reason for the soft reset + * + * Soft reset does not involve bringing down the mailbox queue and also we do + * not destroy vport. Only queue resources are touched + */ +int iecm_initiate_soft_reset(struct iecm_vport *vport, + enum iecm_flags reset_cause) +{ + /* stub */ +} + +/** + * iecm_probe - Device initialization routine + * @pdev: PCI device information struct + * @ent: entry in iecm_pci_tbl + * @adapter: driver specific private structure + * + * Returns 0 on success, negative on failure + */ +int iecm_probe(struct pci_dev *pdev, + const struct pci_device_id __always_unused *ent, + struct iecm_adapter *adapter) +{ + /* stub */ +} +EXPORT_SYMBOL(iecm_probe); + +/** + * iecm_remove - Device removal routine + * @pdev: PCI device information struct + */ +void iecm_remove(struct pci_dev *pdev) +{ + /* stub */ +} +EXPORT_SYMBOL(iecm_remove); + +/** + * iecm_shutdown - PCI callback for shutting down device + * @pdev: PCI device information struct + */ +void iecm_shutdown(struct pci_dev *pdev) +{ + /* stub */ +} +EXPORT_SYMBOL(iecm_shutdown); + +/** + * iecm_open - Called when a network interface becomes active + * @netdev: network interface device structure + * + * The open entry point is called when a network interface is made + * active by the system (IFF_UP). At this point all resources needed + * for transmit and receive operations are allocated, the interrupt + * handler is registered with the OS, the netdev watchdog is enabled, + * and the stack is notified that the interface is ready. 
+ * + * Returns 0 on success, negative value on failure + */ +static int iecm_open(struct net_device *netdev) +{ + /* stub */ +} + +/** + * iecm_change_mtu - NDO callback to change the MTU + * @netdev: network interface device structure + * @new_mtu: new value for maximum frame size + * + * Returns 0 on success, negative on failure + */ +static int iecm_change_mtu(struct net_device *netdev, int new_mtu) +{ + /* stub */ +} + +static const struct net_device_ops iecm_netdev_ops_splitq = { + .ndo_open = iecm_open, + .ndo_stop = iecm_stop, + .ndo_start_xmit = iecm_tx_splitq_start, + .ndo_validate_addr = eth_validate_addr, + .ndo_get_stats64 = iecm_get_stats64, +}; + +static const struct net_device_ops iecm_netdev_ops_singleq = { + .ndo_open = iecm_open, + .ndo_stop = iecm_stop, + .ndo_start_xmit = iecm_tx_singleq_start, + .ndo_validate_addr = eth_validate_addr, + .ndo_change_mtu = iecm_change_mtu, + .ndo_get_stats64 = iecm_get_stats64, +}; diff --git a/drivers/net/ethernet/intel/iecm/iecm_main.c b/drivers/net/ethernet/intel/iecm/iecm_main.c new file mode 100644 index 000000000000..0644581fc746 --- /dev/null +++ b/drivers/net/ethernet/intel/iecm/iecm_main.c @@ -0,0 +1,47 @@ +// SPDX-License-Identifier: GPL-2.0-only +/* Copyright (C) 2020 Intel Corporation */ + +#define pr_fmt(fmt) KBUILD_MODNAME ": " fmt + +#include + +char iecm_drv_name[] = "iecm"; +#define DRV_SUMMARY "Intel(R) Data Plane Function Linux Driver" +static const char iecm_driver_string[] = DRV_SUMMARY; +static const char iecm_copyright[] = "Copyright (c) 2020, Intel Corporation."; + +MODULE_AUTHOR("Intel Corporation, "); +MODULE_DESCRIPTION(DRV_SUMMARY); +MODULE_LICENSE("GPL v2"); + +int debug = -1; +module_param(debug, int, 0644); +#ifndef CONFIG_DYNAMIC_DEBUG +MODULE_PARM_DESC(debug, "netif level (0=none,...,16=all), hw debug_mask (0x8XXXXXXX)"); +#else +MODULE_PARM_DESC(debug, "netif level (0=none,...,16=all)"); +#endif /* !CONFIG_DYNAMIC_DEBUG */ + +/** + * iecm_module_init - Driver registration 
routine + * + * iecm_module_init is the first routine called when the driver is + * loaded. All it does is register with the PCI subsystem. + */ +static int __init iecm_module_init(void) +{ + /* stub */ +} +module_init(iecm_module_init); + +/** + * iecm_module_exit - Driver exit cleanup routine + * + * iecm_module_exit is called just before the driver is removed + * from memory. + */ +static void __exit iecm_module_exit(void) +{ + /* stub */ +} +module_exit(iecm_module_exit); diff --git a/drivers/net/ethernet/intel/iecm/iecm_osdep.c b/drivers/net/ethernet/intel/iecm/iecm_osdep.c new file mode 100644 index 000000000000..d0534df357d0 --- /dev/null +++ b/drivers/net/ethernet/intel/iecm/iecm_osdep.c @@ -0,0 +1,15 @@ +// SPDX-License-Identifier: GPL-2.0-only +/* Copyright (c) 2020 Intel Corporation. */ + +#include +#include + +void *iecm_alloc_dma_mem(struct iecm_hw *hw, struct iecm_dma_mem *mem, u64 size) +{ + /* stub */ +} + +void iecm_free_dma_mem(struct iecm_hw *hw, struct iecm_dma_mem *mem) +{ + /* stub */ +} diff --git a/drivers/net/ethernet/intel/iecm/iecm_singleq_txrx.c b/drivers/net/ethernet/intel/iecm/iecm_singleq_txrx.c new file mode 100644 index 000000000000..063c35274f38 --- /dev/null +++ b/drivers/net/ethernet/intel/iecm/iecm_singleq_txrx.c @@ -0,0 +1,258 @@ +// SPDX-License-Identifier: GPL-2.0-only +/* Copyright (C) 2020 Intel Corporation */ + +#include +#include + +/** + * iecm_tx_singleq_build_ctob - populate command tag offset and size + * @td_cmd: Command to be filled in desc + * @td_offset: Offset to be filled in desc + * @size: Size of the buffer + * @td_tag: VLAN tag to be filled + * + * Returns the 64 bit value populated with the input parameters + */ +static __le64 +iecm_tx_singleq_build_ctob(u64 td_cmd, u64 td_offset, unsigned int size, + u64 td_tag) +{ + /* stub */ +} + +/** + * iecm_tx_singleq_csum - Enable Tx checksum offloads + * @first: pointer to first descriptor + * @off: pointer to struct that holds offload parameters + * + * Returns 0 
or error (negative) if checksum offload + */ +static +int iecm_tx_singleq_csum(struct iecm_tx_buf *first, + struct iecm_tx_offload_params *off) +{ + /* stub */ +} + +/** + * iecm_tx_singleq_map - Build the Tx base descriptor + * @tx_q: queue to send buffer on + * @first: first buffer info buffer to use + * @offloads: pointer to struct that holds offload parameters + * + * This function loops over the skb data pointed to by *first + * and gets a physical address for each memory location and programs + * it and the length into the transmit base mode descriptor. + */ +static void +iecm_tx_singleq_map(struct iecm_queue *tx_q, struct iecm_tx_buf *first, + struct iecm_tx_offload_params *offloads) +{ + /* stub */ +} + +/** + * iecm_tx_singleq_frame - Sends buffer on Tx ring using base descriptors + * @skb: send buffer + * @tx_q: queue to send buffer on + * + * Returns NETDEV_TX_OK if sent, else an error code + */ +static netdev_tx_t +iecm_tx_singleq_frame(struct sk_buff *skb, struct iecm_queue *tx_q) +{ + /* stub */ +} + +/** + * iecm_tx_singleq_start - Selects the right Tx queue to send buffer + * @skb: send buffer + * @netdev: network interface device structure + * + * Returns NETDEV_TX_OK if sent, else an error code + */ +netdev_tx_t iecm_tx_singleq_start(struct sk_buff *skb, + struct net_device *netdev) +{ + /* stub */ +} + +/** + * iecm_tx_singleq_clean - Reclaim resources from queue + * @tx_q: Tx queue to clean + * @napi_budget: Used to determine if we are in netpoll + * + */ +static bool iecm_tx_singleq_clean(struct iecm_queue *tx_q, int napi_budget) +{ + /* stub */ +} + +/** + * iecm_tx_singleq_clean_all - Clean all Tx queues + * @q_vec: queue vector + * @budget: Used to determine if we are in netpoll + * + * Returns false if clean is not complete else returns true + */ +static inline bool +iecm_tx_singleq_clean_all(struct iecm_q_vector *q_vec, int budget) +{ + /* stub */ +} + +/** + * iecm_rx_singleq_test_staterr - tests bits in Rx descriptor + * status and error 
fields + * @rx_desc: pointer to receive descriptor (in le64 format) + * @stat_err_bits: value to mask + * + * This function does some fast chicanery in order to return the + * value of the mask which is really only used for boolean tests. + * The status_error_ptype_len doesn't need to be shifted because it begins + * at offset zero. + */ +static bool +iecm_rx_singleq_test_staterr(struct iecm_singleq_base_rx_desc *rx_desc, + const u64 stat_err_bits) +{ + /* stub */ +} + +/** + * iecm_rx_singleq_is_non_eop - process handling of non-EOP buffers + * @rxq: Rx ring being processed + * @rx_desc: Rx descriptor for current buffer + * @skb: Current socket buffer containing buffer in progress + */ +static bool iecm_rx_singleq_is_non_eop(struct iecm_queue *rxq, + struct iecm_singleq_base_rx_desc + *rx_desc, struct sk_buff *skb) +{ + /* stub */ +} + +/** + * iecm_rx_singleq_csum - Indicate in skb if checksum is good + * @rxq: Rx descriptor ring packet is being transacted on + * @skb: skb currently being received and modified + * @rx_desc: the receive descriptor + * @ptype: the packet type decoded by hardware + * + * skb->protocol must be set before this function is called + */ +static void iecm_rx_singleq_csum(struct iecm_queue *rxq, struct sk_buff *skb, + struct iecm_singleq_base_rx_desc *rx_desc, + u8 ptype) +{ + /* stub */ +} + +/** + * iecm_rx_singleq_process_skb_fields - Populate skb header fields from Rx + * descriptor + * @rxq: Rx descriptor ring packet is being transacted on + * @skb: pointer to current skb being populated + * @rx_desc: descriptor for skb + * @ptype: packet type + * + * This function checks the ring, descriptor, and packet information in + * order to populate the hash, checksum, VLAN, protocol, and + * other fields within the skb. 
+ */ +static void +iecm_rx_singleq_process_skb_fields(struct iecm_queue *rxq, struct sk_buff *skb, + struct iecm_singleq_base_rx_desc *rx_desc, + u8 ptype) +{ + /* stub */ +} + +/** + * iecm_rx_singleq_buf_hw_alloc_all - Replace used receive buffers + * @rx_q: queue for which the hw buffers are allocated + * @cleaned_count: number of buffers to replace + * + * Returns false if all allocations were successful, true if any fail + */ +bool iecm_rx_singleq_buf_hw_alloc_all(struct iecm_queue *rx_q, + u16 cleaned_count) +{ + /* stub */ +} + +/** + * iecm_singleq_rx_put_buf - wrapper function to clean and recycle buffers + * @rx_bufq: Rx descriptor queue to transact packets on + * @rx_buf: Rx buffer to pull data from + * + * This function will update the next_to_use/next_to_alloc if the current + * buffer is recycled. + */ +static void iecm_singleq_rx_put_buf(struct iecm_queue *rx_bufq, + struct iecm_rx_buf *rx_buf) +{ + /* stub */ +} + +/** + * iecm_singleq_rx_bump_ntc - Bump and wrap q->next_to_clean value + * @q: queue to bump + */ +static void iecm_singleq_rx_bump_ntc(struct iecm_queue *q) +{ + /* stub */ +} + +/** + * iecm_singleq_rx_get_buf_page - Fetch Rx buffer page and synchronize data + * @dev: device struct + * @rx_buf: Rx buf to fetch page for + * @size: size of buffer to add to skb + * + * This function will pull an Rx buffer page from the ring and synchronize it + * for use by the CPU. + */ +static struct sk_buff * +iecm_singleq_rx_get_buf_page(struct device *dev, struct iecm_rx_buf *rx_buf, + const unsigned int size) +{ + /* stub */ +} + +/** + * iecm_rx_singleq_clean - Reclaim resources after receive completes + * @rx_q: Rx queue to clean + * @budget: Total limit on number of packets to process + * + * Returns true if there's any budget left (i.e.
the clean is finished) + */ +static int iecm_rx_singleq_clean(struct iecm_queue *rx_q, int budget) +{ + /* stub */ +} + +/** + * iecm_rx_singleq_clean_all - Clean all Rx queues + * @q_vec: queue vector + * @budget: Used to determine if we are in netpoll + * @cleaned: returns number of packets cleaned + * + * Returns false if clean is not complete else returns true + */ +static inline bool +iecm_rx_singleq_clean_all(struct iecm_q_vector *q_vec, int budget, + int *cleaned) +{ + /* stub */ +} + +/** + * iecm_vport_singleq_napi_poll - NAPI handler + * @napi: struct from which you get q_vector + * @budget: budget provided by stack + */ +int iecm_vport_singleq_napi_poll(struct napi_struct *napi, int budget) +{ + /* stub */ +} diff --git a/drivers/net/ethernet/intel/iecm/iecm_txrx.c b/drivers/net/ethernet/intel/iecm/iecm_txrx.c new file mode 100644 index 000000000000..9ad9c4a6bc84 --- /dev/null +++ b/drivers/net/ethernet/intel/iecm/iecm_txrx.c @@ -0,0 +1,1257 @@ +// SPDX-License-Identifier: GPL-2.0-only +/* Copyright (C) 2020 Intel Corporation */ + +#include + +/** + * iecm_buf_lifo_push - push a buffer pointer onto stack + * @stack: pointer to stack struct + * @buf: pointer to buf to push + **/ +static enum iecm_status iecm_buf_lifo_push(struct iecm_buf_lifo *stack, + struct iecm_tx_buf *buf) +{ + /* stub */ +} + +/** + * iecm_buf_lifo_pop - pop a buffer pointer from stack + * @stack: pointer to stack struct + **/ +static struct iecm_tx_buf *iecm_buf_lifo_pop(struct iecm_buf_lifo *stack) +{ + /* stub */ +} + +/** + * iecm_get_stats64 - get statistics for network device structure + * @netdev: network interface device structure + * @stats: main device statistics structure + */ +void iecm_get_stats64(struct net_device *netdev, + struct rtnl_link_stats64 *stats) +{ + /* stub */ +} + +/** + * iecm_tx_buf_rel - Release a Tx buffer + * @tx_q: the queue that owns the buffer + * @tx_buf: the buffer to free + */ +void iecm_tx_buf_rel(struct iecm_queue *tx_q, struct iecm_tx_buf 
*tx_buf) +{ + /* stub */ +} + +/** + * iecm_tx_buf_rel_all - Free any empty Tx buffers + * @txq: queue to be cleaned + */ +static void iecm_tx_buf_rel_all(struct iecm_queue *txq) +{ + /* stub */ +} + +/** + * iecm_tx_desc_rel - Free Tx resources per queue + * @txq: Tx descriptor ring for a specific queue + * @bufq: buffer q or completion q + * + * Free all transmit software resources + */ +static void iecm_tx_desc_rel(struct iecm_queue *txq, bool bufq) +{ + /* stub */ +} + +/** + * iecm_tx_desc_rel_all - Free Tx Resources for All Queues + * @vport: virtual port structure + * + * Free all transmit software resources + */ +static void iecm_tx_desc_rel_all(struct iecm_vport *vport) +{ + /* stub */ +} + +/** + * iecm_tx_buf_alloc_all - Allocate memory for all buffer resources + * @tx_q: queue for which the buffers are allocated + */ +static enum iecm_status iecm_tx_buf_alloc_all(struct iecm_queue *tx_q) +{ + /* stub */ +} + +/** + * iecm_tx_desc_alloc - Allocate the Tx descriptors + * @tx_q: the Tx ring to set up + * @bufq: buffer or completion queue + */ +static enum iecm_status iecm_tx_desc_alloc(struct iecm_queue *tx_q, bool bufq) +{ + /* stub */ +} + +/** + * iecm_tx_desc_alloc_all - allocate all queues Tx resources + * @vport: virtual port private structure + */ +static enum iecm_status iecm_tx_desc_alloc_all(struct iecm_vport *vport) +{ + /* stub */ +} + +/** + * iecm_rx_buf_rel - Release a Rx buffer + * @rxq: the queue that owns the buffer + * @rx_buf: the buffer to free + */ +static void iecm_rx_buf_rel(struct iecm_queue *rxq, + struct iecm_rx_buf *rx_buf) +{ + /* stub */ +} + +/** + * iecm_rx_buf_rel_all - Free all Rx buffer resources for a queue + * @rxq: queue to be cleaned + */ +static void iecm_rx_buf_rel_all(struct iecm_queue *rxq) +{ + /* stub */ +} + +/** + * iecm_rx_desc_rel - Free a specific Rx q resources + * @rxq: queue to clean the resources from + * @bufq: buffer q or completion q + * @q_model: single or split q model + * + * Free a specific Rx
queue resources + */ +static void iecm_rx_desc_rel(struct iecm_queue *rxq, bool bufq, + enum virtchnl_queue_model q_model) +{ + /* stub */ +} + +/** + * iecm_rx_desc_rel_all - Free Rx Resources for All Queues + * @vport: virtual port structure + * + * Free all Rx queues resources + */ +static void iecm_rx_desc_rel_all(struct iecm_vport *vport) +{ + /* stub */ +} + +/** + * iecm_rx_buf_hw_update - Store the new tail and head values + * @rxq: queue to bump + * @val: new head index + */ +void iecm_rx_buf_hw_update(struct iecm_queue *rxq, u32 val) +{ + /* stub */ +} + +/** + * iecm_rx_buf_hw_alloc - recycle or make a new page + * @rxq: ring to use + * @buf: rx_buffer struct to modify + * + * Returns true if the page was successfully allocated or + * reused. + */ +bool iecm_rx_buf_hw_alloc(struct iecm_queue *rxq, struct iecm_rx_buf *buf) +{ + /* stub */ +} + +/** + * iecm_rx_hdr_buf_hw_alloc - recycle or make a new page for header buffer + * @rxq: ring to use + * @hdr_buf: rx_buffer struct to modify + * + * Returns true if the page was successfully allocated or + * reused. 
+ */ +static bool iecm_rx_hdr_buf_hw_alloc(struct iecm_queue *rxq, + struct iecm_rx_buf *hdr_buf) +{ + /* stub */ +} + +/** + * iecm_rx_buf_hw_alloc_all - Replace used receive buffers + * @rxq: queue for which the hw buffers are allocated + * @cleaned_count: number of buffers to replace + * + * Returns false if all allocations were successful, true if any fail + */ +static bool +iecm_rx_buf_hw_alloc_all(struct iecm_queue *rxq, + u16 cleaned_count) +{ + /* stub */ +} + +/** + * iecm_rx_buf_alloc_all - Allocate memory for all buffer resources + * @rxq: queue for which the buffers are allocated + */ +static enum iecm_status iecm_rx_buf_alloc_all(struct iecm_queue *rxq) +{ + /* stub */ +} + +/** + * iecm_rx_desc_alloc - Allocate queue Rx resources + * @rxq: Rx queue for which the resources are setup + * @bufq: buffer or completion queue + * @q_model: single or split queue model + */ +static enum iecm_status iecm_rx_desc_alloc(struct iecm_queue *rxq, bool bufq, + enum virtchnl_queue_model q_model) +{ + /* stub */ +} + +/** + * iecm_rx_desc_alloc_all - allocate all RX queues resources + * @vport: virtual port structure + */ +static enum iecm_status iecm_rx_desc_alloc_all(struct iecm_vport *vport) +{ + /* stub */ +} + +/** + * iecm_txq_group_rel - Release all resources for txq groups + * @vport: vport to release txq groups on + */ +static void iecm_txq_group_rel(struct iecm_vport *vport) +{ + /* stub */ +} + +/** + * iecm_rxq_group_rel - Release all resources for rxq groups + * @vport: vport to release rxq groups on + */ +static void iecm_rxq_group_rel(struct iecm_vport *vport) +{ + /* stub */ +} + +/** + * iecm_vport_queue_grp_rel_all - Release all queue groups + * @vport: vport to release queue groups for + */ +static void iecm_vport_queue_grp_rel_all(struct iecm_vport *vport) +{ + /* stub */ +} + +/** + * iecm_vport_queues_rel - Free memory for all queues + * @vport: virtual port + * + * Free the memory allocated for queues associated to a vport + */ +void 
iecm_vport_queues_rel(struct iecm_vport *vport) +{ + /* stub */ +} + +/** + * iecm_vport_init_fast_path_txqs - Initialize fast path txq array + * @vport: vport to init txqs on + * + * We get a queue index from skb->queue_mapping and we need a fast way to + * dereference the queue from queue groups. This allows us to quickly pull a + * txq based on a queue index. + */ +static enum iecm_status +iecm_vport_init_fast_path_txqs(struct iecm_vport *vport) +{ + /* stub */ +} + +/** + * iecm_vport_init_num_qs - Initialize number of queues + * @vport: vport to initialize qs + * @vport_msg: data to be filled into vport + */ +void iecm_vport_init_num_qs(struct iecm_vport *vport, + struct virtchnl_create_vport *vport_msg) +{ + /* stub */ +} + +/** + * iecm_vport_calc_num_q_desc - Calculate number of queue descriptors + * @vport: vport to calculate q descriptors for + */ +void iecm_vport_calc_num_q_desc(struct iecm_vport *vport) +{ + /* stub */ +} +EXPORT_SYMBOL(iecm_vport_calc_num_q_desc); + +/** + * iecm_vport_calc_total_qs - Calculate total number of queues + * @vport_msg: message to fill with data + * @num_req_qs: user requested queues + */ +void iecm_vport_calc_total_qs(struct virtchnl_create_vport *vport_msg, + int num_req_qs) +{ + /* stub */ +} + +/** + * iecm_vport_calc_num_q_groups - Calculate number of queue groups + * @vport: vport to calculate q groups for + */ +void iecm_vport_calc_num_q_groups(struct iecm_vport *vport) +{ + /* stub */ +} +EXPORT_SYMBOL(iecm_vport_calc_num_q_groups); + +/** + * iecm_vport_calc_numq_per_grp - Calculate number of queues per group + * @vport: vport to calculate queues for + * @num_txq: int return parameter + * @num_rxq: int return parameter + */ +static void iecm_vport_calc_numq_per_grp(struct iecm_vport *vport, + int *num_txq, int *num_rxq) +{ + /* stub */ +} + +/** + * iecm_vport_calc_num_q_vec - Calculate total number of vectors required for + * this vport + * @vport: virtual port + * + */ +void iecm_vport_calc_num_q_vec(struct iecm_vport
*vport) +{ + /* stub */ +} + +/** + * iecm_txq_group_alloc - Allocate all txq group resources + * @vport: vport to allocate txq groups for + * @num_txq: number of txqs to allocate for each group + */ +static enum iecm_status iecm_txq_group_alloc(struct iecm_vport *vport, + int num_txq) +{ + /* stub */ +} + +/** + * iecm_rxq_group_alloc - Allocate all rxq group resources + * @vport: vport to allocate rxq groups for + * @num_rxq: number of rxqs to allocate for each group + */ +static enum iecm_status iecm_rxq_group_alloc(struct iecm_vport *vport, + int num_rxq) +{ + /* stub */ +} + +/** + * iecm_vport_queue_grp_alloc_all - Allocate all queue groups/resources + * @vport: vport with qgrps to allocate + */ +static enum iecm_status +iecm_vport_queue_grp_alloc_all(struct iecm_vport *vport) +{ + /* stub */ +} + +/** + * iecm_vport_queues_alloc - Allocate memory for all queues + * @vport: virtual port + * + * Allocate memory for queues associated with a vport + */ +enum iecm_status iecm_vport_queues_alloc(struct iecm_vport *vport) +{ + /* stub */ +} + +/** + * iecm_tx_find_q - Find the Tx q based on q id + * @vport: the vport we care about + * @q_id: Id of the queue + * + * Returns queue ptr if found else returns NULL + */ +static struct iecm_queue * +iecm_tx_find_q(struct iecm_vport *vport, int q_id) +{ + /* stub */ +} + +/** + * iecm_tx_handle_sw_marker - Handle queue marker packet + * @tx_q: Tx queue to handle software marker + */ +static void iecm_tx_handle_sw_marker(struct iecm_queue *tx_q) +{ + /* stub */ +} + +/** + * iecm_tx_splitq_clean_buf - Clean TX buffer resources + * @tx_q: Tx queue to clean buffer from + * @tx_buf: buffer to be cleaned + * @napi_budget: Used to determine if we are in netpoll + * + * Returns the stats (bytes/packets) cleaned from this buffer + */ +static struct iecm_tx_queue_stats +iecm_tx_splitq_clean_buf(struct iecm_queue *tx_q, struct iecm_tx_buf *tx_buf, + int napi_budget) +{ + /* stub */ +} + +/** + * iecm_stash_flow_sch_buffers - store 
buffer parameter info to be freed at a + * later time (only relevant for flow scheduling mode) + * @txq: Tx queue to clean + * @tx_buf: buffer to store + */ +static int +iecm_stash_flow_sch_buffers(struct iecm_queue *txq, struct iecm_tx_buf *tx_buf) +{ + /* stub */ +} + +/** + * iecm_tx_splitq_clean - Reclaim resources from buffer queue + * @tx_q: Tx queue to clean + * @end: queue index until which it should be cleaned + * @napi_budget: Used to determine if we are in netpoll + * @descs_only: true if queue is using flow-based scheduling and should + * not clean buffers at this time + * + * Cleans the queue descriptor ring. If the queue is using queue-based + * scheduling, the buffers will be cleaned as well and this function will + * return the number of bytes/packets cleaned. If the queue is using flow-based + * scheduling, only the descriptors are cleaned at this time. Separate packet + * completion events will be reported on the completion queue, and the + * buffers will be cleaned separately. The stats returned from this function + * when using flow-based scheduling are irrelevant.
+ */ +static struct iecm_tx_queue_stats +iecm_tx_splitq_clean(struct iecm_queue *tx_q, u16 end, int napi_budget, + bool descs_only) +{ + /* stub */ +} + +/** + * iecm_tx_hw_tstamp - report hw timestamp from completion desc to stack + * @skb: original skb + * @desc_ts: pointer to 3-byte timestamp from descriptor + */ +static inline void iecm_tx_hw_tstamp(struct sk_buff *skb, u8 *desc_ts) +{ + /* stub */ +} + +/** + * iecm_tx_clean_flow_sch_bufs - clean bufs that were stored for + * out of order completions + * @txq: queue to clean + * @compl_tag: completion tag of packet to clean (from completion descriptor) + * @desc_ts: pointer to 3-byte timestamp from descriptor + * @budget: Used to determine if we are in netpoll + */ +static struct iecm_tx_queue_stats +iecm_tx_clean_flow_sch_bufs(struct iecm_queue *txq, u16 compl_tag, + u8 *desc_ts, int budget) +{ + /* stub */ +} + +/** + * iecm_tx_clean_complq - Reclaim resources on completion queue + * @complq: Tx ring to clean + * @budget: Used to determine if we are in netpoll + * + * Returns true if there's any budget left (i.e.
the clean is finished) + */ +static bool +iecm_tx_clean_complq(struct iecm_queue *complq, int budget) +{ + /* stub */ +} + +/** + * iecm_tx_splitq_build_ctb - populate command tag and size for queue + * based scheduling descriptors + * @desc: descriptor to populate + * @parms: pointer to Tx params struct + * @td_cmd: command to be filled in desc + * @size: size of buffer + */ +static inline void +iecm_tx_splitq_build_ctb(union iecm_tx_flex_desc *desc, + struct iecm_tx_splitq_params *parms, + u16 td_cmd, u16 size) +{ + /* stub */ +} + +/** + * iecm_tx_splitq_build_flow_desc - populate command tag and size for flow + * scheduling descriptors + * @desc: descriptor to populate + * @parms: pointer to Tx params struct + * @td_cmd: command to be filled in desc + * @size: size of buffer + */ +static inline void +iecm_tx_splitq_build_flow_desc(union iecm_tx_flex_desc *desc, + struct iecm_tx_splitq_params *parms, + u16 td_cmd, u16 size) +{ + /* stub */ +} + +/** + * __iecm_tx_maybe_stop - 2nd level check for Tx stop conditions + * @tx_q: the queue to be checked + * @size: the size of the buffer we want to ensure is available + * + * Returns -EBUSY if a stop is needed, else 0 + */ +static int +__iecm_tx_maybe_stop(struct iecm_queue *tx_q, unsigned int size) +{ + /* stub */ +} + +/** + * iecm_tx_maybe_stop - 1st level check for Tx stop conditions + * @tx_q: the queue to be checked + * @size: number of descriptors we want to ensure are available + * + * Returns 0 if stop is not needed + */ +int iecm_tx_maybe_stop(struct iecm_queue *tx_q, unsigned int size) +{ + /* stub */ +} + +/** + * iecm_tx_buf_hw_update - Store the new tail and head values + * @tx_q: queue to bump + * @val: new head index + */ +void iecm_tx_buf_hw_update(struct iecm_queue *tx_q, u32 val) +{ + /* stub */ +} + +/** + * __iecm_tx_desc_count_required - Get the number of descriptors needed for Tx + * @size: transmit request size in bytes + * + * Due to hardware alignment restrictions (4K alignment), we need to
assume that we can have no more than 12K of data per descriptor, even + * though each descriptor can take up to 16K - 1 bytes of aligned memory. + * Thus, we need to divide by 12K. But division is slow! Instead, + * we decompose the operation into shifts and one relatively cheap + * multiply operation. + * + * To divide by 12K, we first divide by 4K, then divide by 3: + * To divide by 4K, shift right by 12 bits + * To divide by 3, multiply by 85, then divide by 256 + * (Divide by 256 is done by shifting right by 8 bits) + * Finally, we add one to round up. Because 256 isn't an exact multiple of + * 3, we'll underestimate near each multiple of 12K. This is actually more + * accurate as we have 4K - 1 of wiggle room that we can fit into the last + * segment. For our purposes this is accurate out to 1M which is orders of + * magnitude greater than our largest possible GSO size. + * + * This would then be implemented as: + * return (((size >> 12) * 85) >> 8) + IECM_TX_DESCS_FOR_SKB_DATA_PTR; + * + * Since multiplication and division are commutative, we can reorder + * operations into: + * return ((size * 85) >> 20) + IECM_TX_DESCS_FOR_SKB_DATA_PTR; + */ +static unsigned int __iecm_tx_desc_count_required(unsigned int size) +{ + /* stub */ +} + +/** + * iecm_tx_desc_count_required - calculate number of Tx descriptors needed + * @skb: send buffer + * + * Returns number of data descriptors needed for this skb. + */ +unsigned int iecm_tx_desc_count_required(struct sk_buff *skb) +{ + /* stub */ +} + +/** + * iecm_tx_splitq_map - Build the Tx flex descriptor + * @tx_q: queue to send buffer on + * @off: pointer to offload params struct + * @first: first buffer info buffer to use + * + * This function loops over the skb data pointed to by *first + * and gets a physical address for each memory location and programs + * it and the length into the transmit flex descriptor. 
+ */ +static void +iecm_tx_splitq_map(struct iecm_queue *tx_q, + struct iecm_tx_offload_params *off, + struct iecm_tx_buf *first) +{ + /* stub */ +} + +/** + * iecm_tso - computes mss and TSO length to prepare for TSO + * @first: pointer to struct iecm_tx_buf + * @off: pointer to struct that holds offload parameters + * + * Returns error (negative) if TSO doesn't apply to the given skb, + * 0 otherwise. + * + * Note: this function can be used in the splitq and singleq paths + */ +static int iecm_tso(struct iecm_tx_buf *first, + struct iecm_tx_offload_params *off) +{ + /* stub */ +} + +/** + * iecm_tx_splitq_frame - Sends buffer on Tx ring using flex descriptors + * @skb: send buffer + * @tx_q: queue to send buffer on + * + * Returns NETDEV_TX_OK if sent, else an error code + */ +static netdev_tx_t +iecm_tx_splitq_frame(struct sk_buff *skb, struct iecm_queue *tx_q) +{ + /* stub */ +} + +/** + * iecm_tx_splitq_start - Selects the right Tx queue to send buffer + * @skb: send buffer + * @netdev: network interface device structure + * + * Returns NETDEV_TX_OK if sent, else an error code + */ +netdev_tx_t iecm_tx_splitq_start(struct sk_buff *skb, + struct net_device *netdev) +{ + /* stub */ +} + +/** + * iecm_ptype_to_htype - get a hash type + * @vport: virtual port data + * @ptype: the ptype value from the descriptor + * + * Returns appropriate hash type (such as PKT_HASH_TYPE_L2/L3/L4) to be used by + * skb_set_hash based on PTYPE as parsed by HW Rx pipeline and is part of + * Rx desc. 
+ */ +static enum pkt_hash_types iecm_ptype_to_htype(struct iecm_vport *vport, + u16 ptype) +{ + /* stub */ +} + +/** + * iecm_rx_hash - set the hash value in the skb + * @rxq: Rx descriptor ring packet is being transacted on + * @skb: pointer to current skb being populated + * @rx_desc: Receive descriptor + * @ptype: the packet type decoded by hardware + */ +static void +iecm_rx_hash(struct iecm_queue *rxq, struct sk_buff *skb, + struct iecm_flex_rx_desc *rx_desc, u16 ptype) +{ + /* stub */ +} + +/** + * iecm_rx_csum - Indicate in skb if checksum is good + * @rxq: Rx descriptor ring packet is being transacted on + * @skb: pointer to current skb being populated + * @rx_desc: Receive descriptor + * @ptype: the packet type decoded by hardware + * + * skb->protocol must be set before this function is called + */ +static void +iecm_rx_csum(struct iecm_queue *rxq, struct sk_buff *skb, + struct iecm_flex_rx_desc *rx_desc, u16 ptype) +{ + /* stub */ +} + +/** + * iecm_rx_rsc - Set the RSC fields in the skb + * @rxq: Rx descriptor ring packet is being transacted on + * @skb: pointer to current skb being populated + * @rx_desc: Receive descriptor + * @ptype: the packet type decoded by hardware + * + * Populate the skb fields with the total number of RSC segments, RSC payload + * length and packet type.
+ */ +static bool iecm_rx_rsc(struct iecm_queue *rxq, struct sk_buff *skb, + struct iecm_flex_rx_desc *rx_desc, u16 ptype) +{ + /* stub */ +} + +/** + * iecm_rx_hwtstamp - check for an RX timestamp and pass up + * the stack + * @rx_desc: pointer to Rx descriptor containing timestamp + * @skb: skb to put timestamp in + */ +static void iecm_rx_hwtstamp(struct iecm_flex_rx_desc *rx_desc, + struct sk_buff __maybe_unused *skb) +{ + /* stub */ +} + +/** + * iecm_rx_process_skb_fields - Populate skb header fields from Rx descriptor + * @rxq: Rx descriptor ring packet is being transacted on + * @skb: pointer to current skb being populated + * @rx_desc: Receive descriptor + * + * This function checks the ring, descriptor, and packet information in + * order to populate the hash, checksum, VLAN, protocol, and + * other fields within the skb. + */ +static bool +iecm_rx_process_skb_fields(struct iecm_queue *rxq, struct sk_buff *skb, + struct iecm_flex_rx_desc *rx_desc) +{ + /* stub */ +} + +/** + * iecm_rx_skb - Send a completed packet up the stack + * @rxq: Rx ring in play + * @skb: packet to send up + * + * This function sends the completed packet (via skb) up the stack using + * GRO receive functions + */ +void iecm_rx_skb(struct iecm_queue *rxq, struct sk_buff *skb) +{ + /* stub */ +} + +/** + * iecm_rx_page_is_reserved - check if reuse is possible + * @page: page struct to check + */ +static bool iecm_rx_page_is_reserved(struct page *page) +{ + /* stub */ +} + +/** + * iecm_rx_buf_adjust_pg_offset - Prepare Rx buffer for reuse + * @rx_buf: Rx buffer to adjust + * @size: Size of adjustment + * + * Update the offset within page so that Rx buf will be ready to be reused.
+ * For systems with PAGE_SIZE < 8192 this function will flip the page offset + * so the second half of page assigned to Rx buffer will be used, otherwise + * the offset is moved by @size bytes + */ +static void +iecm_rx_buf_adjust_pg_offset(struct iecm_rx_buf *rx_buf, unsigned int size) +{ + /* stub */ +} + +/** + * iecm_rx_can_reuse_page - Determine if page can be reused for another Rx + * @rx_buf: buffer containing the page + * + * If page is reusable, we have a green light for calling iecm_reuse_rx_page, + * which will assign the current buffer to the buffer that next_to_alloc is + * pointing to; otherwise, the DMA mapping needs to be destroyed and + * page freed + */ +static bool iecm_rx_can_reuse_page(struct iecm_rx_buf *rx_buf) +{ + /* stub */ +} + +/** + * iecm_rx_add_frag - Add contents of Rx buffer to sk_buff as a frag + * @rx_buf: buffer containing page to add + * @skb: sk_buff to place the data into + * @size: packet length from rx_desc + * + * This function will add the data contained in rx_buf->page to the skb. + * It will just attach the page as a frag to the skb. + * The function will then update the page offset. + */ +void iecm_rx_add_frag(struct iecm_rx_buf *rx_buf, struct sk_buff *skb, + unsigned int size) +{ + /* stub */ +} + +/** + * iecm_rx_reuse_page - page flip buffer and store it back on the queue + * @rx_bufq: Rx descriptor ring to store buffers on + * @hsplit: true if header buffer, false otherwise + * @old_buf: donor buffer to have page reused + * + * Synchronizes page for reuse by the adapter + */ +void iecm_rx_reuse_page(struct iecm_queue *rx_bufq, + bool hsplit, + struct iecm_rx_buf *old_buf) +{ + /* stub */ +} + +/** + * iecm_rx_get_buf_page - Fetch Rx buffer page and synchronize data for use + * @dev: device struct + * @rx_buf: Rx buf to fetch page for + * @size: size of buffer to add to skb + * + * This function will pull an Rx buffer page from the ring and synchronize it + * for use by the CPU.
+ */ +static void +iecm_rx_get_buf_page(struct device *dev, struct iecm_rx_buf *rx_buf, + const unsigned int size) +{ + /* stub */ +} + +/** + * iecm_rx_construct_skb - Allocate skb and populate it + * @rxq: Rx descriptor queue + * @rx_buf: Rx buffer to pull data from + * @size: the length of the packet + * + * This function allocates an skb. It then populates it with the page + * data from the current receive descriptor, taking care to set up the + * skb correctly. + */ +struct sk_buff * +iecm_rx_construct_skb(struct iecm_queue *rxq, struct iecm_rx_buf *rx_buf, + unsigned int size) +{ + /* stub */ +} + +/** + * iecm_rx_cleanup_headers - Correct empty headers + * @skb: pointer to current skb being fixed + * + * Also address the case where we are pulling data in on pages only + * and as such no data is present in the skb header. + * + * In addition if skb is not at least 60 bytes we need to pad it so that + * it is large enough to qualify as a valid Ethernet frame. + * + * Returns true if an error was encountered and skb was freed. + */ +bool iecm_rx_cleanup_headers(struct sk_buff *skb) +{ + /* stub */ +} + +/** + * iecm_rx_splitq_test_staterr - tests bits in Rx descriptor + * status and error fields + * @stat_err_field: field from descriptor to test bits in + * @stat_err_bits: value to mask + * + */ +static bool +iecm_rx_splitq_test_staterr(u8 stat_err_field, const u8 stat_err_bits) +{ + /* stub */ +} + +/** + * iecm_rx_splitq_is_non_eop - process handling of non-EOP buffers + * @rx_desc: Rx descriptor for current buffer + * + * If the buffer is an EOP buffer, this function exits returning false, + * otherwise return true indicating that this is in fact a non-EOP buffer. 
+ */ +static bool +iecm_rx_splitq_is_non_eop(struct iecm_flex_rx_desc *rx_desc) +{ + /* stub */ +} + +/** + * iecm_rx_recycle_buf - Clean up used buffer and either recycle or free + * @rx_bufq: Rx descriptor queue to transact packets on + * @hsplit: true if buffer is a header buffer + * @rx_buf: Rx buffer to pull data from + * + * This function will clean up the contents of the rx_buf. It will either + * recycle the buffer or unmap it and free the associated resources. + * + * Returns true if the buffer is reused, false if the buffer is freed. + */ +bool iecm_rx_recycle_buf(struct iecm_queue *rx_bufq, bool hsplit, + struct iecm_rx_buf *rx_buf) +{ + /* stub */ +} + +/** + * iecm_rx_splitq_put_bufs - wrapper function to clean and recycle buffers + * @rx_bufq: Rx descriptor queue to transact packets on + * @hdr_buf: Rx header buffer to pull data from + * @rx_buf: Rx buffer to pull data from + * + * This function will update the next_to_use/next_to_alloc if the current + * buffer is recycled. + */ +static void iecm_rx_splitq_put_bufs(struct iecm_queue *rx_bufq, + struct iecm_rx_buf *hdr_buf, + struct iecm_rx_buf *rx_buf) +{ + /* stub */ +} + +/** + * iecm_rx_bump_ntc - Bump and wrap q->next_to_clean value + * @q: queue to bump + */ +static void iecm_rx_bump_ntc(struct iecm_queue *q) +{ + /* stub */ +} + +/** + * iecm_rx_splitq_clean - Clean completed descriptors from Rx queue + * @rxq: Rx descriptor queue to retrieve receive buffer queue + * @budget: Total limit on number of packets to process + * + * This function provides a "bounce buffer" approach to Rx interrupt + * processing. The advantage to this is that on systems that have + * expensive overhead for IOMMU access this provides a means of avoiding + * it by maintaining the mapping of the page to the system. 
+ * + * Returns amount of work completed + */ +static int iecm_rx_splitq_clean(struct iecm_queue *rxq, int budget) +{ + /* stub */ +} + +/** + * iecm_vport_intr_clean_queues - MSIX mode Interrupt Handler + * @irq: interrupt number + * @data: pointer to a q_vector + * + */ +irqreturn_t +iecm_vport_intr_clean_queues(int __always_unused irq, void *data) +{ + /* stub */ +} + +/** + * iecm_vport_intr_napi_dis_all - Disable NAPI for all q_vectors in the vport + * @vport: main vport structure + */ +static void iecm_vport_intr_napi_dis_all(struct iecm_vport *vport) +{ + /* stub */ +} + +/** + * iecm_vport_intr_rel - Free memory allocated for interrupt vectors + * @vport: virtual port + * + * Free the memory allocated for interrupt vectors associated to a vport + */ +static void iecm_vport_intr_rel(struct iecm_vport *vport) +{ + /* stub */ +} + +/** + * iecm_vport_intr_rel_irq - Free the IRQ association with the OS + * @vport: main vport structure + */ +static void iecm_vport_intr_rel_irq(struct iecm_vport *vport) +{ + /* stub */ +} + +/** + * iecm_vport_intr_dis_irq_all - Disable each interrupt + * @vport: main vport structure + */ +void iecm_vport_intr_dis_irq_all(struct iecm_vport *vport) +{ + /* stub */ +} + +/** + * iecm_vport_intr_buildreg_itr - Enable default interrupt generation settings + * @q_vector: pointer to q_vector + * @type: ITR index + * @itr: ITR value + */ +static u32 iecm_vport_intr_buildreg_itr(struct iecm_q_vector *q_vector, + const int type, u16 itr) +{ + /* stub */ +} + +static inline unsigned int iecm_itr_divisor(struct iecm_q_vector *q_vector) +{ + /* stub */ +} + +/** + * iecm_vport_intr_set_new_itr - update the ITR value based on statistics + * @q_vector: structure containing interrupt and ring information + * @itr: structure containing queue performance data + * @q_type: queue type + * + * Stores a new ITR value based on packets and byte + * counts during the last interrupt. 
The advantage of per interrupt + * computation is faster updates and more accurate ITR for the current + * traffic pattern. Constants in this function were computed + * based on theoretical maximum wire speed and thresholds were set based + * on testing data as well as attempting to minimize response time + * while increasing bulk throughput. + */ +static void iecm_vport_intr_set_new_itr(struct iecm_q_vector *q_vector, + struct iecm_itr *itr, + enum virtchnl_queue_type q_type) +{ + /* stub */ +} + +/** + * iecm_vport_intr_update_itr_ena_irq - Update ITR and re-enable MSIX interrupt + * @q_vector: q_vector for which ITR is being updated and interrupt enabled + */ +void iecm_vport_intr_update_itr_ena_irq(struct iecm_q_vector *q_vector) +{ + /* stub */ +} + +/** + * iecm_vport_intr_req_irq - get MSI-X vectors from the OS for the vport + * @vport: main vport structure + * @basename: name for the vector + */ +static int +iecm_vport_intr_req_irq(struct iecm_vport *vport, char *basename) +{ + /* stub */ +} + +/** + * iecm_vport_intr_ena_irq_all - Enable IRQ for the given vport + * @vport: main vport structure + */ +void iecm_vport_intr_ena_irq_all(struct iecm_vport *vport) +{ + /* stub */ +} + +/** + * iecm_vport_intr_deinit - Release all vector associations for the vport + * @vport: main vport structure + */ +void iecm_vport_intr_deinit(struct iecm_vport *vport) +{ + /* stub */ +} + +/** + * iecm_vport_intr_napi_ena_all - Enable NAPI for all q_vectors in the vport + * @vport: main vport structure + */ +static void +iecm_vport_intr_napi_ena_all(struct iecm_vport *vport) +{ + /* stub */ +} + +/** + * iecm_tx_splitq_clean_all - Clean completion queues + * @q_vec: queue vector + * @budget: Used to determine if we are in netpoll + * + * Returns false if clean is not complete else returns true + */ +static inline bool +iecm_tx_splitq_clean_all(struct iecm_q_vector *q_vec, int budget) +{ + /* stub */ +} + +/** + * iecm_rx_splitq_clean_all - Clean completion queues + * @q_vec:
queue vector + * @budget: Used to determine if we are in netpoll + * @cleaned: returns number of packets cleaned + * + * Returns false if clean is not complete else returns true + */ +static inline bool +iecm_rx_splitq_clean_all(struct iecm_q_vector *q_vec, int budget, + int *cleaned) +{ + /* stub */ +} + +/** + * iecm_vport_splitq_napi_poll - NAPI handler + * @napi: struct from which you get q_vector + * @budget: budget provided by stack + */ +static int iecm_vport_splitq_napi_poll(struct napi_struct *napi, int budget) +{ + /* stub */ +} + +/** + * iecm_vport_intr_map_vector_to_qs - Map vectors to queues + * @vport: virtual port + * + * Mapping for vectors to queues + */ +static void iecm_vport_intr_map_vector_to_qs(struct iecm_vport *vport) +{ + /* stub */ +} + +/** + * iecm_vport_intr_init_vec_idx - Initialize the vector indexes + * @vport: virtual port + * + * Initialize vector indexes with values returned over mailbox + */ +static int iecm_vport_intr_init_vec_idx(struct iecm_vport *vport) +{ + /* stub */ +} + +/** + * iecm_vport_intr_alloc - Allocate memory for interrupt vectors + * @vport: virtual port + * + * We allocate one q_vector per queue interrupt. If allocation fails we + * return -ENOMEM. + */ +static int iecm_vport_intr_alloc(struct iecm_vport *vport) +{ + /* stub */ +} + +/** + * iecm_vport_intr_init - Setup all vectors for the given vport + * @vport: virtual port + * + * Returns 0 on success or negative on failure + */ +int iecm_vport_intr_init(struct iecm_vport *vport) +{ + /* stub */ +} +EXPORT_SYMBOL(iecm_vport_calc_num_q_vec); + +/** + * iecm_config_rss - Prepare for RSS + * @vport: virtual port + * + * Return 0 on success, negative on failure + */ +int iecm_config_rss(struct iecm_vport *vport) +{ + /* stub */ +} + +/** + * iecm_get_rx_qid_list - Create a list of RX QIDs + * @vport: virtual port + * @qid_list: list of qids + * + * qid_list must be allocated for maximum entries to prevent buffer overflow. 
+ */ +void iecm_get_rx_qid_list(struct iecm_vport *vport, u16 *qid_list) +{ + /* stub */ +} + +/** + * iecm_fill_dflt_rss_lut - Fill the indirection table with the default values + * @vport: virtual port structure + * @qid_list: List of the Rx qids + * + * qid_list is created and freed by the caller + */ +void iecm_fill_dflt_rss_lut(struct iecm_vport *vport, u16 *qid_list) +{ + /* stub */ +} + +/** + * iecm_init_rss - Prepare for RSS + * @vport: virtual port + * + * Return 0 on success, negative on failure + */ +int iecm_init_rss(struct iecm_vport *vport) +{ + /* stub */ +} + +/** + * iecm_deinit_rss - Release RSS resources + * @vport: virtual port + * + */ +void iecm_deinit_rss(struct iecm_vport *vport) +{ + /* stub */ +} diff --git a/drivers/net/ethernet/intel/iecm/iecm_virtchnl.c b/drivers/net/ethernet/intel/iecm/iecm_virtchnl.c new file mode 100644 index 000000000000..2db9f53efb87 --- /dev/null +++ b/drivers/net/ethernet/intel/iecm/iecm_virtchnl.c @@ -0,0 +1,599 @@ +// SPDX-License-Identifier: GPL-2.0-only +/* Copyright (C) 2020 Intel Corporation */ + +#include + +/* Lookup table mapping the HW PTYPE to the bit field for decoding */ +static const +struct iecm_rx_ptype_decoded iecm_rx_ptype_lkup[IECM_RX_SUPP_PTYPE] = { + /* L2 Packet types */ + IECM_PTT_UNUSED_ENTRY(0), + IECM_PTT(1, L2, NONE, NOF, NONE, NONE, NOF, NONE, PAY2), + IECM_PTT(11, L2, NONE, NOF, NONE, NONE, NOF, NONE, NONE), + IECM_PTT_UNUSED_ENTRY(12), + + /* Non Tunneled IPv4 */ + IECM_PTT(22, IP, IPV4, FRG, NONE, NONE, NOF, NONE, PAY3), + IECM_PTT(23, IP, IPV4, NOF, NONE, NONE, NOF, NONE, PAY3), + IECM_PTT(24, IP, IPV4, NOF, NONE, NONE, NOF, UDP, PAY4), + IECM_PTT_UNUSED_ENTRY(25), + IECM_PTT(26, IP, IPV4, NOF, NONE, NONE, NOF, TCP, PAY4), + IECM_PTT(27, IP, IPV4, NOF, NONE, NONE, NOF, SCTP, PAY4), + IECM_PTT(28, IP, IPV4, NOF, NONE, NONE, NOF, ICMP, PAY4), + + /* Non Tunneled IPv6 */ + IECM_PTT(88, IP, IPV6, FRG, NONE, NONE, NOF, NONE, PAY3), + IECM_PTT(89, IP, IPV6, NOF, NONE, NONE, NOF, NONE, PAY3),
+ IECM_PTT(90, IP, IPV6, NOF, NONE, NONE, NOF, UDP, PAY3), + IECM_PTT_UNUSED_ENTRY(91), + IECM_PTT(92, IP, IPV6, NOF, NONE, NONE, NOF, TCP, PAY4), + IECM_PTT(93, IP, IPV6, NOF, NONE, NONE, NOF, SCTP, PAY4), + IECM_PTT(94, IP, IPV6, NOF, NONE, NONE, NOF, ICMP, PAY4), +}; + +/** + * iecm_recv_event_msg - Receive virtchnl event message + * @vport: virtual port structure + * + * Receive virtchnl event message + */ +static void iecm_recv_event_msg(struct iecm_vport *vport) +{ + /* stub */ +} + +/** + * iecm_mb_clean - Reclaim the send mailbox queue entries + * @adapter: Driver specific private structure + * + * Reclaim the send mailbox queue entries to be used to send further messages + * + * Returns success or failure + */ +static enum iecm_status +iecm_mb_clean(struct iecm_adapter *adapter) +{ + /* stub */ +} + +/** + * iecm_send_mb_msg - Send message over mailbox + * @adapter: Driver specific private structure + * @op: virtchnl opcode + * @msg_size: size of the payload + * @msg: pointer to buffer holding the payload + * + * Will prepare the control queue message and initiate the send API + * + * Returns success or failure + */ +enum iecm_status +iecm_send_mb_msg(struct iecm_adapter *adapter, enum virtchnl_ops op, + u16 msg_size, u8 *msg) +{ + /* stub */ +} +EXPORT_SYMBOL(iecm_send_mb_msg); + +/** + * iecm_recv_mb_msg - Receive message over mailbox + * @adapter: Driver specific private structure + * @op: virtchnl operation code + * @msg: Received message holding buffer + * @msg_size: message size + * + * Will receive the control queue message and post the receive buffer + */ +enum iecm_status +iecm_recv_mb_msg(struct iecm_adapter *adapter, enum virtchnl_ops op, + void *msg, int msg_size) +{ + /* stub */ +} +EXPORT_SYMBOL(iecm_recv_mb_msg); + +/** + * iecm_send_ver_msg - send virtchnl version message + * @adapter: Driver specific private structure + * + * Send virtchnl version message + */ +static enum iecm_status +iecm_send_ver_msg(struct iecm_adapter *adapter) +{ + /*
stub */ +} + +/** + * iecm_recv_ver_msg - Receive virtchnl version message + * @adapter: Driver specific private structure + * + * Receive virtchnl version message + */ +static enum iecm_status +iecm_recv_ver_msg(struct iecm_adapter *adapter) +{ + /* stub */ +} + +/** + * iecm_send_get_caps_msg - Send virtchnl get capabilities message + * @adapter: Driver specific private structure + * + * send virtchnl get capabilities message + */ +enum iecm_status +iecm_send_get_caps_msg(struct iecm_adapter *adapter) +{ + /* stub */ +} +EXPORT_SYMBOL(iecm_send_get_caps_msg); + +/** + * iecm_recv_get_caps_msg - Receive virtchnl get capabilities message + * @adapter: Driver specific private structure + * + * Receive virtchnl get capabilities message + */ +static enum iecm_status +iecm_recv_get_caps_msg(struct iecm_adapter *adapter) +{ + /* stub */ +} + +/** + * iecm_send_create_vport_msg - Send virtchnl create vport message + * @adapter: Driver specific private structure + * + * send virtchnl create vport message + * + * Returns success or failure + */ +static enum iecm_status +iecm_send_create_vport_msg(struct iecm_adapter *adapter) +{ + /* stub */ +} + +/** + * iecm_recv_create_vport_msg - Receive virtchnl create vport message + * @adapter: Driver specific private structure + * @vport_id: Virtual port identifier + * + * Receive virtchnl create vport message + * + * Returns success or failure + */ +static enum iecm_status +iecm_recv_create_vport_msg(struct iecm_adapter *adapter, + int *vport_id) +{ + /* stub */ +} + +/** + * iecm_wait_for_event - wait for virtchnl response + * @adapter: Driver private data structure + * @state: check on state upon timeout after 500ms + * @err_check: check if this specific error bit is set + * + * checks if state is set upon expiry of timeout + * + * Returns success or failure + */ +enum iecm_status +iecm_wait_for_event(struct iecm_adapter *adapter, + enum iecm_vport_vc_state state, + enum iecm_vport_vc_state err_check) +{ + /* stub */ +} 
+EXPORT_SYMBOL(iecm_wait_for_event); + +/** + * iecm_send_destroy_vport_msg - Send virtchnl destroy vport message + * @vport: virtual port data structure + * + * send virtchnl destroy vport message + */ +enum iecm_status +iecm_send_destroy_vport_msg(struct iecm_vport *vport) +{ + /* stub */ +} + +/** + * iecm_send_enable_vport_msg - Send virtchnl enable vport message + * @vport: virtual port data structure + * + * send enable vport virtchnl message + */ +enum iecm_status +iecm_send_enable_vport_msg(struct iecm_vport *vport) +{ + /* stub */ +} + +/** + * iecm_send_disable_vport_msg - Send virtchnl disable vport message + * @vport: virtual port data structure + * + * send disable vport virtchnl message + */ +enum iecm_status +iecm_send_disable_vport_msg(struct iecm_vport *vport) +{ + /* stub */ +} + +/** + * iecm_send_config_tx_queues_msg - Send virtchnl config Tx queues message + * @vport: virtual port data structure + * + * send config Tx queues virtchnl message + * + * Returns success or failure + */ +enum iecm_status +iecm_send_config_tx_queues_msg(struct iecm_vport *vport) +{ + /* stub */ +} + +/** + * iecm_send_config_rx_queues_msg - Send virtchnl config Rx queues message + * @vport: virtual port data structure + * + * send config Rx queues virtchnl message + * + * Returns success or failure + */ +enum iecm_status +iecm_send_config_rx_queues_msg(struct iecm_vport *vport) +{ + /* stub */ +} + +/** + * iecm_send_ena_dis_queues_msg - Send virtchnl enable or disable + * queues message + * @vport: virtual port data structure + * @vc_op: virtchnl op code to send + * + * send enable or disable queues virtchnl message + * + * Returns success or failure + */ +static enum iecm_status +iecm_send_ena_dis_queues_msg(struct iecm_vport *vport, + enum virtchnl_ops vc_op) +{ + /* stub */ +} + +/** + * iecm_send_map_unmap_queue_vector_msg - Send virtchnl map or unmap queue + * vector message + * @vport: virtual port data structure + * @map: true for map and false for unmap + * + 
* send map or unmap queue vector virtchnl message + * + * Returns success or failure + */ +static enum iecm_status +iecm_send_map_unmap_queue_vector_msg(struct iecm_vport *vport, + bool map) +{ + /* stub */ +} + +/** + * iecm_send_enable_queues_msg - send enable queues virtchnl message + * @vport: Virtual port private data structure + * + * Will send enable queues virtchnl message + */ +static enum iecm_status +iecm_send_enable_queues_msg(struct iecm_vport *vport) +{ + /* stub */ +} + +/** + * iecm_send_disable_queues_msg - send disable queues virtchnl message + * @vport: Virtual port private data structure + * + * Will send disable queues virtchnl message + */ +static enum iecm_status +iecm_send_disable_queues_msg(struct iecm_vport *vport) +{ + /* stub */ +} + +/** + * iecm_send_delete_queues_msg - send delete queues virtchnl message + * @vport: Virtual port private data structure + * + * Will send delete queues virtchnl message + */ +enum iecm_status +iecm_send_delete_queues_msg(struct iecm_vport *vport) +{ + /* stub */ +} + +/** + * iecm_send_config_queues_msg - Send config queues virtchnl message + * @vport: Virtual port private data structure + * + * Will send config queues virtchnl message + */ +static enum iecm_status +iecm_send_config_queues_msg(struct iecm_vport *vport) +{ + /* stub */ +} + +/** + * iecm_send_add_queues_msg - Send virtchnl add queues message + * @vport: Virtual port private data structure + * @num_tx_q: number of transmit queues + * @num_complq: number of transmit completion queues + * @num_rx_q: number of receive queues + * @num_rx_bufq: number of receive buffer queues + * + * Returns success or failure + */ +enum iecm_status +iecm_send_add_queues_msg(struct iecm_vport *vport, u16 num_tx_q, + u16 num_complq, u16 num_rx_q, u16 num_rx_bufq) +{ + /* stub */ +} + +/** + * iecm_send_get_stats_msg - Send virtchnl get statistics message + * @vport: vport to get stats for + * + * Returns success or failure + */ +enum iecm_status 
+iecm_send_get_stats_msg(struct iecm_vport *vport) +{ + /* stub */ +} + +/** + * iecm_send_get_set_rss_hash_msg - Send set or get RSS hash message + * @vport: virtual port data structure + * @get: flag to get or set RSS hash + * + * Returns success or failure + */ +enum iecm_status +iecm_send_get_set_rss_hash_msg(struct iecm_vport *vport, bool get) +{ + /* stub */ +} + +/** + * iecm_send_get_set_rss_lut_msg - Send virtchnl get or set RSS lut message + * @vport: virtual port data structure + * @get: flag to set or get RSS look up table + * + * Returns success or failure + */ +enum iecm_status +iecm_send_get_set_rss_lut_msg(struct iecm_vport *vport, bool get) +{ + /* stub */ +} + +/** + * iecm_send_get_set_rss_key_msg - Send virtchnl get or set RSS key message + * @vport: virtual port data structure + * @get: flag to set or get RSS key + * + * Returns success or failure + */ +enum iecm_status +iecm_send_get_set_rss_key_msg(struct iecm_vport *vport, bool get) +{ + /* stub */ +} + +/** + * iecm_send_get_rx_ptype_msg - Send virtchnl get Rx packet types message + * @vport: virtual port data structure + * + * Returns success or failure + */ +enum iecm_status iecm_send_get_rx_ptype_msg(struct iecm_vport *vport) +{ + /* stub */ +} + +/** + * iecm_find_ctlq - Given a type and id, find ctlq info + * @hw: hardware struct + * @type: type of ctrlq to find + * @id: ctlq id to find + * + * Returns pointer to found ctlq info struct, NULL otherwise.
+ */ +static struct iecm_ctlq_info *iecm_find_ctlq(struct iecm_hw *hw, + enum iecm_ctlq_type type, int id) +{ + /* stub */ +} + +/** + * iecm_deinit_dflt_mbx - De-initialize the default mailbox + * @adapter: adapter info struct + */ +void iecm_deinit_dflt_mbx(struct iecm_adapter *adapter) +{ + /* stub */ +} + +/** + * iecm_init_dflt_mbx - Setup default mailbox parameters and make request + * @adapter: adapter info struct + * + * Returns 0 on success, negative otherwise + */ +enum iecm_status iecm_init_dflt_mbx(struct iecm_adapter *adapter) +{ + /* stub */ +} + +/** + * iecm_vport_params_buf_alloc - Allocate memory for mailbox resources + * @adapter: Driver specific private data structure + * + * Will allocate memory to hold the vport parameters received over the mailbox + */ +int iecm_vport_params_buf_alloc(struct iecm_adapter *adapter) +{ + /* stub */ +} + +/** + * iecm_vport_params_buf_rel - Release memory for mailbox resources + * @adapter: Driver specific private data structure + * + * Will release the memory holding the vport parameters received over the + * mailbox + */ +void iecm_vport_params_buf_rel(struct iecm_adapter *adapter) +{ + /* stub */ +} + +/** + * iecm_vc_core_init - Initialize mailbox and get resources + * @adapter: Driver specific private structure + * @vport_id: Virtual port identifier + * + * Will check if HW is ready with reset complete. Initializes the mailbox and + * communicates with the master to get all the default vport parameters.
+ */ +int iecm_vc_core_init(struct iecm_adapter *adapter, int *vport_id) +{ + /* stub */ +} +EXPORT_SYMBOL(iecm_vc_core_init); + +/** + * iecm_vport_init - Initialize virtual port + * @vport: virtual port to be initialized + * @vport_id: Unique identification number of vport + * + * Will initialize vport with the info received through MB earlier + */ +static void iecm_vport_init(struct iecm_vport *vport, + __always_unused int vport_id) +{ + /* stub */ +} + +/** + * iecm_vport_get_vec_ids - Initialize vector id from Mailbox parameters + * @vecids: Array of vector ids + * @num_vecids: number of vector ids + * @chunks: vector ids received over mailbox + * + * Will initialize all vector ids with ids received as mailbox parameters + * Returns number of ids filled + */ +int +iecm_vport_get_vec_ids(u16 *vecids, int num_vecids, + struct virtchnl_vector_chunks *chunks) +{ + /* stub */ +} + +/** + * iecm_vport_get_queue_ids - Initialize queue id from Mailbox parameters + * @qids: Array of queue ids + * @num_qids: number of queue ids + * @q_type: queue model + * @chunks: queue ids received over mailbox + * + * Will initialize all queue ids with ids received as mailbox parameters + * Returns number of ids filled + */ +static int +iecm_vport_get_queue_ids(u16 *qids, int num_qids, + enum virtchnl_queue_type q_type, + struct virtchnl_queue_chunks *chunks) +{ + /* stub */ +} + +/** + * __iecm_vport_queue_ids_init - Initialize queue ids from Mailbox parameters + * @vport: virtual port for which the queues ids are initialized + * @qids: queue ids + * @num_qids: number of queue ids + * @q_type: type of queue + * + * Will initialize all queue ids with ids received as mailbox + * parameters. Returns number of queue ids initialized. 
+ */ +static int +__iecm_vport_queue_ids_init(struct iecm_vport *vport, u16 *qids, + int num_qids, enum virtchnl_queue_type q_type) +{ + /* stub */ +} + +/** + * iecm_vport_queue_ids_init - Initialize queue ids from Mailbox parameters + * @vport: virtual port for which the queues ids are initialized + * + * Will initialize all queue ids with ids received as mailbox + * parameters. Returns error if all the queues are not initialized + */ +static +enum iecm_status iecm_vport_queue_ids_init(struct iecm_vport *vport) +{ + /* stub */ +} + +/** + * iecm_vport_adjust_qs - Adjust to new requested queues + * @vport: virtual port data struct + * + * Renegotiate queues + */ +enum iecm_status iecm_vport_adjust_qs(struct iecm_vport *vport) +{ + /* stub */ +} + +/** + * iecm_is_capability_ena - Default implementation of capability checking + * @adapter: Private data struct + * @flag: flag to check + * + * Return true if capability is supported, false otherwise + */ +static bool iecm_is_capability_ena(struct iecm_adapter *adapter, u64 flag) +{ + /* stub */ +} + +/** + * iecm_vc_ops_init - Initialize virtchnl common API + * @adapter: Driver specific private structure + * + * Initialize the function pointers with the extended feature set functions + * as APF will deal only with new set of opcodes. 
+ */ +void iecm_vc_ops_init(struct iecm_adapter *adapter) +{ + /* stub */ +} +EXPORT_SYMBOL(iecm_vc_ops_init); From patchwork Tue Jun 23 22:40:34 2020 X-Patchwork-Submitter: Jeff Kirsher X-Patchwork-Id: 217261 From: Jeff Kirsher To: davem@davemloft.net Cc: Alice Michael , netdev@vger.kernel.org, nhorman@redhat.com, sassmann@redhat.com, Alan Brady , Phani Burra , Joshua Hay , Madhu Chittim , Pavan Kumar Linga , Donald Skidmore , Jesse Brandeburg , Sridhar Samudrala , Jeff Kirsher Subject: [net-next v2 06/15] iecm: Implement mailbox functionality Date: Tue, 23 Jun 2020 15:40:34 -0700 Message-Id: <20200623224043.801728-7-jeffrey.t.kirsher@intel.com> In-Reply-To: <20200623224043.801728-1-jeffrey.t.kirsher@intel.com> References: <20200623224043.801728-1-jeffrey.t.kirsher@intel.com> From: Alice Michael Implement mailbox setup, take down, and commands. Signed-off-by: Alice Michael Signed-off-by: Alan Brady Signed-off-by: Phani Burra Signed-off-by: Joshua Hay Signed-off-by: Madhu Chittim Signed-off-by: Pavan Kumar Linga Reviewed-by: Donald Skidmore Reviewed-by: Jesse Brandeburg Reviewed-by: Sridhar Samudrala Signed-off-by: Jeff Kirsher --- .../net/ethernet/intel/iecm/iecm_controlq.c | 497 +++++++++++++++++- .../ethernet/intel/iecm/iecm_controlq_setup.c | 105 +++- drivers/net/ethernet/intel/iecm/iecm_lib.c | 71 ++- drivers/net/ethernet/intel/iecm/iecm_osdep.c | 17 +- .../net/ethernet/intel/iecm/iecm_virtchnl.c | 427 ++++++++++++++- 5 files changed, 1088 insertions(+), 29 deletions(-) diff --git a/drivers/net/ethernet/intel/iecm/iecm_controlq.c b/drivers/net/ethernet/intel/iecm/iecm_controlq.c index 85ae216e8350..1ce5f4d4c348 100644 --- a/drivers/net/ethernet/intel/iecm/iecm_controlq.c +++ b/drivers/net/ethernet/intel/iecm/iecm_controlq.c @@ -14,7 +14,15 @@ static void iecm_ctlq_setup_regs(struct iecm_ctlq_info *cq, struct iecm_ctlq_create_info *q_create_info) { - /* stub */ + /* set head and tail registers in our local
struct */ + cq->reg.head = q_create_info->reg.head; + cq->reg.tail = q_create_info->reg.tail; + cq->reg.len = q_create_info->reg.len; + cq->reg.bah = q_create_info->reg.bah; + cq->reg.bal = q_create_info->reg.bal; + cq->reg.len_mask = q_create_info->reg.len_mask; + cq->reg.len_ena_mask = q_create_info->reg.len_ena_mask; + cq->reg.head_mask = q_create_info->reg.head_mask; } /** @@ -30,7 +38,32 @@ static enum iecm_status iecm_ctlq_init_regs(struct iecm_hw *hw, struct iecm_ctlq_info *cq, bool is_rxq) { - /* stub */ + u32 reg = 0; + + if (is_rxq) + /* Update tail to post pre-allocated buffers for Rx queues */ + wr32(hw, cq->reg.tail, (u32)(cq->ring_size - 1)); + else + wr32(hw, cq->reg.tail, 0); + + /* For non-Mailbox control queues, only the TAIL register needs to be set */ + if (cq->q_id != -1) + return 0; + + /* Clear the head register for both send and receive queues */ + wr32(hw, cq->reg.head, 0); + + /* set starting point */ + wr32(hw, cq->reg.bal, IECM_LO_DWORD(cq->desc_ring.pa)); + wr32(hw, cq->reg.bah, IECM_HI_DWORD(cq->desc_ring.pa)); + wr32(hw, cq->reg.len, (cq->ring_size | cq->reg.len_ena_mask)); + + /* Check one register to verify that config was applied */ + reg = rd32(hw, cq->reg.bah); + if (reg != IECM_HI_DWORD(cq->desc_ring.pa)) + return IECM_ERR_CTLQ_ERROR; + + return 0; } /** @@ -42,7 +75,30 @@ static enum iecm_status iecm_ctlq_init_regs(struct iecm_hw *hw, */ static void iecm_ctlq_init_rxq_bufs(struct iecm_ctlq_info *cq) { - /* stub */ + int i = 0; + + for (i = 0; i < cq->ring_size; i++) { + struct iecm_ctlq_desc *desc = IECM_CTLQ_DESC(cq, i); + struct iecm_dma_mem *bi = cq->bi.rx_buff[i]; + + /* No buffer to post to descriptor, continue */ + if (!bi) + continue; + + desc->flags = + cpu_to_le16(IECM_CTLQ_FLAG_BUF | IECM_CTLQ_FLAG_RD); + desc->opcode = 0; + desc->datalen = (__le16)cpu_to_le16(bi->size); + desc->ret_val = 0; + desc->cookie_high = 0; + desc->cookie_low = 0; + desc->params.indirect.addr_high = + cpu_to_le32(IECM_HI_DWORD(bi->pa)); + desc->params.indirect.addr_low = + 
cpu_to_le32(IECM_LO_DWORD(bi->pa)); + desc->params.indirect.param0 = 0; + desc->params.indirect.param1 = 0; + } } /** @@ -54,7 +110,20 @@ static void iecm_ctlq_init_rxq_bufs(struct iecm_ctlq_info *cq) */ static void iecm_ctlq_shutdown(struct iecm_hw *hw, struct iecm_ctlq_info *cq) { - /* stub */ + mutex_lock(&cq->cq_lock); + + if (!cq->ring_size) + goto shutdown_sq_out; + + /* free ring buffers and the ring itself */ + iecm_ctlq_dealloc_ring_res(hw, cq); + + /* Set ring_size to 0 to indicate uninitialized queue */ + cq->ring_size = 0; + +shutdown_sq_out: + mutex_unlock(&cq->cq_lock); + mutex_destroy(&cq->cq_lock); } /** @@ -73,7 +142,74 @@ enum iecm_status iecm_ctlq_add(struct iecm_hw *hw, struct iecm_ctlq_create_info *qinfo, struct iecm_ctlq_info **cq) { - /* stub */ + enum iecm_status status = 0; + bool is_rxq = false; + + if (!qinfo->len || !qinfo->buf_size || + qinfo->len > IECM_CTLQ_MAX_RING_SIZE || + qinfo->buf_size > IECM_CTLQ_MAX_BUF_LEN) + return IECM_ERR_CFG; + + *cq = kcalloc(1, sizeof(struct iecm_ctlq_info), GFP_KERNEL); + if (!(*cq)) + return IECM_ERR_NO_MEMORY; + + (*cq)->cq_type = qinfo->type; + (*cq)->q_id = qinfo->id; + (*cq)->buf_size = qinfo->buf_size; + (*cq)->ring_size = qinfo->len; + + (*cq)->next_to_use = 0; + (*cq)->next_to_clean = 0; + (*cq)->next_to_post = (*cq)->ring_size - 1; + + switch (qinfo->type) { + case IECM_CTLQ_TYPE_MAILBOX_RX: + is_rxq = true; + fallthrough; + case IECM_CTLQ_TYPE_MAILBOX_TX: + status = iecm_ctlq_alloc_ring_res(hw, *cq); + break; + default: + status = IECM_ERR_PARAM; + break; + } + + if (status) + goto init_free_q; + + if (is_rxq) { + iecm_ctlq_init_rxq_bufs(*cq); + } else { + /* Allocate the array of msg pointers for TX queues */ + (*cq)->bi.tx_msg = kcalloc(qinfo->len, + sizeof(struct iecm_ctlq_msg *), + GFP_KERNEL); + if (!(*cq)->bi.tx_msg) { + status = IECM_ERR_NO_MEMORY; + goto init_dealloc_q_mem; + } + } + + iecm_ctlq_setup_regs(*cq, qinfo); + + status = iecm_ctlq_init_regs(hw, *cq, is_rxq); + if (status) + 
goto init_dealloc_q_mem; + + mutex_init(&(*cq)->cq_lock); + + list_add(&(*cq)->cq_list, &hw->cq_list_head); + + return status; + +init_dealloc_q_mem: + /* free ring buffers and the ring itself */ + iecm_ctlq_dealloc_ring_res(hw, *cq); +init_free_q: + kfree(*cq); + + return status; } /** @@ -84,7 +220,9 @@ enum iecm_status iecm_ctlq_add(struct iecm_hw *hw, void iecm_ctlq_remove(struct iecm_hw *hw, struct iecm_ctlq_info *cq) { - /* stub */ + list_del(&cq->cq_list); + iecm_ctlq_shutdown(hw, cq); + kfree(cq); } /** @@ -101,7 +239,27 @@ void iecm_ctlq_remove(struct iecm_hw *hw, enum iecm_status iecm_ctlq_init(struct iecm_hw *hw, u8 num_q, struct iecm_ctlq_create_info *q_info) { - /* stub */ + struct iecm_ctlq_info *cq = NULL, *tmp = NULL; + enum iecm_status ret_code = 0; + int i = 0; + + INIT_LIST_HEAD(&hw->cq_list_head); + + for (i = 0; i < num_q; i++) { + struct iecm_ctlq_create_info *qinfo = q_info + i; + + ret_code = iecm_ctlq_add(hw, qinfo, &cq); + if (ret_code) + goto init_destroy_qs; + } + + return ret_code; + +init_destroy_qs: + list_for_each_entry_safe(cq, tmp, &hw->cq_list_head, cq_list) + iecm_ctlq_remove(hw, cq); + + return ret_code; } /** @@ -110,7 +268,13 @@ enum iecm_status iecm_ctlq_init(struct iecm_hw *hw, u8 num_q, */ enum iecm_status iecm_ctlq_deinit(struct iecm_hw *hw) { - /* stub */ + struct iecm_ctlq_info *cq = NULL, *tmp = NULL; + enum iecm_status ret_code = 0; + + list_for_each_entry_safe(cq, tmp, &hw->cq_list_head, cq_list) + iecm_ctlq_remove(hw, cq); + + return ret_code; } /** @@ -130,7 +294,79 @@ enum iecm_status iecm_ctlq_send(struct iecm_hw *hw, u16 num_q_msg, struct iecm_ctlq_msg q_msg[]) { - /* stub */ + enum iecm_status status = 0; + struct iecm_ctlq_desc *desc; + int num_desc_avail = 0; + int i = 0; + + if (!cq || !cq->ring_size) + return IECM_ERR_CTLQ_EMPTY; + + mutex_lock(&cq->cq_lock); + + /* Ensure there are enough descriptors to send all messages */ + num_desc_avail = IECM_CTLQ_DESC_UNUSED(cq); + if (num_desc_avail == 0 || 
num_desc_avail < num_q_msg) { + status = IECM_ERR_CTLQ_FULL; + goto sq_send_command_out; + } + + for (i = 0; i < num_q_msg; i++) { + struct iecm_ctlq_msg *msg = &q_msg[i]; + u64 msg_cookie; + + desc = IECM_CTLQ_DESC(cq, cq->next_to_use); + + desc->opcode = cpu_to_le16(msg->opcode); + desc->pfid_vfid = cpu_to_le16(msg->func_id); + + msg_cookie = *(u64 *)&msg->cookie; + desc->cookie_high = + cpu_to_le32(IECM_HI_DWORD(msg_cookie)); + desc->cookie_low = + cpu_to_le32(IECM_LO_DWORD(msg_cookie)); + + if (msg->data_len) { + struct iecm_dma_mem *buff = msg->ctx.indirect.payload; + + desc->datalen = cpu_to_le16(msg->data_len); + desc->flags |= cpu_to_le16(IECM_CTLQ_FLAG_BUF); + desc->flags |= cpu_to_le16(IECM_CTLQ_FLAG_RD); + + /* Update the address values in the desc with the pa + * value for respective buffer + */ + desc->params.indirect.addr_high = + cpu_to_le32(IECM_HI_DWORD(buff->pa)); + desc->params.indirect.addr_low = + cpu_to_le32(IECM_LO_DWORD(buff->pa)); + + memcpy(&desc->params, msg->ctx.indirect.context, + IECM_INDIRECT_CTX_SIZE); + } else { + memcpy(&desc->params, msg->ctx.direct, + IECM_DIRECT_CTX_SIZE); + } + + /* Store buffer info */ + cq->bi.tx_msg[cq->next_to_use] = msg; + + (cq->next_to_use)++; + if (cq->next_to_use == cq->ring_size) + cq->next_to_use = 0; + } + + /* Force memory write to complete before letting hardware + * know that there are new descriptors to fetch. 
+ */ + iecm_wmb(); + + wr32(hw, cq->reg.tail, cq->next_to_use); + +sq_send_command_out: + mutex_unlock(&cq->cq_lock); + + return status; } /** @@ -152,7 +388,58 @@ enum iecm_status iecm_ctlq_clean_sq(struct iecm_ctlq_info *cq, u16 *clean_count, struct iecm_ctlq_msg *msg_status[]) { - /* stub */ + enum iecm_status ret = 0; + struct iecm_ctlq_desc *desc; + u16 i = 0, num_to_clean; + u16 ntc, desc_err; + + if (!cq || !cq->ring_size) + return IECM_ERR_CTLQ_EMPTY; + + if (*clean_count == 0) + return 0; + else if (*clean_count > cq->ring_size) + return IECM_ERR_PARAM; + + mutex_lock(&cq->cq_lock); + + ntc = cq->next_to_clean; + + num_to_clean = *clean_count; + + for (i = 0; i < num_to_clean; i++) { + /* Fetch next descriptor and check if marked as done */ + desc = IECM_CTLQ_DESC(cq, ntc); + if (!(le16_to_cpu(desc->flags) & IECM_CTLQ_FLAG_DD)) + break; + + desc_err = le16_to_cpu(desc->ret_val); + if (desc_err) { + /* strip off FW internal code */ + desc_err &= 0xff; + } + + msg_status[i] = cq->bi.tx_msg[ntc]; + msg_status[i]->status = desc_err; + + cq->bi.tx_msg[ntc] = NULL; + + /* Zero out any stale data */ + memset(desc, 0, sizeof(*desc)); + + ntc++; + if (ntc == cq->ring_size) + ntc = 0; + } + + cq->next_to_clean = ntc; + + mutex_unlock(&cq->cq_lock); + + /* Return number of descriptors actually cleaned */ + *clean_count = i; + + return ret; } /** @@ -175,7 +462,111 @@ enum iecm_status iecm_ctlq_post_rx_buffs(struct iecm_hw *hw, u16 *buff_count, struct iecm_dma_mem **buffs) { - /* stub */ + enum iecm_status status = 0; + struct iecm_ctlq_desc *desc; + u16 ntp = cq->next_to_post; + bool buffs_avail = false; + u16 tbp = ntp + 1; + int i = 0; + + if (*buff_count > cq->ring_size) + return IECM_ERR_PARAM; + + if (*buff_count > 0) + buffs_avail = true; + + mutex_lock(&cq->cq_lock); + + if (tbp >= cq->ring_size) + tbp = 0; + + if (tbp == cq->next_to_clean) + /* Nothing to do */ + goto post_buffs_out; + + /* Post buffers for as many as provided or up until the last one used */ 
+ while (ntp != cq->next_to_clean) { + desc = IECM_CTLQ_DESC(cq, ntp); + + if (!cq->bi.rx_buff[ntp]) { + if (!buffs_avail) { + /* If the caller hasn't given us any buffers or + * there are none left, search the ring itself + * for an available buffer to move to this + * entry starting at the next entry in the ring + */ + tbp = ntp + 1; + + /* Wrap ring if necessary */ + if (tbp >= cq->ring_size) + tbp = 0; + + while (tbp != cq->next_to_clean) { + if (cq->bi.rx_buff[tbp]) { + cq->bi.rx_buff[ntp] = + cq->bi.rx_buff[tbp]; + cq->bi.rx_buff[tbp] = NULL; + + /* Found a buffer, no need to + * search anymore + */ + break; + } + + /* Wrap ring if necessary */ + tbp++; + if (tbp >= cq->ring_size) + tbp = 0; + } + + if (tbp == cq->next_to_clean) + goto post_buffs_out; + } else { + /* Give back pointer to DMA buffer */ + cq->bi.rx_buff[ntp] = buffs[i]; + i++; + + if (i >= *buff_count) + buffs_avail = false; + } + } + + desc->flags = + cpu_to_le16(IECM_CTLQ_FLAG_BUF | IECM_CTLQ_FLAG_RD); + + /* Post buffers to descriptor */ + desc->datalen = cpu_to_le16(cq->bi.rx_buff[ntp]->size); + desc->params.indirect.addr_high = + cpu_to_le32(IECM_HI_DWORD(cq->bi.rx_buff[ntp]->pa)); + desc->params.indirect.addr_low = + cpu_to_le32(IECM_LO_DWORD(cq->bi.rx_buff[ntp]->pa)); + + ntp++; + if (ntp == cq->ring_size) + ntp = 0; + } + +post_buffs_out: + /* Only update tail if buffers were actually posted */ + if (cq->next_to_post != ntp) { + if (ntp) + /* Update next_to_post to ntp - 1 since current ntp + * will not have a buffer + */ + cq->next_to_post = ntp - 1; + else + /* Wrap to end of end ring since current ntp is 0 */ + cq->next_to_post = cq->ring_size - 1; + + wr32(hw, cq->reg.tail, cq->next_to_post); + } + + mutex_unlock(&cq->cq_lock); + + /* return the number of buffers that were not posted */ + *buff_count = *buff_count - i; + + return status; } /** @@ -192,5 +583,87 @@ enum iecm_status iecm_ctlq_post_rx_buffs(struct iecm_hw *hw, enum iecm_status iecm_ctlq_recv(struct iecm_ctlq_info *cq, 
u16 *num_q_msg, struct iecm_ctlq_msg *q_msg) { - /* stub */ + u16 num_to_clean, ntc, ret_val, flags; + enum iecm_status ret_code = 0; + struct iecm_ctlq_desc *desc; + u16 i = 0; + + if (!cq || !cq->ring_size) + return IECM_ERR_CTLQ_EMPTY; + + if (*num_q_msg == 0) + return 0; + else if (*num_q_msg > cq->ring_size) + return IECM_ERR_PARAM; + + /* take the lock before we start messing with the ring */ + mutex_lock(&cq->cq_lock); + + ntc = cq->next_to_clean; + + num_to_clean = *num_q_msg; + + for (i = 0; i < num_to_clean; i++) { + u64 msg_cookie; + + /* Fetch next descriptor and check if marked as done */ + desc = IECM_CTLQ_DESC(cq, ntc); + flags = le16_to_cpu(desc->flags); + + if (!(flags & IECM_CTLQ_FLAG_DD)) + break; + + ret_val = le16_to_cpu(desc->ret_val); + + q_msg[i].vmvf_type = (flags & + (IECM_CTLQ_FLAG_FTYPE_VM | + IECM_CTLQ_FLAG_FTYPE_PF)) >> + IECM_CTLQ_FLAG_FTYPE_S; + + if (flags & IECM_CTLQ_FLAG_ERR) + ret_code = IECM_ERR_CTLQ_ERROR; + + msg_cookie = (u64)le32_to_cpu(desc->cookie_high) << 32; + msg_cookie |= (u64)le32_to_cpu(desc->cookie_low); + memcpy(&q_msg[i].cookie, &msg_cookie, sizeof(u64)); + + q_msg[i].opcode = le16_to_cpu(desc->opcode); + q_msg[i].data_len = le16_to_cpu(desc->datalen); + q_msg[i].status = ret_val; + + if (desc->datalen) { + memcpy(q_msg[i].ctx.indirect.context, + &desc->params.indirect, IECM_INDIRECT_CTX_SIZE); + + /* Assign pointer to DMA buffer to ctlq_msg array + * to be given to upper layer + */ + q_msg[i].ctx.indirect.payload = cq->bi.rx_buff[ntc]; + + /* Zero out pointer to DMA buffer info; + * will be repopulated by post buffers API + */ + cq->bi.rx_buff[ntc] = NULL; + } else { + memcpy(q_msg[i].ctx.direct, desc->params.raw, + IECM_DIRECT_CTX_SIZE); + } + + /* Zero out stale data in descriptor */ + memset(desc, 0, sizeof(struct iecm_ctlq_desc)); + + ntc++; + if (ntc == cq->ring_size) + ntc = 0; + }; + + cq->next_to_clean = ntc; + + mutex_unlock(&cq->cq_lock); + + *num_q_msg = i; + if (*num_q_msg == 0) + ret_code = 
IECM_ERR_CTLQ_NO_WORK; + + return ret_code; } diff --git a/drivers/net/ethernet/intel/iecm/iecm_controlq_setup.c b/drivers/net/ethernet/intel/iecm/iecm_controlq_setup.c index 2fd6e3d15a1a..f230f2044a48 100644 --- a/drivers/net/ethernet/intel/iecm/iecm_controlq_setup.c +++ b/drivers/net/ethernet/intel/iecm/iecm_controlq_setup.c @@ -13,7 +13,13 @@ static enum iecm_status iecm_ctlq_alloc_desc_ring(struct iecm_hw *hw, struct iecm_ctlq_info *cq) { - /* stub */ + size_t size = cq->ring_size * sizeof(struct iecm_ctlq_desc); + + cq->desc_ring.va = iecm_alloc_dma_mem(hw, &cq->desc_ring, size); + if (!cq->desc_ring.va) + return IECM_ERR_NO_MEMORY; + + return 0; } /** @@ -27,7 +33,52 @@ iecm_ctlq_alloc_desc_ring(struct iecm_hw *hw, static enum iecm_status iecm_ctlq_alloc_bufs(struct iecm_hw *hw, struct iecm_ctlq_info *cq) { - /* stub */ + int i = 0; + + /* Do not allocate DMA buffers for transmit queues */ + if (cq->cq_type == IECM_CTLQ_TYPE_MAILBOX_TX) + return 0; + + /* We'll be allocating the buffer info memory first, then we can + * allocate the mapped buffers for the event processing + */ + cq->bi.rx_buff = kcalloc(cq->ring_size, sizeof(struct iecm_dma_mem *), + GFP_KERNEL); + if (!cq->bi.rx_buff) + return IECM_ERR_NO_MEMORY; + + /* allocate the mapped buffers (except for the last one) */ + for (i = 0; i < cq->ring_size - 1; i++) { + struct iecm_dma_mem *bi; + int num = 1; /* number of iecm_dma_mem to be allocated */ + + cq->bi.rx_buff[i] = kcalloc(num, sizeof(struct iecm_dma_mem), + GFP_KERNEL); + if (!cq->bi.rx_buff[i]) + goto unwind_alloc_cq_bufs; + + bi = cq->bi.rx_buff[i]; + + bi->va = iecm_alloc_dma_mem(hw, bi, cq->buf_size); + if (!bi->va) { + /* unwind will not free the failed entry */ + kfree(cq->bi.rx_buff[i]); + goto unwind_alloc_cq_bufs; + } + } + + return 0; + +unwind_alloc_cq_bufs: + /* don't try to free the one that failed... 
*/ + i--; + for (; i >= 0; i--) { + iecm_free_dma_mem(hw, cq->bi.rx_buff[i]); + kfree(cq->bi.rx_buff[i]); + } + kfree(cq->bi.rx_buff); + + return IECM_ERR_NO_MEMORY; } /** @@ -41,7 +92,7 @@ static enum iecm_status iecm_ctlq_alloc_bufs(struct iecm_hw *hw, static void iecm_ctlq_free_desc_ring(struct iecm_hw *hw, struct iecm_ctlq_info *cq) { - /* stub */ + iecm_free_dma_mem(hw, &cq->desc_ring); } /** @@ -54,7 +105,26 @@ static void iecm_ctlq_free_desc_ring(struct iecm_hw *hw, */ static void iecm_ctlq_free_bufs(struct iecm_hw *hw, struct iecm_ctlq_info *cq) { - /* stub */ + void *bi; + + if (cq->cq_type == IECM_CTLQ_TYPE_MAILBOX_RX) { + int i; + + /* free DMA buffers for Rx queues*/ + for (i = 0; i < cq->ring_size; i++) { + if (cq->bi.rx_buff[i]) { + iecm_free_dma_mem(hw, cq->bi.rx_buff[i]); + kfree(cq->bi.rx_buff[i]); + } + } + + bi = (void *)cq->bi.rx_buff; + } else { + bi = (void *)cq->bi.tx_msg; + } + + /* free the buffer header */ + kfree(bi); } /** @@ -66,7 +136,9 @@ static void iecm_ctlq_free_bufs(struct iecm_hw *hw, struct iecm_ctlq_info *cq) */ void iecm_ctlq_dealloc_ring_res(struct iecm_hw *hw, struct iecm_ctlq_info *cq) { - /* stub */ + /* free ring buffers and the ring itself */ + iecm_ctlq_free_bufs(hw, cq); + iecm_ctlq_free_desc_ring(hw, cq); } /** @@ -80,5 +152,26 @@ void iecm_ctlq_dealloc_ring_res(struct iecm_hw *hw, struct iecm_ctlq_info *cq) enum iecm_status iecm_ctlq_alloc_ring_res(struct iecm_hw *hw, struct iecm_ctlq_info *cq) { - /* stub */ + enum iecm_status ret_code; + + /* verify input for valid configuration */ + if (!cq->ring_size || !cq->buf_size) + return IECM_ERR_CFG; + + /* allocate the ring memory */ + ret_code = iecm_ctlq_alloc_desc_ring(hw, cq); + if (ret_code) + return ret_code; + + /* allocate buffers in the rings */ + ret_code = iecm_ctlq_alloc_bufs(hw, cq); + if (ret_code) + goto iecm_init_cq_free_ring; + + /* success! 
*/ + return 0; + +iecm_init_cq_free_ring: + iecm_free_dma_mem(hw, &cq->desc_ring); + return ret_code; } diff --git a/drivers/net/ethernet/intel/iecm/iecm_lib.c b/drivers/net/ethernet/intel/iecm/iecm_lib.c index 9aa944430187..afceab24aeda 100644 --- a/drivers/net/ethernet/intel/iecm/iecm_lib.c +++ b/drivers/net/ethernet/intel/iecm/iecm_lib.c @@ -163,7 +163,76 @@ static int iecm_intr_req(struct iecm_adapter *adapter) */ static int iecm_cfg_netdev(struct iecm_vport *vport) { - /* stub */ + struct iecm_adapter *adapter = vport->adapter; + netdev_features_t dflt_features; + netdev_features_t offloads = 0; + struct iecm_netdev_priv *np; + struct net_device *netdev; + int err; + + netdev = alloc_etherdev_mqs(sizeof(struct iecm_netdev_priv), + IECM_MAX_Q, IECM_MAX_Q); + if (!netdev) + return -ENOMEM; + vport->netdev = netdev; + np = netdev_priv(netdev); + np->vport = vport; + + if (!is_valid_ether_addr(vport->default_mac_addr)) { + eth_hw_addr_random(netdev); + ether_addr_copy(vport->default_mac_addr, netdev->dev_addr); + } else { + ether_addr_copy(netdev->dev_addr, vport->default_mac_addr); + ether_addr_copy(netdev->perm_addr, vport->default_mac_addr); + } + + /* assign netdev_ops */ + if (iecm_is_queue_model_split(vport->txq_model)) + netdev->netdev_ops = &iecm_netdev_ops_splitq; + else + netdev->netdev_ops = &iecm_netdev_ops_singleq; + + /* setup watchdog timeout value to be 5 second */ + netdev->watchdog_timeo = 5 * HZ; + + /* configure default MTU size */ + netdev->min_mtu = ETH_MIN_MTU; + netdev->max_mtu = vport->max_mtu; + + dflt_features = NETIF_F_SG | + NETIF_F_HIGHDMA | + NETIF_F_RXHASH; + + if (iecm_is_cap_ena(adapter, VIRTCHNL_CAP_STATELESS_OFFLOADS)) { + dflt_features |= + NETIF_F_RXCSUM | + NETIF_F_IP_CSUM | + NETIF_F_IPV6_CSUM | + 0; + offloads |= NETIF_F_TSO | + NETIF_F_TSO6; + } + if (iecm_is_cap_ena(adapter, VIRTCHNL_CAP_UDP_SEG_OFFLOAD)) + offloads |= NETIF_F_GSO_UDP_L4; + + netdev->features |= dflt_features; + netdev->hw_features |= dflt_features | 
offloads; + netdev->hw_enc_features |= dflt_features | offloads; + + iecm_set_ethtool_ops(netdev); + SET_NETDEV_DEV(netdev, &adapter->pdev->dev); + + err = register_netdev(netdev); + if (err) + return err; + + /* carrier off on init to avoid Tx hangs */ + netif_carrier_off(netdev); + + /* make sure transmit queues start off as stopped */ + netif_tx_stop_all_queues(netdev); + + return 0; } /** diff --git a/drivers/net/ethernet/intel/iecm/iecm_osdep.c b/drivers/net/ethernet/intel/iecm/iecm_osdep.c index d0534df357d0..be5dbe1d23b2 100644 --- a/drivers/net/ethernet/intel/iecm/iecm_osdep.c +++ b/drivers/net/ethernet/intel/iecm/iecm_osdep.c @@ -6,10 +6,23 @@ void *iecm_alloc_dma_mem(struct iecm_hw *hw, struct iecm_dma_mem *mem, u64 size) { - /* stub */ + size_t sz = ALIGN(size, 4096); + struct iecm_adapter *pf = hw->back; + + mem->va = dma_alloc_coherent(&pf->pdev->dev, sz, + &mem->pa, GFP_KERNEL | __GFP_ZERO); + mem->size = size; + + return mem->va; } void iecm_free_dma_mem(struct iecm_hw *hw, struct iecm_dma_mem *mem) { - /* stub */ + struct iecm_adapter *pf = hw->back; + + dma_free_coherent(&pf->pdev->dev, mem->size, + mem->va, mem->pa); + mem->size = 0; + mem->va = NULL; + mem->pa = 0; } diff --git a/drivers/net/ethernet/intel/iecm/iecm_virtchnl.c b/drivers/net/ethernet/intel/iecm/iecm_virtchnl.c index 6fa441e34e21..c5061778ce74 100644 --- a/drivers/net/ethernet/intel/iecm/iecm_virtchnl.c +++ b/drivers/net/ethernet/intel/iecm/iecm_virtchnl.c @@ -39,7 +39,42 @@ struct iecm_rx_ptype_decoded iecm_rx_ptype_lkup[IECM_RX_SUPP_PTYPE] = { */ static void iecm_recv_event_msg(struct iecm_vport *vport) { - /* stub */ + struct iecm_adapter *adapter = vport->adapter; + enum virtchnl_link_speed link_speed; + struct virtchnl_pf_event *vpe; + + vpe = (struct virtchnl_pf_event *)vport->adapter->vc_msg; + + switch (vpe->event) { + case VIRTCHNL_EVENT_LINK_CHANGE: + link_speed = vpe->event_data.link_event.link_speed; + adapter->link_speed = link_speed; + if (adapter->link_up != + 
vpe->event_data.link_event.link_status) { + adapter->link_up = + vpe->event_data.link_event.link_status; + if (adapter->link_up) { + netif_tx_start_all_queues(vport->netdev); + netif_carrier_on(vport->netdev); + } else { + netif_tx_stop_all_queues(vport->netdev); + netif_carrier_off(vport->netdev); + } + } + break; + case VIRTCHNL_EVENT_RESET_IMPENDING: + mutex_lock(&adapter->reset_lock); + set_bit(__IECM_HR_CORE_RESET, adapter->flags); + queue_delayed_work(adapter->vc_event_wq, + &adapter->vc_event_task, + msecs_to_jiffies(10)); + break; + default: + dev_err(&vport->adapter->pdev->dev, + "Unknown event %d from PF\n", vpe->event); + break; + } + mutex_unlock(&vport->adapter->vc_msg_lock); } /** @@ -53,7 +88,34 @@ static void iecm_recv_event_msg(struct iecm_vport *vport) static enum iecm_status iecm_mb_clean(struct iecm_adapter *adapter) { - /* stub */ + enum iecm_status err = 0; + u16 num_q_msg = IECM_DFLT_MBX_Q_LEN; + struct iecm_ctlq_msg **q_msg; + struct iecm_dma_mem *dma_mem; + u16 i; + + q_msg = kcalloc(num_q_msg, sizeof(struct iecm_ctlq_msg *), GFP_KERNEL); + if (!q_msg) { + err = IECM_ERR_NO_MEMORY; + goto error; + } + + err = iecm_ctlq_clean_sq(adapter->hw.asq, &num_q_msg, + q_msg); + if (err) + goto error; + + for (i = 0; i < num_q_msg; i++) { + dma_mem = q_msg[i]->ctx.indirect.payload; + if (dma_mem) + dmam_free_coherent(&adapter->pdev->dev, dma_mem->size, + dma_mem->va, dma_mem->pa); + kfree(q_msg[i]); + } + kfree(q_msg); + +error: + return err; } /** @@ -71,7 +133,56 @@ enum iecm_status iecm_send_mb_msg(struct iecm_adapter *adapter, enum virtchnl_ops op, u16 msg_size, u8 *msg) { - /* stub */ + enum iecm_status err = 0; + struct iecm_ctlq_msg *ctlq_msg; + struct iecm_dma_mem *dma_mem; + + err = iecm_mb_clean(adapter); + if (err) + goto err; + + ctlq_msg = kzalloc(sizeof(struct iecm_ctlq_msg), GFP_KERNEL); + if (!ctlq_msg) { + err = IECM_ERR_NO_MEMORY; + goto err; + } + + dma_mem = kzalloc(sizeof(struct iecm_dma_mem), GFP_KERNEL); + if (!dma_mem) { + err 
= IECM_ERR_NO_MEMORY; + goto dma_mem_error; + } + + memset(ctlq_msg, 0, sizeof(struct iecm_ctlq_msg)); + ctlq_msg->opcode = iecm_mbq_opc_send_msg_to_cp; + ctlq_msg->func_id = 0; + ctlq_msg->data_len = msg_size; + ctlq_msg->cookie.mbx.chnl_opcode = op; + ctlq_msg->cookie.mbx.chnl_retval = VIRTCHNL_STATUS_SUCCESS; + dma_mem->size = IECM_DFLT_MBX_BUF_SIZE; + dma_mem->va = dmam_alloc_coherent(&adapter->pdev->dev, dma_mem->size, + &dma_mem->pa, GFP_KERNEL); + if (!dma_mem->va) { + err = IECM_ERR_NO_MEMORY; + goto dma_alloc_error; + } + memcpy(dma_mem->va, msg, msg_size); + ctlq_msg->ctx.indirect.payload = dma_mem; + + err = iecm_ctlq_send(&adapter->hw, adapter->hw.asq, 1, ctlq_msg); + if (err) + goto send_error; + + return err; +send_error: + dmam_free_coherent(&adapter->pdev->dev, dma_mem->size, dma_mem->va, + dma_mem->pa); +dma_alloc_error: + kfree(dma_mem); +dma_mem_error: + kfree(ctlq_msg); +err: + return err; } EXPORT_SYMBOL(iecm_send_mb_msg); @@ -88,7 +199,265 @@ enum iecm_status iecm_recv_mb_msg(struct iecm_adapter *adapter, enum virtchnl_ops op, void *msg, int msg_size) { - /* stub */ + enum iecm_status status = 0; + struct iecm_ctlq_msg ctlq_msg; + struct iecm_dma_mem *dma_mem; + struct iecm_vport *vport; + bool work_done = false; + int payload_size = 0; + int num_retry = 10; + u16 num_q_msg; + + vport = adapter->vports[0]; + while (1) { + /* Try to get one message */ + num_q_msg = 1; + dma_mem = NULL; + status = iecm_ctlq_recv(adapter->hw.arq, + &num_q_msg, &ctlq_msg); + /* If no message then decide if we have to retry based on + * opcode + */ + if (status || !num_q_msg) { + if (op && num_retry--) { + msleep(20); + continue; + } else { + break; + } + } + + /* If we are here a message is received. Check if we are looking + * for a specific message based on opcode. 
If it is different + * ignore and post buffers + */ + if (op && ctlq_msg.cookie.mbx.chnl_opcode != op) + goto post_buffs; + + if (ctlq_msg.data_len) + payload_size = ctlq_msg.ctx.indirect.payload->size; + + /* All conditions are met. Either a message requested is + * received or we received a message to be processed + */ + switch (ctlq_msg.cookie.mbx.chnl_opcode) { + case VIRTCHNL_OP_VERSION: + case VIRTCHNL_OP_GET_CAPS: + case VIRTCHNL_OP_CREATE_VPORT: + if (msg) + memcpy(msg, ctlq_msg.ctx.indirect.payload->va, + min_t(int, + payload_size, msg_size)); + work_done = true; + break; + case VIRTCHNL_OP_ENABLE_VPORT: + if (ctlq_msg.cookie.mbx.chnl_retval) + set_bit(IECM_VC_ENA_VPORT_ERR, + adapter->vc_state); + set_bit(IECM_VC_ENA_VPORT, adapter->vc_state); + wake_up(&adapter->vchnl_wq); + break; + case VIRTCHNL_OP_DISABLE_VPORT: + if (ctlq_msg.cookie.mbx.chnl_retval) + set_bit(IECM_VC_DIS_VPORT_ERR, + adapter->vc_state); + set_bit(IECM_VC_DIS_VPORT, adapter->vc_state); + wake_up(&adapter->vchnl_wq); + break; + case VIRTCHNL_OP_DESTROY_VPORT: + if (ctlq_msg.cookie.mbx.chnl_retval) + set_bit(IECM_VC_DESTROY_VPORT_ERR, + adapter->vc_state); + set_bit(IECM_VC_DESTROY_VPORT, adapter->vc_state); + wake_up(&adapter->vchnl_wq); + break; + case VIRTCHNL_OP_CONFIG_TX_QUEUES: + if (ctlq_msg.cookie.mbx.chnl_retval) + set_bit(IECM_VC_CONFIG_TXQ_ERR, + adapter->vc_state); + set_bit(IECM_VC_CONFIG_TXQ, adapter->vc_state); + wake_up(&adapter->vchnl_wq); + break; + case VIRTCHNL_OP_CONFIG_RX_QUEUES: + if (ctlq_msg.cookie.mbx.chnl_retval) + set_bit(IECM_VC_CONFIG_RXQ_ERR, + adapter->vc_state); + set_bit(IECM_VC_CONFIG_RXQ, adapter->vc_state); + wake_up(&adapter->vchnl_wq); + break; + case VIRTCHNL_OP_ENABLE_QUEUES_V2: + if (ctlq_msg.cookie.mbx.chnl_retval) + set_bit(IECM_VC_ENA_QUEUES_ERR, + adapter->vc_state); + set_bit(IECM_VC_ENA_QUEUES, adapter->vc_state); + wake_up(&adapter->vchnl_wq); + break; + case VIRTCHNL_OP_DISABLE_QUEUES_V2: + if (ctlq_msg.cookie.mbx.chnl_retval) + 
set_bit(IECM_VC_DIS_QUEUES_ERR, + adapter->vc_state); + set_bit(IECM_VC_DIS_QUEUES, adapter->vc_state); + wake_up(&adapter->vchnl_wq); + break; + case VIRTCHNL_OP_ADD_QUEUES: + if (ctlq_msg.cookie.mbx.chnl_retval) { + set_bit(IECM_VC_ADD_QUEUES_ERR, + adapter->vc_state); + } else { + mutex_lock(&adapter->vc_msg_lock); + memcpy(adapter->vc_msg, + ctlq_msg.ctx.indirect.payload->va, + min((int) + ctlq_msg.ctx.indirect.payload->size, + IECM_DFLT_MBX_BUF_SIZE)); + } + set_bit(IECM_VC_ADD_QUEUES, adapter->vc_state); + wake_up(&adapter->vchnl_wq); + break; + case VIRTCHNL_OP_DEL_QUEUES: + if (ctlq_msg.cookie.mbx.chnl_retval) + set_bit(IECM_VC_DEL_QUEUES_ERR, + adapter->vc_state); + set_bit(IECM_VC_DEL_QUEUES, adapter->vc_state); + wake_up(&adapter->vchnl_wq); + break; + case VIRTCHNL_OP_MAP_QUEUE_VECTOR: + if (ctlq_msg.cookie.mbx.chnl_retval) + set_bit(IECM_VC_MAP_IRQ_ERR, + adapter->vc_state); + set_bit(IECM_VC_MAP_IRQ, adapter->vc_state); + wake_up(&adapter->vchnl_wq); + break; + case VIRTCHNL_OP_UNMAP_QUEUE_VECTOR: + if (ctlq_msg.cookie.mbx.chnl_retval) + set_bit(IECM_VC_UNMAP_IRQ_ERR, + adapter->vc_state); + set_bit(IECM_VC_UNMAP_IRQ, adapter->vc_state); + wake_up(&adapter->vchnl_wq); + break; + case VIRTCHNL_OP_GET_STATS: + if (ctlq_msg.cookie.mbx.chnl_retval) { + set_bit(IECM_VC_GET_STATS_ERR, + adapter->vc_state); + } else { + mutex_lock(&adapter->vc_msg_lock); + memcpy(adapter->vc_msg, + ctlq_msg.ctx.indirect.payload->va, + min_t(int, + payload_size, + IECM_DFLT_MBX_BUF_SIZE)); + } + set_bit(IECM_VC_GET_STATS, adapter->vc_state); + wake_up(&adapter->vchnl_wq); + break; + case VIRTCHNL_OP_GET_RSS_HASH: + if (ctlq_msg.cookie.mbx.chnl_retval) { + set_bit(IECM_VC_GET_RSS_HASH_ERR, + adapter->vc_state); + } else { + mutex_lock(&adapter->vc_msg_lock); + memcpy(adapter->vc_msg, + ctlq_msg.ctx.indirect.payload->va, + min_t(int, + payload_size, + IECM_DFLT_MBX_BUF_SIZE)); + } + set_bit(IECM_VC_GET_RSS_HASH, adapter->vc_state); + wake_up(&adapter->vchnl_wq); + break; + case 
VIRTCHNL_OP_SET_RSS_HASH: + if (ctlq_msg.cookie.mbx.chnl_retval) + set_bit(IECM_VC_SET_RSS_HASH_ERR, + adapter->vc_state); + set_bit(IECM_VC_SET_RSS_HASH, adapter->vc_state); + wake_up(&adapter->vchnl_wq); + break; + case VIRTCHNL_OP_GET_RSS_LUT: + if (ctlq_msg.cookie.mbx.chnl_retval) { + set_bit(IECM_VC_GET_RSS_LUT_ERR, + adapter->vc_state); + } else { + mutex_lock(&adapter->vc_msg_lock); + memcpy(adapter->vc_msg, + ctlq_msg.ctx.indirect.payload->va, + min_t(int, + payload_size, + IECM_DFLT_MBX_BUF_SIZE)); + } + set_bit(IECM_VC_GET_RSS_LUT, adapter->vc_state); + wake_up(&adapter->vchnl_wq); + break; + case VIRTCHNL_OP_SET_RSS_LUT: + if (ctlq_msg.cookie.mbx.chnl_retval) + set_bit(IECM_VC_SET_RSS_LUT_ERR, + adapter->vc_state); + set_bit(IECM_VC_SET_RSS_LUT, adapter->vc_state); + wake_up(&adapter->vchnl_wq); + break; + case VIRTCHNL_OP_GET_RSS_KEY: + if (ctlq_msg.cookie.mbx.chnl_retval) { + set_bit(IECM_VC_GET_RSS_KEY_ERR, + adapter->vc_state); + } else { + mutex_lock(&adapter->vc_msg_lock); + memcpy(adapter->vc_msg, + ctlq_msg.ctx.indirect.payload->va, + min_t(int, + payload_size, + IECM_DFLT_MBX_BUF_SIZE)); + } + set_bit(IECM_VC_GET_RSS_KEY, adapter->vc_state); + wake_up(&adapter->vchnl_wq); + break; + case VIRTCHNL_OP_CONFIG_RSS_KEY: + if (ctlq_msg.cookie.mbx.chnl_retval) + set_bit(IECM_VC_CONFIG_RSS_KEY_ERR, + adapter->vc_state); + set_bit(IECM_VC_CONFIG_RSS_KEY, adapter->vc_state); + wake_up(&adapter->vchnl_wq); + break; + case VIRTCHNL_OP_EVENT: + mutex_lock(&adapter->vc_msg_lock); + memcpy(adapter->vc_msg, + ctlq_msg.ctx.indirect.payload->va, + min_t(int, + payload_size, + IECM_DFLT_MBX_BUF_SIZE)); + iecm_recv_event_msg(vport); + break; + default: + if (adapter->dev_ops.vc_ops.recv_mbx_msg) + status = + adapter->dev_ops.vc_ops.recv_mbx_msg(adapter, + msg, + msg_size, + &ctlq_msg, + &work_done); + else + dev_warn(&adapter->pdev->dev, + "Unhandled virtchnl response %d\n", + ctlq_msg.cookie.mbx.chnl_opcode); + break; + } /* switch v_opcode */ +post_buffs: + if 
(ctlq_msg.data_len) + dma_mem = ctlq_msg.ctx.indirect.payload; + else + num_q_msg = 0; + + status = iecm_ctlq_post_rx_buffs(&adapter->hw, adapter->hw.arq, + &num_q_msg, + &dma_mem); + /* If post failed clear the only buffer we supplied */ + if (status && dma_mem) + dmam_free_coherent(&adapter->pdev->dev, dma_mem->size, + dma_mem->va, dma_mem->pa); + /* Applies only if we are looking for a specific opcode */ + if (work_done) + break; + } + + return status; } EXPORT_SYMBOL(iecm_recv_mb_msg); @@ -186,7 +555,27 @@ iecm_wait_for_event(struct iecm_adapter *adapter, enum iecm_vport_vc_state state, enum iecm_vport_vc_state err_check) { - /* stub */ + enum iecm_status status; + int event; + + event = wait_event_timeout(adapter->vchnl_wq, + test_and_clear_bit(state, adapter->vc_state), + msecs_to_jiffies(500)); + if (event) { + if (test_and_clear_bit(err_check, adapter->vc_state)) { + dev_err(&adapter->pdev->dev, + "VC response error %d", err_check); + status = IECM_ERR_CFG; + } else { + status = 0; + } + } else { + /* Timeout occurred */ + status = IECM_ERR_NOT_READY; + dev_err(&adapter->pdev->dev, "VC timeout, state = %u", state); + } + + return status; } EXPORT_SYMBOL(iecm_wait_for_event); @@ -426,7 +815,14 @@ enum iecm_status iecm_send_get_rx_ptype_msg(struct iecm_vport *vport) static struct iecm_ctlq_info *iecm_find_ctlq(struct iecm_hw *hw, enum iecm_ctlq_type type, int id) { - /* stub */ + struct iecm_ctlq_info *cq, *tmp; + + list_for_each_entry_safe(cq, tmp, &hw->cq_list_head, cq_list) { + if (cq->q_id == id && cq->cq_type == type) + return cq; + } + + return NULL; } /** @@ -435,7 +831,8 @@ static struct iecm_ctlq_info *iecm_find_ctlq(struct iecm_hw *hw, */ void iecm_deinit_dflt_mbx(struct iecm_adapter *adapter) { - /* stub */ + cancel_delayed_work_sync(&adapter->init_task); + iecm_ctlq_deinit(&adapter->hw); } /** @@ -497,7 +894,21 @@ enum iecm_status iecm_init_dflt_mbx(struct iecm_adapter *adapter) */ int iecm_vport_params_buf_alloc(struct iecm_adapter *adapter) { - 
/* stub */ + adapter->vport_params_reqd = kcalloc(IECM_MAX_NUM_VPORTS, + sizeof(*adapter->vport_params_reqd), + GFP_KERNEL); + if (!adapter->vport_params_reqd) + return -ENOMEM; + + adapter->vport_params_recvd = kcalloc(IECM_MAX_NUM_VPORTS, + sizeof(*adapter->vport_params_recvd), + GFP_KERNEL); + if (!adapter->vport_params_recvd) { + kfree(adapter->vport_params_reqd); + return -ENOMEM; + } + + return 0; } /**
From patchwork Tue Jun 23 22:40:37 2020
X-Patchwork-Submitter: Jeff Kirsher X-Patchwork-Id: 217259
From: Jeff Kirsher To: davem@davemloft.net Cc: Alice Michael, netdev@vger.kernel.org, nhorman@redhat.com, sassmann@redhat.com, Alan Brady, Phani Burra, Joshua Hay, Madhu Chittim, Pavan Kumar Linga, Donald Skidmore, Jesse Brandeburg, Sridhar Samudrala, Jeff Kirsher
Subject: [net-next v2 09/15] iecm: Init and allocate vport
Date: Tue, 23 Jun 2020 15:40:37 -0700 Message-Id: <20200623224043.801728-10-jeffrey.t.kirsher@intel.com> In-Reply-To: <20200623224043.801728-1-jeffrey.t.kirsher@intel.com> References: <20200623224043.801728-1-jeffrey.t.kirsher@intel.com> List-ID: X-Mailing-List: netdev@vger.kernel.org
From: Alice Michael Initialize vport and allocate queue resources.
Signed-off-by: Alice Michael Signed-off-by: Alan Brady Signed-off-by: Phani Burra Signed-off-by: Joshua Hay Signed-off-by: Madhu Chittim Signed-off-by: Pavan Kumar Linga Reviewed-by: Donald Skidmore Reviewed-by: Jesse Brandeburg Reviewed-by: Sridhar Samudrala Signed-off-by: Jeff Kirsher --- drivers/net/ethernet/intel/iecm/iecm_lib.c | 87 +- drivers/net/ethernet/intel/iecm/iecm_txrx.c | 797 +++++++++++++++++- .../net/ethernet/intel/iecm/iecm_virtchnl.c | 37 +- 3 files changed, 890 insertions(+), 31 deletions(-) diff --git a/drivers/net/ethernet/intel/iecm/iecm_lib.c b/drivers/net/ethernet/intel/iecm/iecm_lib.c index efbdff575149..a51731feedb4 100644 --- a/drivers/net/ethernet/intel/iecm/iecm_lib.c +++ b/drivers/net/ethernet/intel/iecm/iecm_lib.c @@ -444,7 +444,15 @@ static void iecm_vport_rel_all(struct iecm_adapter *adapter) void iecm_vport_set_hsplit(struct iecm_vport *vport, struct bpf_prog __always_unused *prog) { - /* stub */ + if (prog) { + vport->rx_hsplit_en = IECM_RX_NO_HDR_SPLIT; + return; + } + if (iecm_is_cap_ena(vport->adapter, VIRTCHNL_CAP_HEADER_SPLIT) && + iecm_is_queue_model_split(vport->rxq_model)) + vport->rx_hsplit_en = IECM_RX_HDR_SPLIT; + else + vport->rx_hsplit_en = IECM_RX_NO_HDR_SPLIT; } /** @@ -532,7 +540,12 @@ static void iecm_service_task(struct work_struct *work) */ static void iecm_up_complete(struct iecm_vport *vport) { - /* stub */ + netif_set_real_num_rx_queues(vport->netdev, vport->num_txq); + netif_set_real_num_tx_queues(vport->netdev, vport->num_rxq); + netif_carrier_on(vport->netdev); + netif_tx_start_all_queues(vport->netdev); + + vport->adapter->state = __IECM_UP; } /** @@ -541,7 +554,71 @@ static void iecm_up_complete(struct iecm_vport *vport) */ static int iecm_vport_open(struct iecm_vport *vport) { - /* stub */ + struct iecm_adapter *adapter = vport->adapter; + int err; + + if (vport->adapter->state != __IECM_DOWN) + return -EBUSY; + + /* we do not allow interface up just yet */ + netif_carrier_off(vport->netdev); + + if 
(adapter->dev_ops.vc_ops.enable_vport) { + err = adapter->dev_ops.vc_ops.enable_vport(vport); + if (err) + return -EAGAIN; + } + + if (iecm_vport_queues_alloc(vport)) { + err = -ENOMEM; + goto unroll_queues_alloc; + } + + err = iecm_vport_intr_init(vport); + if (err) + goto unroll_intr_init; + + err = vport->adapter->dev_ops.vc_ops.config_queues(vport); + if (err) + goto unroll_config_queues; + err = vport->adapter->dev_ops.vc_ops.irq_map_unmap(vport, true); + if (err) { + dev_err(&vport->adapter->pdev->dev, + "Call to irq_map_unmap returned %d\n", err); + goto unroll_config_queues; + } + err = vport->adapter->dev_ops.vc_ops.enable_queues(vport); + if (err) + goto unroll_enable_queues; + + err = vport->adapter->dev_ops.vc_ops.get_ptype(vport); + if (err) + goto unroll_get_ptype; + + if (adapter->rss_data.rss_lut) + err = iecm_config_rss(vport); + else + err = iecm_init_rss(vport); + if (err) + goto unroll_init_rss; + iecm_up_complete(vport); + + netif_info(vport->adapter, hw, vport->netdev, "%s\n", __func__); + + return 0; +unroll_init_rss: +unroll_get_ptype: + vport->adapter->dev_ops.vc_ops.disable_queues(vport); +unroll_enable_queues: + vport->adapter->dev_ops.vc_ops.irq_map_unmap(vport, false); +unroll_config_queues: + iecm_vport_intr_deinit(vport); +unroll_intr_init: + iecm_vport_queues_rel(vport); +unroll_queues_alloc: + adapter->dev_ops.vc_ops.disable_vport(vport); + + return err; } /** @@ -862,7 +939,9 @@ EXPORT_SYMBOL(iecm_shutdown); */ static int iecm_open(struct net_device *netdev) { - /* stub */ + struct iecm_netdev_priv *np = netdev_priv(netdev); + + return iecm_vport_open(np->vport); } /** diff --git a/drivers/net/ethernet/intel/iecm/iecm_txrx.c b/drivers/net/ethernet/intel/iecm/iecm_txrx.c index 6a853d1828cb..4e5e3d487795 100644 --- a/drivers/net/ethernet/intel/iecm/iecm_txrx.c +++ b/drivers/net/ethernet/intel/iecm/iecm_txrx.c @@ -82,7 +82,37 @@ static void iecm_tx_desc_rel_all(struct iecm_vport *vport) */ static enum iecm_status 
iecm_tx_buf_alloc_all(struct iecm_queue *tx_q) { - /* stub */ + int buf_size; + int i = 0; + + /* Allocate book keeping buffers only. Buffers to be supplied to HW + * are allocated by kernel network stack and received as part of skb + */ + buf_size = sizeof(struct iecm_tx_buf) * tx_q->desc_count; + tx_q->tx_buf = kzalloc(buf_size, GFP_KERNEL); + if (!tx_q->tx_buf) + return IECM_ERR_NO_MEMORY; + + /* Initialize Tx buf stack for out-of-order completions if + * flow scheduling offload is enabled + */ + tx_q->buf_stack.bufs = + kcalloc(tx_q->desc_count, sizeof(struct iecm_tx_buf *), + GFP_KERNEL); + if (!tx_q->buf_stack.bufs) + return IECM_ERR_NO_MEMORY; + + for (i = 0; i < tx_q->desc_count; i++) { + tx_q->buf_stack.bufs[i] = kzalloc(sizeof(struct iecm_tx_buf), + GFP_KERNEL); + if (!tx_q->buf_stack.bufs[i]) + return IECM_ERR_NO_MEMORY; + } + + tx_q->buf_stack.size = tx_q->desc_count; + tx_q->buf_stack.top = tx_q->desc_count; + + return 0; } /** @@ -92,7 +122,40 @@ static enum iecm_status iecm_tx_buf_alloc_all(struct iecm_queue *tx_q) */ static enum iecm_status iecm_tx_desc_alloc(struct iecm_queue *tx_q, bool bufq) { - /* stub */ + struct device *dev = tx_q->dev; + enum iecm_status err = 0; + + if (bufq) { + err = iecm_tx_buf_alloc_all(tx_q); + if (err) + goto err_alloc; + tx_q->size = tx_q->desc_count * + sizeof(struct iecm_base_tx_desc); + } else { + tx_q->size = tx_q->desc_count * + sizeof(struct iecm_splitq_tx_compl_desc); + } + + /* Allocate descriptors also round up to nearest 4K */ + tx_q->size = ALIGN(tx_q->size, 4096); + tx_q->desc_ring = dmam_alloc_coherent(dev, tx_q->size, &tx_q->dma, + GFP_KERNEL); + if (!tx_q->desc_ring) { + dev_info(dev, "Unable to allocate memory for the Tx descriptor ring, size=%d\n", + tx_q->size); + err = IECM_ERR_NO_MEMORY; + goto err_alloc; + } + + tx_q->next_to_alloc = 0; + tx_q->next_to_use = 0; + tx_q->next_to_clean = 0; + set_bit(__IECM_Q_GEN_CHK, tx_q->flags); + +err_alloc: + if (err) + iecm_tx_desc_rel(tx_q, bufq); + return 
err; } /** @@ -101,7 +164,41 @@ static enum iecm_status iecm_tx_desc_alloc(struct iecm_queue *tx_q, bool bufq) */ static enum iecm_status iecm_tx_desc_alloc_all(struct iecm_vport *vport) { - /* stub */ + struct pci_dev *pdev = vport->adapter->pdev; + enum iecm_status err = 0; + int i, j; + + /* Setup buffer queues. In single queue model buffer queues and + * completion queues will be same + */ + for (i = 0; i < vport->num_txq_grp; i++) { + for (j = 0; j < vport->txq_grps[i].num_txq; j++) { + err = iecm_tx_desc_alloc(&vport->txq_grps[i].txqs[j], + true); + if (err) { + dev_err(&pdev->dev, + "Allocation for Tx Queue %u failed\n", + i); + goto err_out; + } + } + + if (iecm_is_queue_model_split(vport->txq_model)) { + /* Setup completion queues */ + err = iecm_tx_desc_alloc(vport->txq_grps[i].complq, + false); + if (err) { + dev_err(&pdev->dev, + "Allocation for Tx Completion Queue %u failed\n", + i); + goto err_out; + } + } + } +err_out: + if (err) + iecm_tx_desc_rel_all(vport); + return err; } /** @@ -156,7 +253,17 @@ static void iecm_rx_desc_rel_all(struct iecm_vport *vport) */ void iecm_rx_buf_hw_update(struct iecm_queue *rxq, u32 val) { - /* stub */ + /* update next to alloc since we have filled the ring */ + rxq->next_to_alloc = val; + + rxq->next_to_use = val; + /* Force memory writes to complete before letting h/w + * know there are new descriptors to fetch. (Only + * applicable for weak-ordered memory model archs, + * such as IA-64). 
+ */ + wmb(); + writel_relaxed(val, rxq->tail); } /** @@ -169,7 +276,34 @@ void iecm_rx_buf_hw_update(struct iecm_queue *rxq, u32 val) */ bool iecm_rx_buf_hw_alloc(struct iecm_queue *rxq, struct iecm_rx_buf *buf) { - /* stub */ + struct page *page = buf->page; + dma_addr_t dma; + + /* since we are recycling buffers we should seldom need to alloc */ + if (likely(page)) + return true; + + /* alloc new page for storage */ + page = alloc_page(GFP_ATOMIC | __GFP_NOWARN); + if (unlikely(!page)) + return false; + + /* map page for use */ + dma = dma_map_page(rxq->dev, page, 0, PAGE_SIZE, DMA_FROM_DEVICE); + + /* if mapping failed free memory back to system since + * there isn't much point in holding memory we can't use + */ + if (dma_mapping_error(rxq->dev, dma)) { + __free_pages(page, 0); + return false; + } + + buf->dma = dma; + buf->page = page; + buf->page_offset = iecm_rx_offset(rxq); + + return true; } /** @@ -183,7 +317,34 @@ bool iecm_rx_buf_hw_alloc(struct iecm_queue *rxq, struct iecm_rx_buf *buf) static bool iecm_rx_hdr_buf_hw_alloc(struct iecm_queue *rxq, struct iecm_rx_buf *hdr_buf) { - /* stub */ + struct page *page = hdr_buf->page; + dma_addr_t dma; + + /* since we are recycling buffers we should seldom need to alloc */ + if (likely(page)) + return true; + + /* alloc new page for storage */ + page = alloc_page(GFP_ATOMIC | __GFP_NOWARN); + if (unlikely(!page)) + return false; + + /* map page for use */ + dma = dma_map_page(rxq->dev, page, 0, PAGE_SIZE, DMA_FROM_DEVICE); + + /* if mapping failed free memory back to system since + * there isn't much point in holding memory we can't use + */ + if (dma_mapping_error(rxq->dev, dma)) { + __free_pages(page, 0); + return false; + } + + hdr_buf->dma = dma; + hdr_buf->page = page; + hdr_buf->page_offset = 0; + + return true; } /** @@ -197,7 +358,59 @@ static bool iecm_rx_buf_hw_alloc_all(struct iecm_queue *rxq, u16 cleaned_count) { - /* stub */ + struct iecm_splitq_rx_buf_desc *splitq_rx_desc = NULL; + struct 
iecm_rx_buf *hdr_buf = NULL; + u16 nta = rxq->next_to_alloc; + struct iecm_rx_buf *buf; + + /* do nothing if no valid netdev defined */ + if (!rxq->vport->netdev || !cleaned_count) + return false; + + splitq_rx_desc = IECM_SPLITQ_RX_BUF_DESC(rxq, nta); + + buf = &rxq->rx_buf.buf[nta]; + if (rxq->rx_hsplit_en) + hdr_buf = &rxq->rx_buf.hdr_buf[nta]; + + do { + if (rxq->rx_hsplit_en) { + if (!iecm_rx_hdr_buf_hw_alloc(rxq, hdr_buf)) + break; + + splitq_rx_desc->hdr_addr = + cpu_to_le64(hdr_buf->dma + + hdr_buf->page_offset); + hdr_buf++; + } + + if (!iecm_rx_buf_hw_alloc(rxq, buf)) + break; + + /* Refresh the desc even if buffer_addrs didn't change + * because each write-back erases this info. + */ + splitq_rx_desc->pkt_addr = + cpu_to_le64(buf->dma + buf->page_offset); + splitq_rx_desc->qword0.buf_id = cpu_to_le16(nta); + + splitq_rx_desc++; + buf++; + nta++; + if (unlikely(nta == rxq->desc_count)) { + splitq_rx_desc = IECM_SPLITQ_RX_BUF_DESC(rxq, 0); + buf = rxq->rx_buf.buf; + hdr_buf = rxq->rx_buf.hdr_buf; + nta = 0; + } + + cleaned_count--; + } while (cleaned_count); + + if (rxq->next_to_alloc != nta) + iecm_rx_buf_hw_update(rxq, nta); + + return !!cleaned_count; } /** @@ -206,7 +419,44 @@ iecm_rx_buf_hw_alloc_all(struct iecm_queue *rxq, */ static enum iecm_status iecm_rx_buf_alloc_all(struct iecm_queue *rxq) { - /* stub */ + enum iecm_status err = 0; + + /* Allocate book keeping buffers */ + rxq->rx_buf.buf = kcalloc(rxq->desc_count, sizeof(struct iecm_rx_buf), + GFP_KERNEL); + if (!rxq->rx_buf.buf) { + err = IECM_ERR_NO_MEMORY; + goto rx_buf_alloc_all_out; + } + + if (rxq->rx_hsplit_en) { + rxq->rx_buf.hdr_buf = + kcalloc(rxq->desc_count, sizeof(struct iecm_rx_buf), + GFP_KERNEL); + if (!rxq->rx_buf.hdr_buf) { + err = IECM_ERR_NO_MEMORY; + goto rx_buf_alloc_all_out; + } + } else { + rxq->rx_buf.hdr_buf = NULL; + } + + /* Allocate buffers to be given to HW. 
Allocate one less than + * total descriptor count as RX splits 4k buffers to 2K and recycles + */ + if (iecm_is_queue_model_split(rxq->vport->rxq_model)) { + if (iecm_rx_buf_hw_alloc_all(rxq, + rxq->desc_count - 1)) + err = IECM_ERR_NO_MEMORY; + } else if (iecm_rx_singleq_buf_hw_alloc_all(rxq, + rxq->desc_count - 1)) { + err = IECM_ERR_NO_MEMORY; + } + +rx_buf_alloc_all_out: + if (err) + iecm_rx_buf_rel_all(rxq); + return err; } /** @@ -218,7 +468,50 @@ static enum iecm_status iecm_rx_buf_alloc_all(struct iecm_queue *rxq) static enum iecm_status iecm_rx_desc_alloc(struct iecm_queue *rxq, bool bufq, enum virtchnl_queue_model q_model) { - /* stub */ + struct device *dev = rxq->dev; + enum iecm_status err = 0; + + /* As both single and split descriptors are 32 byte, memory size + * will be same for all three singleq_base Rx, buf., splitq_base + * Rx. So pick anyone of them for size + */ + if (bufq) { + rxq->size = rxq->desc_count * + sizeof(struct iecm_splitq_rx_buf_desc); + } else { + rxq->size = rxq->desc_count * + sizeof(union iecm_rx_desc); + } + + /* Allocate descriptors and also round up to nearest 4K */ + rxq->size = ALIGN(rxq->size, 4096); + rxq->desc_ring = dmam_alloc_coherent(dev, rxq->size, + &rxq->dma, GFP_KERNEL); + if (!rxq->desc_ring) { + dev_info(dev, "Unable to allocate memory for the Rx descriptor ring, size=%d\n", + rxq->size); + err = IECM_ERR_NO_MEMORY; + return err; + } + + rxq->next_to_alloc = 0; + rxq->next_to_clean = 0; + rxq->next_to_use = 0; + set_bit(__IECM_Q_GEN_CHK, rxq->flags); + + /* Allocate buffers for a Rx queue if the q_model is single OR if it + * is a buffer queue in split queue model + */ + if (bufq || !iecm_is_queue_model_split(q_model)) { + err = iecm_rx_buf_alloc_all(rxq); + if (err) + goto err_alloc; + } + +err_alloc: + if (err) + iecm_rx_desc_rel(rxq, bufq, q_model); + return err; } /** @@ -227,7 +520,48 @@ static enum iecm_status iecm_rx_desc_alloc(struct iecm_queue *rxq, bool bufq, */ static enum iecm_status 
iecm_rx_desc_alloc_all(struct iecm_vport *vport) { - /* stub */ + struct device *dev = &vport->adapter->pdev->dev; + enum iecm_status err = 0; + struct iecm_queue *q; + int i, j, num_rxq; + + for (i = 0; i < vport->num_rxq_grp; i++) { + if (iecm_is_queue_model_split(vport->rxq_model)) + num_rxq = vport->rxq_grps[i].splitq.num_rxq_sets; + else + num_rxq = vport->rxq_grps[i].singleq.num_rxq; + + for (j = 0; j < num_rxq; j++) { + if (iecm_is_queue_model_split(vport->rxq_model)) + q = &vport->rxq_grps[i].splitq.rxq_sets[j].rxq; + else + q = &vport->rxq_grps[i].singleq.rxqs[j]; + err = iecm_rx_desc_alloc(q, false, vport->rxq_model); + if (err) { + dev_err(dev, "Memory allocation for Rx Queue %u failed\n", + i); + goto err_out; + } + } + + if (iecm_is_queue_model_split(vport->rxq_model)) { + for (j = 0; j < IECM_BUFQS_PER_RXQ_SET; j++) { + q = + &vport->rxq_grps[i].splitq.bufq_sets[j].bufq; + err = iecm_rx_desc_alloc(q, true, + vport->rxq_model); + if (err) { + dev_err(dev, "Memory allocation for Rx Buffer Queue %u failed\n", + i); + goto err_out; + } + } + } + } +err_out: + if (err) + iecm_rx_desc_rel_all(vport); + return err; } /** @@ -279,7 +613,26 @@ void iecm_vport_queues_rel(struct iecm_vport *vport) static enum iecm_status iecm_vport_init_fast_path_txqs(struct iecm_vport *vport) { - /* stub */ + enum iecm_status err = 0; + int i, j, k = 0; + + vport->txqs = kcalloc(vport->num_txq, sizeof(struct iecm_queue *), + GFP_KERNEL); + + if (!vport->txqs) { + err = IECM_ERR_NO_MEMORY; + goto err_alloc; + } + for (i = 0; i < vport->num_txq_grp; i++) { + struct iecm_txq_group *tx_grp = &vport->txq_grps[i]; + + for (j = 0; j < tx_grp->num_txq; j++, k++) { + vport->txqs[k] = &tx_grp->txqs[j]; + vport->txqs[k]->idx = k; + } + } +err_alloc: + return err; } /** @@ -290,7 +643,12 @@ iecm_vport_init_fast_path_txqs(struct iecm_vport *vport) void iecm_vport_init_num_qs(struct iecm_vport *vport, struct virtchnl_create_vport *vport_msg) { - /* stub */ + vport->num_txq = 
vport_msg->num_tx_q; + vport->num_rxq = vport_msg->num_rx_q; + if (iecm_is_queue_model_split(vport->txq_model)) + vport->num_complq = vport_msg->num_tx_complq; + if (iecm_is_queue_model_split(vport->rxq_model)) + vport->num_bufq = vport_msg->num_rx_bufq; } /** @@ -299,7 +657,32 @@ void iecm_vport_init_num_qs(struct iecm_vport *vport, */ void iecm_vport_calc_num_q_desc(struct iecm_vport *vport) { - /* stub */ + int num_req_txq_desc = vport->adapter->config_data.num_req_txq_desc; + int num_req_rxq_desc = vport->adapter->config_data.num_req_rxq_desc; + + vport->complq_desc_count = 0; + vport->bufq_desc_count = 0; + if (num_req_txq_desc) { + vport->txq_desc_count = num_req_txq_desc; + if (iecm_is_queue_model_split(vport->txq_model)) + vport->complq_desc_count = num_req_txq_desc; + } else { + vport->txq_desc_count = + IECM_DFLT_TX_Q_DESC_COUNT; + if (iecm_is_queue_model_split(vport->txq_model)) { + vport->complq_desc_count = + IECM_DFLT_TX_COMPLQ_DESC_COUNT; + } + } + if (num_req_rxq_desc) { + vport->rxq_desc_count = num_req_rxq_desc; + if (iecm_is_queue_model_split(vport->rxq_model)) + vport->bufq_desc_count = num_req_rxq_desc; + } else { + vport->rxq_desc_count = IECM_DFLT_RX_Q_DESC_COUNT; + if (iecm_is_queue_model_split(vport->rxq_model)) + vport->bufq_desc_count = IECM_DFLT_RX_BUFQ_DESC_COUNT; + } } EXPORT_SYMBOL(iecm_vport_calc_num_q_desc); @@ -311,7 +694,51 @@ EXPORT_SYMBOL(iecm_vport_calc_num_q_desc); void iecm_vport_calc_total_qs(struct virtchnl_create_vport *vport_msg, int num_req_qs) { - /* stub */ + int dflt_splitq_txq_grps, dflt_singleq_txqs; + int dflt_splitq_rxq_grps, dflt_singleq_rxqs; + int num_txq_grps, num_rxq_grps; + int num_cpus; + + /* Restrict num of queues to cpus online as a default configuration to + * give best performance. User can always override to a max number + * of queues via ethtool. 
+ */ + num_cpus = num_online_cpus(); + dflt_splitq_txq_grps = min_t(int, IECM_DFLT_SPLITQ_TX_Q_GROUPS, + num_cpus); + dflt_singleq_txqs = min_t(int, IECM_DFLT_SINGLEQ_TXQ_PER_GROUP, + num_cpus); + dflt_splitq_rxq_grps = min_t(int, IECM_DFLT_SPLITQ_RX_Q_GROUPS, + num_cpus); + dflt_singleq_rxqs = min_t(int, IECM_DFLT_SINGLEQ_RXQ_PER_GROUP, + num_cpus); + + if (iecm_is_queue_model_split(vport_msg->txq_model)) { + num_txq_grps = num_req_qs ? num_req_qs : dflt_splitq_txq_grps; + vport_msg->num_tx_complq = num_txq_grps * + IECM_COMPLQ_PER_GROUP; + vport_msg->num_tx_q = num_txq_grps * + IECM_DFLT_SPLITQ_TXQ_PER_GROUP; + } else { + num_txq_grps = IECM_DFLT_SINGLEQ_TX_Q_GROUPS; + vport_msg->num_tx_q = num_txq_grps * + (num_req_qs ? num_req_qs : + dflt_singleq_txqs); + vport_msg->num_tx_complq = 0; + } + if (iecm_is_queue_model_split(vport_msg->rxq_model)) { + num_rxq_grps = num_req_qs ? num_req_qs : dflt_splitq_rxq_grps; + vport_msg->num_rx_bufq = num_rxq_grps * + IECM_BUFQS_PER_RXQ_SET; + vport_msg->num_rx_q = num_rxq_grps * + IECM_DFLT_SPLITQ_RXQ_PER_GROUP; + } else { + num_rxq_grps = IECM_DFLT_SINGLEQ_RX_Q_GROUPS; + vport_msg->num_rx_bufq = 0; + vport_msg->num_rx_q = num_rxq_grps * + (num_req_qs ? 
num_req_qs : + dflt_singleq_rxqs); + } } /** @@ -320,7 +747,15 @@ void iecm_vport_calc_total_qs(struct virtchnl_create_vport *vport_msg, */ void iecm_vport_calc_num_q_groups(struct iecm_vport *vport) { - /* stub */ + if (iecm_is_queue_model_split(vport->txq_model)) + vport->num_txq_grp = vport->num_txq; + else + vport->num_txq_grp = IECM_DFLT_SINGLEQ_TX_Q_GROUPS; + + if (iecm_is_queue_model_split(vport->rxq_model)) + vport->num_rxq_grp = vport->num_rxq; + else + vport->num_rxq_grp = IECM_DFLT_SINGLEQ_RX_Q_GROUPS; } EXPORT_SYMBOL(iecm_vport_calc_num_q_groups); @@ -333,7 +768,15 @@ EXPORT_SYMBOL(iecm_vport_calc_num_q_groups); static void iecm_vport_calc_numq_per_grp(struct iecm_vport *vport, int *num_txq, int *num_rxq) { - /* stub */ + if (iecm_is_queue_model_split(vport->txq_model)) + *num_txq = IECM_DFLT_SPLITQ_TXQ_PER_GROUP; + else + *num_txq = vport->num_txq; + + if (iecm_is_queue_model_split(vport->rxq_model)) + *num_rxq = IECM_DFLT_SPLITQ_RXQ_PER_GROUP; + else + *num_rxq = vport->num_rxq; } /** @@ -344,7 +787,10 @@ static void iecm_vport_calc_numq_per_grp(struct iecm_vport *vport, */ void iecm_vport_calc_num_q_vec(struct iecm_vport *vport) { - /* stub */ + if (iecm_is_queue_model_split(vport->txq_model)) + vport->num_q_vectors = vport->num_txq_grp; + else + vport->num_q_vectors = vport->num_txq; } /** @@ -355,7 +801,68 @@ void iecm_vport_calc_num_q_vec(struct iecm_vport *vport) static enum iecm_status iecm_txq_group_alloc(struct iecm_vport *vport, int num_txq) { - /* stub */ + struct iecm_itr tx_itr = { 0 }; + enum iecm_status err = 0; + int i; + + vport->txq_grps = kcalloc(vport->num_txq_grp, + sizeof(*vport->txq_grps), GFP_KERNEL); + if (!vport->txq_grps) + return IECM_ERR_NO_MEMORY; + + tx_itr.target_itr = IECM_ITR_TX_DEF; + tx_itr.itr_idx = VIRTCHNL_ITR_IDX_1; + tx_itr.next_update = jiffies + 1; + + for (i = 0; i < vport->num_txq_grp; i++) { + struct iecm_txq_group *tx_qgrp = &vport->txq_grps[i]; + int j; + + tx_qgrp->vport = vport; + tx_qgrp->num_txq = 
num_txq; + tx_qgrp->txqs = kcalloc(num_txq, sizeof(*tx_qgrp->txqs), + GFP_KERNEL); + if (!tx_qgrp->txqs) { + err = IECM_ERR_NO_MEMORY; + goto err_alloc; + } + + for (j = 0; j < tx_qgrp->num_txq; j++) { + struct iecm_queue *q = &tx_qgrp->txqs[j]; + + q->dev = &vport->adapter->pdev->dev; + q->desc_count = vport->txq_desc_count; + q->vport = vport; + q->txq_grp = tx_qgrp; + hash_init(q->sched_buf_hash); + + if (!iecm_is_queue_model_split(vport->txq_model)) + q->itr = tx_itr; + } + + if (!iecm_is_queue_model_split(vport->txq_model)) + continue; + + tx_qgrp->complq = kcalloc(IECM_COMPLQ_PER_GROUP, + sizeof(*tx_qgrp->complq), + GFP_KERNEL); + if (!tx_qgrp->complq) { + err = IECM_ERR_NO_MEMORY; + goto err_alloc; + } + + tx_qgrp->complq->dev = &vport->adapter->pdev->dev; + tx_qgrp->complq->desc_count = vport->complq_desc_count; + tx_qgrp->complq->vport = vport; + tx_qgrp->complq->txq_grp = tx_qgrp; + + tx_qgrp->complq->itr = tx_itr; + } + +err_alloc: + if (err) + iecm_txq_group_rel(vport); + return err; } /** @@ -366,7 +873,118 @@ static enum iecm_status iecm_txq_group_alloc(struct iecm_vport *vport, static enum iecm_status iecm_rxq_group_alloc(struct iecm_vport *vport, int num_rxq) { - /* stub */ + enum iecm_status err = 0; + struct iecm_itr rx_itr = {0}; + struct iecm_queue *q; + int i; + + vport->rxq_grps = kcalloc(vport->num_rxq_grp, + sizeof(struct iecm_rxq_group), GFP_KERNEL); + if (!vport->rxq_grps) { + err = IECM_ERR_NO_MEMORY; + goto err_alloc; + } + + rx_itr.target_itr = IECM_ITR_RX_DEF; + rx_itr.itr_idx = VIRTCHNL_ITR_IDX_0; + rx_itr.next_update = jiffies + 1; + + for (i = 0; i < vport->num_rxq_grp; i++) { + struct iecm_rxq_group *rx_qgrp = &vport->rxq_grps[i]; + int j; + + rx_qgrp->vport = vport; + if (iecm_is_queue_model_split(vport->rxq_model)) { + rx_qgrp->splitq.num_rxq_sets = num_rxq; + rx_qgrp->splitq.rxq_sets = + kcalloc(num_rxq, + sizeof(struct iecm_rxq_set), + GFP_KERNEL); + if (!rx_qgrp->splitq.rxq_sets) { + err = IECM_ERR_NO_MEMORY; + goto err_alloc; 
+ } + + rx_qgrp->splitq.bufq_sets = + kcalloc(IECM_BUFQS_PER_RXQ_SET, + sizeof(struct iecm_bufq_set), + GFP_KERNEL); + if (!rx_qgrp->splitq.bufq_sets) { + err = IECM_ERR_NO_MEMORY; + goto err_alloc; + } + + for (j = 0; j < IECM_BUFQS_PER_RXQ_SET; j++) { + int swq_size = sizeof(struct iecm_sw_queue); + + q = &rx_qgrp->splitq.bufq_sets[j].bufq; + q->dev = &vport->adapter->pdev->dev; + q->desc_count = vport->bufq_desc_count; + q->vport = vport; + q->rxq_grp = rx_qgrp; + q->idx = j; + q->rx_buf_size = IECM_RX_BUF_2048; + q->rsc_low_watermark = IECM_LOW_WATERMARK; + q->rx_buf_stride = IECM_RX_BUF_STRIDE; + q->itr = rx_itr; + + if (vport->rx_hsplit_en) { + q->rx_hsplit_en = vport->rx_hsplit_en; + q->rx_hbuf_size = IECM_HDR_BUF_SIZE; + } + + rx_qgrp->splitq.bufq_sets[j].num_refillqs = + num_rxq; + rx_qgrp->splitq.bufq_sets[j].refillqs = + kcalloc(num_rxq, swq_size, GFP_KERNEL); + if (!rx_qgrp->splitq.bufq_sets[j].refillqs) { + err = IECM_ERR_NO_MEMORY; + goto err_alloc; + } + } + } else { + rx_qgrp->singleq.num_rxq = num_rxq; + rx_qgrp->singleq.rxqs = kcalloc(num_rxq, + sizeof(struct iecm_queue), + GFP_KERNEL); + if (!rx_qgrp->singleq.rxqs) { + err = IECM_ERR_NO_MEMORY; + goto err_alloc; + } + } + + for (j = 0; j < num_rxq; j++) { + if (iecm_is_queue_model_split(vport->rxq_model)) { + q = &rx_qgrp->splitq.rxq_sets[j].rxq; + rx_qgrp->splitq.rxq_sets[j].refillq0 = + &rx_qgrp->splitq.bufq_sets[0].refillqs[j]; + rx_qgrp->splitq.rxq_sets[j].refillq1 = + &rx_qgrp->splitq.bufq_sets[1].refillqs[j]; + + if (vport->rx_hsplit_en) { + q->rx_hsplit_en = vport->rx_hsplit_en; + q->rx_hbuf_size = IECM_HDR_BUF_SIZE; + } + + } else { + q = &rx_qgrp->singleq.rxqs[j]; + } + q->dev = &vport->adapter->pdev->dev; + q->desc_count = vport->rxq_desc_count; + q->vport = vport; + q->rxq_grp = rx_qgrp; + q->idx = (i * num_rxq) + j; + q->rx_buf_size = IECM_RX_BUF_2048; + q->rsc_low_watermark = IECM_LOW_WATERMARK; + q->rx_max_pkt_size = vport->netdev->mtu + + IECM_PACKET_HDR_PAD; + q->itr = rx_itr; + } 
+ } +err_alloc: + if (err) + iecm_rxq_group_rel(vport); + return err; } /** @@ -376,7 +994,20 @@ static enum iecm_status iecm_rxq_group_alloc(struct iecm_vport *vport, static enum iecm_status iecm_vport_queue_grp_alloc_all(struct iecm_vport *vport) { - /* stub */ + int num_txq, num_rxq; + enum iecm_status err; + + iecm_vport_calc_numq_per_grp(vport, &num_txq, &num_rxq); + + err = iecm_txq_group_alloc(vport, num_txq); + if (err) + goto err_out; + + err = iecm_rxq_group_alloc(vport, num_rxq); +err_out: + if (err) + iecm_vport_queue_grp_rel_all(vport); + return err; } /** @@ -387,7 +1018,35 @@ iecm_vport_queue_grp_alloc_all(struct iecm_vport *vport) */ enum iecm_status iecm_vport_queues_alloc(struct iecm_vport *vport) { - /* stub */ + struct iecm_adapter *adapter = vport->adapter; + enum iecm_status err; + + err = iecm_vport_queue_grp_alloc_all(vport); + if (err) + goto err_out; + + err = adapter->dev_ops.vc_ops.vport_queue_ids_init(vport); + if (err) + goto err_out; + + adapter->dev_ops.reg_ops.vportq_reg_init(vport); + + err = iecm_tx_desc_alloc_all(vport); + if (err) + goto err_out; + + err = iecm_rx_desc_alloc_all(vport); + if (err) + goto err_out; + + err = iecm_vport_init_fast_path_txqs(vport); + if (err) + goto err_out; + + return 0; +err_out: + iecm_vport_queues_rel(vport); + return err; } /** @@ -1786,7 +2445,16 @@ EXPORT_SYMBOL(iecm_vport_calc_num_q_vec); */ int iecm_config_rss(struct iecm_vport *vport) { - /* stub */ + int err = iecm_send_get_set_rss_key_msg(vport, false); + + if (!err) + err = vport->adapter->dev_ops.vc_ops.get_set_rss_lut(vport, + false); + if (!err) + err = vport->adapter->dev_ops.vc_ops.get_set_rss_hash(vport, + false); + + return err; } /** @@ -1798,7 +2466,20 @@ int iecm_config_rss(struct iecm_vport *vport) */ void iecm_get_rx_qid_list(struct iecm_vport *vport, u16 *qid_list) { - /* stub */ + int i, j, k = 0; + + for (i = 0; i < vport->num_rxq_grp; i++) { + struct iecm_rxq_group *rx_qgrp = &vport->rxq_grps[i]; + + if 
(iecm_is_queue_model_split(vport->rxq_model)) { + for (j = 0; j < rx_qgrp->splitq.num_rxq_sets; j++) + qid_list[k++] = + rx_qgrp->splitq.rxq_sets[j].rxq.q_id; + } else { + for (j = 0; j < rx_qgrp->singleq.num_rxq; j++) + qid_list[k++] = rx_qgrp->singleq.rxqs[j].q_id; + } + } } /** @@ -1810,7 +2491,13 @@ void iecm_get_rx_qid_list(struct iecm_vport *vport, u16 *qid_list) */ void iecm_fill_dflt_rss_lut(struct iecm_vport *vport, u16 *qid_list) { - /* stub */ + int num_lut_segs, lut_seg, i, k = 0; + + num_lut_segs = vport->adapter->rss_data.rss_lut_size / vport->num_rxq; + for (lut_seg = 0; lut_seg < num_lut_segs; lut_seg++) { + for (i = 0; i < vport->num_rxq; i++) + vport->adapter->rss_data.rss_lut[k++] = qid_list[i]; + } } /** @@ -1821,7 +2508,67 @@ void iecm_fill_dflt_rss_lut(struct iecm_vport *vport, u16 *qid_list) */ int iecm_init_rss(struct iecm_vport *vport) { - /* stub */ + struct iecm_adapter *adapter = vport->adapter; + u16 *qid_list; + int err; + + adapter->rss_data.rss_key = kzalloc(adapter->rss_data.rss_key_size, + GFP_KERNEL); + if (!adapter->rss_data.rss_key) + return IECM_ERR_NO_MEMORY; + adapter->rss_data.rss_lut = kzalloc(adapter->rss_data.rss_lut_size, + GFP_KERNEL); + if (!adapter->rss_data.rss_lut) { + kfree(adapter->rss_data.rss_key); + adapter->rss_data.rss_key = NULL; + return IECM_ERR_NO_MEMORY; + } + + /* Initialize default rss key */ + netdev_rss_key_fill((void *)adapter->rss_data.rss_key, + adapter->rss_data.rss_key_size); + + /* Initialize default rss lut */ + if (adapter->rss_data.rss_lut_size % vport->num_rxq) { + u16 dflt_qid; + int i; + + /* Set all entries to a default RX queue if the algorithm below + * won't fill all entries + */ + if (iecm_is_queue_model_split(vport->rxq_model)) + dflt_qid = + vport->rxq_grps[0].splitq.rxq_sets[0].rxq.q_id; + else + dflt_qid = + vport->rxq_grps[0].singleq.rxqs[0].q_id; + + for (i = 0; i < adapter->rss_data.rss_lut_size; i++) + adapter->rss_data.rss_lut[i] = dflt_qid; + } + + qid_list = 
kcalloc(vport->num_rxq, sizeof(u16), GFP_KERNEL); + if (!qid_list) { + kfree(adapter->rss_data.rss_lut); + adapter->rss_data.rss_lut = NULL; + kfree(adapter->rss_data.rss_key); + adapter->rss_data.rss_key = NULL; + return IECM_ERR_NO_MEMORY; + } + + iecm_get_rx_qid_list(vport, qid_list); + + /* Fill the default RSS lut values*/ + iecm_fill_dflt_rss_lut(vport, qid_list); + + kfree(qid_list); + + /* Initialize default rss HASH */ + adapter->rss_data.rss_hash = IECM_DEFAULT_RSS_HASH_EXPANDED; + + err = iecm_config_rss(vport); + + return err; } /** diff --git a/drivers/net/ethernet/intel/iecm/iecm_virtchnl.c b/drivers/net/ethernet/intel/iecm/iecm_virtchnl.c index 14a4f1149599..484f916fa1a0 100644 --- a/drivers/net/ethernet/intel/iecm/iecm_virtchnl.c +++ b/drivers/net/ethernet/intel/iecm/iecm_virtchnl.c @@ -692,7 +692,20 @@ iecm_send_destroy_vport_msg(struct iecm_vport *vport) enum iecm_status iecm_send_enable_vport_msg(struct iecm_vport *vport) { - /* stub */ + struct iecm_adapter *adapter = vport->adapter; + struct virtchnl_vport v_id; + enum iecm_status err; + + v_id.vport_id = vport->vport_id; + + err = iecm_send_mb_msg(adapter, VIRTCHNL_OP_ENABLE_VPORT, + sizeof(v_id), (u8 *)&v_id); + + if (!err) + err = iecm_wait_for_event(adapter, IECM_VC_ENA_VPORT, + IECM_VC_ENA_VPORT_ERR); + + return err; } /** @@ -1887,7 +1900,27 @@ EXPORT_SYMBOL(iecm_vc_core_init); static void iecm_vport_init(struct iecm_vport *vport, __always_unused int vport_id) { - /* stub */ + struct virtchnl_create_vport *vport_msg; + + vport_msg = (struct virtchnl_create_vport *) + vport->adapter->vport_params_recvd[0]; + vport->txq_model = vport_msg->txq_model; + vport->rxq_model = vport_msg->rxq_model; + vport->vport_type = (u16)vport_msg->vport_type; + vport->vport_id = vport_msg->vport_id; + vport->adapter->rss_data.rss_key_size = min_t(u16, NETDEV_RSS_KEY_LEN, + vport_msg->rss_key_size); + vport->adapter->rss_data.rss_lut_size = vport_msg->rss_lut_size; + ether_addr_copy(vport->default_mac_addr, 
vport_msg->default_mac_addr); + vport->max_mtu = IECM_MAX_MTU; + + iecm_vport_set_hsplit(vport, NULL); + + init_waitqueue_head(&vport->sw_marker_wq); + iecm_vport_init_num_qs(vport, vport_msg); + iecm_vport_calc_num_q_desc(vport); + iecm_vport_calc_num_q_groups(vport); + iecm_vport_calc_num_q_vec(vport); } /** From patchwork Tue Jun 23 22:40:39 2020 X-Patchwork-Submitter: Jeff Kirsher X-Patchwork-Id: 217258
From: Jeff Kirsher To: davem@davemloft.net Cc: Alice Michael , netdev@vger.kernel.org, nhorman@redhat.com, sassmann@redhat.com, Alan Brady , Phani Burra , Joshua Hay , Madhu Chittim , Pavan Kumar Linga , Donald Skidmore , Jesse Brandeburg , Sridhar Samudrala , Jeff Kirsher Subject: [net-next v2 11/15] iecm: Add splitq TX/RX Date: Tue, 23 Jun 2020 15:40:39 -0700 Message-Id: <20200623224043.801728-12-jeffrey.t.kirsher@intel.com> In-Reply-To: <20200623224043.801728-1-jeffrey.t.kirsher@intel.com> References: <20200623224043.801728-1-jeffrey.t.kirsher@intel.com> From: Alice Michael Implement main TX/RX flows for split queue model.
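The completion-queue clean path in this patch demultiplexes each completion descriptor by unpacking a single 16-bit qid_comptype_gen word into a Tx queue id, a completion type, and a generation bit. A minimal, compilable sketch of that unpacking follows; the DEMO_* bit positions are hypothetical stand-ins for the driver's real IECM_TXD_COMPLQ_* masks, whose values are not shown in this patch and may differ.

```c
#include <stdint.h>

/*
 * Illustrative sketch only: the split queue model packs the Tx queue id,
 * completion type, and a generation bit into one 16-bit word
 * (qid_comptype_gen) in each completion descriptor. The shifts and masks
 * below are hypothetical; the real IECM_TXD_COMPLQ_* definitions live in
 * the iecm headers.
 */
#define DEMO_COMPLQ_QID_S   0
#define DEMO_COMPLQ_QID_M   (0x3FFu << DEMO_COMPLQ_QID_S)  /* bits 0-9   */
#define DEMO_COMPLQ_TYPE_S  11
#define DEMO_COMPLQ_TYPE_M  (0x7u << DEMO_COMPLQ_TYPE_S)   /* bits 11-13 */
#define DEMO_COMPLQ_GEN_S   15
#define DEMO_COMPLQ_GEN_M   (0x1u << DEMO_COMPLQ_GEN_S)    /* bit 15     */

/* Tx queue that this completion belongs to */
static uint16_t demo_complq_qid(uint16_t w)
{
	return (w & DEMO_COMPLQ_QID_M) >> DEMO_COMPLQ_QID_S;
}

/* Completion type, e.g. RE vs RS completions in the patch's switch */
static uint8_t demo_complq_type(uint16_t w)
{
	return (w & DEMO_COMPLQ_TYPE_M) >> DEMO_COMPLQ_TYPE_S;
}

/* Generation bit: the clean loop stops when this no longer matches the
 * queue's __IECM_Q_GEN_CHK flag, i.e. hardware has not yet written this
 * descriptor on the current pass over the ring. */
static uint8_t demo_complq_gen(uint16_t w)
{
	return (w & DEMO_COMPLQ_GEN_M) >> DEMO_COMPLQ_GEN_S;
}
```

Under this demo layout, the word 0x9005 decodes to qid 5, completion type 2, generation 1; a cleaning loop would compare demo_complq_gen() against the queue's generation flag and break at the first mismatch, which is the same stop condition iecm_tx_clean_complq uses.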
Signed-off-by: Alice Michael Signed-off-by: Alan Brady Signed-off-by: Phani Burra Signed-off-by: Joshua Hay Signed-off-by: Madhu Chittim Signed-off-by: Pavan Kumar Linga Reviewed-by: Donald Skidmore Reviewed-by: Jesse Brandeburg Reviewed-by: Sridhar Samudrala Signed-off-by: Jeff Kirsher --- drivers/net/ethernet/intel/iecm/iecm_txrx.c | 1283 ++++++++++++++++++- 1 file changed, 1235 insertions(+), 48 deletions(-) diff --git a/drivers/net/ethernet/intel/iecm/iecm_txrx.c b/drivers/net/ethernet/intel/iecm/iecm_txrx.c index bd358b121773..c6cc7fb6ace9 100644 --- a/drivers/net/ethernet/intel/iecm/iecm_txrx.c +++ b/drivers/net/ethernet/intel/iecm/iecm_txrx.c @@ -11,7 +11,12 @@ static enum iecm_status iecm_buf_lifo_push(struct iecm_buf_lifo *stack, struct iecm_tx_buf *buf) { - /* stub */ + if (stack->top == stack->size) + return IECM_ERR_MAX_LIMIT; + + stack->bufs[stack->top++] = buf; + + return 0; } /** @@ -20,7 +25,10 @@ static enum iecm_status iecm_buf_lifo_push(struct iecm_buf_lifo *stack, **/ static struct iecm_tx_buf *iecm_buf_lifo_pop(struct iecm_buf_lifo *stack) { - /* stub */ + if (!stack->top) + return NULL; + + return stack->bufs[--stack->top]; } /** @@ -31,7 +39,16 @@ static struct iecm_tx_buf *iecm_buf_lifo_pop(struct iecm_buf_lifo *stack) void iecm_get_stats64(struct net_device *netdev, struct rtnl_link_stats64 *stats) { - /* stub */ + struct iecm_vport *vport = iecm_netdev_to_vport(netdev); + + iecm_send_get_stats_msg(vport); + stats->rx_packets = vport->netstats.rx_packets; + stats->tx_packets = vport->netstats.tx_packets; + stats->rx_bytes = vport->netstats.rx_bytes; + stats->tx_bytes = vport->netstats.tx_bytes; + stats->tx_errors = vport->netstats.tx_errors; + stats->rx_dropped = vport->netstats.rx_dropped; + stats->tx_dropped = vport->netstats.tx_dropped; } /** @@ -1246,7 +1263,16 @@ enum iecm_status iecm_vport_queues_alloc(struct iecm_vport *vport) static struct iecm_queue * iecm_tx_find_q(struct iecm_vport *vport, int q_id) { - /* stub */ + int i; + + 
+	for (i = 0; i < vport->num_txq; i++) {
+		struct iecm_queue *tx_q = vport->txqs[i];
+
+		if (tx_q->q_id == q_id)
+			return tx_q;
+	}
+
+	return NULL;
 }
 
 /**
@@ -1255,7 +1281,22 @@ iecm_tx_find_q(struct iecm_vport *vport, int q_id)
  */
 static void iecm_tx_handle_sw_marker(struct iecm_queue *tx_q)
 {
-	/* stub */
+	struct iecm_vport *vport = tx_q->vport;
+	bool drain_complete = true;
+	int i;
+
+	clear_bit(__IECM_Q_SW_MARKER, tx_q->flags);
+	/* Hardware must write marker packets to all queues associated with
+	 * completion queues. So check if all queues received marker packets
+	 */
+	for (i = 0; i < vport->num_txq; i++) {
+		if (test_bit(__IECM_Q_SW_MARKER, vport->txqs[i]->flags))
+			drain_complete = false;
+	}
+	if (drain_complete) {
+		set_bit(__IECM_VPORT_SW_MARKER, vport->flags);
+		wake_up(&vport->sw_marker_wq);
+	}
 }
 
 /**
@@ -1270,7 +1311,30 @@ static struct iecm_tx_queue_stats
 iecm_tx_splitq_clean_buf(struct iecm_queue *tx_q, struct iecm_tx_buf *tx_buf,
 			 int napi_budget)
 {
-	/* stub */
+	struct iecm_tx_queue_stats cleaned = {0};
+	struct netdev_queue *nq;
+
+	/* update the statistics for this packet */
+	cleaned.bytes = tx_buf->bytecount;
+	cleaned.packets = tx_buf->gso_segs;
+
+	/* free the skb */
+	napi_consume_skb(tx_buf->skb, napi_budget);
+	nq = netdev_get_tx_queue(tx_q->vport->netdev, tx_q->idx);
+	netdev_tx_completed_queue(nq, cleaned.packets,
+				  cleaned.bytes);
+
+	/* unmap skb header data */
+	dma_unmap_single(tx_q->dev,
+			 dma_unmap_addr(tx_buf, dma),
+			 dma_unmap_len(tx_buf, len),
+			 DMA_TO_DEVICE);
+
+	/* clear tx_buf data */
+	tx_buf->skb = NULL;
+	dma_unmap_len_set(tx_buf, len, 0);
+
+	return cleaned;
 }
 
 /**
@@ -1282,7 +1346,33 @@ iecm_tx_splitq_clean_buf(struct iecm_queue *tx_q, struct iecm_tx_buf *tx_buf,
 static int iecm_stash_flow_sch_buffers(struct iecm_queue *txq,
 				       struct iecm_tx_buf *tx_buf)
 {
-	/* stub */
+	struct iecm_adapter *adapter = txq->vport->adapter;
+	struct iecm_tx_buf *shadow_buf;
+
+	shadow_buf = iecm_buf_lifo_pop(&txq->buf_stack);
+	if (!shadow_buf) {
+		dev_err(&adapter->pdev->dev,
+			"No out-of-order TX buffers left!\n");
+		return -ENOMEM;
+	}
+
+	/* Store buffer params in shadow buffer */
+	shadow_buf->skb = tx_buf->skb;
+	shadow_buf->bytecount = tx_buf->bytecount;
+	shadow_buf->gso_segs = tx_buf->gso_segs;
+	shadow_buf->dma = tx_buf->dma;
+	shadow_buf->len = tx_buf->len;
+	shadow_buf->compl_tag = tx_buf->compl_tag;
+
+	/* Add buffer to buf_hash table to be freed
+	 * later
+	 */
+	hash_add(txq->sched_buf_hash, &shadow_buf->hlist,
+		 shadow_buf->compl_tag);
+
+	memset(tx_buf, 0, sizeof(struct iecm_tx_buf));
+
+	return 0;
 }
 
 /**
@@ -1305,7 +1395,91 @@ static struct iecm_tx_queue_stats
 iecm_tx_splitq_clean(struct iecm_queue *tx_q, u16 end, int napi_budget,
 		     bool descs_only)
 {
-	/* stub */
+	union iecm_tx_flex_desc *next_pending_desc = NULL;
+	struct iecm_tx_queue_stats cleaned_stats = {0};
+	union iecm_tx_flex_desc *tx_desc;
+	s16 ntc = tx_q->next_to_clean;
+	struct iecm_tx_buf *tx_buf;
+
+	tx_desc = IECM_FLEX_TX_DESC(tx_q, ntc);
+	next_pending_desc = IECM_FLEX_TX_DESC(tx_q, end);
+	tx_buf = &tx_q->tx_buf[ntc];
+	ntc -= tx_q->desc_count;
+
+	while (tx_desc != next_pending_desc) {
+		union iecm_tx_flex_desc *eop_desc =
+			(union iecm_tx_flex_desc *)tx_buf->next_to_watch;
+
+		/* clear next_to_watch to prevent false hangs */
+		tx_buf->next_to_watch = NULL;
+
+		if (descs_only) {
+			if (iecm_stash_flow_sch_buffers(tx_q, tx_buf))
+				goto tx_splitq_clean_out;
+
+			while (tx_desc != eop_desc) {
+				tx_buf++;
+				tx_desc++;
+				ntc++;
+				if (unlikely(!ntc)) {
+					ntc -= tx_q->desc_count;
+					tx_buf = tx_q->tx_buf;
+					tx_desc = IECM_FLEX_TX_DESC(tx_q, 0);
+				}
+
+				if (dma_unmap_len(tx_buf, len)) {
+					if (iecm_stash_flow_sch_buffers(tx_q,
+									tx_buf))
+						goto tx_splitq_clean_out;
+				}
+			}
+		} else {
+			struct iecm_tx_queue_stats buf_stats = {0};
+
+			buf_stats = iecm_tx_splitq_clean_buf(tx_q, tx_buf,
+							     napi_budget);
+
+			/* update the statistics for this packet */
+			cleaned_stats.bytes += buf_stats.bytes;
+			cleaned_stats.packets += buf_stats.packets;
+
+			/* unmap remaining buffers */
+			while (tx_desc != eop_desc) {
+				tx_buf++;
+				tx_desc++;
+				ntc++;
+				if (unlikely(!ntc)) {
+					ntc -= tx_q->desc_count;
+					tx_buf = tx_q->tx_buf;
+					tx_desc = IECM_FLEX_TX_DESC(tx_q, 0);
+				}
+
+				/* unmap any remaining paged data */
+				if (dma_unmap_len(tx_buf, len)) {
+					dma_unmap_page(tx_q->dev,
+						       dma_unmap_addr(tx_buf, dma),
+						       dma_unmap_len(tx_buf, len),
+						       DMA_TO_DEVICE);
+					dma_unmap_len_set(tx_buf, len, 0);
+				}
+			}
+		}
+
+		tx_buf++;
+		tx_desc++;
+		ntc++;
+		if (unlikely(!ntc)) {
+			ntc -= tx_q->desc_count;
+			tx_buf = tx_q->tx_buf;
+			tx_desc = IECM_FLEX_TX_DESC(tx_q, 0);
+		}
+	}
+
+tx_splitq_clean_out:
+	ntc += tx_q->desc_count;
+	tx_q->next_to_clean = ntc;
+
+	return cleaned_stats;
 }
 
 /**
@@ -1315,7 +1489,18 @@ iecm_tx_splitq_clean(struct iecm_queue *tx_q, u16 end, int napi_budget,
  */
 static inline void iecm_tx_hw_tstamp(struct sk_buff *skb, u8 *desc_ts)
 {
-	/* stub */
+	struct skb_shared_hwtstamps hwtstamps;
+	u64 tstamp;
+
+	/* Only report timestamp to stack if requested */
+	if (likely(!(skb_shinfo(skb)->tx_flags & SKBTX_HW_TSTAMP)))
+		return;
+
+	tstamp = (desc_ts[0] | (desc_ts[1] << 8) | (desc_ts[2] & 0x3F) << 16);
+	hwtstamps.hwtstamp =
+		ns_to_ktime(tstamp << IECM_TW_TIME_STAMP_GRAN_512_DIV_S);
+
+	skb_tstamp_tx(skb, &hwtstamps);
 }
 
 /**
@@ -1330,7 +1515,39 @@ static struct iecm_tx_queue_stats
 iecm_tx_clean_flow_sch_bufs(struct iecm_queue *txq, u16 compl_tag,
 			    u8 *desc_ts, int budget)
 {
-	/* stub */
+	struct iecm_tx_queue_stats cleaned_stats = {0};
+	struct hlist_node *tmp_buf = NULL;
+	struct iecm_tx_buf *tx_buf = NULL;
+
+	/* Buffer completion */
+	hash_for_each_possible_safe(txq->sched_buf_hash, tx_buf, tmp_buf,
+				    hlist, compl_tag) {
+		if (tx_buf->compl_tag != compl_tag)
+			continue;
+
+		if (likely(tx_buf->skb)) {
+			/* fetch timestamp from completion
+			 * descriptor to report to stack
+			 */
+			iecm_tx_hw_tstamp(tx_buf->skb, desc_ts);
+
+			cleaned_stats = iecm_tx_splitq_clean_buf(txq, tx_buf,
+								 budget);
+		} else if (dma_unmap_len(tx_buf, len)) {
+			dma_unmap_page(txq->dev,
+				       dma_unmap_addr(tx_buf, dma),
+				       dma_unmap_len(tx_buf, len),
+				       DMA_TO_DEVICE);
+			dma_unmap_len_set(tx_buf, len, 0);
+		}
+
+		/* Push shadow buf back onto stack */
+		iecm_buf_lifo_push(&txq->buf_stack, tx_buf);
+
+		hash_del(&tx_buf->hlist);
+	}
+
+	return cleaned_stats;
 }
 
 /**
@@ -1343,7 +1560,109 @@ iecm_tx_clean_flow_sch_bufs(struct iecm_queue *txq, u16 compl_tag,
 static bool
 iecm_tx_clean_complq(struct iecm_queue *complq, int budget)
 {
-	/* stub */
+	struct iecm_splitq_tx_compl_desc *tx_desc;
+	struct iecm_vport *vport = complq->vport;
+	s16 ntc = complq->next_to_clean;
+	unsigned int complq_budget;
+
+	complq_budget = vport->compln_clean_budget;
+	tx_desc = IECM_SPLITQ_TX_COMPLQ_DESC(complq, ntc);
+	ntc -= complq->desc_count;
+
+	do {
+		struct iecm_tx_queue_stats cleaned_stats = {0};
+		bool descs_only = false;
+		struct iecm_queue *tx_q;
+		u16 compl_tag, hw_head;
+		int tx_qid;
+		u8 ctype;	/* completion type */
+		u16 gen;
+
+		/* if the descriptor isn't done, no work yet to do */
+		gen = (le16_to_cpu(tx_desc->qid_comptype_gen) &
+		       IECM_TXD_COMPLQ_GEN_M) >> IECM_TXD_COMPLQ_GEN_S;
+		if (test_bit(__IECM_Q_GEN_CHK, complq->flags) != gen)
+			break;
+
+		/* Find necessary info of TX queue to clean buffers */
+		tx_qid = (le16_to_cpu(tx_desc->qid_comptype_gen) &
+			  IECM_TXD_COMPLQ_QID_M) >> IECM_TXD_COMPLQ_QID_S;
+		tx_q = iecm_tx_find_q(vport, tx_qid);
+		if (!tx_q) {
+			dev_err(&complq->vport->adapter->pdev->dev,
+				"TxQ #%d not found\n", tx_qid);
+			goto fetch_next_desc;
+		}
+
+		/* Determine completion type */
+		ctype = (le16_to_cpu(tx_desc->qid_comptype_gen) &
+			 IECM_TXD_COMPLQ_COMPL_TYPE_M) >>
+			 IECM_TXD_COMPLQ_COMPL_TYPE_S;
+		switch (ctype) {
+		case IECM_TXD_COMPLT_RE:
+			hw_head = le16_to_cpu(tx_desc->q_head_compl_tag.q_head);
+
+			cleaned_stats = iecm_tx_splitq_clean(tx_q, hw_head,
+							     budget,
+							     descs_only);
+			break;
+		case IECM_TXD_COMPLT_RS:
+			if (test_bit(__IECM_Q_FLOW_SCH_EN, tx_q->flags)) {
+				compl_tag =
+				le16_to_cpu(tx_desc->q_head_compl_tag.compl_tag);
+
+				cleaned_stats =
+					iecm_tx_clean_flow_sch_bufs(tx_q,
+								    compl_tag,
+								    tx_desc->ts,
+								    budget);
+			} else {
+				hw_head =
+				le16_to_cpu(tx_desc->q_head_compl_tag.q_head);
+
+				cleaned_stats = iecm_tx_splitq_clean(tx_q,
+								     hw_head,
+								     budget,
+								     false);
+			}
+
+			break;
+		case IECM_TXD_COMPLT_SW_MARKER:
+			iecm_tx_handle_sw_marker(tx_q);
+			break;
+		default:
+			dev_err(&tx_q->vport->adapter->pdev->dev,
+				"Unknown TX completion type: %d\n",
+				ctype);
+			goto fetch_next_desc;
+		}
+
+		tx_q->itr.stats.tx.packets += cleaned_stats.packets;
+		tx_q->itr.stats.tx.bytes += cleaned_stats.bytes;
+		u64_stats_update_begin(&tx_q->stats_sync);
+		tx_q->q_stats.tx.packets += cleaned_stats.packets;
+		tx_q->q_stats.tx.bytes += cleaned_stats.bytes;
+		u64_stats_update_end(&tx_q->stats_sync);
+
+fetch_next_desc:
+		tx_desc++;
+		ntc++;
+		if (unlikely(!ntc)) {
+			ntc -= complq->desc_count;
+			tx_desc = IECM_SPLITQ_TX_COMPLQ_DESC(complq, 0);
+			change_bit(__IECM_Q_GEN_CHK, complq->flags);
+		}
+
+		prefetch(tx_desc);
+
+		/* update budget accounting */
+		complq_budget--;
+	} while (likely(complq_budget));
+
+	ntc += complq->desc_count;
+	complq->next_to_clean = ntc;
+
+	return !!complq_budget;
 }
 
 /**
@@ -1359,7 +1678,12 @@ iecm_tx_splitq_build_ctb(union iecm_tx_flex_desc *desc,
 			 struct iecm_tx_splitq_params *parms,
 			 u16 td_cmd, u16 size)
 {
-	/* stub */
+	desc->q.qw1.cmd_dtype =
+		cpu_to_le16(parms->dtype & IECM_FLEX_TXD_QW1_DTYPE_M);
+	desc->q.qw1.cmd_dtype |=
+		cpu_to_le16((td_cmd << IECM_FLEX_TXD_QW1_CMD_S) &
+			    IECM_FLEX_TXD_QW1_CMD_M);
+	desc->q.qw1.buf_size = cpu_to_le16((u16)size);
 }
 
 /**
@@ -1375,7 +1699,13 @@ iecm_tx_splitq_build_flow_desc(union iecm_tx_flex_desc *desc,
 			       struct iecm_tx_splitq_params *parms,
 			       u16 td_cmd, u16 size)
 {
-	/* stub */
+	desc->flow.qw1.cmd_dtype = (u16)parms->dtype | td_cmd;
+	desc->flow.qw1.rxr_bufsize = cpu_to_le16((u16)size);
+	desc->flow.qw1.compl_tag = cpu_to_le16(parms->compl_tag);
+
+	desc->flow.qw1.ts[0] = parms->offload.desc_ts & 0xff;
+	desc->flow.qw1.ts[1] = (parms->offload.desc_ts >> 8) & 0xff;
+	desc->flow.qw1.ts[2] = (parms->offload.desc_ts >> 16) & 0xff;
 }
 
 /**
@@ -1388,7 +1718,19 @@ iecm_tx_splitq_build_flow_desc(union iecm_tx_flex_desc *desc,
 static int
 __iecm_tx_maybe_stop(struct iecm_queue *tx_q, unsigned int size)
 {
-	/* stub */
+	netif_stop_subqueue(tx_q->vport->netdev, tx_q->idx);
+
+	/* Memory barrier before checking head and tail */
+	smp_mb();
+
+	/* Check again in a case another CPU has just made room available. */
+	if (likely(IECM_DESC_UNUSED(tx_q) < size))
+		return -EBUSY;
+
+	/* A reprieve! - use start_subqueue because it doesn't call schedule */
+	netif_start_subqueue(tx_q->vport->netdev, tx_q->idx);
+
+	return 0;
 }
 
 /**
@@ -1400,7 +1742,10 @@ __iecm_tx_maybe_stop(struct iecm_queue *tx_q, unsigned int size)
  */
 int iecm_tx_maybe_stop(struct iecm_queue *tx_q, unsigned int size)
 {
-	/* stub */
+	if (likely(IECM_DESC_UNUSED(tx_q) >= size))
+		return 0;
+
+	return __iecm_tx_maybe_stop(tx_q, size);
 }
 
 /**
@@ -1410,7 +1755,23 @@ int iecm_tx_maybe_stop(struct iecm_queue *tx_q, unsigned int size)
  */
 void iecm_tx_buf_hw_update(struct iecm_queue *tx_q, u32 val)
 {
-	/* stub */
+	struct netdev_queue *nq;
+
+	nq = netdev_get_tx_queue(tx_q->vport->netdev, tx_q->idx);
+	tx_q->next_to_use = val;
+
+	iecm_tx_maybe_stop(tx_q, IECM_TX_DESC_NEEDED);
+
+	/* Force memory writes to complete before letting h/w
+	 * know there are new descriptors to fetch. (Only
+	 * applicable for weak-ordered memory model archs,
+	 * such as IA-64).
+	 */
+	wmb();
+
+	/* notify HW of packet */
+	if (netif_xmit_stopped(nq) || !netdev_xmit_more())
+		writel_relaxed(val, tx_q->tail);
 }
 
 /**
@@ -1443,7 +1804,7 @@ void iecm_tx_buf_hw_update(struct iecm_queue *tx_q, u32 val)
  */
 static unsigned int __iecm_tx_desc_count_required(unsigned int size)
 {
-	/* stub */
+	return ((size * 85) >> 20) + IECM_TX_DESCS_FOR_SKB_DATA_PTR;
 }
 
 /**
@@ -1454,13 +1815,26 @@ static unsigned int __iecm_tx_desc_count_required(unsigned int size)
  */
 unsigned int iecm_tx_desc_count_required(struct sk_buff *skb)
 {
-	/* stub */
+	const skb_frag_t *frag = &skb_shinfo(skb)->frags[0];
+	unsigned int nr_frags = skb_shinfo(skb)->nr_frags;
+	unsigned int count = 0, size = skb_headlen(skb);
+
+	for (;;) {
+		count += __iecm_tx_desc_count_required(size);
+
+		if (!nr_frags--)
+			break;
+
+		size = skb_frag_size(frag++);
+	}
+
+	return count;
 }
 
 /**
  * iecm_tx_splitq_map - Build the Tx flex descriptor
  * @tx_q: queue to send buffer on
- * @off: pointer to offload params struct
+ * @parms: pointer to splitq params struct
  * @first: first buffer info buffer to use
  *
  * This function loops over the skb data pointed to by *first
@@ -1469,10 +1843,130 @@ unsigned int iecm_tx_desc_count_required(struct sk_buff *skb)
  */
 static void
 iecm_tx_splitq_map(struct iecm_queue *tx_q,
-		   struct iecm_tx_offload_params *off,
+		   struct iecm_tx_splitq_params *parms,
 		   struct iecm_tx_buf *first)
 {
-	/* stub */
+	union iecm_tx_flex_desc *tx_desc;
+	unsigned int data_len, size;
+	struct iecm_tx_buf *tx_buf;
+	u16 i = tx_q->next_to_use;
+	struct netdev_queue *nq;
+	struct sk_buff *skb;
+	skb_frag_t *frag;
+	u16 td_cmd = 0;
+	dma_addr_t dma;
+
+	skb = first->skb;
+
+	td_cmd = parms->offload.td_cmd;
+	parms->compl_tag = tx_q->tx_buf_key;
+
+	data_len = skb->data_len;
+	size = skb_headlen(skb);
+
+	tx_desc = IECM_FLEX_TX_DESC(tx_q, i);
+
+	dma = dma_map_single(tx_q->dev, skb->data, size, DMA_TO_DEVICE);
+
+	tx_buf = first;
+
+	for (frag = &skb_shinfo(skb)->frags[0];; frag++) {
+		unsigned int max_data = IECM_TX_MAX_DESC_DATA_ALIGNED;
+
+		if (dma_mapping_error(tx_q->dev, dma))
+			goto dma_error;
+
+		/* record length, and DMA address */
+		dma_unmap_len_set(tx_buf, len, size);
+		dma_unmap_addr_set(tx_buf, dma, dma);
+
+		/* align size to end of page */
+		max_data += -dma & (IECM_TX_MAX_READ_REQ_SIZE - 1);
+
+		/* buf_addr is in same location for both desc types */
+		tx_desc->q.buf_addr = cpu_to_le64(dma);
+
+		/* account for data chunks larger than the hardware
+		 * can handle
+		 */
+		while (unlikely(size > IECM_TX_MAX_DESC_DATA)) {
+			parms->splitq_build_ctb(tx_desc, parms, td_cmd, size);
+
+			tx_desc++;
+			i++;
+
+			if (i == tx_q->desc_count) {
+				tx_desc = IECM_FLEX_TX_DESC(tx_q, 0);
+				i = 0;
+			}
+
+			dma += max_data;
+			size -= max_data;
+
+			max_data = IECM_TX_MAX_DESC_DATA_ALIGNED;
+			/* buf_addr is in same location for both desc types */
+			tx_desc->q.buf_addr = cpu_to_le64(dma);
+		}
+
+		if (likely(!data_len))
+			break;
+		parms->splitq_build_ctb(tx_desc, parms, td_cmd, size);
+		tx_desc++;
+		i++;
+
+		if (i == tx_q->desc_count) {
+			tx_desc = IECM_FLEX_TX_DESC(tx_q, 0);
+			i = 0;
+		}
+
+		size = skb_frag_size(frag);
+		data_len -= size;
+
+		dma = skb_frag_dma_map(tx_q->dev, frag, 0, size,
+				       DMA_TO_DEVICE);
+
+		tx_buf->compl_tag = parms->compl_tag;
+		tx_buf = &tx_q->tx_buf[i];
+	}
+
+	/* record bytecount for BQL */
+	nq = netdev_get_tx_queue(tx_q->vport->netdev, tx_q->idx);
+	netdev_tx_sent_queue(nq, first->bytecount);
+
+	/* record SW timestamp if HW timestamp is not available */
+	skb_tx_timestamp(first->skb);
+
+	/* write last descriptor with RS and EOP bits */
+	td_cmd |= parms->eop_cmd;
+	parms->splitq_build_ctb(tx_desc, parms, td_cmd, size);
+	i++;
+	if (i == tx_q->desc_count)
+		i = 0;
+
+	/* set next_to_watch value indicating a packet is present */
+	first->next_to_watch = tx_desc;
+	tx_buf->compl_tag = parms->compl_tag++;
+
+	iecm_tx_buf_hw_update(tx_q, i);
+
+	/* Update TXQ Completion Tag key for next buffer */
+	tx_q->tx_buf_key = parms->compl_tag;
+
+	return;
+
+dma_error:
+	/* clear DMA mappings for failed tx_buf map */
+	for (;;) {
+		tx_buf = &tx_q->tx_buf[i];
+		iecm_tx_buf_rel(tx_q, tx_buf);
+		if (tx_buf == first)
+			break;
+		if (i == 0)
+			i = tx_q->desc_count;
+		i--;
+	}
+
+	tx_q->next_to_use = i;
 }
 
 /**
@@ -1488,7 +1982,79 @@ iecm_tx_splitq_map(struct iecm_queue *tx_q,
  */
 static int iecm_tso(struct iecm_tx_buf *first,
		    struct iecm_tx_offload_params *off)
 {
-	/* stub */
+	struct sk_buff *skb = first->skb;
+	union {
+		struct iphdr *v4;
+		struct ipv6hdr *v6;
+		unsigned char *hdr;
+	} ip;
+	union {
+		struct tcphdr *tcp;
+		struct udphdr *udp;
+		unsigned char *hdr;
+	} l4;
+	u32 paylen, l4_start;
+	int err;
+
+	if (skb->ip_summed != CHECKSUM_PARTIAL)
+		return 0;
+
+	if (!skb_is_gso(skb))
+		return 0;
+
+	err = skb_cow_head(skb, 0);
+	if (err < 0)
+		return err;
+
+	ip.hdr = skb_network_header(skb);
+	l4.hdr = skb_transport_header(skb);
+
+	/* initialize outer IP header fields */
+	if (ip.v4->version == 4) {
+		ip.v4->tot_len = 0;
+		ip.v4->check = 0;
+	} else {
+		ip.v6->payload_len = 0;
+	}
+
+	/* determine offset of transport header */
+	l4_start = l4.hdr - skb->data;
+
+	/* remove payload length from checksum */
+	paylen = skb->len - l4_start;
+
+	switch (skb_shinfo(skb)->gso_type) {
+	case SKB_GSO_TCPV4:
+	case SKB_GSO_TCPV6:
+		csum_replace_by_diff(&l4.tcp->check,
+				     (__force __wsum)htonl(paylen));
+
+		/* compute length of segmentation header */
+		off->tso_hdr_len = (l4.tcp->doff * 4) + l4_start;
+		break;
+	case SKB_GSO_UDP_L4:
+		csum_replace_by_diff(&l4.udp->check,
+				     (__force __wsum)htonl(paylen));
+		/* compute length of segmentation header */
+		off->tso_hdr_len = sizeof(struct udphdr) + l4_start;
+		l4.udp->len =
+			htons(skb_shinfo(skb)->gso_size +
+			      sizeof(struct udphdr));
+		break;
+	default:
+		return -EINVAL;
+	}
+
+	off->tso_len = skb->len - off->tso_hdr_len;
+	off->mss = skb_shinfo(skb)->gso_size;
+
+	/* update gso_segs and bytecount */
+	first->gso_segs = skb_shinfo(skb)->gso_segs;
+	first->bytecount += (first->gso_segs - 1) * off->tso_hdr_len;
+
+	first->tx_flags |= IECM_TX_FLAGS_TSO;
+
+	return 0;
 }
 
 /**
@@ -1501,7 +2067,84 @@ static netdev_tx_t
 iecm_tx_splitq_frame(struct sk_buff *skb, struct iecm_queue *tx_q)
 {
-	/* stub */
+	struct iecm_tx_splitq_params tx_parms = { NULL, 0, 0, 0, {0} };
+	struct iecm_tx_buf *first;
+	unsigned int count;
+
+	count = iecm_tx_desc_count_required(skb);
+
+	/* need: 1 descriptor per page * PAGE_SIZE/IECM_MAX_DATA_PER_TXD,
+	 *       + 1 desc for skb_head_len/IECM_MAX_DATA_PER_TXD,
+	 *       + 4 desc gap to avoid the cache line where head is,
+	 *       + 1 desc for context descriptor,
+	 * otherwise try next time
+	 */
+	if (iecm_tx_maybe_stop(tx_q, count + IECM_TX_DESCS_PER_CACHE_LINE +
+			       IECM_TX_DESCS_FOR_CTX)) {
+		return NETDEV_TX_BUSY;
+	}
+
+	/* record the location of the first descriptor for this packet */
+	first = &tx_q->tx_buf[tx_q->next_to_use];
+	first->skb = skb;
+	first->bytecount = max_t(unsigned int, skb->len, ETH_ZLEN);
+	first->gso_segs = 1;
+	first->tx_flags = 0;
+
+	if (iecm_tso(first, &tx_parms.offload) < 0) {
+		/* If tso returns an error, drop the packet */
+		dev_kfree_skb_any(skb);
+		return NETDEV_TX_OK;
+	}
+
+	if (first->tx_flags & IECM_TX_FLAGS_TSO) {
+		/* If TSO is needed, set up context desc */
+		union iecm_flex_tx_ctx_desc *ctx_desc;
+		int i = tx_q->next_to_use;
+
+		/* grab the next descriptor */
+		ctx_desc = IECM_FLEX_TX_CTX_DESC(tx_q, i);
+		i++;
+		tx_q->next_to_use = (i < tx_q->desc_count) ? i : 0;
+
+		ctx_desc->tso.qw1.cmd_dtype |=
+			cpu_to_le16(IECM_TX_DESC_DTYPE_FLEX_TSO_CTX |
+				    IECM_TX_FLEX_CTX_DESC_CMD_TSO);
+		ctx_desc->tso.qw0.flex_tlen =
+			cpu_to_le32(tx_parms.offload.tso_len &
+				    IECM_TXD_FLEX_CTX_TLEN_M);
+		ctx_desc->tso.qw0.mss_rt =
+			cpu_to_le16(tx_parms.offload.mss &
+				    IECM_TXD_FLEX_CTX_MSS_RT_M);
+		ctx_desc->tso.qw0.hdr_len = tx_parms.offload.tso_hdr_len;
+	}
+
+	if (test_bit(__IECM_Q_FLOW_SCH_EN, tx_q->flags)) {
+		s64 ts_ns = first->skb->skb_mstamp_ns;
+
+		tx_parms.offload.desc_ts =
+			ts_ns >> IECM_TW_TIME_STAMP_GRAN_512_DIV_S;
+
+		tx_parms.dtype = IECM_TX_DESC_DTYPE_FLEX_FLOW_SCHE;
+		tx_parms.splitq_build_ctb = iecm_tx_splitq_build_flow_desc;
+		tx_parms.eop_cmd =
+			IECM_TXD_FLEX_FLOW_CMD_EOP | IECM_TXD_FLEX_FLOW_CMD_RE;
+
+		if (skb->ip_summed == CHECKSUM_PARTIAL)
+			tx_parms.offload.td_cmd |= IECM_TXD_FLEX_FLOW_CMD_CS_EN;
+
+	} else {
+		tx_parms.dtype = IECM_TX_DESC_DTYPE_FLEX_DATA;
+		tx_parms.splitq_build_ctb = iecm_tx_splitq_build_ctb;
+		tx_parms.eop_cmd = IECM_TX_DESC_CMD_EOP | IECM_TX_DESC_CMD_RS;
+
+		if (skb->ip_summed == CHECKSUM_PARTIAL)
+			tx_parms.offload.td_cmd |= IECM_TX_FLEX_DESC_CMD_CS_EN;
+	}
+
+	iecm_tx_splitq_map(tx_q, &tx_parms, first);
+
+	return NETDEV_TX_OK;
 }
 
 /**
@@ -1514,7 +2157,18 @@ iecm_tx_splitq_frame(struct sk_buff *skb, struct iecm_queue *tx_q)
  */
 netdev_tx_t iecm_tx_splitq_start(struct sk_buff *skb,
				 struct net_device *netdev)
 {
-	/* stub */
+	struct iecm_vport *vport = iecm_netdev_to_vport(netdev);
+	struct iecm_queue *tx_q;
+
+	tx_q = vport->txqs[skb->queue_mapping];
+
+	/* hardware can't handle really short frames, hardware padding works
+	 * beyond this point
+	 */
+	if (skb_put_padto(skb, IECM_TX_MIN_LEN))
+		return NETDEV_TX_OK;
+
+	return iecm_tx_splitq_frame(skb, tx_q);
 }
 
 /**
@@ -1529,7 +2183,18 @@ static enum pkt_hash_types
 iecm_ptype_to_htype(struct iecm_vport *vport, u16 ptype)
 {
-	/* stub */
+	struct iecm_rx_ptype_decoded decoded = vport->rx_ptype_lkup[ptype];
+
+	if (!decoded.known)
+		return PKT_HASH_TYPE_NONE;
+	if (decoded.payload_layer == IECM_RX_PTYPE_PAYLOAD_LAYER_PAY4)
+		return PKT_HASH_TYPE_L4;
+	if (decoded.payload_layer == IECM_RX_PTYPE_PAYLOAD_LAYER_PAY3)
+		return PKT_HASH_TYPE_L3;
+	if (decoded.outer_ip == IECM_RX_PTYPE_OUTER_L2)
+		return PKT_HASH_TYPE_L2;
+
+	return PKT_HASH_TYPE_NONE;
 }
 
 /**
@@ -1543,7 +2208,17 @@ static void
 iecm_rx_hash(struct iecm_queue *rxq, struct sk_buff *skb,
	     struct iecm_flex_rx_desc *rx_desc, u16 ptype)
 {
-	/* stub */
+	u32 hash;
+
+	if (!iecm_is_feature_ena(rxq->vport, NETIF_F_RXHASH))
+		return;
+
+	hash = rx_desc->status_err1 |
+	       (rx_desc->fflags1 << 8) |
+	       (rx_desc->ts_low << 16) |
+	       (rx_desc->ff2_mirrid_hash2.hash2 << 24);
+
+	skb_set_hash(skb, hash, iecm_ptype_to_htype(rxq->vport, ptype));
 }
 
 /**
@@ -1559,7 +2234,63 @@ static void
 iecm_rx_csum(struct iecm_queue *rxq, struct sk_buff *skb,
	     struct iecm_flex_rx_desc *rx_desc, u16 ptype)
 {
-	/* stub */
+	struct iecm_rx_ptype_decoded decoded;
+	u8 rx_status_0_qw1, rx_status_0_qw0;
+	bool ipv4, ipv6;
+
+	/* Start with CHECKSUM_NONE and by default csum_level = 0 */
+	skb->ip_summed = CHECKSUM_NONE;
+
+	/* check if Rx checksum is enabled */
+	if (!iecm_is_feature_ena(rxq->vport, NETIF_F_RXCSUM))
+		return;
+
+	rx_status_0_qw1 = rx_desc->status_err0_qw1;
+	/* check if HW has decoded the packet and checksum */
+	if (!(rx_status_0_qw1 & BIT(IECM_RX_FLEX_DESC_STATUS0_L3L4P_S)))
+		return;
+
+	decoded = rxq->vport->rx_ptype_lkup[ptype];
+	if (!(decoded.known && decoded.outer_ip))
+		return;
+
+	ipv4 = (decoded.outer_ip == IECM_RX_PTYPE_OUTER_IP) &&
+	       (decoded.outer_ip_ver == IECM_RX_PTYPE_OUTER_IPV4);
+	ipv6 = (decoded.outer_ip == IECM_RX_PTYPE_OUTER_IP) &&
+	       (decoded.outer_ip_ver == IECM_RX_PTYPE_OUTER_IPV6);
+
+	if (ipv4 && (rx_status_0_qw1 &
+		     (BIT(IECM_RX_FLEX_DESC_STATUS0_XSUM_IPE_S) |
+		      BIT(IECM_RX_FLEX_DESC_STATUS0_XSUM_EIPE_S))))
+		goto checksum_fail;
+
+	rx_status_0_qw0 = rx_desc->status_err0_qw0;
+	if (ipv6 && (rx_status_0_qw0 &
+		     (BIT(IECM_RX_FLEX_DESC_STATUS0_IPV6EXADD_S))))
+		return;
+
+	/* check for L4 errors and handle packets that were not able to be
+	 * checksummed
+	 */
+	if (rx_status_0_qw1 & BIT(IECM_RX_FLEX_DESC_STATUS0_XSUM_L4E_S))
+		goto checksum_fail;
+
+	/* Only report checksum unnecessary for ICMP, TCP, UDP, or SCTP */
+	switch (decoded.inner_prot) {
+	case IECM_RX_PTYPE_INNER_PROT_ICMP:
+	case IECM_RX_PTYPE_INNER_PROT_TCP:
+	case IECM_RX_PTYPE_INNER_PROT_UDP:
+	case IECM_RX_PTYPE_INNER_PROT_SCTP:
+		skb->ip_summed = CHECKSUM_UNNECESSARY;
+		rxq->q_stats.rx.basic_csum++;
+		break;
+	default:
+		break;
+	}
+	return;
+
+checksum_fail:
+	rxq->q_stats.rx.csum_err++;
+	dev_dbg(rxq->dev, "RX Checksum not available\n");
 }
 
 /**
@@ -1575,7 +2306,74 @@ static bool iecm_rx_rsc(struct iecm_queue *rxq, struct sk_buff *skb,
			struct iecm_flex_rx_desc *rx_desc, u16 ptype)
 {
-	/* stub */
+	struct iecm_rx_ptype_decoded decoded;
+	u16 rsc_segments, rsc_payload_len;
+	struct ipv6hdr *ipv6h;
+	struct tcphdr *tcph;
+	struct iphdr *ipv4h;
+	bool ipv4, ipv6;
+	u16 hdr_len;
+
+	rsc_payload_len = le16_to_cpu(rx_desc->fmd1_misc.rscseglen);
+	if (!rsc_payload_len)
+		goto rsc_err;
+
+	decoded = rxq->vport->rx_ptype_lkup[ptype];
+	if (!(decoded.known && decoded.outer_ip))
+		goto rsc_err;
+
+	ipv4 = (decoded.outer_ip == IECM_RX_PTYPE_OUTER_IP) &&
+	       (decoded.outer_ip_ver == IECM_RX_PTYPE_OUTER_IPV4);
+	ipv6 = (decoded.outer_ip == IECM_RX_PTYPE_OUTER_IP) &&
+	       (decoded.outer_ip_ver == IECM_RX_PTYPE_OUTER_IPV6);
+
+	if (!(ipv4 ^ ipv6))
+		goto rsc_err;
+
+	if (ipv4)
+		hdr_len = ETH_HLEN + sizeof(struct tcphdr) +
+			  sizeof(struct iphdr);
+	else
+		hdr_len = ETH_HLEN + sizeof(struct tcphdr) +
+			  sizeof(struct ipv6hdr);
+
+	rsc_segments = DIV_ROUND_UP(skb->len - hdr_len, rsc_payload_len);
+
+	NAPI_GRO_CB(skb)->count = rsc_segments;
+	skb_shinfo(skb)->gso_size = rsc_payload_len;
+
+	skb_reset_network_header(skb);
+
+	if (ipv4) {
+		ipv4h = ip_hdr(skb);
+		skb_shinfo(skb)->gso_type = SKB_GSO_TCPV4;
+
+		/* Reset and set transport header offset in skb */
+		skb_set_transport_header(skb, sizeof(struct iphdr));
+		tcph = tcp_hdr(skb);
+
+		/* Compute the TCP pseudo header checksum */
+		tcph->check =
+			~tcp_v4_check(skb->len - skb_transport_offset(skb),
+				      ipv4h->saddr, ipv4h->daddr, 0);
+	} else {
+		ipv6h = ipv6_hdr(skb);
+		skb_shinfo(skb)->gso_type = SKB_GSO_TCPV6;
+		skb_set_transport_header(skb, sizeof(struct ipv6hdr));
+		tcph = tcp_hdr(skb);
+		tcph->check =
+			~tcp_v6_check(skb->len - skb_transport_offset(skb),
+				      &ipv6h->saddr, &ipv6h->daddr, 0);
+	}
+
+	tcp_gro_complete(skb);
+
+	/* Map Rx qid to the skb */
+	skb_record_rx_queue(skb, rxq->q_id);
+
+	return true;
+rsc_err:
+	return false;
 }
 
 /**
@@ -1587,7 +2385,19 @@ static void
 iecm_rx_hwtstamp(struct iecm_flex_rx_desc *rx_desc,
		 struct sk_buff __maybe_unused *skb)
 {
-	/* stub */
+	u8 ts_lo = rx_desc->ts_low;
+	u32 ts_hi = 0;
+	u64 ts_ns = 0;
+
+	ts_hi = le32_to_cpu(rx_desc->flex_ts.ts_high);
+
+	ts_ns |= ts_lo | ((u64)ts_hi << 8);
+
+	if (ts_ns) {
+		memset(skb_hwtstamps(skb), 0,
+		       sizeof(struct skb_shared_hwtstamps));
+		skb_hwtstamps(skb)->hwtstamp = ns_to_ktime(ts_ns);
+	}
 }
 
 /**
@@ -1604,7 +2414,26 @@ static bool
 iecm_rx_process_skb_fields(struct iecm_queue *rxq, struct sk_buff *skb,
			   struct iecm_flex_rx_desc *rx_desc)
 {
-	/* stub */
+	bool err = false;
+	u16 rx_ptype;
+	bool rsc;
+
+	rx_ptype = le16_to_cpu(rx_desc->ptype_err_fflags0) &
+		   IECM_RXD_FLEX_PTYPE_M;
+
+	/* modifies the skb - consumes the enet header */
+	skb->protocol = eth_type_trans(skb, rxq->vport->netdev);
+	iecm_rx_csum(rxq, skb, rx_desc, rx_ptype);
+	/* process RSS/hash */
+	iecm_rx_hash(rxq, skb, rx_desc, rx_ptype);
+
+	rsc = le16_to_cpu(rx_desc->hdrlen_flags) & IECM_RXD_FLEX_RSC_M;
+	if (rsc)
+		err = iecm_rx_rsc(rxq, skb, rx_desc, rx_ptype);
+
+	iecm_rx_hwtstamp(rx_desc, skb);
+
+	return err;
 }
 
 /**
@@ -1617,7 +2446,7 @@
iecm_rx_process_skb_fields(struct iecm_queue *rxq, struct sk_buff *skb, */ void iecm_rx_skb(struct iecm_queue *rxq, struct sk_buff *skb) { - /* stub */ + napi_gro_receive(&rxq->q_vector->napi, skb); } /** @@ -1626,7 +2455,7 @@ void iecm_rx_skb(struct iecm_queue *rxq, struct sk_buff *skb) */ static bool iecm_rx_page_is_reserved(struct page *page) { - /* stub */ + return (page_to_nid(page) != numa_mem_id()) || page_is_pfmemalloc(page); } /** @@ -1642,7 +2471,13 @@ static bool iecm_rx_page_is_reserved(struct page *page) static void iecm_rx_buf_adjust_pg_offset(struct iecm_rx_buf *rx_buf, unsigned int size) { - /* stub */ +#if (PAGE_SIZE < 8192) + /* flip page offset to other buffer */ + rx_buf->page_offset ^= size; +#else + /* move offset up to the next cache line */ + rx_buf->page_offset += size; +#endif } /** @@ -1656,7 +2491,34 @@ iecm_rx_buf_adjust_pg_offset(struct iecm_rx_buf *rx_buf, unsigned int size) */ static bool iecm_rx_can_reuse_page(struct iecm_rx_buf *rx_buf) { - /* stub */ +#if (PAGE_SIZE >= 8192) +#endif + unsigned int pagecnt_bias = rx_buf->pagecnt_bias; + struct page *page = rx_buf->page; + + /* avoid re-using remote pages */ + if (unlikely(iecm_rx_page_is_reserved(page))) + return false; + +#if (PAGE_SIZE < 8192) + /* if we are only owner of page we can reuse it */ + if (unlikely((page_count(page) - pagecnt_bias) > 1)) + return false; +#else + if (rx_buf->page_offset > last_offset) + return false; +#endif /* PAGE_SIZE < 8192) */ + + /* If we have drained the page fragment pool we need to update + * the pagecnt_bias and page count so that we fully restock the + * number of references the driver holds. 
+ */ + if (unlikely(pagecnt_bias == 1)) { + page_ref_add(page, USHRT_MAX - 1); + rx_buf->pagecnt_bias = USHRT_MAX; + } + + return true; } /** @@ -1672,7 +2534,17 @@ static bool iecm_rx_can_reuse_page(struct iecm_rx_buf *rx_buf) void iecm_rx_add_frag(struct iecm_rx_buf *rx_buf, struct sk_buff *skb, unsigned int size) { - /* stub */ +#if (PAGE_SIZE >= 8192) + unsigned int truesize = SKB_DATA_ALIGN(size); +#else + unsigned int truesize = IECM_RX_BUF_2048; +#endif + + skb_add_rx_frag(skb, skb_shinfo(skb)->nr_frags, rx_buf->page, + rx_buf->page_offset, size, truesize); + + /* page is being used so we must update the page offset */ + iecm_rx_buf_adjust_pg_offset(rx_buf, truesize); } /** @@ -1687,7 +2559,22 @@ void iecm_rx_reuse_page(struct iecm_queue *rx_bufq, bool hsplit, struct iecm_rx_buf *old_buf) { - /* stub */ + u16 ntu = rx_bufq->next_to_use; + struct iecm_rx_buf *new_buf; + + if (hsplit) + new_buf = &rx_bufq->rx_buf.hdr_buf[ntu]; + else + new_buf = &rx_bufq->rx_buf.buf[ntu]; + + /* Transfer page from old buffer to new buffer. + * Move each member individually to avoid possible store + * forwarding stalls and unnecessary copy of skb. 
+ */ + new_buf->dma = old_buf->dma; + new_buf->page = old_buf->page; + new_buf->page_offset = old_buf->page_offset; + new_buf->pagecnt_bias = old_buf->pagecnt_bias; } /** @@ -1704,7 +2591,15 @@ static void iecm_rx_get_buf_page(struct device *dev, struct iecm_rx_buf *rx_buf, const unsigned int size) { - /* stub */ + prefetch(rx_buf->page); + + /* we are reusing so sync this buffer for CPU use */ + dma_sync_single_range_for_cpu(dev, rx_buf->dma, + rx_buf->page_offset, size, + DMA_FROM_DEVICE); + + /* We have pulled a buffer for use, so decrement pagecnt_bias */ + rx_buf->pagecnt_bias--; } /** @@ -1721,7 +2616,52 @@ struct sk_buff * iecm_rx_construct_skb(struct iecm_queue *rxq, struct iecm_rx_buf *rx_buf, unsigned int size) { - /* stub */ + void *va = page_address(rx_buf->page) + rx_buf->page_offset; + unsigned int headlen; + struct sk_buff *skb; + + /* prefetch first cache line of first page */ + prefetch(va); +#if L1_CACHE_BYTES < 128 + prefetch((u8 *)va + L1_CACHE_BYTES); +#endif /* L1_CACHE_BYTES */ + /* allocate a skb to store the frags */ + skb = __napi_alloc_skb(&rxq->q_vector->napi, IECM_RX_HDR_SIZE, + GFP_ATOMIC | __GFP_NOWARN); + if (unlikely(!skb)) + return NULL; + + skb_record_rx_queue(skb, rxq->idx); + + /* Determine available headroom for copy */ + headlen = size; + if (headlen > IECM_RX_HDR_SIZE) + headlen = eth_get_headlen(skb->dev, va, IECM_RX_HDR_SIZE); + + /* align pull length to size of long to optimize memcpy performance */ + memcpy(__skb_put(skb, headlen), va, ALIGN(headlen, sizeof(long))); + + /* if we exhaust the linear part then add what is left as a frag */ + size -= headlen; + if (size) { +#if (PAGE_SIZE >= 8192) + unsigned int truesize = SKB_DATA_ALIGN(size); +#else + unsigned int truesize = IECM_RX_BUF_2048; +#endif + skb_add_rx_frag(skb, 0, rx_buf->page, + rx_buf->page_offset + headlen, size, truesize); + /* buffer is used by skb, update page_offset */ + iecm_rx_buf_adjust_pg_offset(rx_buf, truesize); + } else { + /* buffer is unused, 
reset bias back to rx_buf; data was copied + * onto skb's linear part so there's no need for adjusting + * page offset and we can reuse this buffer as-is + */ + rx_buf->pagecnt_bias++; + } + + return skb; } /** @@ -1738,7 +2678,11 @@ iecm_rx_construct_skb(struct iecm_queue *rxq, struct iecm_rx_buf *rx_buf, */ bool iecm_rx_cleanup_headers(struct sk_buff *skb) { - /* stub */ + /* if eth_skb_pad returns an error the skb was freed */ + if (eth_skb_pad(skb)) + return true; + + return false; } /** @@ -1751,7 +2695,7 @@ bool iecm_rx_cleanup_headers(struct sk_buff *skb) static bool iecm_rx_splitq_test_staterr(u8 stat_err_field, const u8 stat_err_bits) { - /* stub */ + return !!(stat_err_field & stat_err_bits); } /** @@ -1764,7 +2708,13 @@ iecm_rx_splitq_test_staterr(u8 stat_err_field, const u8 stat_err_bits) static bool iecm_rx_splitq_is_non_eop(struct iecm_flex_rx_desc *rx_desc) { - /* stub */ + /* if we are the last buffer then there is nothing else to do */ +#define IECM_RXD_EOF BIT(IECM_RX_FLEX_DESC_STATUS0_EOF_S) + if (likely(iecm_rx_splitq_test_staterr(rx_desc->status_err0_qw1, + IECM_RXD_EOF))) + return false; + + return true; } /** @@ -1781,7 +2731,24 @@ iecm_rx_splitq_is_non_eop(struct iecm_flex_rx_desc *rx_desc) bool iecm_rx_recycle_buf(struct iecm_queue *rx_bufq, bool hsplit, struct iecm_rx_buf *rx_buf) { - /* stub */ + bool recycled = false; + + if (iecm_rx_can_reuse_page(rx_buf)) { + /* hand second half of page back to the queue */ + iecm_rx_reuse_page(rx_bufq, hsplit, rx_buf); + recycled = true; + } else { + /* we are not reusing the buffer so unmap it */ + dma_unmap_page_attrs(rx_bufq->dev, rx_buf->dma, PAGE_SIZE, + DMA_FROM_DEVICE, IECM_RX_DMA_ATTR); + __page_frag_cache_drain(rx_buf->page, rx_buf->pagecnt_bias); + } + + /* clear contents of buffer_info */ + rx_buf->page = NULL; + rx_buf->skb = NULL; + + return recycled; } /** @@ -1797,7 +2764,19 @@ static void iecm_rx_splitq_put_bufs(struct iecm_queue *rx_bufq, struct iecm_rx_buf *hdr_buf, struct 
iecm_rx_buf *rx_buf) { - /* stub */ + u16 ntu = rx_bufq->next_to_use; + bool recycled = false; + + if (likely(hdr_buf)) + recycled = iecm_rx_recycle_buf(rx_bufq, true, hdr_buf); + if (likely(rx_buf)) + recycled = iecm_rx_recycle_buf(rx_bufq, false, rx_buf); + + /* update, and store next to alloc if the buffer was recycled */ + if (recycled) { + ntu++; + rx_bufq->next_to_use = (ntu < rx_bufq->desc_count) ? ntu : 0; + } } /** @@ -1806,7 +2785,14 @@ static void iecm_rx_splitq_put_bufs(struct iecm_queue *rx_bufq, */ static void iecm_rx_bump_ntc(struct iecm_queue *q) { - /* stub */ + u16 ntc = q->next_to_clean + 1; + /* fetch, update, and store next to clean */ + if (ntc < q->desc_count) { + q->next_to_clean = ntc; + } else { + q->next_to_clean = 0; + change_bit(__IECM_Q_GEN_CHK, q->flags); + } } /** @@ -1823,7 +2809,158 @@ static void iecm_rx_bump_ntc(struct iecm_queue *q) */ static int iecm_rx_splitq_clean(struct iecm_queue *rxq, int budget) { - /* stub */ + unsigned int total_rx_bytes = 0, total_rx_pkts = 0; + u16 cleaned_count[IECM_BUFQS_PER_RXQ_SET] = {0}; + struct iecm_queue *rx_bufq = NULL; + struct sk_buff *skb = rxq->skb; + bool failure = false; + int i; + + /* Process Rx packets bounded by budget */ + while (likely(total_rx_pkts < (unsigned int)budget)) { + struct iecm_flex_rx_desc *splitq_flex_rx_desc; + union iecm_rx_desc *rx_desc; + struct iecm_rx_buf *hdr_buf = NULL; + struct iecm_rx_buf *rx_buf = NULL; + unsigned int pkt_len = 0; + unsigned int hdr_len = 0; + u16 gen_id, buf_id; + u8 stat_err0_qw0; + u8 stat_err_bits; + /* Header buffer overflow only valid for header split */ + bool hbo = false; + int bufq_id; + + /* get the Rx desc from Rx queue based on 'next_to_clean' */ + rx_desc = IECM_RX_DESC(rxq, rxq->next_to_clean); + splitq_flex_rx_desc = (struct iecm_flex_rx_desc *)rx_desc; + + /* This memory barrier is needed to keep us from reading + * any other fields out of the rx_desc + */ + dma_rmb(); + + /* if the descriptor isn't done, no work yet to do 
*/ + gen_id = le16_to_cpu(splitq_flex_rx_desc->pktlen_gen_bufq_id); + gen_id = (gen_id & IECM_RXD_FLEX_GEN_M) >> IECM_RXD_FLEX_GEN_S; + if (test_bit(__IECM_Q_GEN_CHK, rxq->flags) != gen_id) + break; + + pkt_len = le16_to_cpu(splitq_flex_rx_desc->pktlen_gen_bufq_id) & + IECM_RXD_FLEX_LEN_PBUF_M; + + hbo = splitq_flex_rx_desc->status_err0_qw1 & + BIT(IECM_RX_FLEX_DESC_STATUS0_HBO_S); + + if (unlikely(hbo)) { + rxq->q_stats.rx.hsplit_hbo++; + goto bypass_hsplit; + } + + hdr_len = + le16_to_cpu(splitq_flex_rx_desc->hdrlen_flags) & + IECM_RXD_FLEX_LEN_HDR_M; + +bypass_hsplit: + bufq_id = le16_to_cpu(splitq_flex_rx_desc->pktlen_gen_bufq_id); + bufq_id = (bufq_id & IECM_RXD_FLEX_BUFQ_ID_M) >> + IECM_RXD_FLEX_BUFQ_ID_S; + /* retrieve buffer from the rxq */ + rx_bufq = &rxq->rxq_grp->splitq.bufq_sets[bufq_id].bufq; + + buf_id = le16_to_cpu(splitq_flex_rx_desc->fmd0_bufid.buf_id); + + if (pkt_len) { + rx_buf = &rx_bufq->rx_buf.buf[buf_id]; + iecm_rx_get_buf_page(rx_bufq->dev, rx_buf, pkt_len); + } + + if (hdr_len) { + hdr_buf = &rx_bufq->rx_buf.hdr_buf[buf_id]; + iecm_rx_get_buf_page(rx_bufq->dev, hdr_buf, + hdr_len); + + skb = iecm_rx_construct_skb(rxq, hdr_buf, hdr_len); + } + + if (skb && pkt_len) + iecm_rx_add_frag(rx_buf, skb, pkt_len); + else if (pkt_len) + skb = iecm_rx_construct_skb(rxq, rx_buf, pkt_len); + + /* exit if we failed to retrieve a buffer */ + if (!skb) { + /* If we fetched a buffer, but didn't use it + * undo pagecnt_bias decrement + */ + if (rx_buf) + rx_buf->pagecnt_bias++; + break; + } + + iecm_rx_splitq_put_bufs(rx_bufq, hdr_buf, rx_buf); + iecm_rx_bump_ntc(rxq); + cleaned_count[bufq_id]++; + + /* skip if it is non EOP desc */ + if (iecm_rx_splitq_is_non_eop(splitq_flex_rx_desc)) + continue; + + stat_err_bits = BIT(IECM_RX_FLEX_DESC_STATUS0_RXE_S); + stat_err0_qw0 = splitq_flex_rx_desc->status_err0_qw0; + if (unlikely(iecm_rx_splitq_test_staterr(stat_err0_qw0, + stat_err_bits))) { + dev_kfree_skb_any(skb); + skb = NULL; + continue; + } + + /* correct 
empty headers and pad skb if needed (to make a valid + * Ethernet frame) + */ + if (iecm_rx_cleanup_headers(skb)) { + skb = NULL; + continue; + } + + /* probably a little skewed due to removing CRC */ + total_rx_bytes += skb->len; + + /* protocol */ + if (unlikely(iecm_rx_process_skb_fields(rxq, skb, + splitq_flex_rx_desc))) { + dev_kfree_skb_any(skb); + skb = NULL; + continue; + } + + /* send completed skb up the stack */ + iecm_rx_skb(rxq, skb); + skb = NULL; + + /* update budget accounting */ + total_rx_pkts++; + } + for (i = 0; i < IECM_BUFQS_PER_RXQ_SET; i++) { + if (cleaned_count[i]) { + rx_bufq = &rxq->rxq_grp->splitq.bufq_sets[i].bufq; + failure = iecm_rx_buf_hw_alloc_all(rx_bufq, + cleaned_count[i]) || + failure; + } + } + + rxq->skb = skb; + u64_stats_update_begin(&rxq->stats_sync); + rxq->q_stats.rx.packets += total_rx_pkts; + rxq->q_stats.rx.bytes += total_rx_bytes; + u64_stats_update_end(&rxq->stats_sync); + + rxq->itr.stats.rx.packets += total_rx_pkts; + rxq->itr.stats.rx.bytes += total_rx_bytes; + + /* guarantee a trip back through this routine if there was a failure */ + return failure ?
budget : (int)total_rx_pkts; } /** @@ -2379,7 +3516,15 @@ iecm_vport_intr_napi_ena_all(struct iecm_vport *vport) static inline bool iecm_tx_splitq_clean_all(struct iecm_q_vector *q_vec, int budget) { - /* stub */ + bool clean_complete = true; + int i, budget_per_q; + + budget_per_q = max(budget / q_vec->num_txq, 1); + for (i = 0; i < q_vec->num_txq; i++) { + if (!iecm_tx_clean_complq(q_vec->tx[i], budget_per_q)) + clean_complete = false; + } + return clean_complete; } /** @@ -2394,7 +3539,22 @@ static inline bool iecm_rx_splitq_clean_all(struct iecm_q_vector *q_vec, int budget, int *cleaned) { - /* stub */ + bool clean_complete = true; + int pkts_cleaned_per_q; + int i, budget_per_q; + + budget_per_q = max(budget / q_vec->num_rxq, 1); + for (i = 0; i < q_vec->num_rxq; i++) { + pkts_cleaned_per_q = iecm_rx_splitq_clean(q_vec->rx[i], + budget_per_q); + /* if we clean as many as budgeted, we must not + * be done + */ + if (pkts_cleaned_per_q >= budget_per_q) + clean_complete = false; + *cleaned += pkts_cleaned_per_q; + } + return clean_complete; } /** @@ -2404,7 +3564,34 @@ iecm_rx_splitq_clean_all(struct iecm_q_vector *q_vec, int budget, */ static int iecm_vport_splitq_napi_poll(struct napi_struct *napi, int budget) { - /* stub */ + struct iecm_q_vector *q_vector = + container_of(napi, struct iecm_q_vector, napi); + bool clean_complete; + int work_done = 0; + + clean_complete = iecm_tx_splitq_clean_all(q_vector, budget); + + /* Handle case where we are called by netpoll with a budget of 0 */ + if (budget <= 0) + return budget; + + /* We attempt to distribute budget to each Rx queue fairly, but don't + * allow the budget to go below 1 because that would exit polling early. 
+ */ + clean_complete &= iecm_rx_splitq_clean_all(q_vector, budget, + &work_done); + + /* If work not completed, return budget and polling will return */ + if (!clean_complete) + return budget; + + /* Exit the polling mode, but don't re-enable interrupts if stack might + * poll us due to busy-polling + */ + if (likely(napi_complete_done(napi, work_done))) + iecm_vport_intr_update_itr_ena_irq(q_vector); + + return min_t(int, work_done, budget - 1); } /** From patchwork Tue Jun 23 22:40:42 2020 X-Patchwork-Submitter: Jeff Kirsher X-Patchwork-Id: 217257 From: Jeff Kirsher To: davem@davemloft.net Cc: Alice Michael , netdev@vger.kernel.org, nhorman@redhat.com, sassmann@redhat.com, Alan Brady , Phani Burra , Joshua Hay , Madhu Chittim , Pavan Kumar Linga , Donald Skidmore , Jesse Brandeburg , Sridhar Samudrala , Jeff Kirsher Subject: [net-next v2 14/15] iecm: Add iecm to the kernel build system Date: Tue, 23 Jun 2020 15:40:42 -0700 Message-Id: <20200623224043.801728-15-jeffrey.t.kirsher@intel.com> In-Reply-To: <20200623224043.801728-1-jeffrey.t.kirsher@intel.com> References: <20200623224043.801728-1-jeffrey.t.kirsher@intel.com> From: Alice Michael This introduces iecm as a module to the kernel and adds the relevant documentation.
Signed-off-by: Alice Michael Signed-off-by: Alan Brady Signed-off-by: Phani Burra Signed-off-by: Joshua Hay Signed-off-by: Madhu Chittim Signed-off-by: Pavan Kumar Linga Reviewed-by: Donald Skidmore Reviewed-by: Jesse Brandeburg Reviewed-by: Sridhar Samudrala Signed-off-by: Jeff Kirsher --- .../networking/device_drivers/intel/iecm.rst | 93 +++++++++++++++++++ MAINTAINERS | 2 + drivers/net/ethernet/intel/Kconfig | 7 ++ drivers/net/ethernet/intel/Makefile | 1 + drivers/net/ethernet/intel/iecm/Makefile | 19 ++++ 5 files changed, 122 insertions(+) create mode 100644 Documentation/networking/device_drivers/intel/iecm.rst create mode 100644 drivers/net/ethernet/intel/iecm/Makefile diff --git a/Documentation/networking/device_drivers/intel/iecm.rst b/Documentation/networking/device_drivers/intel/iecm.rst new file mode 100644 index 000000000000..5634e3e65c74 --- /dev/null +++ b/Documentation/networking/device_drivers/intel/iecm.rst @@ -0,0 +1,93 @@ +.. SPDX-License-Identifier: GPL-2.0 + +============================ +Intel Ethernet Common Module +============================ + +The Intel Ethernet Common Module is meant to serve as an abstraction layer +between device-specific implementation details and common net device driver +flows. This library provides several function hooks which allow a device driver +to specify register addresses, control queue communications, and other +device-specific functionality. Some functions are required to be implemented while +others have a default implementation that is used when none is supplied by the +device driver. This lets a device driver take advantage of existing code while +retaining the flexibility to implement device-specific features. + +The common use case for this library is a network device driver that wants +to specify its own device-specific details but also leverage the more common +code flows found in network device drivers.
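The required-versus-optional hook scheme described above can be sketched in plain userspace C. This is an illustrative model only — the demo_* names are invented and are not the iecm API: a required hook with no sensible default fails the probe, while a missing optional hook is replaced by a built-in default.

```c
#include <assert.h>
#include <stddef.h>

/* Hypothetical userspace model of the hook scheme described above.
 * reg_ops_init stands in for a required hook (no sane default exists
 * for register addresses), vc_ops_init for an optional one that falls
 * back to a built-in default.
 */
struct demo_adapter {
	void (*reg_ops_init)(struct demo_adapter *ad); /* required */
	void (*vc_ops_init)(struct demo_adapter *ad);  /* optional */
	int reg_base;
	int vc_kind;
};

static void demo_default_vc_init(struct demo_adapter *ad)
{
	ad->vc_kind = 7; /* mark that the built-in mailbox flow was chosen */
}

/* Common-module probe: fail without the required hook, fall back to
 * the default for the optional one, then run both.
 */
static int demo_probe(struct demo_adapter *ad)
{
	if (!ad->reg_ops_init)
		return -1;
	if (!ad->vc_ops_init)
		ad->vc_ops_init = demo_default_vc_init;

	ad->reg_ops_init(ad);
	ad->vc_ops_init(ad);
	return 0;
}

/* A device driver supplies only what it must. */
static void demo_drv_reg_init(struct demo_adapter *ad)
{
	ad->reg_base = 0x1000; /* made-up register base */
}
```

The same shape appears later in this series: idpf supplies reg_ops_init through iecm_dev_ops, while iecm falls back to its own virtchnl interface when vc_ops_init is absent.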
+ +Sections in this document: + Entry Point + Exit Point + Register Operations API + Virtchnl Operations API + +Entry Point +~~~~~~~~~~~ +The primary entry point to the library is the iecm_probe function. Prior to +calling this, device drivers must have allocated an iecm_adapter struct and +initialized it with the required API functions. The adapter struct, along with +the pci_dev struct and the pci_device_id struct, is provided to iecm_probe, +which finalizes device initialization and prepares the device for open. + +The iecm_dev_ops struct within the iecm_adapter struct is the primary vehicle +for passing information from device drivers to the common module. A dependent +module must define and assign a reg_ops_init function which will assign the +respective function pointers to initialize register values (see the iecm_reg_ops +struct). These must be provided by the dependent device driver, as +no suitable default can be assumed for register addresses. + +The vc_ops_init function pointer and the related iecm_virtchnl_ops struct are +optional and should only be necessary for device drivers which use a different +method/timing for communicating across a mailbox to the hardware. iecm +provides a default interface that is used when the device driver does not +supply one. + +Exit Point +~~~~~~~~~~ +When the device driver is being removed through the pci_driver +remove callback, it must call iecm_remove with +the pci_dev struct provided. This ensures all resources are +properly freed and returned to the operating system. + +Register Operations API +~~~~~~~~~~~~~~~~~~~~~~~ +iecm_reg_ops contains three different function pointers relating to initializing +registers for the specific net device using the library. + +ctlq_reg_init relates specifically to setting up registers related to control +queue/mailbox communications.
Registers that should be defined include: head, +tail, len, bah, bal, len_mask, len_ena_mask, and head_mask. + +vportq_reg_init relates to setting up queue registers. The tail registers are +assigned to the iecm_queue struct for each RX/TX queue. + +intr_reg_init relates to any registers needed to set up interrupts. These +include registers needed to enable the interrupt and change ITR settings. + +If the initialization function finds that one or more required function +pointers were not provided, an error will be issued and the device will be +inoperable. + + +Virtchnl Operations API +~~~~~~~~~~~~~~~~~~~~~~~ +The virtchnl is a conduit between driver and hardware that allows device +drivers to send and receive control messages. Specifying these operations is +optional, as the library assumes a general interface by default. However, if a +device must deviate from that interface to communicate across the mailbox +correctly, these hooks allow it to do so. + +If vc_ops_init is set in the dev_ops field of the iecm_adapter struct, then it +is assumed the device driver is providing its own interface for virtchnl +communications. While providing vc_ops_init is optional, if it is +provided, the device driver must provide function pointers for the +functions in vc_ops, with the exception of the enable_vport, disable_vport, +and destroy_vport functions, which may not be required for all devices. + +If the initialization function finds that vc_ops_init was defined but one or +more required function pointers were not provided, an error will be issued and +the device will be inoperable.
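The validation rule in the last two paragraphs can be modeled with a short userspace sketch (hypothetical demo_* names, not the real vc_ops layout): when the driver opts in with its own virtchnl table, required entries must be non-NULL, while the named vport hooks may legitimately be left empty.

```c
#include <assert.h>
#include <stddef.h>

/* Hypothetical model of the vc_ops rule described above; the names
 * are invented. Two hooks stand in for the required entries, two for
 * the optional vport hooks the text calls out.
 */
struct demo_vc_ops {
	int (*core_init)(void);     /* required */
	int (*get_caps)(void);      /* required */
	int (*enable_vport)(void);  /* optional */
	int (*disable_vport)(void); /* optional */
};

/* Returns 0 if the table is usable, -1 if a required hook is missing,
 * mirroring "the device will be inoperable".
 */
static int demo_vc_ops_check(const struct demo_vc_ops *ops)
{
	if (!ops->core_init || !ops->get_caps)
		return -1;
	return 0; /* optional vport hooks may legitimately be NULL */
}

static int demo_core_init(void) { return 0; }
static int demo_get_caps(void) { return 3; /* fake capability bits */ }
```

The check runs once at init time, so a misconfigured dependent driver fails loudly up front instead of crashing on a NULL call later.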
diff --git a/MAINTAINERS b/MAINTAINERS index 301330e02bca..102ee1e4aef0 100644 --- a/MAINTAINERS +++ b/MAINTAINERS @@ -8654,6 +8654,7 @@ F: Documentation/networking/device_drivers/intel/fm10k.rst F: Documentation/networking/device_drivers/intel/i40e.rst F: Documentation/networking/device_drivers/intel/iavf.rst F: Documentation/networking/device_drivers/intel/ice.rst +F: Documentation/networking/device_drivers/intel/iecm.rst F: Documentation/networking/device_drivers/intel/igb.rst F: Documentation/networking/device_drivers/intel/igbvf.rst F: Documentation/networking/device_drivers/intel/ixgb.rst @@ -8662,6 +8663,7 @@ F: Documentation/networking/device_drivers/intel/ixgbevf.rst F: drivers/net/ethernet/intel/ F: drivers/net/ethernet/intel/*/ F: include/linux/avf/virtchnl.h +F: include/linux/net/intel/ INTEL FRAMEBUFFER DRIVER (excluding 810 and 815) M: Maik Broemme diff --git a/drivers/net/ethernet/intel/Kconfig b/drivers/net/ethernet/intel/Kconfig index 48a8f9aa1dd0..6dd985cbdb6d 100644 --- a/drivers/net/ethernet/intel/Kconfig +++ b/drivers/net/ethernet/intel/Kconfig @@ -342,4 +342,11 @@ config IGC To compile this driver as a module, choose M here. The module will be called igc. +config IECM + tristate "Intel(R) Ethernet Common Module Support" + default n + depends on PCI + help + To compile this driver as a module, choose M here. The module + will be called iecm. 
MSI-X interrupt support is required. endif # NET_VENDOR_INTEL diff --git a/drivers/net/ethernet/intel/Makefile b/drivers/net/ethernet/intel/Makefile index 3075290063f6..c9eba9cc5087 100644 --- a/drivers/net/ethernet/intel/Makefile +++ b/drivers/net/ethernet/intel/Makefile @@ -16,3 +16,4 @@ obj-$(CONFIG_IXGB) += ixgb/ obj-$(CONFIG_IAVF) += iavf/ obj-$(CONFIG_FM10K) += fm10k/ obj-$(CONFIG_ICE) += ice/ +obj-$(CONFIG_IECM) += iecm/ diff --git a/drivers/net/ethernet/intel/iecm/Makefile b/drivers/net/ethernet/intel/iecm/Makefile new file mode 100644 index 000000000000..61c814013582 --- /dev/null +++ b/drivers/net/ethernet/intel/iecm/Makefile @@ -0,0 +1,19 @@ +# SPDX-License-Identifier: GPL-2.0-only +# Copyright (C) 2020 Intel Corporation + +# +# Makefile for the Intel(R) Ethernet Common Module +# + +obj-$(CONFIG_IECM) += iecm.o + +iecm-y := \ + iecm_lib.o \ + iecm_virtchnl.o \ + iecm_txrx.o \ + iecm_singleq_txrx.o \ + iecm_ethtool.o \ + iecm_controlq.o \ + iecm_osdep.o \ + iecm_controlq_setup.o \ + iecm_main.o From patchwork Tue Jun 23 22:40:43 2020 X-Patchwork-Submitter: Jeff Kirsher X-Patchwork-Id: 217256 From: Jeff Kirsher To: davem@davemloft.net Cc: Alan Brady , netdev@vger.kernel.org, nhorman@redhat.com, sassmann@redhat.com, Alice Michael , Phani Burra , Joshua Hay , Madhu Chittim , Pavan Kumar Linga , Donald Skidmore , Jesse Brandeburg , Sridhar Samudrala , kbuild test robot , Jeff Kirsher Subject: [net-next v2 15/15] idpf: Introduce idpf driver Date: Tue, 23 Jun 2020 15:40:43 -0700 Message-Id: <20200623224043.801728-16-jeffrey.t.kirsher@intel.com> In-Reply-To: <20200623224043.801728-1-jeffrey.t.kirsher@intel.com> References: <20200623224043.801728-1-jeffrey.t.kirsher@intel.com> From: Alan Brady This driver utilizes the Intel Ethernet Common Module and provides a device-specific implementation for data plane devices.
Signed-off-by: Alan Brady Signed-off-by: Alice Michael Signed-off-by: Phani Burra Signed-off-by: Joshua Hay Signed-off-by: Madhu Chittim Signed-off-by: Pavan Kumar Linga Reviewed-by: Donald Skidmore Reviewed-by: Jesse Brandeburg Reviewed-by: Sridhar Samudrala Reported-by: kbuild test robot Signed-off-by: Jeff Kirsher --- .../networking/device_drivers/intel/idpf.rst | 47 ++++++ MAINTAINERS | 1 + drivers/net/ethernet/intel/Kconfig | 8 + drivers/net/ethernet/intel/Makefile | 1 + drivers/net/ethernet/intel/idpf/Makefile | 12 ++ drivers/net/ethernet/intel/idpf/idpf_dev.h | 17 ++ drivers/net/ethernet/intel/idpf/idpf_devids.h | 10 ++ drivers/net/ethernet/intel/idpf/idpf_main.c | 136 ++++++++++++++++ drivers/net/ethernet/intel/idpf/idpf_reg.c | 152 ++++++++++++++++++ 9 files changed, 384 insertions(+) create mode 100644 Documentation/networking/device_drivers/intel/idpf.rst create mode 100644 drivers/net/ethernet/intel/idpf/Makefile create mode 100644 drivers/net/ethernet/intel/idpf/idpf_dev.h create mode 100644 drivers/net/ethernet/intel/idpf/idpf_devids.h create mode 100644 drivers/net/ethernet/intel/idpf/idpf_main.c create mode 100644 drivers/net/ethernet/intel/idpf/idpf_reg.c diff --git a/Documentation/networking/device_drivers/intel/idpf.rst b/Documentation/networking/device_drivers/intel/idpf.rst new file mode 100644 index 000000000000..973fa9613428 --- /dev/null +++ b/Documentation/networking/device_drivers/intel/idpf.rst @@ -0,0 +1,47 @@ +.. SPDX-License-Identifier: GPL-2.0 + +====================================================================== +Linux Base Driver for the Intel(R) Smart Network Adapter Family Series +====================================================================== + +Intel idpf Linux driver. +Copyright(c) 2020 Intel Corporation. + +Contents +======== + +- Enabling the driver +- Support + +The driver in this release supports Intel's Smart Network Adapter Family Series +of products.
For more information, visit Intel's support page at +https://support.intel.com. + +Enabling the driver +=================== +The driver is enabled via the standard kernel configuration system, +using the make command:: + + make oldconfig/menuconfig/etc. + +The driver is located in the menu structure at: + + -> Device Drivers + -> Network device support (NETDEVICES [=y]) + -> Ethernet driver support + -> Intel devices + -> Intel(R) Smart Network Adapter Family Series Support + +Support +======= +For general information, go to the Intel support website at: + +https://www.intel.com/support/ + +or the Intel Wired Networking project hosted by Sourceforge at: + +https://sourceforge.net/projects/e1000 + +If an issue is identified with the released source code on a supported kernel +with a supported adapter, email the specific information related to the issue +to e1000-devel@lists.sf.net. diff --git a/MAINTAINERS b/MAINTAINERS index 102ee1e4aef0..97ac0d417067 100644 --- a/MAINTAINERS +++ b/MAINTAINERS @@ -8654,6 +8654,7 @@ F: Documentation/networking/device_drivers/intel/fm10k.rst F: Documentation/networking/device_drivers/intel/i40e.rst F: Documentation/networking/device_drivers/intel/iavf.rst F: Documentation/networking/device_drivers/intel/ice.rst +F: Documentation/networking/device_drivers/intel/idpf.rst F: Documentation/networking/device_drivers/intel/iecm.rst F: Documentation/networking/device_drivers/intel/igb.rst F: Documentation/networking/device_drivers/intel/igbvf.rst diff --git a/drivers/net/ethernet/intel/Kconfig b/drivers/net/ethernet/intel/Kconfig index 6dd985cbdb6d..9e0b3c1bf7c6 100644 --- a/drivers/net/ethernet/intel/Kconfig +++ b/drivers/net/ethernet/intel/Kconfig @@ -349,4 +349,12 @@ config IECM help To compile this driver as a module, choose M here. The module will be called iecm. 
MSI-X interrupt support is required + +config IDPF + tristate "Intel(R) Data Plane Function Support" + default n + depends on PCI + help + To compile this driver as a module, choose M here. The module + will be called idpf. endif # NET_VENDOR_INTEL diff --git a/drivers/net/ethernet/intel/Makefile b/drivers/net/ethernet/intel/Makefile index c9eba9cc5087..3786c2269f3d 100644 --- a/drivers/net/ethernet/intel/Makefile +++ b/drivers/net/ethernet/intel/Makefile @@ -17,3 +17,4 @@ obj-$(CONFIG_IAVF) += iavf/ obj-$(CONFIG_FM10K) += fm10k/ obj-$(CONFIG_ICE) += ice/ obj-$(CONFIG_IECM) += iecm/ +obj-$(CONFIG_IDPF) += idpf/ diff --git a/drivers/net/ethernet/intel/idpf/Makefile b/drivers/net/ethernet/intel/idpf/Makefile new file mode 100644 index 000000000000..ac6cac6c6360 --- /dev/null +++ b/drivers/net/ethernet/intel/idpf/Makefile @@ -0,0 +1,12 @@ +# SPDX-License-Identifier: GPL-2.0-only +# Copyright (C) 2020 Intel Corporation + +# +# Makefile for the Intel(R) Data Plane Function Linux Driver +# + +obj-$(CONFIG_IDPF) += idpf.o + +idpf-y := \ + idpf_main.o \ + idpf_reg.o diff --git a/drivers/net/ethernet/intel/idpf/idpf_dev.h b/drivers/net/ethernet/intel/idpf/idpf_dev.h new file mode 100644 index 000000000000..1da33f5120a2 --- /dev/null +++ b/drivers/net/ethernet/intel/idpf/idpf_dev.h @@ -0,0 +1,17 @@ +/* SPDX-License-Identifier: GPL-2.0-only */ +/* Copyright (C) 2020 Intel Corporation */ + +#ifndef _IDPF_DEV_H_ +#define _IDPF_DEV_H_ + +#include + +void idpf_intr_reg_init(struct iecm_vport *vport); +void idpf_mb_intr_reg_init(struct iecm_adapter *adapter); +void idpf_reset_reg_init(struct iecm_reset_reg *reset_reg); +void idpf_trigger_reset(struct iecm_adapter *adapter, + enum iecm_flags trig_cause); +void idpf_vportq_reg_init(struct iecm_vport *vport); +void idpf_ctlq_reg_init(struct iecm_ctlq_create_info *cq); + +#endif /* _IDPF_DEV_H_ */ diff --git a/drivers/net/ethernet/intel/idpf/idpf_devids.h b/drivers/net/ethernet/intel/idpf/idpf_devids.h new file mode 100644 index 
000000000000..ee373a04cb20 --- /dev/null +++ b/drivers/net/ethernet/intel/idpf/idpf_devids.h @@ -0,0 +1,10 @@ +/* SPDX-License-Identifier: GPL-2.0-only */ +/* Copyright (C) 2020 Intel Corporation */ + +#ifndef _IDPF_DEVIDS_H_ +#define _IDPF_DEVIDS_H_ + +/* Device IDs */ +#define IDPF_DEV_ID_PF 0x1452 + +#endif /* _IDPF_DEVIDS_H_ */ diff --git a/drivers/net/ethernet/intel/idpf/idpf_main.c b/drivers/net/ethernet/intel/idpf/idpf_main.c new file mode 100644 index 000000000000..0290902958ee --- /dev/null +++ b/drivers/net/ethernet/intel/idpf/idpf_main.c @@ -0,0 +1,136 @@ +// SPDX-License-Identifier: GPL-2.0-only +/* Copyright (C) 2020 Intel Corporation */ + +#define pr_fmt(fmt) KBUILD_MODNAME ": " fmt + +#include "idpf_dev.h" +#include "idpf_devids.h" + +#define DRV_SUMMARY "Intel(R) Data Plane Function Linux Driver" +static const char idpf_driver_string[] = DRV_SUMMARY; +static const char idpf_copyright[] = "Copyright (c) 2020, Intel Corporation."; + +MODULE_AUTHOR("Intel Corporation, "); +MODULE_DESCRIPTION(DRV_SUMMARY); +MODULE_LICENSE("GPL v2"); + +int debug = -1; +module_param(debug, int, 0644); +#ifndef CONFIG_DYNAMIC_DEBUG +MODULE_PARM_DESC(debug, "netif level (0=none,...,16=all), hw debug_mask (0x8XXXXXXX)"); +#else +MODULE_PARM_DESC(debug, "netif level (0=none,...,16=all)"); +#endif /* !CONFIG_DYNAMIC_DEBUG */ + +/** + * idpf_reg_ops_init - Initialize register API function pointers + * @adapter: Driver specific private structure + */ +static void idpf_reg_ops_init(struct iecm_adapter *adapter) +{ + adapter->dev_ops.reg_ops.ctlq_reg_init = idpf_ctlq_reg_init; + adapter->dev_ops.reg_ops.vportq_reg_init = idpf_vportq_reg_init; + adapter->dev_ops.reg_ops.intr_reg_init = idpf_intr_reg_init; + adapter->dev_ops.reg_ops.mb_intr_reg_init = idpf_mb_intr_reg_init; + adapter->dev_ops.reg_ops.reset_reg_init = idpf_reset_reg_init; + adapter->dev_ops.reg_ops.trigger_reset = idpf_trigger_reset; +} + +/** + * idpf_probe - Device initialization routine + * @pdev: PCI device 
information struct + * @ent: entry in idpf_pci_tbl + * + * Returns 0 on success, negative on failure + */ +static int idpf_probe(struct pci_dev *pdev, + const struct pci_device_id __always_unused *ent) +{ + struct iecm_adapter *adapter = NULL; + int err; + + err = pcim_enable_device(pdev); + if (err) + return err; + + adapter = kzalloc(sizeof(*adapter), GFP_KERNEL); + if (!adapter) + return -ENOMEM; + + adapter->dev_ops.reg_ops_init = idpf_reg_ops_init; + + err = iecm_probe(pdev, ent, adapter); + if (err) + kfree(adapter); + + return err; +} + +/** + * idpf_remove - Device removal routine + * @pdev: PCI device information struct + */ +static void idpf_remove(struct pci_dev *pdev) +{ + struct iecm_adapter *adapter = pci_get_drvdata(pdev); + + iecm_remove(pdev); + kfree(adapter); +} + +/* idpf_pci_tbl - PCI Dev idpf ID Table + * + * Wildcard entries (PCI_ANY_ID) should come last + * Last entry must be all 0s + * + * { Vendor ID, Device ID, SubVendor ID, SubDevice ID, + * Class, Class Mask, private data (not used) } + */ +static const struct pci_device_id idpf_pci_tbl[] = { + { PCI_VDEVICE(INTEL, IDPF_DEV_ID_PF), 0 }, + /* required last entry */ + { 0, } +}; +MODULE_DEVICE_TABLE(pci, idpf_pci_tbl); + +static struct pci_driver idpf_driver = { + .name = KBUILD_MODNAME, + .id_table = idpf_pci_tbl, + .probe = idpf_probe, + .remove = idpf_remove, + .shutdown = iecm_shutdown, +}; + +/** + * idpf_module_init - Driver registration routine + * + * idpf_module_init is the first routine called when the driver is + * loaded. All it does is register with the PCI subsystem.
+ */ +static int __init idpf_module_init(void) +{ + int status; + + pr_info("%s", idpf_driver_string); + pr_info("%s\n", idpf_copyright); + + status = pci_register_driver(&idpf_driver); + if (status) + pr_err("failed to register pci driver, err %d\n", status); + + return status; +} +module_init(idpf_module_init); + +/** + * idpf_module_exit - Driver exit cleanup routine + * + * idpf_module_exit is called just before the driver is removed + * from memory. + */ +static void __exit idpf_module_exit(void) +{ + pci_unregister_driver(&idpf_driver); + pr_info("module unloaded\n"); +} +module_exit(idpf_module_exit); diff --git a/drivers/net/ethernet/intel/idpf/idpf_reg.c b/drivers/net/ethernet/intel/idpf/idpf_reg.c new file mode 100644 index 000000000000..f5f364639cfc --- /dev/null +++ b/drivers/net/ethernet/intel/idpf/idpf_reg.c @@ -0,0 +1,152 @@ +// SPDX-License-Identifier: GPL-2.0-only +/* Copyright (C) 2020 Intel Corporation */ + +#include "idpf_dev.h" +#include + +/** + * idpf_ctlq_reg_init - initialize default mailbox registers + * @cq: pointer to the array of create control queues + */ +void idpf_ctlq_reg_init(struct iecm_ctlq_create_info *cq) +{ + int i; + +#define NUM_Q 2 + for (i = 0; i < NUM_Q; i++) { + struct iecm_ctlq_create_info *ccq = cq + i; + + switch (ccq->type) { + case IECM_CTLQ_TYPE_MAILBOX_TX: + /* set head and tail registers in our local struct */ + ccq->reg.head = PF_FW_ATQH; + ccq->reg.tail = PF_FW_ATQT; + ccq->reg.len = PF_FW_ATQLEN; + ccq->reg.bah = PF_FW_ATQBAH; + ccq->reg.bal = PF_FW_ATQBAL; + ccq->reg.len_mask = PF_FW_ATQLEN_ATQLEN_M; + ccq->reg.len_ena_mask = PF_FW_ATQLEN_ATQENABLE_M; + ccq->reg.head_mask = PF_FW_ATQH_ATQH_M; + break; + case IECM_CTLQ_TYPE_MAILBOX_RX: + /* set head and tail registers in our local struct */ + ccq->reg.head = PF_FW_ARQH; + ccq->reg.tail = PF_FW_ARQT; + ccq->reg.len = PF_FW_ARQLEN; + ccq->reg.bah = PF_FW_ARQBAH; + ccq->reg.bal = PF_FW_ARQBAL; + ccq->reg.len_mask = PF_FW_ARQLEN_ARQLEN_M; + ccq->reg.len_ena_mask = 
PF_FW_ARQLEN_ARQENABLE_M; + ccq->reg.head_mask = PF_FW_ARQH_ARQH_M; + break; + default: + break; + } + } +} + +/** + * idpf_vportq_reg_init - Initialize tail registers + * @vport: virtual port structure + */ +void idpf_vportq_reg_init(struct iecm_vport *vport) +{ + struct iecm_hw *hw = &vport->adapter->hw; + struct iecm_queue *q; + int i, j; + + for (i = 0; i < vport->num_txq_grp; i++) { + int num_txq = vport->txq_grps[i].num_txq; + + for (j = 0; j < num_txq; j++) { + q = &vport->txq_grps[i].txqs[j]; + q->tail = hw->hw_addr + PF_QTX_COMM_DBELL(q->q_id); + } + } + + for (i = 0; i < vport->num_rxq_grp; i++) { + struct iecm_rxq_group *rxq_grp = &vport->rxq_grps[i]; + int num_rxq; + + if (iecm_is_queue_model_split(vport->rxq_model)) { + for (j = 0; j < IECM_BUFQS_PER_RXQ_SET; j++) { + q = &rxq_grp->splitq.bufq_sets[j].bufq; + q->tail = hw->hw_addr + + PF_QRX_BUFFQ_TAIL(q->q_id); + } + + num_rxq = rxq_grp->splitq.num_rxq_sets; + } else { + num_rxq = rxq_grp->singleq.num_rxq; + } + + for (j = 0; j < num_rxq; j++) { + if (iecm_is_queue_model_split(vport->rxq_model)) + q = &rxq_grp->splitq.rxq_sets[j].rxq; + else + q = &rxq_grp->singleq.rxqs[j]; + q->tail = hw->hw_addr + PF_QRX_TAIL(q->q_id); + } + } +} + +/** + * idpf_mb_intr_reg_init - Initialize mailbox interrupt register + * @adapter: adapter structure + */ +void idpf_mb_intr_reg_init(struct iecm_adapter *adapter) +{ + struct iecm_intr_reg *intr = &adapter->mb_vector.intr_reg; + int vidx; + + vidx = adapter->mb_vector.v_idx; + intr->dyn_ctl = PF_GLINT_DYN_CTL(vidx); + intr->dyn_ctl_intena_m = PF_GLINT_DYN_CTL_INTENA_M; + intr->dyn_ctl_itridx_m = 0x3 << PF_GLINT_DYN_CTL_ITR_INDX_S; +} + +/** + * idpf_intr_reg_init - Initialize interrupt registers + * @vport: virtual port structure + */ +void idpf_intr_reg_init(struct iecm_vport *vport) +{ + int q_idx; + + for (q_idx = 0; q_idx < vport->num_q_vectors; q_idx++) { + struct iecm_q_vector *q_vector = &vport->q_vectors[q_idx]; + struct iecm_intr_reg *intr = 
&q_vector->intr_reg; + u32 vidx = q_vector->v_idx; + + intr->dyn_ctl = PF_GLINT_DYN_CTL(vidx); + intr->dyn_ctl_clrpba_m = PF_GLINT_DYN_CTL_CLEARPBA_M; + intr->dyn_ctl_intena_m = PF_GLINT_DYN_CTL_INTENA_M; + intr->dyn_ctl_itridx_s = PF_GLINT_DYN_CTL_ITR_INDX_S; + intr->dyn_ctl_intrvl_s = PF_GLINT_DYN_CTL_INTERVAL_S; + intr->itr = PF_GLINT_ITR(VIRTCHNL_ITR_IDX_0, vidx); + } +} + +/** + * idpf_reset_reg_init - Initialize reset registers + * @reset_reg: struct to be filled in with reset registers + */ +void idpf_reset_reg_init(struct iecm_reset_reg *reset_reg) +{ + reset_reg->rstat = PFGEN_RSTAT; + reset_reg->rstat_m = PFGEN_RSTAT_PFR_STATE_M; +} + +/** + * idpf_trigger_reset - trigger reset + * @adapter: Driver specific private structure + * @trig_cause: Reason to trigger a reset + */ +void idpf_trigger_reset(struct iecm_adapter *adapter, + enum iecm_flags trig_cause) +{ + u32 reset_reg; + + reset_reg = rd32(&adapter->hw, PFGEN_CTRL); + wr32(&adapter->hw, PFGEN_CTRL, (reset_reg | PFGEN_CTRL_PFSWR)); +}
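idpf_trigger_reset above is a classic read-modify-write of a memory-mapped register. A userspace model of that sequence, with the register file reduced to a plain array and invented DEMO_* names, looks like this:

```c
#include <assert.h>
#include <stdint.h>

/* Userspace model of the rd32()/wr32() read-modify-write sequence in
 * idpf_trigger_reset(): read the control register, OR in the software
 * reset bit, write it back. The "register file" is a plain array here;
 * in the driver it is memory-mapped device I/O. All names and values
 * below are illustrative.
 */
#define DEMO_PFGEN_CTRL 0
#define DEMO_CTRL_PFSWR 0x00000001U

static uint32_t demo_regs[4];

static uint32_t demo_rd32(unsigned int reg)
{
	return demo_regs[reg];
}

static void demo_wr32(unsigned int reg, uint32_t val)
{
	demo_regs[reg] = val;
}

static void demo_trigger_reset(void)
{
	uint32_t ctrl = demo_rd32(DEMO_PFGEN_CTRL);

	/* Preserve existing bits; only raise the reset request bit. */
	demo_wr32(DEMO_PFGEN_CTRL, ctrl | DEMO_CTRL_PFSWR);
}
```

The OR preserves whatever control bits were already set and raises only the software-reset request, which is why the register is read first rather than written blindly.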