From patchwork Fri Jun 26 02:07:23 2020
X-Patchwork-Submitter: Jeff Kirsher
X-Patchwork-Id: 217086
From: Jeff Kirsher
To: davem@davemloft.net
Cc: Alice Michael, netdev@vger.kernel.org, nhorman@redhat.com,
 sassmann@redhat.com, Alan Brady, Phani Burra, Joshua Hay, Madhu Chittim,
 Pavan Kumar Linga, Donald Skidmore, Jesse Brandeburg, Sridhar Samudrala,
 Jeff Kirsher
Subject: [net-next v3 01/15] virtchnl: Extend AVF ops
Date: Thu, 25 Jun 2020 19:07:23 -0700
Message-Id: <20200626020737.775377-2-jeffrey.t.kirsher@intel.com>
X-Mailer: git-send-email 2.26.2
In-Reply-To: <20200626020737.775377-1-jeffrey.t.kirsher@intel.com>
References: <20200626020737.775377-1-jeffrey.t.kirsher@intel.com>
X-Mailing-List: netdev@vger.kernel.org

From: Alice Michael

This implements the next generation of virtchnl ops, which enable greater
functionality and capabilities.
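To make the negotiation flow introduced below concrete, here is a minimal,
illustrative sketch (not part of the patch) of how a driver might issue
VIRTCHNL_OP_GET_CAPS once the extended op set has been negotiated. Only the
opcode, capability bits, and struct virtchnl_get_capabilities come from this
patch; struct example_adapter, its negotiated_caps field, and
send_virtchnl_msg() are hypothetical placeholders for the driver's own state
and mailbox transport.

	/* Illustrative only -- not part of the patch. */
	#include <linux/avf/virtchnl.h>

	static int example_negotiate_ext_caps(struct example_adapter *adapter)
	{
		struct virtchnl_get_capabilities caps = {};

		/* The new opcodes in the 100+ range are only legal once both
		 * ends have agreed on VIRTCHNL_VF_CAP_EXT_FEATURES during the
		 * initial capabilities exchange.
		 */
		if (!(adapter->negotiated_caps & VIRTCHNL_VF_CAP_EXT_FEATURES))
			return -EOPNOTSUPP;

		/* Fill in the desired capability bitmap; zero for max_num_vfs
		 * and num_allocated_vectors asks the control plane to report
		 * its defaults.
		 */
		caps.cap_flags = VIRTCHNL_CAP_STATELESS_OFFLOADS |
				 VIRTCHNL_CAP_RSS;
		caps.max_num_vfs = 0;
		caps.num_allocated_vectors = 0;

		return send_virtchnl_msg(adapter, VIRTCHNL_OP_GET_CAPS,
					 (u8 *)&caps, sizeof(caps));
	}

Per the comments added later in this patch, the control plane replies with
the same structure, trimmed to the capabilities and resource counts it
actually grants.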
Signed-off-by: Alice Michael Signed-off-by: Alan Brady Signed-off-by: Phani Burra Signed-off-by: Joshua Hay Signed-off-by: Madhu Chittim Signed-off-by: Pavan Kumar Linga Reviewed-by: Donald Skidmore Reviewed-by: Jesse Brandeburg Reviewed-by: Sridhar Samudrala Signed-off-by: Jeff Kirsher --- include/linux/avf/virtchnl.h | 600 +++++++++++++++++++++++++++++++++++ 1 file changed, 600 insertions(+) diff --git a/include/linux/avf/virtchnl.h b/include/linux/avf/virtchnl.h index 40bad71865ea..60180c86d1b8 100644 --- a/include/linux/avf/virtchnl.h +++ b/include/linux/avf/virtchnl.h @@ -136,6 +136,34 @@ enum virtchnl_ops { VIRTCHNL_OP_DISABLE_CHANNELS = 31, VIRTCHNL_OP_ADD_CLOUD_FILTER = 32, VIRTCHNL_OP_DEL_CLOUD_FILTER = 33, + /* New major set of opcodes introduced and so leaving room for + * old misc opcodes to be added in future. Also these opcodes may only + * be used if both the PF and VF have successfully negotiated the + * VIRTCHNL_VF_CAP_EXT_FEATURES capability during initial capabilities + * exchange. + */ + VIRTCHNL_OP_GET_CAPS = 100, + VIRTCHNL_OP_CREATE_VPORT = 101, + VIRTCHNL_OP_DESTROY_VPORT = 102, + VIRTCHNL_OP_ENABLE_VPORT = 103, + VIRTCHNL_OP_DISABLE_VPORT = 104, + VIRTCHNL_OP_CONFIG_TX_QUEUES = 105, + VIRTCHNL_OP_CONFIG_RX_QUEUES = 106, + VIRTCHNL_OP_ENABLE_QUEUES_V2 = 107, + VIRTCHNL_OP_DISABLE_QUEUES_V2 = 108, + VIRTCHNL_OP_ADD_QUEUES = 109, + VIRTCHNL_OP_DEL_QUEUES = 110, + VIRTCHNL_OP_MAP_QUEUE_VECTOR = 111, + VIRTCHNL_OP_UNMAP_QUEUE_VECTOR = 112, + VIRTCHNL_OP_GET_RSS_KEY = 113, + VIRTCHNL_OP_GET_RSS_LUT = 114, + VIRTCHNL_OP_SET_RSS_LUT = 115, + VIRTCHNL_OP_GET_RSS_HASH = 116, + VIRTCHNL_OP_SET_RSS_HASH = 117, + VIRTCHNL_OP_CREATE_VFS = 118, + VIRTCHNL_OP_DESTROY_VFS = 119, + VIRTCHNL_OP_ALLOC_VECTORS = 120, + VIRTCHNL_OP_DEALLOC_VECTORS = 121, }; /* These macros are used to generate compilation errors if a structure/union @@ -247,6 +275,14 @@ VIRTCHNL_CHECK_STRUCT_LEN(16, virtchnl_vsi_resource); #define VIRTCHNL_VF_OFFLOAD_ENCAP_CSUM 0X00200000 #define VIRTCHNL_VF_OFFLOAD_RX_ENCAP_CSUM 0X00400000 #define VIRTCHNL_VF_OFFLOAD_ADQ 0X00800000 +#define VIRTCHNL_VF_OFFLOAD_ADQ_V2 0X01000000 +#define VIRTCHNL_VF_OFFLOAD_USO 0X02000000 +#define VIRTCHNL_VF_OFFLOAD_RX_FLEX_DESC 0X04000000 +#define VIRTCHNL_VF_OFFLOAD_ADV_RSS_PF 0X08000000 +#define VIRTCHNL_VF_OFFLOAD_FDIR_PF 0X10000000 +#define VIRTCHNL_VF_CAP_EXT_FEATURES 0x20000000 + /* 0X40000000 is reserved */ +#define VIRTCHNL_VF_OFFLOAD_INLINE_IPSEC 0X80000000 /* Define below the capability flags that are not offloads */ #define VIRTCHNL_VF_CAP_ADV_LINK_SPEED 0x00000080 @@ -463,6 +499,21 @@ VIRTCHNL_CHECK_STRUCT_LEN(4, virtchnl_promisc_info); * PF replies with struct eth_stats in an external buffer. */ +struct virtchnl_eth_stats { + u64 rx_bytes; /* received bytes */ + u64 rx_unicast; /* received unicast pkts */ + u64 rx_multicast; /* received multicast pkts */ + u64 rx_broadcast; /* received broadcast pkts */ + u64 rx_discards; + u64 rx_unknown_protocol; + u64 tx_bytes; /* transmitted bytes */ + u64 tx_unicast; /* transmitted unicast pkts */ + u64 tx_multicast; /* transmitted multicast pkts */ + u64 tx_broadcast; /* transmitted broadcast pkts */ + u64 tx_discards; + u64 tx_errors; +}; + /* VIRTCHNL_OP_CONFIG_RSS_KEY * VIRTCHNL_OP_CONFIG_RSS_LUT * VF sends these messages to configure RSS. 
Only supported if both PF @@ -668,6 +719,397 @@ enum virtchnl_vfr_states { VIRTCHNL_VFR_VFACTIVE, }; +/* PF capability flags + * VIRTCHNL_CAP_STATELESS_OFFLOADS flag indicates stateless offloads + * such as TX/RX Checksum offloading and TSO for non-tunneled packets. Please + * note that old and new capabilities are exclusive and not supposed to be + * mixed + */ +#define VIRTCHNL_CAP_STATELESS_OFFLOADS BIT(1) +#define VIRTCHNL_CAP_UDP_SEG_OFFLOAD BIT(2) +#define VIRTCHNL_CAP_RSS BIT(3) +#define VIRTCHNL_CAP_TCP_RSC BIT(4) +#define VIRTCHNL_CAP_HEADER_SPLIT BIT(5) +#define VIRTCHNL_CAP_RDMA BIT(6) +#define VIRTCHNL_CAP_SRIOV BIT(7) +/* Earliest Departure Time capability used for Timing Wheel */ +#define VIRTCHNL_CAP_EDT BIT(8) + +/* Type of virtual port */ +enum virtchnl_vport_type { + VIRTCHNL_VPORT_TYPE_DEFAULT = 0, +}; + +/* Type of queue model */ +enum virtchnl_queue_model { + VIRTCHNL_QUEUE_MODEL_SINGLE = 0, + VIRTCHNL_QUEUE_MODEL_SPLIT = 1, +}; + +/* TX and RX queue types are valid in legacy as well as split queue models. + * With Split Queue model, 2 additional types are introduced - TX_COMPLETION + * and RX_BUFFER. In split queue model, RX corresponds to the queue where HW + * posts completions. + */ +enum virtchnl_queue_type { + VIRTCHNL_QUEUE_TYPE_TX = 0, + VIRTCHNL_QUEUE_TYPE_RX = 1, + VIRTCHNL_QUEUE_TYPE_TX_COMPLETION = 2, + VIRTCHNL_QUEUE_TYPE_RX_BUFFER = 3, +}; + +/* RX Queue Feature bits */ +#define VIRTCHNL_RXQ_RSC BIT(1) +#define VIRTCHNL_RXQ_HDR_SPLIT BIT(2) +#define VIRTCHNL_RXQ_IMMEDIATE_WRITE_BACK BIT(4) + +/* RX Queue Descriptor Types */ +enum virtchnl_rxq_desc_size { + VIRTCHNL_RXQ_DESC_SIZE_16BYTE = 0, + VIRTCHNL_RXQ_DESC_SIZE_32BYTE = 1, +}; + +/* TX Queue Scheduling Modes Queue mode is the legacy type i.e. inorder + * and Flow mode is out of order packet processing + */ +enum virtchnl_txq_sched_mode { + VIRTCHNL_TXQ_SCHED_MODE_QUEUE = 0, + VIRTCHNL_TXQ_SCHED_MODE_FLOW = 1, +}; + +/* Queue Descriptor Profiles Base mode is the legacy and Native is the + * flex descriptors + */ +enum virtchnl_desc_profile { + VIRTCHNL_TXQ_DESC_PROFILE_BASE = 0, + VIRTCHNL_TXQ_DESC_PROFILE_NATIVE = 1, +}; + +/* Type of RSS algorithm */ +enum virtchnl_rss_algorithm { + VIRTCHNL_RSS_ALG_TOEPLITZ_ASYMMETRIC = 0, + VIRTCHNL_RSS_ALG_R_ASYMMETRIC = 1, + VIRTCHNL_RSS_ALG_TOEPLITZ_SYMMETRIC = 2, + VIRTCHNL_RSS_ALG_XOR_SYMMETRIC = 3, +}; + +/* VIRTCHNL_OP_GET_CAPS + * PF sends this message to CP to negotiate capabilities by filling + * in the u64 bitmap of its desired capabilities, max_num_vfs and + * num_allocated_vectors. + * CP responds with an updated virtchnl_get_capabilities structure + * with allowed capabilities and the other fields as below. + * If PF sets max_num_vfs as 0, CP will respond with max number of VFs + * that can be created by this PF. For any other value 'n', CP responds + * with max_num_vfs set to max(n, x) where x is the max number of VFs + * allowed by CP's policy. + * If PF sets num_allocated_vectors as 0, CP will respond with 1 which + * is default vector associated with the default mailbox. For any other + * value 'n', CP responds with a value <= n based on the CP's policy of + * max number of vectors for a PF. 
+ */ +struct virtchnl_get_capabilities { + u64 cap_flags; + u16 max_num_vfs; + u16 num_allocated_vectors; +}; + +VIRTCHNL_CHECK_STRUCT_LEN(16, virtchnl_get_capabilities); + +/* structure to specify a chunk of contiguous queues */ +struct virtchnl_queue_chunk { + enum virtchnl_queue_type type; + u16 start_queue_id; + u16 num_queues; +}; + +VIRTCHNL_CHECK_STRUCT_LEN(8, virtchnl_queue_chunk); + +/* structure to specify several chunks of contiguous queues */ +struct virtchnl_queue_chunks { + u16 num_chunks; + u16 rsvd; + struct virtchnl_queue_chunk chunks[1]; +}; + +VIRTCHNL_CHECK_STRUCT_LEN(12, virtchnl_queue_chunks); + +/* VIRTCHNL_OP_CREATE_VPORT + * PF sends this message to CP to create a vport by filling in the first 8 + * fields of virtchnl_create_vport structure (vport type, Tx, Rx queue models + * and desired number of queues and vectors). CP responds with the updated + * virtchnl_create_vport structure containing the number of assigned queues, + * vectors, vport id, max mtu, default mac addr followed by chunks which in turn + * will have an array of num_chunks entries of virtchnl_queue_chunk structures. + */ +struct virtchnl_create_vport { + enum virtchnl_vport_type vport_type; + /* single or split */ + enum virtchnl_queue_model txq_model; + /* single or split */ + enum virtchnl_queue_model rxq_model; + u16 num_tx_q; + /* valid only if txq_model is split Q */ + u16 num_tx_complq; + u16 num_rx_q; + /* valid only if rxq_model is split Q */ + u16 num_rx_bufq; + u16 vport_id; + u16 max_mtu; + u8 default_mac_addr[ETH_ALEN]; + enum virtchnl_rss_algorithm rss_algorithm; + u16 rss_key_size; + u16 rss_lut_size; + u16 qset_handle; + struct virtchnl_queue_chunks chunks; +}; + +VIRTCHNL_CHECK_STRUCT_LEN(56, virtchnl_create_vport); + +/* VIRTCHNL_OP_DESTROY_VPORT + * VIRTCHNL_OP_ENABLE_VPORT + * VIRTCHNL_OP_DISABLE_VPORT + * PF sends this message to CP to destroy, enable or disable a vport by filling + * in the vport_id in virtchnl_vport structure. + * CP responds with the status of the requested operation. + */ +struct virtchnl_vport { + u16 vport_id; +}; + +VIRTCHNL_CHECK_STRUCT_LEN(2, virtchnl_vport); + +/* Tx queue config info */ +struct virtchnl_txq_info_v2 { + u16 queue_id; + /* single or split */ + enum virtchnl_queue_model model; + /* Tx or tx_completion */ + enum virtchnl_queue_type type; + /* queue or flow based */ + enum virtchnl_txq_sched_mode sched_mode; + /* base or native */ + enum virtchnl_desc_profile desc_profile; + u16 ring_len; + u64 dma_ring_addr; + /* valid only if queue model is split and type is Tx */ + u16 tx_compl_queue_id; +}; + +VIRTCHNL_CHECK_STRUCT_LEN(40, virtchnl_txq_info_v2); + +/* VIRTCHNL_OP_CONFIG_TX_QUEUES + * PF sends this message to set up parameters for one or more TX queues. + * This message contains an array of num_qinfo instances of virtchnl_txq_info_v2 + * structures. CP configures requested queues and returns a status code. If + * num_qinfo specified is greater than the number of queues associated with the + * vport, an error is returned and no queues are configured. 
+ */ +struct virtchnl_config_tx_queues { + u16 vport_id; + u16 num_qinfo; + u32 rsvd; + struct virtchnl_txq_info_v2 qinfo[1]; +}; + +VIRTCHNL_CHECK_STRUCT_LEN(48, virtchnl_config_tx_queues); + +/* Rx queue config info */ +struct virtchnl_rxq_info_v2 { + u16 queue_id; + /* single or split */ + enum virtchnl_queue_model model; + /* Rx or Rx buffer */ + enum virtchnl_queue_type type; + /* base or native */ + enum virtchnl_desc_profile desc_profile; + /* rsc, header-split, immediate write back */ + u16 queue_flags; + /* 16 or 32 byte */ + enum virtchnl_rxq_desc_size desc_size; + u16 ring_len; + u16 hdr_buffer_size; + u32 data_buffer_size; + u32 max_pkt_size; + u64 dma_ring_addr; + u64 dma_head_wb_addr; + u16 rsc_low_watermark; + u8 buffer_notif_stride; + enum virtchnl_rx_hsplit rx_split_pos; + /* valid only if queue model is split and type is Rx buffer*/ + u16 rx_bufq1_id; + /* valid only if queue model is split and type is Rx buffer*/ + u16 rx_bufq2_id; +}; + +VIRTCHNL_CHECK_STRUCT_LEN(72, virtchnl_rxq_info_v2); + +/* VIRTCHNL_OP_CONFIG_RX_QUEUES + * PF sends this message to set up parameters for one or more RX queues. + * This message contains an array of num_qinfo instances of virtchnl_rxq_info_v2 + * structures. CP configures requested queues and returns a status code. + * If the number of queues specified is greater than the number of queues + * associated with the vport, an error is returned and no queues are configured. + */ +struct virtchnl_config_rx_queues { + u16 vport_id; + u16 num_qinfo; + struct virtchnl_rxq_info_v2 qinfo[1]; +}; + +VIRTCHNL_CHECK_STRUCT_LEN(80, virtchnl_config_rx_queues); + +/* VIRTCHNL_OP_ADD_QUEUES + * PF sends this message to request additional TX/RX queues beyond the ones + * that were assigned via CREATE_VPORT request. virtchnl_add_queues structure is + * used to specify the number of each type of queues. + * CP responds with the same structure with the actual number of queues assigned + * followed by num_chunks of virtchnl_queue_chunk structures. + */ +struct virtchnl_add_queues { + u16 vport_id; + u16 num_tx_q; + u16 num_tx_complq; + u16 num_rx_q; + u16 num_rx_bufq; + struct virtchnl_queue_chunks chunks; +}; + +VIRTCHNL_CHECK_STRUCT_LEN(24, virtchnl_add_queues); + +/* VIRTCHNL_OP_ENABLE_QUEUES + * VIRTCHNL_OP_DISABLE_QUEUES + * VIRTCHNL_OP_DEL_QUEUES + * PF sends these messages to enable, disable or delete queues specified in + * chunks. PF sends virtchnl_del_ena_dis_queues struct to specify the queues + * to be enabled/disabled/deleted. Also applicable to single queue RX or + * TX. CP performs requested action and returns status. + */ +struct virtchnl_del_ena_dis_queues { + u16 vport_id; + struct virtchnl_queue_chunks chunks; +}; + +VIRTCHNL_CHECK_STRUCT_LEN(16, virtchnl_del_ena_dis_queues); + +/* Virtchannel interrupt throttling rate index */ +enum virtchnl_itr_idx { + VIRTCHNL_ITR_IDX_0 = 0, + VIRTCHNL_ITR_IDX_1 = 1, + VIRTCHNL_ITR_IDX_NO_ITR = 3, +}; + +/* Queue to vector mapping */ +struct virtchnl_queue_vector { + u16 queue_id; + u16 vector_id; + enum virtchnl_itr_idx itr_idx; + enum virtchnl_queue_type queue_type; +}; + +VIRTCHNL_CHECK_STRUCT_LEN(12, virtchnl_queue_vector); + +/* VIRTCHNL_OP_MAP_QUEUE_VECTOR + * VIRTCHNL_OP_UNMAP_QUEUE_VECTOR + * PF sends this message to map or unmap queues to vectors and ITR index + * registers. External data buffer contains virtchnl_queue_vector_maps structure + * that contains num_maps of virtchnl_queue_vector structures. 
+ * CP maps the requested queue vector maps after validating the queue and vector + * ids and returns a status code. + */ +struct virtchnl_queue_vector_maps { + u16 vport_id; + u16 num_maps; + struct virtchnl_queue_vector qv_maps[1]; +}; + +VIRTCHNL_CHECK_STRUCT_LEN(16, virtchnl_queue_vector_maps); + +/* Structure to specify a chunk of contiguous interrupt vectors */ +struct virtchnl_vector_chunk { + u16 start_vector_id; + u16 num_vectors; +}; + +VIRTCHNL_CHECK_STRUCT_LEN(4, virtchnl_vector_chunk); + +/* Structure to specify several chunks of contiguous interrupt vectors */ +struct virtchnl_vector_chunks { + u16 num_vector_chunks; + struct virtchnl_vector_chunk num_vchunk[1]; +}; + +VIRTCHNL_CHECK_STRUCT_LEN(6, virtchnl_vector_chunks); + +/* VIRTCHNL_OP_ALLOC_VECTORS + * PF sends this message to request additional interrupt vectors beyond the + * ones that were assigned via GET_CAPS request. virtchnl_alloc_vectors + * structure is used to specify the number of vectors requested. CP responds + * with the same structure with the actual number of vectors assigned followed + * by virtchnl_vector_chunks structure identifying the vector ids. + */ +struct virtchnl_alloc_vectors { + u16 num_vectors; + struct virtchnl_vector_chunks vchunks; +}; + +VIRTCHNL_CHECK_STRUCT_LEN(8, virtchnl_alloc_vectors); + +/* VIRTCHNL_OP_DEALLOC_VECTORS + * PF sends this message to release the vectors. + * PF sends virtchnl_vector_chunks struct to specify the vectors it is giving + * away. CP performs requested action and returns status. + */ + +/* VIRTCHNL_OP_GET_RSS_LUT + * VIRTCHNL_OP_SET_RSS_LUT + * PF sends this message to get or set RSS lookup table. Only supported if + * both PF and CP drivers set the VIRTCHNL_CAP_RSS bit during configuration + * negotiation. Uses the virtchnl_rss_lut_v2 structure + */ +struct virtchnl_rss_lut_v2 { + u16 vport_id; + u16 lut_entries; + u16 lut[1]; /* RSS lookup table */ +}; + +VIRTCHNL_CHECK_STRUCT_LEN(6, virtchnl_rss_lut_v2); + +/* VIRTCHNL_OP_GET_RSS_KEY + * PF sends this message to get RSS key. Only supported if + * both PF and CP drivers set the VIRTCHNL_CAP_RSS bit during configuration + * negotiation. Uses the virtchnl_rss_key structure + */ + +/* VIRTCHNL_OP_GET_RSS_HASH + * VIRTCHNL_OP_SET_RSS_HASH + * PF sends these messages to get and set the hash filter enable bits for RSS. + * By default, the CP sets these to all possible traffic types that the + * hardware supports. The PF can query this value if it wants to change the + * traffic types that are hashed by the hardware. + * Only supported if both PF and CP drivers set the VIRTCHNL_CAP_RSS bit + * during configuration negotiation. + */ +struct virtchnl_rss_hash { + u64 hash; + u16 vport_id; +}; + +VIRTCHNL_CHECK_STRUCT_LEN(16, virtchnl_rss_hash); + +/* VIRTCHNL_OP_CREATE_SRIOV_VFS + * VIRTCHNL_OP_DESTROY_SRIOV_VFS + * This message is used to let the CP know how many SRIOV VFs need to be + * created. The actual allocation of resources for the VFs in terms of VSI, + * Queues and Interrupts is done by CP. When this call completes, the APF driver + * calls pci_enable_sriov to let the OS instantiate the SRIOV PCIE devices. 
+ */ +struct virtchnl_sriov_vfs_info { + u16 num_vfs; +}; + +VIRTCHNL_CHECK_STRUCT_LEN(2, virtchnl_sriov_vfs_info); + /** * virtchnl_vc_validate_vf_msg * @ver: Virtchnl version info @@ -828,6 +1270,164 @@ virtchnl_vc_validate_vf_msg(struct virtchnl_version_info *ver, u32 v_opcode, case VIRTCHNL_OP_DEL_CLOUD_FILTER: valid_len = sizeof(struct virtchnl_filter); break; + case VIRTCHNL_OP_GET_CAPS: + valid_len = sizeof(struct virtchnl_get_capabilities); + break; + case VIRTCHNL_OP_CREATE_VPORT: + valid_len = sizeof(struct virtchnl_create_vport); + if (msglen >= valid_len) { + struct virtchnl_create_vport *cvport = + (struct virtchnl_create_vport *)msg; + + if (cvport->chunks.num_chunks == 0) { + /* zero chunks is allowed as input */ + break; + } + + valid_len += (cvport->chunks.num_chunks - 1) * + sizeof(struct virtchnl_queue_chunk); + } + break; + case VIRTCHNL_OP_DESTROY_VPORT: + case VIRTCHNL_OP_ENABLE_VPORT: + case VIRTCHNL_OP_DISABLE_VPORT: + valid_len = sizeof(struct virtchnl_vport); + break; + case VIRTCHNL_OP_CONFIG_TX_QUEUES: + valid_len = sizeof(struct virtchnl_config_tx_queues); + if (msglen >= valid_len) { + struct virtchnl_config_tx_queues *ctq = + (struct virtchnl_config_tx_queues *)msg; + if (ctq->num_qinfo == 0) { + err_msg_format = true; + break; + } + valid_len += (ctq->num_qinfo - 1) * + sizeof(struct virtchnl_txq_info_v2); + } + break; + case VIRTCHNL_OP_CONFIG_RX_QUEUES: + valid_len = sizeof(struct virtchnl_config_rx_queues); + if (msglen >= valid_len) { + struct virtchnl_config_rx_queues *crq = + (struct virtchnl_config_rx_queues *)msg; + if (crq->num_qinfo == 0) { + err_msg_format = true; + break; + } + valid_len += (crq->num_qinfo - 1) * + sizeof(struct virtchnl_rxq_info_v2); + } + break; + case VIRTCHNL_OP_ADD_QUEUES: + valid_len = sizeof(struct virtchnl_add_queues); + if (msglen >= valid_len) { + struct virtchnl_add_queues *add_q = + (struct virtchnl_add_queues *)msg; + + if (add_q->chunks.num_chunks == 0) { + /* zero chunks is allowed as input */ + break; + } + + valid_len += (add_q->chunks.num_chunks - 1) * + sizeof(struct virtchnl_queue_chunk); + } + break; + case VIRTCHNL_OP_ENABLE_QUEUES_V2: + case VIRTCHNL_OP_DISABLE_QUEUES_V2: + case VIRTCHNL_OP_DEL_QUEUES: + valid_len = sizeof(struct virtchnl_del_ena_dis_queues); + if (msglen >= valid_len) { + struct virtchnl_del_ena_dis_queues *qs = + (struct virtchnl_del_ena_dis_queues *)msg; + if (qs->chunks.num_chunks == 0) { + err_msg_format = true; + break; + } + valid_len += (qs->chunks.num_chunks - 1) * + sizeof(struct virtchnl_queue_chunk); + } + break; + case VIRTCHNL_OP_MAP_QUEUE_VECTOR: + case VIRTCHNL_OP_UNMAP_QUEUE_VECTOR: + valid_len = sizeof(struct virtchnl_queue_vector_maps); + if (msglen >= valid_len) { + struct virtchnl_queue_vector_maps *v_qp = + (struct virtchnl_queue_vector_maps *)msg; + if (v_qp->num_maps == 0) { + err_msg_format = true; + break; + } + valid_len += (v_qp->num_maps - 1) * + sizeof(struct virtchnl_queue_vector); + } + break; + case VIRTCHNL_OP_ALLOC_VECTORS: + valid_len = sizeof(struct virtchnl_alloc_vectors); + if (msglen >= valid_len) { + struct virtchnl_alloc_vectors *v_av = + (struct virtchnl_alloc_vectors *)msg; + + if (v_av->vchunks.num_vector_chunks == 0) { + /* zero chunks is allowed as input */ + break; + } + + valid_len += (v_av->vchunks.num_vector_chunks - 1) * + sizeof(struct virtchnl_vector_chunk); + } + break; + case VIRTCHNL_OP_DEALLOC_VECTORS: + valid_len = sizeof(struct virtchnl_vector_chunks); + if (msglen >= valid_len) { + struct virtchnl_vector_chunks *v_chunks = + 
(struct virtchnl_vector_chunks *)msg; + if (v_chunks->num_vector_chunks == 0) { + err_msg_format = true; + break; + } + valid_len += (v_chunks->num_vector_chunks - 1) * + sizeof(struct virtchnl_vector_chunk); + } + break; + case VIRTCHNL_OP_GET_RSS_KEY: + valid_len = sizeof(struct virtchnl_rss_key); + if (msglen >= valid_len) { + struct virtchnl_rss_key *vrk = + (struct virtchnl_rss_key *)msg; + + if (vrk->key_len == 0) { + /* zero length is allowed as input */ + break; + } + + valid_len += vrk->key_len - 1; + } + break; + case VIRTCHNL_OP_GET_RSS_LUT: + case VIRTCHNL_OP_SET_RSS_LUT: + valid_len = sizeof(struct virtchnl_rss_lut_v2); + if (msglen >= valid_len) { + struct virtchnl_rss_lut_v2 *vrl = + (struct virtchnl_rss_lut_v2 *)msg; + + if (vrl->lut_entries == 0) { + /* zero entries is allowed as input */ + break; + } + + valid_len += (vrl->lut_entries - 1) * sizeof(u16); + } + break; + case VIRTCHNL_OP_GET_RSS_HASH: + case VIRTCHNL_OP_SET_RSS_HASH: + valid_len = sizeof(struct virtchnl_rss_hash); + break; + case VIRTCHNL_OP_CREATE_VFS: + case VIRTCHNL_OP_DESTROY_VFS: + valid_len = sizeof(struct virtchnl_sriov_vfs_info); + break; /* These are always errors coming from the VF. */ case VIRTCHNL_OP_EVENT: case VIRTCHNL_OP_UNKNOWN: From patchwork Fri Jun 26 02:07:24 2020 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Jeff Kirsher X-Patchwork-Id: 217079 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org X-Spam-Level: X-Spam-Status: No, score=-10.0 required=3.0 tests=HEADER_FROM_DIFFERENT_DOMAINS, INCLUDES_PATCH, MAILING_LIST_MULTI, SIGNED_OFF_BY,SPF_HELO_NONE,SPF_PASS,URIBL_BLOCKED,USER_AGENT_GIT autolearn=ham autolearn_force=no version=3.4.0 Received: from mail.kernel.org (mail.kernel.org [198.145.29.99]) by smtp.lore.kernel.org (Postfix) with ESMTP id 2039DC433DF for ; Fri, 26 Jun 2020 02:10:02 +0000 (UTC) Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by mail.kernel.org (Postfix) with ESMTP id D8B282075D for ; Fri, 26 Jun 2020 02:10:01 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S1728094AbgFZCKB (ORCPT ); Thu, 25 Jun 2020 22:10:01 -0400 Received: from mga03.intel.com ([134.134.136.65]:45230 "EHLO mga03.intel.com" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S1728044AbgFZCKA (ORCPT ); Thu, 25 Jun 2020 22:10:00 -0400 IronPort-SDR: mEHbXklQq3cxMG3Nbnv3nivkSiNO25clZc2CXsnfthOpcVqJhD9qPu0vQf69NCYbuhJ+BPKohk j/ttsrWSqzJw== X-IronPort-AV: E=McAfee;i="6000,8403,9663"; a="145209887" X-IronPort-AV: E=Sophos;i="5.75,281,1589266800"; d="scan'208";a="145209887" X-Amp-Result: SKIPPED(no attachment in message) X-Amp-File-Uploaded: False Received: from orsmga006.jf.intel.com ([10.7.209.51]) by orsmga103.jf.intel.com with ESMTP/TLS/ECDHE-RSA-AES256-GCM-SHA384; 25 Jun 2020 19:07:39 -0700 IronPort-SDR: oP2RISO7p8yiRG8Lmgj1t2l2hfmuPGE3EdNyGsl2cKdP43iTHk5+miXNTfYTXM3qmcBIsbp2J1 51CKX+C/QQXQ== X-ExtLoop1: 1 X-IronPort-AV: E=Sophos;i="5.75,281,1589266800"; d="scan'208";a="280011866" Received: from jtkirshe-desk1.jf.intel.com ([134.134.177.86]) by orsmga006.jf.intel.com with ESMTP; 25 Jun 2020 19:07:39 -0700 From: Jeff Kirsher To: davem@davemloft.net Cc: Alice Michael , netdev@vger.kernel.org, nhorman@redhat.com, sassmann@redhat.com, Alan Brady , Phani Burra , Joshua Hay , Madhu Chittim , Pavan Kumar Linga , Donald Skidmore , Jesse Brandeburg , Sridhar Samudrala , Jeff Kirsher Subject: [net-next v3 02/15] iecm: 
Add framework set of header files Date: Thu, 25 Jun 2020 19:07:24 -0700 Message-Id: <20200626020737.775377-3-jeffrey.t.kirsher@intel.com> X-Mailer: git-send-email 2.26.2 In-Reply-To: <20200626020737.775377-1-jeffrey.t.kirsher@intel.com> References: <20200626020737.775377-1-jeffrey.t.kirsher@intel.com> MIME-Version: 1.0 Sender: netdev-owner@vger.kernel.org Precedence: bulk List-ID: X-Mailing-List: netdev@vger.kernel.org From: Alice Michael Introduces the framework of data for the driver common module. Signed-off-by: Alice Michael Signed-off-by: Alan Brady Signed-off-by: Phani Burra Signed-off-by: Joshua Hay Signed-off-by: Madhu Chittim Signed-off-by: Pavan Kumar Linga Reviewed-by: Donald Skidmore Reviewed-by: Jesse Brandeburg Reviewed-by: Sridhar Samudrala Signed-off-by: Jeff Kirsher --- include/linux/net/intel/iecm.h | 433 ++++++++++++++++++++ include/linux/net/intel/iecm_alloc.h | 29 ++ include/linux/net/intel/iecm_controlq.h | 95 +++++ include/linux/net/intel/iecm_controlq_api.h | 188 +++++++++ include/linux/net/intel/iecm_osdep.h | 24 ++ include/linux/net/intel/iecm_type.h | 47 +++ 6 files changed, 816 insertions(+) create mode 100644 include/linux/net/intel/iecm.h create mode 100644 include/linux/net/intel/iecm_alloc.h create mode 100644 include/linux/net/intel/iecm_controlq.h create mode 100644 include/linux/net/intel/iecm_controlq_api.h create mode 100644 include/linux/net/intel/iecm_osdep.h create mode 100644 include/linux/net/intel/iecm_type.h diff --git a/include/linux/net/intel/iecm.h b/include/linux/net/intel/iecm.h new file mode 100644 index 000000000000..f785e7c26f1f --- /dev/null +++ b/include/linux/net/intel/iecm.h @@ -0,0 +1,433 @@ +/* SPDX-License-Identifier: GPL-2.0-only */ +/* Copyright (C) 2020 Intel Corporation */ + +#ifndef _IECM_H_ +#define _IECM_H_ + +#include +#include +#include +#include +#include +#include +#include +#include +#include +#include +#include +#include +#include +#include +#include +#include +#include +#include +#include +#include +#include +#include +#include +#include +#include +#include +#include +#include +#include +#include +#include +#include +#include +#include + +#include +#include +#include + +#include +#include +#include + +#include +#include + +extern const char iecm_drv_ver[]; +extern char iecm_drv_name[]; +extern int debug; + +#define IECM_BAR0 0 +#define IECM_NO_FREE_SLOT 0xffff + +/* Default Mailbox settings */ +#define IECM_DFLT_MBX_BUF_SIZE (1 * 1024) +#define IECM_NUM_QCTX_PER_MSG 3 +#define IECM_DFLT_MBX_Q_LEN 64 +#define IECM_DFLT_MBX_ID -1 +/* maximum number of times to try before resetting mailbox */ +#define IECM_MB_MAX_ERR 20 + +#define IECM_MAX_NUM_VPORTS 1 + +/* default message levels */ +#define IECM_DFLT_NETIF_M (NETIF_MSG_DRV | NETIF_MSG_PROBE | NETIF_MSG_LINK) + +/* Forward declaration */ +struct iecm_adapter; + +enum iecm_state { + __IECM_STARTUP, + __IECM_VER_CHECK, + __IECM_GET_RES, + __IECM_GET_CAPS, + __IECM_GET_DFLT_VPORT_PARAMS, + __IECM_INIT_SW, + __IECM_DOWN, + __IECM_UP, + __IECM_REMOVE, + __IECM_STATE_LAST /* this member MUST be last */ +}; + +enum iecm_flags { + /* Soft reset causes */ + __IECM_SR_Q_CHANGE, /* Soft reset to do queue change */ + __IECM_SR_Q_DESC_CHANGE, + __IECM_SR_Q_SCH_CHANGE, /* Scheduling mode change in queue context */ + __IECM_SR_MTU_CHANGE, + __IECM_SR_TC_CHANGE, + /* Hard reset causes */ + __IECM_HR_FUNC_RESET, /* Hard reset when txrx timeout */ + __IECM_HR_CORE_RESET, /* when reset event is received on virtchannel */ + __IECM_HR_DRV_LOAD, /* Set on driver load for a clean HW */ + /* 
Generic bits to share a message */ + __IECM_DEL_QUEUES, + __IECM_UP_REQUESTED, /* Set if open to be called explicitly by driver */ + /* Mailbox interrupt event */ + __IECM_MB_INTR_MODE, + __IECM_MB_INTR_TRIGGER, + /* must be last */ + __IECM_FLAGS_NBITS, +}; + +struct iecm_netdev_priv { + struct iecm_vport *vport; +}; + +struct iecm_reset_reg { + u32 rstat; + u32 rstat_m; +}; + +/* product specific register API */ +struct iecm_reg_ops { + void (*ctlq_reg_init)(struct iecm_ctlq_create_info *cq); + void (*vportq_reg_init)(struct iecm_vport *vport); + void (*intr_reg_init)(struct iecm_vport *vport); + void (*mb_intr_reg_init)(struct iecm_adapter *adapter); + void (*reset_reg_init)(struct iecm_reset_reg *reset_reg); + void (*trigger_reset)(struct iecm_adapter *adapter, + enum iecm_flags trig_cause); +}; + +struct iecm_virtchnl_ops { + int (*core_init)(struct iecm_adapter *adapter, int *vport_id); + void (*vport_init)(struct iecm_vport *vport, int vport_id); + enum iecm_status (*vport_queue_ids_init)(struct iecm_vport *vport); + enum iecm_status (*get_caps)(struct iecm_adapter *adapter); + enum iecm_status (*config_queues)(struct iecm_vport *vport); + enum iecm_status (*enable_queues)(struct iecm_vport *vport); + enum iecm_status (*disable_queues)(struct iecm_vport *vport); + enum iecm_status (*irq_map_unmap)(struct iecm_vport *vport, bool map); + enum iecm_status (*enable_vport)(struct iecm_vport *vport); + enum iecm_status (*disable_vport)(struct iecm_vport *vport); + enum iecm_status (*destroy_vport)(struct iecm_vport *vport); + enum iecm_status (*get_ptype)(struct iecm_vport *vport); + enum iecm_status (*get_set_rss_lut)(struct iecm_vport *vport, bool get); + enum iecm_status (*get_set_rss_hash)(struct iecm_vport *vport, + bool get); + enum iecm_status (*adjust_qs)(struct iecm_vport *vport); + enum iecm_status (*recv_mbx_msg)(struct iecm_adapter *adapter, + void *msg, int msg_size, + struct iecm_ctlq_msg *ctlq_msg, + bool *work_done); + enum iecm_status (*alloc_vectors)(struct iecm_adapter *adapter, + u16 num_vectors); + enum iecm_status (*dealloc_vectors)(struct iecm_adapter *adapter); + bool (*is_cap_ena)(struct iecm_adapter *adapter, u64 flag); +}; + +struct iecm_dev_ops { + void (*reg_ops_init)(struct iecm_adapter *adapter); + void (*vc_ops_init)(struct iecm_adapter *adapter); + void (*crc_enable)(u64 *td_cmd); + struct iecm_reg_ops reg_ops; + struct iecm_virtchnl_ops vc_ops; +}; + +/* vport specific data structure */ + +enum iecm_vport_vc_state { + IECM_VC_ENA_VPORT, + IECM_VC_ENA_VPORT_ERR, + IECM_VC_DIS_VPORT, + IECM_VC_DIS_VPORT_ERR, + IECM_VC_DESTROY_VPORT, + IECM_VC_DESTROY_VPORT_ERR, + IECM_VC_CONFIG_TXQ, + IECM_VC_CONFIG_TXQ_ERR, + IECM_VC_CONFIG_RXQ, + IECM_VC_CONFIG_RXQ_ERR, + IECM_VC_CONFIG_Q, + IECM_VC_CONFIG_Q_ERR, + IECM_VC_ENA_QUEUES, + IECM_VC_ENA_QUEUES_ERR, + IECM_VC_DIS_QUEUES, + IECM_VC_DIS_QUEUES_ERR, + IECM_VC_MAP_IRQ, + IECM_VC_MAP_IRQ_ERR, + IECM_VC_UNMAP_IRQ, + IECM_VC_UNMAP_IRQ_ERR, + IECM_VC_ADD_QUEUES, + IECM_VC_ADD_QUEUES_ERR, + IECM_VC_DEL_QUEUES, + IECM_VC_DEL_QUEUES_ERR, + IECM_VC_ALLOC_VECTORS, + IECM_VC_ALLOC_VECTORS_ERR, + IECM_VC_DEALLOC_VECTORS, + IECM_VC_DEALLOC_VECTORS_ERR, + IECM_VC_CREATE_VFS, + IECM_VC_CREATE_VFS_ERR, + IECM_VC_DESTROY_VFS, + IECM_VC_DESTROY_VFS_ERR, + IECM_VC_GET_RSS_HASH, + IECM_VC_GET_RSS_HASH_ERR, + IECM_VC_SET_RSS_HASH, + IECM_VC_SET_RSS_HASH_ERR, + IECM_VC_GET_RSS_LUT, + IECM_VC_GET_RSS_LUT_ERR, + IECM_VC_SET_RSS_LUT, + IECM_VC_SET_RSS_LUT_ERR, + IECM_VC_GET_RSS_KEY, + IECM_VC_GET_RSS_KEY_ERR, + IECM_VC_CONFIG_RSS_KEY, + 
IECM_VC_CONFIG_RSS_KEY_ERR, + IECM_VC_GET_STATS, + IECM_VC_GET_STATS_ERR, + IECM_VC_NBITS +}; + +enum iecm_vport_flags { + __IECM_VPORT_SW_MARKER, + __IECM_VPORT_FLAGS_NBITS, +}; + +struct iecm_vport { + /* TX */ + unsigned int num_txq; + unsigned int num_complq; + /* It makes more sense for descriptor count to be part of only iecm + * queue structure. But when user changes the count via ethtool, driver + * has to store that value somewhere other than queue structure as the + * queues will be freed and allocated again. + */ + unsigned int txq_desc_count; + unsigned int complq_desc_count; + unsigned int compln_clean_budget; + unsigned int num_txq_grp; + struct iecm_txq_group *txq_grps; + enum virtchnl_queue_model txq_model; + /* Used only in hotpath to get to the right queue very fast */ + struct iecm_queue **txqs; + wait_queue_head_t sw_marker_wq; + DECLARE_BITMAP(flags, __IECM_VPORT_FLAGS_NBITS); + + /* RX */ + unsigned int num_rxq; + unsigned int num_bufq; + unsigned int rxq_desc_count; + unsigned int bufq_desc_count; + unsigned int num_rxq_grp; + struct iecm_rxq_group *rxq_grps; + enum virtchnl_queue_model rxq_model; + struct iecm_rx_ptype_decoded rx_ptype_lkup[IECM_RX_MAX_PTYPE]; + + struct iecm_adapter *adapter; + struct net_device *netdev; + u16 vport_type; + u16 vport_id; + u16 idx; /* software index in adapter vports struct */ + struct rtnl_link_stats64 netstats; + + /* handler for hard interrupt */ + irqreturn_t (*irq_q_handler)(int irq, void *data); + struct iecm_q_vector *q_vectors; /* q vector array */ + u16 num_q_vectors; + u16 q_vector_base; + u16 max_mtu; + u8 default_mac_addr[ETH_ALEN]; + u16 qset_handle; + /* Duplicated in queue structure for performance reasons */ + enum iecm_rx_hsplit rx_hsplit_en; +}; + +/* User defined configuration values */ +struct iecm_user_config_data { + u32 num_req_qs; /* user requested queues through ethtool */ + u32 num_req_txq_desc; + u32 num_req_rxq_desc; + void *req_qs_chunks; +}; + +struct iecm_rss_data { + u64 rss_hash; + u16 rss_key_size; + u8 *rss_key; + u16 rss_lut_size; + u8 *rss_lut; +}; + +struct iecm_adapter { + struct pci_dev *pdev; + + u32 tx_timeout_count; + u32 msg_enable; + enum iecm_state state; + DECLARE_BITMAP(flags, __IECM_FLAGS_NBITS); + struct mutex reset_lock; /* lock to protect reset flows */ + struct iecm_reset_reg reset_reg; + struct iecm_hw hw; + + u16 num_req_msix; + u16 num_msix_entries; + struct msix_entry *msix_entries; + struct virtchnl_alloc_vectors *req_vec_chunks; + struct iecm_q_vector mb_vector; + /* handler for hard interrupt for mailbox*/ + irqreturn_t (*irq_mb_handler)(int irq, void *data); + + /* vport structs */ + struct iecm_vport **vports; /* vports created by the driver */ + u16 num_alloc_vport; + u16 next_vport; /* Next free slot in pf->vport[] - 0-based! 
*/ + struct mutex sw_mutex; /* lock to protect vport alloc flow */ + + struct delayed_work init_task; /* delayed init task */ + struct workqueue_struct *init_wq; + u32 mb_wait_count; + struct delayed_work serv_task; /* delayed service task */ + struct workqueue_struct *serv_wq; + struct delayed_work stats_task; /* delayed statistics task */ + struct workqueue_struct *stats_wq; + struct delayed_work vc_event_task; /* delayed virtchannel event task */ + struct workqueue_struct *vc_event_wq; + /* Store the resources data received from control plane */ + void **vport_params_reqd; + void **vport_params_recvd; + /* User set parameters */ + struct iecm_user_config_data config_data; + void *caps; + wait_queue_head_t vchnl_wq; + DECLARE_BITMAP(vc_state, IECM_VC_NBITS); + struct mutex vc_msg_lock; /* lock to protect vc_msg flow */ + char vc_msg[IECM_DFLT_MBX_BUF_SIZE]; + struct iecm_rss_data rss_data; + struct iecm_dev_ops dev_ops; + enum virtchnl_link_speed link_speed; + bool link_up; +}; + +/** + * iecm_is_queue_model_split - check if queue model is split + * @q_model: queue model single or split + * + * Returns true if queue model is split else false + */ +static inline int iecm_is_queue_model_split(enum virtchnl_queue_model q_model) +{ + return (q_model == VIRTCHNL_QUEUE_MODEL_SPLIT); +} + +/** + * iecm_is_cap_ena - Determine if HW capability is supported + * @adapter: private data struct + * @flag: Feature flag to check + */ +static inline bool iecm_is_cap_ena(struct iecm_adapter *adapter, u64 flag) +{ + return adapter->dev_ops.vc_ops.is_cap_ena(adapter, flag); +} + +/** + * iecm_is_feature_ena - Determine if a particular feature is enabled + * @vport: vport to check + * @feature: netdev flag to check + * + * Returns true or false if a particular feature is enabled. + */ +static inline bool iecm_is_feature_ena(struct iecm_vport *vport, + netdev_features_t feature) +{ + return vport->netdev->features & feature; +} + +/** + * iecm_rx_offset - Return expected offset into page to access data + * @rx_q: queue we are requesting offset of + * + * Returns the offset value for queue into the data buffer. 
+ */ +static inline unsigned int +iecm_rx_offset(struct iecm_queue __maybe_unused *rx_q) +{ + return 0; +} + +int iecm_probe(struct pci_dev *pdev, + const struct pci_device_id __always_unused *ent, + struct iecm_adapter *adapter); +void iecm_remove(struct pci_dev *pdev); +void iecm_shutdown(struct pci_dev *pdev); +enum iecm_status iecm_vport_adjust_qs(struct iecm_vport *vport); +enum iecm_status +iecm_ctlq_reg_init(struct iecm_ctlq_create_info *cq, int num_q); +enum iecm_status iecm_init_dflt_mbx(struct iecm_adapter *adapter); +void iecm_deinit_dflt_mbx(struct iecm_adapter *adapter); +void iecm_vc_ops_init(struct iecm_adapter *adapter); +int iecm_vc_core_init(struct iecm_adapter *adapter, int *vport_id); +enum iecm_status +iecm_wait_for_event(struct iecm_adapter *adapter, + enum iecm_vport_vc_state state, + enum iecm_vport_vc_state err_check); +enum iecm_status iecm_send_get_caps_msg(struct iecm_adapter *adapter); +enum iecm_status iecm_send_delete_queues_msg(struct iecm_vport *vport); +enum iecm_status +iecm_send_add_queues_msg(struct iecm_vport *vport, u16 num_tx_q, + u16 num_complq, u16 num_rx_q, u16 num_rx_bufq); +enum iecm_status iecm_initiate_soft_reset(struct iecm_vport *vport, + enum iecm_flags reset_cause); +enum iecm_status iecm_send_config_tx_queues_msg(struct iecm_vport *vport); +enum iecm_status iecm_send_config_rx_queues_msg(struct iecm_vport *vport); +enum iecm_status iecm_send_enable_vport_msg(struct iecm_vport *vport); +enum iecm_status iecm_send_disable_vport_msg(struct iecm_vport *vport); +enum iecm_status iecm_send_destroy_vport_msg(struct iecm_vport *vport); +enum iecm_status iecm_send_get_rx_ptype_msg(struct iecm_vport *vport); +enum iecm_status +iecm_send_get_set_rss_key_msg(struct iecm_vport *vport, bool get); +enum iecm_status +iecm_send_get_set_rss_lut_msg(struct iecm_vport *vport, bool get); +enum iecm_status +iecm_send_get_set_rss_hash_msg(struct iecm_vport *vport, bool get); +int iecm_vport_params_buf_alloc(struct iecm_adapter *adapter); +void iecm_vport_params_buf_rel(struct iecm_adapter *adapter); +struct iecm_vport *iecm_netdev_to_vport(struct net_device *netdev); +struct iecm_adapter *iecm_netdev_to_adapter(struct net_device *netdev); +enum iecm_status iecm_send_get_stats_msg(struct iecm_vport *vport); +int iecm_vport_get_vec_ids(u16 *vecids, int num_vecids, + struct virtchnl_vector_chunks *chunks); +enum iecm_status +iecm_recv_mb_msg(struct iecm_adapter *adapter, enum virtchnl_ops op, + void *msg, int msg_size); +enum iecm_status +iecm_send_mb_msg(struct iecm_adapter *adapter, enum virtchnl_ops op, + u16 msg_size, u8 *msg); +void iecm_set_ethtool_ops(struct net_device *netdev); +void iecm_vport_set_hsplit(struct iecm_vport *vport, struct bpf_prog *prog); +#endif /* !_IECM_H_ */ diff --git a/include/linux/net/intel/iecm_alloc.h b/include/linux/net/intel/iecm_alloc.h new file mode 100644 index 000000000000..fd13bf085663 --- /dev/null +++ b/include/linux/net/intel/iecm_alloc.h @@ -0,0 +1,29 @@ +/* SPDX-License-Identifier: GPL-2.0 */ +/* Copyright (c) 2020, Intel Corporation. 
*/ + +#ifndef _IECM_ALLOC_H_ +#define _IECM_ALLOC_H_ + +/* Memory types */ +enum iecm_memset_type { + IECM_NONDMA_MEM = 0, + IECM_DMA_MEM +}; + +/* Memcpy types */ +enum iecm_memcpy_type { + IECM_NONDMA_TO_NONDMA = 0, + IECM_NONDMA_TO_DMA, + IECM_DMA_TO_DMA, + IECM_DMA_TO_NONDMA +}; + +struct iecm_hw; +struct iecm_dma_mem; + +/* prototype for functions used for dynamic memory allocation */ +void *iecm_alloc_dma_mem(struct iecm_hw *hw, struct iecm_dma_mem *mem, + u64 size); +void iecm_free_dma_mem(struct iecm_hw *hw, struct iecm_dma_mem *mem); + +#endif /* _IECM_ALLOC_H_ */ diff --git a/include/linux/net/intel/iecm_controlq.h b/include/linux/net/intel/iecm_controlq.h new file mode 100644 index 000000000000..4cba637042cd --- /dev/null +++ b/include/linux/net/intel/iecm_controlq.h @@ -0,0 +1,95 @@ +/* SPDX-License-Identifier: GPL-2.0 */ +/* Copyright (c) 2020, Intel Corporation. */ + +#ifndef _IECM_CONTROLQ_H_ +#define _IECM_CONTROLQ_H_ + +#include + +/* Maximum buffer lengths for all control queue types */ +#define IECM_CTLQ_MAX_RING_SIZE 1024 +#define IECM_CTLQ_MAX_BUF_LEN 4096 + +#define IECM_CTLQ_DESC(R, i) \ + (&(((struct iecm_ctlq_desc *)((R)->desc_ring.va))[i])) + +#define IECM_CTLQ_DESC_UNUSED(R) \ + (u16)((((R)->next_to_clean > (R)->next_to_use) ? 0 : (R)->ring_size) + \ + (R)->next_to_clean - (R)->next_to_use - 1) + +/* Data type manipulation macros. */ +#define IECM_HI_DWORD(x) ((u32)((((x) >> 16) >> 16) & 0xFFFFFFFF)) +#define IECM_LO_DWORD(x) ((u32)((x) & 0xFFFFFFFF)) +#define IECM_HI_WORD(x) ((u16)(((x) >> 16) & 0xFFFF)) +#define IECM_LO_WORD(x) ((u16)((x) & 0xFFFF)) + +/* Control Queue default settings */ +#define IECM_CTRL_SQ_CMD_TIMEOUT 250 /* msecs */ + +struct iecm_ctlq_desc { + __le16 flags; + __le16 opcode; + __le16 datalen; /* 0 for direct commands */ + union { + __le16 ret_val; + __le16 pfid_vfid; +#define IECM_CTLQ_DESC_VF_ID_S 0 +#define IECM_CTLQ_DESC_VF_ID_M (0x3FF << IECM_CTLQ_DESC_VF_ID_S) +#define IECM_CTLQ_DESC_PF_ID_S 10 +#define IECM_CTLQ_DESC_PF_ID_M (0x3F << IECM_CTLQ_DESC_PF_ID_S) + }; + __le32 cookie_high; + __le32 cookie_low; + union { + struct { + __le32 param0; + __le32 param1; + __le32 param2; + __le32 param3; + } direct; + struct { + __le32 param0; + __le32 param1; + __le32 addr_high; + __le32 addr_low; + } indirect; + u8 raw[16]; + } params; +}; + +/* Flags sub-structure + * |0 |1 |2 |3 |4 |5 |6 |7 |8 |9 |10 |11 |12 |13 |14 |15 | + * |DD |CMP|ERR| * RSV * |FTYPE | *RSV* |RD |VFC|BUF| * RSV * | + */ +/* command flags and offsets */ +#define IECM_CTLQ_FLAG_DD_S 0 +#define IECM_CTLQ_FLAG_CMP_S 1 +#define IECM_CTLQ_FLAG_ERR_S 2 +#define IECM_CTLQ_FLAG_FTYPE_S 6 +#define IECM_CTLQ_FLAG_RD_S 10 +#define IECM_CTLQ_FLAG_VFC_S 11 +#define IECM_CTLQ_FLAG_BUF_S 12 + +#define IECM_CTLQ_FLAG_DD BIT(IECM_CTLQ_FLAG_DD_S) /* 0x1 */ +#define IECM_CTLQ_FLAG_CMP BIT(IECM_CTLQ_FLAG_CMP_S) /* 0x2 */ +#define IECM_CTLQ_FLAG_ERR BIT(IECM_CTLQ_FLAG_ERR_S) /* 0x4 */ +#define IECM_CTLQ_FLAG_FTYPE_VM BIT(IECM_CTLQ_FLAG_FTYPE_S) /* 0x40 */ +#define IECM_CTLQ_FLAG_FTYPE_PF BIT(IECM_CTLQ_FLAG_FTYPE_S + 1) /* 0x80 */ +#define IECM_CTLQ_FLAG_RD BIT(IECM_CTLQ_FLAG_RD_S) /* 0x400 */ +#define IECM_CTLQ_FLAG_VFC BIT(IECM_CTLQ_FLAG_VFC_S) /* 0x800 */ +#define IECM_CTLQ_FLAG_BUF BIT(IECM_CTLQ_FLAG_BUF_S) /* 0x1000 */ + +struct iecm_mbxq_desc { + u8 pad[8]; /* CTLQ flags/opcode/len/retval fields */ + u32 chnl_opcode; /* avoid confusion with desc->opcode */ + u32 chnl_retval; /* ditto for desc->retval */ + u32 pf_vf_id; /* used by CP when sending to PF */ +}; + +enum iecm_status 
+iecm_ctlq_alloc_ring_res(struct iecm_hw *hw, + struct iecm_ctlq_info *cq); + +void iecm_ctlq_dealloc_ring_res(struct iecm_hw *hw, struct iecm_ctlq_info *cq); + +#endif /* _IECM_CONTROLQ_H_ */ diff --git a/include/linux/net/intel/iecm_controlq_api.h b/include/linux/net/intel/iecm_controlq_api.h new file mode 100644 index 000000000000..33e3cb3878fc --- /dev/null +++ b/include/linux/net/intel/iecm_controlq_api.h @@ -0,0 +1,188 @@ +/* SPDX-License-Identifier: GPL-2.0 */ +/* Copyright (c) 2020, Intel Corporation. */ + +#ifndef _IECM_CONTROLQ_API_H_ +#define _IECM_CONTROLQ_API_H_ + +/* Error Codes */ +enum iecm_status { + IECM_SUCCESS = 0, + + /* Reserve range -1..-49 for generic codes */ + IECM_ERR_PARAM = -1, + IECM_ERR_NOT_IMPL = -2, + IECM_ERR_NOT_READY = -3, + IECM_ERR_BAD_PTR = -5, + IECM_ERR_INVAL_SIZE = -6, + IECM_ERR_DEVICE_NOT_SUPPORTED = -8, + IECM_ERR_FW_API_VER = -9, + IECM_ERR_NO_MEMORY = -10, + IECM_ERR_CFG = -11, + IECM_ERR_OUT_OF_RANGE = -12, + IECM_ERR_ALREADY_EXISTS = -13, + IECM_ERR_DOES_NOT_EXIST = -14, + IECM_ERR_IN_USE = -15, + IECM_ERR_MAX_LIMIT = -16, + IECM_ERR_RESET_ONGOING = -17, + + /* Reserve range -100..-109 for CRQ/CSQ specific error codes */ + IECM_ERR_CTLQ_ERROR = -100, + IECM_ERR_CTLQ_TIMEOUT = -101, + IECM_ERR_CTLQ_FULL = -102, + IECM_ERR_CTLQ_NO_WORK = -103, + IECM_ERR_CTLQ_EMPTY = -104, +}; + +/* Used for queue init, response and events */ +enum iecm_ctlq_type { + IECM_CTLQ_TYPE_MAILBOX_TX = 0, + IECM_CTLQ_TYPE_MAILBOX_RX = 1, + IECM_CTLQ_TYPE_CONFIG_TX = 2, + IECM_CTLQ_TYPE_CONFIG_RX = 3, + IECM_CTLQ_TYPE_EVENT_RX = 4, + IECM_CTLQ_TYPE_RDMA_TX = 5, + IECM_CTLQ_TYPE_RDMA_RX = 6, + IECM_CTLQ_TYPE_RDMA_COMPL = 7 +}; + +/* Generic Control Queue Structures */ + +struct iecm_ctlq_reg { + /* used for queue tracking */ + u32 head; + u32 tail; + /* Below applies only to default mb (if present) */ + u32 len; + u32 bah; + u32 bal; + u32 len_mask; + u32 len_ena_mask; + u32 head_mask; +}; + +/* Generic queue msg structure */ +struct iecm_ctlq_msg { + u16 vmvf_type; /* represents the source of the message on recv */ +#define IECM_VMVF_TYPE_VF 0 +#define IECM_VMVF_TYPE_VM 1 +#define IECM_VMVF_TYPE_PF 2 + u16 opcode; + u16 data_len; /* data_len = 0 when no payload is attached */ + union { + u16 func_id; /* when sending a message */ + u16 status; /* when receiving a message */ + }; + union { + struct { + u32 chnl_retval; + u32 chnl_opcode; + } mbx; + } cookie; + union { +#define IECM_DIRECT_CTX_SIZE 16 +#define IECM_INDIRECT_CTX_SIZE 8 + /* 16 bytes of context can be provided or 8 bytes of context + * plus the address of a DMA buffer + */ + u8 direct[IECM_DIRECT_CTX_SIZE]; + struct { + u8 context[IECM_INDIRECT_CTX_SIZE]; + struct iecm_dma_mem *payload; + } indirect; + } ctx; +}; + +/* Generic queue info structures */ +/* MB, CONFIG and EVENT q do not have extended info */ +struct iecm_ctlq_create_info { + enum iecm_ctlq_type type; + /* absolute queue offset passed as input + * -1 for default mailbox if present + */ + int id; + u16 len; /* Queue length passed as input */ + u16 buf_size; /* buffer size passed as input */ + u64 base_address; /* output, HPA of the Queue start */ + struct iecm_ctlq_reg reg; /* registers accessed by ctlqs */ + + int ext_info_size; + void *ext_info; /* Specific to q type */ +}; + +/* Control Queue information */ +struct iecm_ctlq_info { + struct list_head cq_list; + + enum iecm_ctlq_type cq_type; + int q_id; + struct mutex cq_lock; /* queue lock */ + + /* used for interrupt processing */ + u16 next_to_use; + u16 next_to_clean; + + /* starting 
descriptor to post buffers to after recev */ + u16 next_to_post; + struct iecm_dma_mem desc_ring; /* descriptor ring memory */ + + union { + struct iecm_dma_mem **rx_buff; + struct iecm_ctlq_msg **tx_msg; + } bi; + + u16 buf_size; /* queue buffer size */ + u16 ring_size; /* Number of descriptors */ + struct iecm_ctlq_reg reg; /* registers accessed by ctlqs */ +}; + +/* PF/VF mailbox commands */ +enum iecm_mbx_opc { + iecm_mbq_opc_send_msg_to_cp = 0x0801, + iecm_mbq_opc_send_msg_to_vf = 0x0802, + iecm_mbq_opc_send_msg_to_pf = 0x0803, +}; + +/* API supported for control queue management */ + +/* Will init all required q including default mb. "q_info" is an array of + * create_info structs equal to the number of control queues to be created. + */ +enum iecm_status iecm_ctlq_init(struct iecm_hw *hw, u8 num_q, + struct iecm_ctlq_create_info *q_info); + +/* Allocate and initialize a single control queue, which will be added to the + * control queue list; returns a handle to the created control queue + */ +enum iecm_status iecm_ctlq_add(struct iecm_hw *hw, + struct iecm_ctlq_create_info *qinfo, + struct iecm_ctlq_info **cq); +/* Deinitialize and deallocate a single control queue */ +void iecm_ctlq_remove(struct iecm_hw *hw, + struct iecm_ctlq_info *cq); + +/* Sends messages to HW and will also free the buffer*/ +enum iecm_status iecm_ctlq_send(struct iecm_hw *hw, + struct iecm_ctlq_info *cq, + u16 num_q_msg, + struct iecm_ctlq_msg q_msg[]); + +/* Receives messages and called by interrupt handler/polling + * initiated by app/process. Also caller is supposed to free the buffers + */ +enum iecm_status iecm_ctlq_recv(struct iecm_ctlq_info *cq, + u16 *num_q_msg, struct iecm_ctlq_msg *q_msg); +/* Reclaims send descriptors on HW write back */ +enum iecm_status iecm_ctlq_clean_sq(struct iecm_ctlq_info *cq, + u16 *clean_count, + struct iecm_ctlq_msg *msg_status[]); + +/* Indicate RX buffers are done being processed */ +enum iecm_status iecm_ctlq_post_rx_buffs(struct iecm_hw *hw, + struct iecm_ctlq_info *cq, + u16 *buff_count, + struct iecm_dma_mem **buffs); + +/* Will destroy all q including the default mb */ +enum iecm_status iecm_ctlq_deinit(struct iecm_hw *hw); + +#endif /* _IECM_CONTROLQ_API_H_ */ diff --git a/include/linux/net/intel/iecm_osdep.h b/include/linux/net/intel/iecm_osdep.h new file mode 100644 index 000000000000..33e01582f516 --- /dev/null +++ b/include/linux/net/intel/iecm_osdep.h @@ -0,0 +1,24 @@ +/* SPDX-License-Identifier: GPL-2.0 */ +/* Copyright (c) 2020, Intel Corporation. */ + +#ifndef _IECM_OSDEP_H_ +#define _IECM_OSDEP_H_ + +#include +#include +#include +#include + +#define wr32(a, reg, value) writel((value), ((a)->hw_addr + (reg))) +#define rd32(a, reg) readl((a)->hw_addr + (reg)) +#define wr64(a, reg, value) writeq((value), ((a)->hw_addr + (reg))) +#define rd64(a, reg) readq((a)->hw_addr + (reg)) + +struct iecm_dma_mem { + void *va; + dma_addr_t pa; + size_t size; +}; + +#define iecm_wmb() wmb() /* memory barrier */ +#endif /* _IECM_OSDEP_H_ */ diff --git a/include/linux/net/intel/iecm_type.h b/include/linux/net/intel/iecm_type.h new file mode 100644 index 000000000000..ea7781dd288f --- /dev/null +++ b/include/linux/net/intel/iecm_type.h @@ -0,0 +1,47 @@ +/* SPDX-License-Identifier: GPL-2.0 */ +/* Copyright (c) 2020, Intel Corporation. 
*/ + +#ifndef _IECM_TYPE_H_ +#define _IECM_TYPE_H_ + +#include +#include +#include +#include + +#define MAKEMASK(m, s) ((m) << (s)) + +/* Bus parameters */ +struct iecm_bus_info { + u16 func; + u16 device; + u16 bus_id; +}; + +/* Define the APF hardware struct to replace other control structs as needed + * Align to ctlq_hw_info + */ +struct iecm_hw { + u8 __iomem *hw_addr; + u64 hw_addr_len; + void *back; + + /* control queue - send and receive */ + struct iecm_ctlq_info *asq; + struct iecm_ctlq_info *arq; + + /* subsystem structs */ + struct iecm_bus_info bus; + + /* pci info */ + u16 device_id; + u16 vendor_id; + u16 subsystem_device_id; + u16 subsystem_vendor_id; + u8 revision_id; + bool adapter_stopped; + + struct list_head cq_list_head; +}; + +#endif /* _IECM_TYPE_H_ */ From patchwork Fri Jun 26 02:07:27 2020 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Jeff Kirsher X-Patchwork-Id: 217085 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org X-Spam-Level: X-Spam-Status: No, score=-10.0 required=3.0 tests=HEADER_FROM_DIFFERENT_DOMAINS, INCLUDES_PATCH, MAILING_LIST_MULTI, SIGNED_OFF_BY,SPF_HELO_NONE,SPF_PASS,URIBL_BLOCKED,USER_AGENT_GIT autolearn=ham autolearn_force=no version=3.4.0 Received: from mail.kernel.org (mail.kernel.org [198.145.29.99]) by smtp.lore.kernel.org (Postfix) with ESMTP id D2025C433E0 for ; Fri, 26 Jun 2020 02:08:01 +0000 (UTC) Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by mail.kernel.org (Postfix) with ESMTP id A17C02075D for ; Fri, 26 Jun 2020 02:08:01 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S1728023AbgFZCHz (ORCPT ); Thu, 25 Jun 2020 22:07:55 -0400 Received: from mga03.intel.com ([134.134.136.65]:45097 "EHLO mga03.intel.com" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S1728012AbgFZCHw (ORCPT ); Thu, 25 Jun 2020 22:07:52 -0400 IronPort-SDR: WSumZCUM77b2bOQV4lN487Bvni7Fgn201DUzhjt9LFMQ6ur+ng0QoGhm8d/SHSdHOSi0UpyxOg CIMlKBnw7qSA== X-IronPort-AV: E=McAfee;i="6000,8403,9663"; a="145209890" X-IronPort-AV: E=Sophos;i="5.75,281,1589266800"; d="scan'208";a="145209890" X-Amp-Result: SKIPPED(no attachment in message) X-Amp-File-Uploaded: False Received: from orsmga006.jf.intel.com ([10.7.209.51]) by orsmga103.jf.intel.com with ESMTP/TLS/ECDHE-RSA-AES256-GCM-SHA384; 25 Jun 2020 19:07:39 -0700 IronPort-SDR: G/BvESdRUFW07nFhNdWzcoL7BercgAGevFVP8tMADkYPd8q1jtOTwyowKF5FGB2EQl3ygGxc01 bnMUjFVRhRnw== X-ExtLoop1: 1 X-IronPort-AV: E=Sophos;i="5.75,281,1589266800"; d="scan'208";a="280011875" Received: from jtkirshe-desk1.jf.intel.com ([134.134.177.86]) by orsmga006.jf.intel.com with ESMTP; 25 Jun 2020 19:07:39 -0700 From: Jeff Kirsher To: davem@davemloft.net Cc: Alice Michael , netdev@vger.kernel.org, nhorman@redhat.com, sassmann@redhat.com, Alan Brady , Phani Burra , Joshua Hay , Madhu Chittim , Pavan Kumar Linga , Donald Skidmore , Jesse Brandeburg , Sridhar Samudrala , Jeff Kirsher Subject: [net-next v3 05/15] iecm: Add basic netdevice functionality Date: Thu, 25 Jun 2020 19:07:27 -0700 Message-Id: <20200626020737.775377-6-jeffrey.t.kirsher@intel.com> X-Mailer: git-send-email 2.26.2 In-Reply-To: <20200626020737.775377-1-jeffrey.t.kirsher@intel.com> References: <20200626020737.775377-1-jeffrey.t.kirsher@intel.com> MIME-Version: 1.0 Sender: netdev-owner@vger.kernel.org Precedence: bulk List-ID: X-Mailing-List: netdev@vger.kernel.org From: Alice Michael This implements 
probe, interface up/down, and netdev_ops. Signed-off-by: Alice Michael Signed-off-by: Alan Brady Signed-off-by: Phani Burra Signed-off-by: Joshua Hay Signed-off-by: Madhu Chittim Signed-off-by: Pavan Kumar Linga Reviewed-by: Donald Skidmore Reviewed-by: Jesse Brandeburg Reviewed-by: Sridhar Samudrala Signed-off-by: Jeff Kirsher --- drivers/net/ethernet/intel/iecm/iecm_lib.c | 404 +++++++++++++++++- drivers/net/ethernet/intel/iecm/iecm_main.c | 7 +- drivers/net/ethernet/intel/iecm/iecm_txrx.c | 6 +- .../net/ethernet/intel/iecm/iecm_virtchnl.c | 73 +++- 4 files changed, 467 insertions(+), 23 deletions(-) diff --git a/drivers/net/ethernet/intel/iecm/iecm_lib.c b/drivers/net/ethernet/intel/iecm/iecm_lib.c index be944f655045..9aa944430187 100644 --- a/drivers/net/ethernet/intel/iecm/iecm_lib.c +++ b/drivers/net/ethernet/intel/iecm/iecm_lib.c @@ -24,7 +24,17 @@ static void iecm_mb_intr_rel_irq(struct iecm_adapter *adapter) */ static void iecm_intr_rel(struct iecm_adapter *adapter) { - /* stub */ + if (!adapter->msix_entries) + return; + clear_bit(__IECM_MB_INTR_MODE, adapter->flags); + clear_bit(__IECM_MB_INTR_TRIGGER, adapter->flags); + iecm_mb_intr_rel_irq(adapter); + + pci_free_irq_vectors(adapter->pdev); + kfree(adapter->msix_entries); + adapter->msix_entries = NULL; + kfree(adapter->req_vec_chunks); + adapter->req_vec_chunks = NULL; } /** @@ -96,7 +106,53 @@ static void iecm_intr_distribute(struct iecm_adapter *adapter) */ static int iecm_intr_req(struct iecm_adapter *adapter) { - /* stub */ + int min_vectors, max_vectors, err = 0; + unsigned int vector; + int num_vecs; + int v_actual; + + num_vecs = adapter->vports[0]->num_q_vectors + + IECM_MAX_NONQ_VEC + IECM_MAX_RDMA_VEC; + + min_vectors = IECM_MIN_VEC; +#define IECM_MAX_EVV_MAPPED_VEC 16 + max_vectors = min(num_vecs, IECM_MAX_EVV_MAPPED_VEC); + + v_actual = pci_alloc_irq_vectors(adapter->pdev, min_vectors, + max_vectors, PCI_IRQ_MSIX); + if (v_actual < 0) { + dev_err(&adapter->pdev->dev, "Failed to allocate MSIX vectors: %d\n", + v_actual); + return v_actual; + } + + adapter->msix_entries = kcalloc(v_actual, sizeof(struct msix_entry), + GFP_KERNEL); + + if (!adapter->msix_entries) { + pci_free_irq_vectors(adapter->pdev); + return -ENOMEM; + } + + for (vector = 0; vector < v_actual; vector++) { + adapter->msix_entries[vector].entry = vector; + adapter->msix_entries[vector].vector = + pci_irq_vector(adapter->pdev, vector); + } + adapter->num_msix_entries = v_actual; + adapter->num_req_msix = num_vecs; + + iecm_intr_distribute(adapter); + + err = iecm_mb_intr_init(adapter); + if (err) + goto intr_rel; + iecm_mb_irq_enable(adapter); + return err; + +intr_rel: + iecm_intr_rel(adapter); + return err; } /** @@ -118,7 +174,21 @@ static int iecm_cfg_netdev(struct iecm_vport *vport) */ static int iecm_cfg_hw(struct iecm_adapter *adapter) { - /* stub */ + struct pci_dev *pdev = adapter->pdev; + struct iecm_hw *hw = &adapter->hw; + + hw->hw_addr_len = pci_resource_len(pdev, 0); + hw->hw_addr = ioremap(pci_resource_start(pdev, 0), hw->hw_addr_len); + + if (!hw->hw_addr) + return -EIO; + + hw->back = adapter; + hw->bus.device = PCI_SLOT(pdev->devfn); + hw->bus.func = PCI_FUNC(pdev->devfn); + hw->bus.bus_id = pdev->bus->number; + + return 0; } /** @@ -132,7 +202,22 @@ static int iecm_cfg_hw(struct iecm_adapter *adapter) */ static int iecm_get_free_slot(void *array, int size, int curr) { - /* stub */ + int **tmp_array = (int **)array; + int next; + + if (curr < (size - 1) && !tmp_array[curr + 1]) { + next = curr + 1; + } else { + int i = 0; + + while 
((i < size) && (tmp_array[i])) + i++; + if (i == size) + next = IECM_NO_FREE_SLOT; + else + next = i; + } + return next; } /** @@ -141,7 +226,9 @@ static int iecm_get_free_slot(void *array, int size, int curr) */ struct iecm_vport *iecm_netdev_to_vport(struct net_device *netdev) { - /* stub */ + struct iecm_netdev_priv *np = netdev_priv(netdev); + + return np->vport; } /** @@ -150,7 +237,9 @@ struct iecm_vport *iecm_netdev_to_vport(struct net_device *netdev) */ struct iecm_adapter *iecm_netdev_to_adapter(struct net_device *netdev) { - /* stub */ + struct iecm_netdev_priv *np = netdev_priv(netdev); + + return np->vport->adapter; } /** @@ -185,7 +274,22 @@ static int iecm_stop(struct net_device *netdev) */ static int iecm_vport_rel(struct iecm_vport *vport) { - /* stub */ + struct iecm_adapter *adapter; + + if (!vport->adapter) + return -ENODEV; + adapter = vport->adapter; + + iecm_vport_stop(vport); + iecm_deinit_rss(vport); + unregister_netdev(vport->netdev); + free_netdev(vport->netdev); + vport->netdev = NULL; + if (adapter->dev_ops.vc_ops.destroy_vport) + adapter->dev_ops.vc_ops.destroy_vport(vport); + kfree(vport); + + return 0; } /** @@ -194,7 +298,24 @@ static int iecm_vport_rel(struct iecm_vport *vport) */ static void iecm_vport_rel_all(struct iecm_adapter *adapter) { - /* stub */ + int err, i; + + if (!adapter->vports) + return; + + for (i = 0; i < adapter->num_alloc_vport; i++) { + if (!adapter->vports[i]) + continue; + + err = iecm_vport_rel(adapter->vports[i]); + if (err) + dev_dbg(&adapter->pdev->dev, + "Failed to release adapter->vport[%d], err %d,\n", + i, err); + else + adapter->vports[i] = NULL; + } + adapter->num_alloc_vport = 0; } /** @@ -218,7 +339,47 @@ void iecm_vport_set_hsplit(struct iecm_vport *vport, static struct iecm_vport * iecm_vport_alloc(struct iecm_adapter *adapter, int vport_id) { - /* stub */ + struct iecm_vport *vport = NULL; + + if (adapter->next_vport == IECM_NO_FREE_SLOT) + return vport; + + /* Need to protect the allocation of the vports at the adapter level */ + mutex_lock(&adapter->sw_mutex); + + vport = kzalloc(sizeof(*vport), GFP_KERNEL); + if (!vport) + goto unlock_adapter; + + vport->adapter = adapter; + vport->idx = adapter->next_vport; + vport->compln_clean_budget = IECM_TX_COMPLQ_CLEAN_BUDGET; + adapter->num_alloc_vport++; + adapter->dev_ops.vc_ops.vport_init(vport, vport_id); + + /* Setup default MSIX irq handler for the vport */ + vport->irq_q_handler = iecm_vport_intr_clean_queues; + vport->q_vector_base = IECM_MAX_NONQ_VEC; + + /* fill vport slot in the adapter struct */ + adapter->vports[adapter->next_vport] = vport; + if (iecm_cfg_netdev(vport)) + goto cfg_netdev_fail; + + /* prepare adapter->next_vport for next use */ + adapter->next_vport = iecm_get_free_slot(adapter->vports, + adapter->num_alloc_vport, + adapter->next_vport); + + goto unlock_adapter; + +cfg_netdev_fail: + adapter->vports[adapter->next_vport] = NULL; + kfree(vport); + vport = NULL; +unlock_adapter: + mutex_unlock(&adapter->sw_mutex); + return vport; } /** @@ -228,7 +389,22 @@ iecm_vport_alloc(struct iecm_adapter *adapter, int vport_id) */ static void iecm_service_task(struct work_struct *work) { - /* stub */ + struct iecm_adapter *adapter = container_of(work, + struct iecm_adapter, + serv_task.work); + + if (test_bit(__IECM_MB_INTR_MODE, adapter->flags)) { + if (test_and_clear_bit(__IECM_MB_INTR_TRIGGER, + adapter->flags)) { + iecm_recv_mb_msg(adapter, VIRTCHNL_OP_UNKNOWN, NULL, 0); + iecm_mb_irq_enable(adapter); + } + } else { + iecm_recv_mb_msg(adapter, 
VIRTCHNL_OP_UNKNOWN, NULL, 0); + } + + queue_delayed_work(adapter->serv_wq, &adapter->serv_task, + msecs_to_jiffies(300)); } /** @@ -262,7 +438,41 @@ static int iecm_vport_open(struct iecm_vport *vport) */ static void iecm_init_task(struct work_struct *work) { - /* stub */ + struct iecm_adapter *adapter = container_of(work, + struct iecm_adapter, + init_task.work); + struct iecm_vport *vport; + struct pci_dev *pdev; + int vport_id, err; + + err = adapter->dev_ops.vc_ops.core_init(adapter, &vport_id); + if (err) + return; + + pdev = adapter->pdev; + vport = iecm_vport_alloc(adapter, vport_id); + if (!vport) { + err = -EFAULT; + dev_err(&pdev->dev, "probe failed on vport setup:%d\n", + err); + return; + } + /* Start the service task before requesting vectors. This will ensure + * vector information response from mailbox is handled + */ + queue_delayed_work(adapter->serv_wq, &adapter->serv_task, + msecs_to_jiffies(5 * (pdev->devfn & 0x07))); + err = iecm_intr_req(adapter); + if (err) { + dev_err(&pdev->dev, "failed to enable interrupt vectors: %d\n", + err); + iecm_vport_rel(vport); + return; + } + /* Once state is put into DOWN, driver is ready for dev_open */ + adapter->state = __IECM_DOWN; + if (test_and_clear_bit(__IECM_UP_REQUESTED, adapter->flags)) + iecm_vport_open(vport); } /** @@ -273,7 +483,40 @@ static void iecm_init_task(struct work_struct *work) */ static int iecm_api_init(struct iecm_adapter *adapter) { - /* stub */ + struct iecm_reg_ops *reg_ops = &adapter->dev_ops.reg_ops; + struct pci_dev *pdev = adapter->pdev; + + if (!adapter->dev_ops.reg_ops_init) { + dev_err(&pdev->dev, "Invalid device, register API init not defined.\n"); + return -EINVAL; + } + adapter->dev_ops.reg_ops_init(adapter); + if (!(reg_ops->ctlq_reg_init && reg_ops->vportq_reg_init && + reg_ops->intr_reg_init && reg_ops->mb_intr_reg_init && + reg_ops->reset_reg_init && reg_ops->trigger_reset)) { + dev_err(&pdev->dev, "Invalid device, missing one or more register functions\n"); + return -EINVAL; + } + + if (adapter->dev_ops.vc_ops_init) { + struct iecm_virtchnl_ops *vc_ops; + + adapter->dev_ops.vc_ops_init(adapter); + vc_ops = &adapter->dev_ops.vc_ops; + if (!(vc_ops->core_init && vc_ops->vport_init && + vc_ops->vport_queue_ids_init && vc_ops->get_caps && + vc_ops->config_queues && vc_ops->enable_queues && + vc_ops->disable_queues && vc_ops->irq_map_unmap && + vc_ops->get_set_rss_lut && vc_ops->get_set_rss_hash && + vc_ops->adjust_qs && vc_ops->get_ptype)) { + dev_err(&pdev->dev, "Invalid device, missing one or more virtchnl functions\n"); + return -EINVAL; + } + } else { + iecm_vc_ops_init(adapter); + } + + return 0; } /** @@ -285,7 +528,11 @@ static int iecm_api_init(struct iecm_adapter *adapter) */ static void iecm_deinit_task(struct iecm_adapter *adapter) { - /* stub */ + iecm_vport_rel_all(adapter); + cancel_delayed_work_sync(&adapter->serv_task); + iecm_deinit_dflt_mbx(adapter); + iecm_vport_params_buf_rel(adapter); + iecm_intr_rel(adapter); } /** @@ -307,7 +554,13 @@ iecm_init_hard_reset(struct iecm_adapter *adapter) */ static void iecm_vc_event_task(struct work_struct *work) { - /* stub */ + struct iecm_adapter *adapter = container_of(work, + struct iecm_adapter, + vc_event_task.work); + + if (test_bit(__IECM_HR_CORE_RESET, adapter->flags) || + test_bit(__IECM_HR_FUNC_RESET, adapter->flags)) + iecm_init_hard_reset(adapter); } /** @@ -336,7 +589,103 @@ int iecm_probe(struct pci_dev *pdev, const struct pci_device_id __always_unused *ent, struct iecm_adapter *adapter) { - /* stub */ + int err; + + 
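+ /* Probe sequence below: bind the PCI device to the adapter, validate the
+  * device-specific register and virtchnl ops, map BAR0, set the DMA mask,
+  * allocate the init/service workqueues and the vport array, configure the
+  * HW struct, then kick off driver load through a hard reset.
+  */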
adapter->pdev = pdev; + err = iecm_api_init(adapter); + if (err) { + dev_err(&pdev->dev, "Device API is incorrectly configured\n"); + return err; + } + + err = pcim_iomap_regions(pdev, BIT(IECM_BAR0), pci_name(pdev)); + if (err) { + dev_err(&pdev->dev, "BAR0 I/O map error %d\n", err); + return err; + } + + /* set up for high or low DMA */ + err = dma_set_mask_and_coherent(&pdev->dev, DMA_BIT_MASK(64)); + if (err) + err = dma_set_mask_and_coherent(&pdev->dev, DMA_BIT_MASK(32)); + if (err) { + dev_err(&pdev->dev, "DMA configuration failed: 0x%x\n", err); + return err; + } + + pci_enable_pcie_error_reporting(pdev); + pci_set_master(pdev); + pci_set_drvdata(pdev, adapter); + + adapter->init_wq = + alloc_workqueue("%s", WQ_MEM_RECLAIM, 0, KBUILD_MODNAME); + if (!adapter->init_wq) { + dev_err(&pdev->dev, "Failed to allocate workqueue\n"); + err = -ENOMEM; + goto err_wq_alloc; + } + + adapter->serv_wq = + alloc_workqueue("%s", WQ_MEM_RECLAIM, 0, KBUILD_MODNAME); + if (!adapter->serv_wq) { + dev_err(&pdev->dev, "Failed to allocate workqueue\n"); + err = -ENOMEM; + goto err_mbx_wq_alloc; + } + /* setup msglvl */ + adapter->msg_enable = netif_msg_init(debug, IECM_DFLT_NETIF_M); + + adapter->vports = kcalloc(IECM_MAX_NUM_VPORTS, + sizeof(*adapter->vports), GFP_KERNEL); + if (!adapter->vports) { + err = -ENOMEM; + goto err_vport_alloc; + } + + err = iecm_vport_params_buf_alloc(adapter); + if (err) { + dev_err(&pdev->dev, "Failed to alloc vport params buffer: %d\n", + err); + goto err_mb_res; + } + + err = iecm_cfg_hw(adapter); + if (err) { + dev_err(&pdev->dev, "Failed to configure HW structure for adapter: %d\n", + err); + goto err_cfg_hw; + } + + mutex_init(&adapter->sw_mutex); + mutex_init(&adapter->vc_msg_lock); + mutex_init(&adapter->reset_lock); + init_waitqueue_head(&adapter->vchnl_wq); + + INIT_DELAYED_WORK(&adapter->serv_task, iecm_service_task); + INIT_DELAYED_WORK(&adapter->init_task, iecm_init_task); + INIT_DELAYED_WORK(&adapter->vc_event_task, iecm_vc_event_task); + + mutex_lock(&adapter->reset_lock); + set_bit(__IECM_HR_DRV_LOAD, adapter->flags); + err = iecm_init_hard_reset(adapter); + if (err) { + dev_err(&pdev->dev, "Failed to reset device: %d\n", err); + goto err_mb_init; + } + + return 0; +err_mb_init: +err_cfg_hw: + iecm_vport_params_buf_rel(adapter); +err_mb_res: + kfree(adapter->vports); +err_vport_alloc: + destroy_workqueue(adapter->serv_wq); +err_mbx_wq_alloc: + destroy_workqueue(adapter->init_wq); +err_wq_alloc: + pci_disable_pcie_error_reporting(pdev); + return err; } EXPORT_SYMBOL(iecm_probe); @@ -346,7 +695,22 @@ EXPORT_SYMBOL(iecm_probe); */ void iecm_remove(struct pci_dev *pdev) { - /* stub */ + struct iecm_adapter *adapter = pci_get_drvdata(pdev); + + if (!adapter) + return; + + iecm_deinit_task(adapter); + cancel_delayed_work_sync(&adapter->vc_event_task); + destroy_workqueue(adapter->serv_wq); + destroy_workqueue(adapter->init_wq); + kfree(adapter->vports); + kfree(adapter->vport_params_recvd); + kfree(adapter->vport_params_reqd); + mutex_destroy(&adapter->sw_mutex); + mutex_destroy(&adapter->vc_msg_lock); + mutex_destroy(&adapter->reset_lock); + pci_disable_pcie_error_reporting(pdev); } EXPORT_SYMBOL(iecm_remove); @@ -356,7 +720,13 @@ EXPORT_SYMBOL(iecm_remove); */ void iecm_shutdown(struct pci_dev *pdev) { - /* stub */ + struct iecm_adapter *adapter; + + adapter = pci_get_drvdata(pdev); + adapter->state = __IECM_REMOVE; + + if (system_state == SYSTEM_POWER_OFF) + pci_set_power_state(pdev, PCI_D3hot); } EXPORT_SYMBOL(iecm_shutdown); diff --git 
a/drivers/net/ethernet/intel/iecm/iecm_main.c b/drivers/net/ethernet/intel/iecm/iecm_main.c index 0644581fc746..3b6eb44643de 100644 --- a/drivers/net/ethernet/intel/iecm/iecm_main.c +++ b/drivers/net/ethernet/intel/iecm/iecm_main.c @@ -30,7 +30,10 @@ MODULE_PARM_DESC(debug, "netif level (0=none,...,16=all)"); */ static int __init iecm_module_init(void) { - /* stub */ + pr_info("%s\n", iecm_driver_string); + pr_info("%s\n", iecm_copyright); + + return 0; } module_init(iecm_module_init); @@ -42,6 +45,6 @@ module_init(iecm_module_init); */ static void __exit iecm_module_exit(void) { - /* stub */ + pr_info("module unloaded\n"); } module_exit(iecm_module_exit); diff --git a/drivers/net/ethernet/intel/iecm/iecm_txrx.c b/drivers/net/ethernet/intel/iecm/iecm_txrx.c index 9ad9c4a6bc84..c403c6ec7838 100644 --- a/drivers/net/ethernet/intel/iecm/iecm_txrx.c +++ b/drivers/net/ethernet/intel/iecm/iecm_txrx.c @@ -989,7 +989,11 @@ static int iecm_rx_splitq_clean(struct iecm_queue *rxq, int budget) irqreturn_t iecm_vport_intr_clean_queues(int __always_unused irq, void *data) { - /* stub */ + struct iecm_q_vector *q_vector = (struct iecm_q_vector *)data; + + napi_schedule(&q_vector->napi); + + return IRQ_HANDLED; } /** diff --git a/drivers/net/ethernet/intel/iecm/iecm_virtchnl.c b/drivers/net/ethernet/intel/iecm/iecm_virtchnl.c index 2db9f53efb87..6fa441e34e21 100644 --- a/drivers/net/ethernet/intel/iecm/iecm_virtchnl.c +++ b/drivers/net/ethernet/intel/iecm/iecm_virtchnl.c @@ -446,7 +446,47 @@ void iecm_deinit_dflt_mbx(struct iecm_adapter *adapter) */ enum iecm_status iecm_init_dflt_mbx(struct iecm_adapter *adapter) { - /* stub */ + struct iecm_ctlq_create_info ctlq_info[] = { + { + .type = IECM_CTLQ_TYPE_MAILBOX_TX, + .id = IECM_DFLT_MBX_ID, + .len = IECM_DFLT_MBX_Q_LEN, + .buf_size = IECM_DFLT_MBX_BUF_SIZE + }, + { + .type = IECM_CTLQ_TYPE_MAILBOX_RX, + .id = IECM_DFLT_MBX_ID, + .len = IECM_DFLT_MBX_Q_LEN, + .buf_size = IECM_DFLT_MBX_BUF_SIZE + } + }; + struct iecm_hw *hw = &adapter->hw; + enum iecm_status ret; + + adapter->dev_ops.reg_ops.ctlq_reg_init(ctlq_info); + +#define NUM_Q 2 + ret = iecm_ctlq_init(hw, NUM_Q, ctlq_info); + if (ret) + goto init_mbx_done; + + hw->asq = iecm_find_ctlq(hw, IECM_CTLQ_TYPE_MAILBOX_TX, + IECM_DFLT_MBX_ID); + hw->arq = iecm_find_ctlq(hw, IECM_CTLQ_TYPE_MAILBOX_RX, + IECM_DFLT_MBX_ID); + + if (!hw->asq || !hw->arq) { + iecm_ctlq_deinit(hw); + ret = IECM_ERR_CTLQ_ERROR; + } + adapter->state = __IECM_STARTUP; + /* Skew the delay for init tasks for each function based on fn number + * to prevent every function from making the same call simultaneously. 
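+ * For example, a function with (devfn & 0x07) == 3 queues its init task
+ * roughly 15 ms later, while a function with (devfn & 0x07) == 0 starts
+ * immediately.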
+ */ + queue_delayed_work(adapter->init_wq, &adapter->init_task, + msecs_to_jiffies(5 * (adapter->pdev->devfn & 0x07))); +init_mbx_done: + return ret; } /** @@ -468,7 +508,15 @@ int iecm_vport_params_buf_alloc(struct iecm_adapter *adapter) */ void iecm_vport_params_buf_rel(struct iecm_adapter *adapter) { - /* stub */ + int i = 0; + + for (i = 0; i < IECM_MAX_NUM_VPORTS; i++) { + kfree(adapter->vport_params_recvd[i]); + kfree(adapter->vport_params_reqd[i]); + } + + kfree(adapter->caps); + kfree(adapter->config_data.req_qs_chunks); } /** @@ -594,6 +642,25 @@ static bool iecm_is_capability_ena(struct iecm_adapter *adapter, u64 flag) */ void iecm_vc_ops_init(struct iecm_adapter *adapter) { - /* stub */ + adapter->dev_ops.vc_ops.core_init = iecm_vc_core_init; + adapter->dev_ops.vc_ops.vport_init = iecm_vport_init; + adapter->dev_ops.vc_ops.vport_queue_ids_init = + iecm_vport_queue_ids_init; + adapter->dev_ops.vc_ops.get_caps = iecm_send_get_caps_msg; + adapter->dev_ops.vc_ops.is_cap_ena = iecm_is_capability_ena; + adapter->dev_ops.vc_ops.config_queues = iecm_send_config_queues_msg; + adapter->dev_ops.vc_ops.enable_queues = iecm_send_enable_queues_msg; + adapter->dev_ops.vc_ops.disable_queues = iecm_send_disable_queues_msg; + adapter->dev_ops.vc_ops.irq_map_unmap = + iecm_send_map_unmap_queue_vector_msg; + adapter->dev_ops.vc_ops.enable_vport = iecm_send_enable_vport_msg; + adapter->dev_ops.vc_ops.disable_vport = iecm_send_disable_vport_msg; + adapter->dev_ops.vc_ops.destroy_vport = iecm_send_destroy_vport_msg; + adapter->dev_ops.vc_ops.get_ptype = iecm_send_get_rx_ptype_msg; + adapter->dev_ops.vc_ops.get_set_rss_lut = iecm_send_get_set_rss_lut_msg; + adapter->dev_ops.vc_ops.get_set_rss_hash = + iecm_send_get_set_rss_hash_msg; + adapter->dev_ops.vc_ops.adjust_qs = iecm_vport_adjust_qs; + adapter->dev_ops.vc_ops.recv_mbx_msg = NULL; } EXPORT_SYMBOL(iecm_vc_ops_init); From patchwork Fri Jun 26 02:07:28 2020 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Jeff Kirsher X-Patchwork-Id: 217084 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org X-Spam-Level: X-Spam-Status: No, score=-10.0 required=3.0 tests=HEADER_FROM_DIFFERENT_DOMAINS, INCLUDES_PATCH, MAILING_LIST_MULTI, SIGNED_OFF_BY,SPF_HELO_NONE,SPF_PASS,URIBL_BLOCKED,USER_AGENT_GIT autolearn=ham autolearn_force=no version=3.4.0 Received: from mail.kernel.org (mail.kernel.org [198.145.29.99]) by smtp.lore.kernel.org (Postfix) with ESMTP id 55728C433E0 for ; Fri, 26 Jun 2020 02:08:09 +0000 (UTC) Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by mail.kernel.org (Postfix) with ESMTP id 1FD9F2075D for ; Fri, 26 Jun 2020 02:08:09 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S1728039AbgFZCII (ORCPT ); Thu, 25 Jun 2020 22:08:08 -0400 Received: from mga03.intel.com ([134.134.136.65]:45097 "EHLO mga03.intel.com" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S1728004AbgFZCID (ORCPT ); Thu, 25 Jun 2020 22:08:03 -0400 IronPort-SDR: WbPHtVnImhQtR9aO7YHBpdCRiyLo/oBrhOAQdCm20v4vv4Lzw2gPbSxD9rZxD66U3pJDNtfHAc st0Iqs8drx6Q== X-IronPort-AV: E=McAfee;i="6000,8403,9663"; a="145209891" X-IronPort-AV: E=Sophos;i="5.75,281,1589266800"; d="scan'208";a="145209891" X-Amp-Result: SKIPPED(no attachment in message) X-Amp-File-Uploaded: False Received: from orsmga006.jf.intel.com ([10.7.209.51]) by orsmga103.jf.intel.com with ESMTP/TLS/ECDHE-RSA-AES256-GCM-SHA384; 25 Jun 2020 
19:07:40 -0700 IronPort-SDR: wiR9pOK1Be824wCeiOVY+MQ6Mm/GrjHhv59c9lUzx5JXOm+Z03t6C0EY638zTkYa9XG/09GaNV JZsL1D7jw1hA== X-ExtLoop1: 1 X-IronPort-AV: E=Sophos;i="5.75,281,1589266800"; d="scan'208";a="280011879" Received: from jtkirshe-desk1.jf.intel.com ([134.134.177.86]) by orsmga006.jf.intel.com with ESMTP; 25 Jun 2020 19:07:39 -0700 From: Jeff Kirsher To: davem@davemloft.net Cc: Alice Michael , netdev@vger.kernel.org, nhorman@redhat.com, sassmann@redhat.com, Alan Brady , Phani Burra , Joshua Hay , Madhu Chittim , Pavan Kumar Linga , Donald Skidmore , Jesse Brandeburg , Sridhar Samudrala , Jeff Kirsher Subject: [net-next v3 06/15] iecm: Implement mailbox functionality Date: Thu, 25 Jun 2020 19:07:28 -0700 Message-Id: <20200626020737.775377-7-jeffrey.t.kirsher@intel.com> X-Mailer: git-send-email 2.26.2 In-Reply-To: <20200626020737.775377-1-jeffrey.t.kirsher@intel.com> References: <20200626020737.775377-1-jeffrey.t.kirsher@intel.com> MIME-Version: 1.0 Sender: netdev-owner@vger.kernel.org Precedence: bulk List-ID: X-Mailing-List: netdev@vger.kernel.org From: Alice Michael Implement mailbox setup, take down, and commands. Signed-off-by: Alice Michael Signed-off-by: Alan Brady Signed-off-by: Phani Burra Signed-off-by: Joshua Hay Signed-off-by: Madhu Chittim Signed-off-by: Pavan Kumar Linga Reviewed-by: Donald Skidmore Reviewed-by: Jesse Brandeburg Reviewed-by: Sridhar Samudrala Signed-off-by: Jeff Kirsher --- .../net/ethernet/intel/iecm/iecm_controlq.c | 497 +++++++++++++++++- .../ethernet/intel/iecm/iecm_controlq_setup.c | 105 +++- drivers/net/ethernet/intel/iecm/iecm_lib.c | 71 ++- drivers/net/ethernet/intel/iecm/iecm_osdep.c | 17 +- .../net/ethernet/intel/iecm/iecm_virtchnl.c | 427 ++++++++++++++- 5 files changed, 1088 insertions(+), 29 deletions(-) diff --git a/drivers/net/ethernet/intel/iecm/iecm_controlq.c b/drivers/net/ethernet/intel/iecm/iecm_controlq.c index 85ae216e8350..1ce5f4d4c348 100644 --- a/drivers/net/ethernet/intel/iecm/iecm_controlq.c +++ b/drivers/net/ethernet/intel/iecm/iecm_controlq.c @@ -14,7 +14,15 @@ static void iecm_ctlq_setup_regs(struct iecm_ctlq_info *cq, struct iecm_ctlq_create_info *q_create_info) { - /* stub */ + /* set head and tail registers in our local struct */ + cq->reg.head = q_create_info->reg.head; + cq->reg.tail = q_create_info->reg.tail; + cq->reg.len = q_create_info->reg.len; + cq->reg.bah = q_create_info->reg.bah; + cq->reg.bal = q_create_info->reg.bal; + cq->reg.len_mask = q_create_info->reg.len_mask; + cq->reg.len_ena_mask = q_create_info->reg.len_ena_mask; + cq->reg.head_mask = q_create_info->reg.head_mask; } /** @@ -30,7 +38,32 @@ static enum iecm_status iecm_ctlq_init_regs(struct iecm_hw *hw, struct iecm_ctlq_info *cq, bool is_rxq) { - /* stub */ + u32 reg = 0; + + if (is_rxq) + /* Update tail to post pre-allocated buffers for Rx queues */ + wr32(hw, cq->reg.tail, (u32)(cq->ring_size - 1)); + else + wr32(hw, cq->reg.tail, 0); + + /* For non-Mailbox control queues only TAIL need to be set */ + if (cq->q_id != -1) + return 0; + + /* Clear Head for both send or receive */ + wr32(hw, cq->reg.head, 0); + + /* set starting point */ + wr32(hw, cq->reg.bal, IECM_LO_DWORD(cq->desc_ring.pa)); + wr32(hw, cq->reg.bah, IECM_HI_DWORD(cq->desc_ring.pa)); + wr32(hw, cq->reg.len, (cq->ring_size | cq->reg.len_ena_mask)); + + /* Check one register to verify that config was applied */ + reg = rd32(hw, cq->reg.bah); + if (reg != IECM_HI_DWORD(cq->desc_ring.pa)) + return IECM_ERR_CTLQ_ERROR; + + return 0; } /** @@ -42,7 +75,30 @@ static enum iecm_status 
iecm_ctlq_init_regs(struct iecm_hw *hw, */ static void iecm_ctlq_init_rxq_bufs(struct iecm_ctlq_info *cq) { - /* stub */ + int i = 0; + + for (i = 0; i < cq->ring_size; i++) { + struct iecm_ctlq_desc *desc = IECM_CTLQ_DESC(cq, i); + struct iecm_dma_mem *bi = cq->bi.rx_buff[i]; + + /* No buffer to post to descriptor, continue */ + if (!bi) + continue; + + desc->flags = + cpu_to_le16(IECM_CTLQ_FLAG_BUF | IECM_CTLQ_FLAG_RD); + desc->opcode = 0; + desc->datalen = (__le16)cpu_to_le16(bi->size); + desc->ret_val = 0; + desc->cookie_high = 0; + desc->cookie_low = 0; + desc->params.indirect.addr_high = + cpu_to_le32(IECM_HI_DWORD(bi->pa)); + desc->params.indirect.addr_low = + cpu_to_le32(IECM_LO_DWORD(bi->pa)); + desc->params.indirect.param0 = 0; + desc->params.indirect.param1 = 0; + } } /** @@ -54,7 +110,20 @@ static void iecm_ctlq_init_rxq_bufs(struct iecm_ctlq_info *cq) */ static void iecm_ctlq_shutdown(struct iecm_hw *hw, struct iecm_ctlq_info *cq) { - /* stub */ + mutex_lock(&cq->cq_lock); + + if (!cq->ring_size) + goto shutdown_sq_out; + + /* free ring buffers and the ring itself */ + iecm_ctlq_dealloc_ring_res(hw, cq); + + /* Set ring_size to 0 to indicate uninitialized queue */ + cq->ring_size = 0; + +shutdown_sq_out: + mutex_unlock(&cq->cq_lock); + mutex_destroy(&cq->cq_lock); } /** @@ -73,7 +142,74 @@ enum iecm_status iecm_ctlq_add(struct iecm_hw *hw, struct iecm_ctlq_create_info *qinfo, struct iecm_ctlq_info **cq) { - /* stub */ + enum iecm_status status = 0; + bool is_rxq = false; + + if (!qinfo->len || !qinfo->buf_size || + qinfo->len > IECM_CTLQ_MAX_RING_SIZE || + qinfo->buf_size > IECM_CTLQ_MAX_BUF_LEN) + return IECM_ERR_CFG; + + *cq = kcalloc(1, sizeof(struct iecm_ctlq_info), GFP_KERNEL); + if (!(*cq)) + return IECM_ERR_NO_MEMORY; + + (*cq)->cq_type = qinfo->type; + (*cq)->q_id = qinfo->id; + (*cq)->buf_size = qinfo->buf_size; + (*cq)->ring_size = qinfo->len; + + (*cq)->next_to_use = 0; + (*cq)->next_to_clean = 0; + (*cq)->next_to_post = (*cq)->ring_size - 1; + + switch (qinfo->type) { + case IECM_CTLQ_TYPE_MAILBOX_RX: + is_rxq = true; + fallthrough; + case IECM_CTLQ_TYPE_MAILBOX_TX: + status = iecm_ctlq_alloc_ring_res(hw, *cq); + break; + default: + status = IECM_ERR_PARAM; + break; + } + + if (status) + goto init_free_q; + + if (is_rxq) { + iecm_ctlq_init_rxq_bufs(*cq); + } else { + /* Allocate the array of msg pointers for TX queues */ + (*cq)->bi.tx_msg = kcalloc(qinfo->len, + sizeof(struct iecm_ctlq_msg *), + GFP_KERNEL); + if (!(*cq)->bi.tx_msg) { + status = IECM_ERR_NO_MEMORY; + goto init_dealloc_q_mem; + } + } + + iecm_ctlq_setup_regs(*cq, qinfo); + + status = iecm_ctlq_init_regs(hw, *cq, is_rxq); + if (status) + goto init_dealloc_q_mem; + + mutex_init(&(*cq)->cq_lock); + + list_add(&(*cq)->cq_list, &hw->cq_list_head); + + return status; + +init_dealloc_q_mem: + /* free ring buffers and the ring itself */ + iecm_ctlq_dealloc_ring_res(hw, *cq); +init_free_q: + kfree(*cq); + + return status; } /** @@ -84,7 +220,9 @@ enum iecm_status iecm_ctlq_add(struct iecm_hw *hw, void iecm_ctlq_remove(struct iecm_hw *hw, struct iecm_ctlq_info *cq) { - /* stub */ + list_del(&cq->cq_list); + iecm_ctlq_shutdown(hw, cq); + kfree(cq); } /** @@ -101,7 +239,27 @@ void iecm_ctlq_remove(struct iecm_hw *hw, enum iecm_status iecm_ctlq_init(struct iecm_hw *hw, u8 num_q, struct iecm_ctlq_create_info *q_info) { - /* stub */ + struct iecm_ctlq_info *cq = NULL, *tmp = NULL; + enum iecm_status ret_code = 0; + int i = 0; + + INIT_LIST_HEAD(&hw->cq_list_head); + + for (i = 0; i < num_q; i++) { + struct 
iecm_ctlq_create_info *qinfo = q_info + i; + + ret_code = iecm_ctlq_add(hw, qinfo, &cq); + if (ret_code) + goto init_destroy_qs; + } + + return ret_code; + +init_destroy_qs: + list_for_each_entry_safe(cq, tmp, &hw->cq_list_head, cq_list) + iecm_ctlq_remove(hw, cq); + + return ret_code; } /** @@ -110,7 +268,13 @@ enum iecm_status iecm_ctlq_init(struct iecm_hw *hw, u8 num_q, */ enum iecm_status iecm_ctlq_deinit(struct iecm_hw *hw) { - /* stub */ + struct iecm_ctlq_info *cq = NULL, *tmp = NULL; + enum iecm_status ret_code = 0; + + list_for_each_entry_safe(cq, tmp, &hw->cq_list_head, cq_list) + iecm_ctlq_remove(hw, cq); + + return ret_code; } /** @@ -130,7 +294,79 @@ enum iecm_status iecm_ctlq_send(struct iecm_hw *hw, u16 num_q_msg, struct iecm_ctlq_msg q_msg[]) { - /* stub */ + enum iecm_status status = 0; + struct iecm_ctlq_desc *desc; + int num_desc_avail = 0; + int i = 0; + + if (!cq || !cq->ring_size) + return IECM_ERR_CTLQ_EMPTY; + + mutex_lock(&cq->cq_lock); + + /* Ensure there are enough descriptors to send all messages */ + num_desc_avail = IECM_CTLQ_DESC_UNUSED(cq); + if (num_desc_avail == 0 || num_desc_avail < num_q_msg) { + status = IECM_ERR_CTLQ_FULL; + goto sq_send_command_out; + } + + for (i = 0; i < num_q_msg; i++) { + struct iecm_ctlq_msg *msg = &q_msg[i]; + u64 msg_cookie; + + desc = IECM_CTLQ_DESC(cq, cq->next_to_use); + + desc->opcode = cpu_to_le16(msg->opcode); + desc->pfid_vfid = cpu_to_le16(msg->func_id); + + msg_cookie = *(u64 *)&msg->cookie; + desc->cookie_high = + cpu_to_le32(IECM_HI_DWORD(msg_cookie)); + desc->cookie_low = + cpu_to_le32(IECM_LO_DWORD(msg_cookie)); + + if (msg->data_len) { + struct iecm_dma_mem *buff = msg->ctx.indirect.payload; + + desc->datalen = cpu_to_le16(msg->data_len); + desc->flags |= cpu_to_le16(IECM_CTLQ_FLAG_BUF); + desc->flags |= cpu_to_le16(IECM_CTLQ_FLAG_RD); + + /* Update the address values in the desc with the pa + * value for respective buffer + */ + desc->params.indirect.addr_high = + cpu_to_le32(IECM_HI_DWORD(buff->pa)); + desc->params.indirect.addr_low = + cpu_to_le32(IECM_LO_DWORD(buff->pa)); + + memcpy(&desc->params, msg->ctx.indirect.context, + IECM_INDIRECT_CTX_SIZE); + } else { + memcpy(&desc->params, msg->ctx.direct, + IECM_DIRECT_CTX_SIZE); + } + + /* Store buffer info */ + cq->bi.tx_msg[cq->next_to_use] = msg; + + (cq->next_to_use)++; + if (cq->next_to_use == cq->ring_size) + cq->next_to_use = 0; + } + + /* Force memory write to complete before letting hardware + * know that there are new descriptors to fetch. 
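+ * Without the barrier, the tail register write below could become visible
+ * to the device before the descriptor contents it points at.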
+ */ + iecm_wmb(); + + wr32(hw, cq->reg.tail, cq->next_to_use); + +sq_send_command_out: + mutex_unlock(&cq->cq_lock); + + return status; } /** @@ -152,7 +388,58 @@ enum iecm_status iecm_ctlq_clean_sq(struct iecm_ctlq_info *cq, u16 *clean_count, struct iecm_ctlq_msg *msg_status[]) { - /* stub */ + enum iecm_status ret = 0; + struct iecm_ctlq_desc *desc; + u16 i = 0, num_to_clean; + u16 ntc, desc_err; + + if (!cq || !cq->ring_size) + return IECM_ERR_CTLQ_EMPTY; + + if (*clean_count == 0) + return 0; + else if (*clean_count > cq->ring_size) + return IECM_ERR_PARAM; + + mutex_lock(&cq->cq_lock); + + ntc = cq->next_to_clean; + + num_to_clean = *clean_count; + + for (i = 0; i < num_to_clean; i++) { + /* Fetch next descriptor and check if marked as done */ + desc = IECM_CTLQ_DESC(cq, ntc); + if (!(le16_to_cpu(desc->flags) & IECM_CTLQ_FLAG_DD)) + break; + + desc_err = le16_to_cpu(desc->ret_val); + if (desc_err) { + /* strip off FW internal code */ + desc_err &= 0xff; + } + + msg_status[i] = cq->bi.tx_msg[ntc]; + msg_status[i]->status = desc_err; + + cq->bi.tx_msg[ntc] = NULL; + + /* Zero out any stale data */ + memset(desc, 0, sizeof(*desc)); + + ntc++; + if (ntc == cq->ring_size) + ntc = 0; + } + + cq->next_to_clean = ntc; + + mutex_unlock(&cq->cq_lock); + + /* Return number of descriptors actually cleaned */ + *clean_count = i; + + return ret; } /** @@ -175,7 +462,111 @@ enum iecm_status iecm_ctlq_post_rx_buffs(struct iecm_hw *hw, u16 *buff_count, struct iecm_dma_mem **buffs) { - /* stub */ + enum iecm_status status = 0; + struct iecm_ctlq_desc *desc; + u16 ntp = cq->next_to_post; + bool buffs_avail = false; + u16 tbp = ntp + 1; + int i = 0; + + if (*buff_count > cq->ring_size) + return IECM_ERR_PARAM; + + if (*buff_count > 0) + buffs_avail = true; + + mutex_lock(&cq->cq_lock); + + if (tbp >= cq->ring_size) + tbp = 0; + + if (tbp == cq->next_to_clean) + /* Nothing to do */ + goto post_buffs_out; + + /* Post buffers for as many as provided or up until the last one used */ + while (ntp != cq->next_to_clean) { + desc = IECM_CTLQ_DESC(cq, ntp); + + if (!cq->bi.rx_buff[ntp]) { + if (!buffs_avail) { + /* If the caller hasn't given us any buffers or + * there are none left, search the ring itself + * for an available buffer to move to this + * entry starting at the next entry in the ring + */ + tbp = ntp + 1; + + /* Wrap ring if necessary */ + if (tbp >= cq->ring_size) + tbp = 0; + + while (tbp != cq->next_to_clean) { + if (cq->bi.rx_buff[tbp]) { + cq->bi.rx_buff[ntp] = + cq->bi.rx_buff[tbp]; + cq->bi.rx_buff[tbp] = NULL; + + /* Found a buffer, no need to + * search anymore + */ + break; + } + + /* Wrap ring if necessary */ + tbp++; + if (tbp >= cq->ring_size) + tbp = 0; + } + + if (tbp == cq->next_to_clean) + goto post_buffs_out; + } else { + /* Give back pointer to DMA buffer */ + cq->bi.rx_buff[ntp] = buffs[i]; + i++; + + if (i >= *buff_count) + buffs_avail = false; + } + } + + desc->flags = + cpu_to_le16(IECM_CTLQ_FLAG_BUF | IECM_CTLQ_FLAG_RD); + + /* Post buffers to descriptor */ + desc->datalen = cpu_to_le16(cq->bi.rx_buff[ntp]->size); + desc->params.indirect.addr_high = + cpu_to_le32(IECM_HI_DWORD(cq->bi.rx_buff[ntp]->pa)); + desc->params.indirect.addr_low = + cpu_to_le32(IECM_LO_DWORD(cq->bi.rx_buff[ntp]->pa)); + + ntp++; + if (ntp == cq->ring_size) + ntp = 0; + } + +post_buffs_out: + /* Only update tail if buffers were actually posted */ + if (cq->next_to_post != ntp) { + if (ntp) + /* Update next_to_post to ntp - 1 since current ntp + * will not have a buffer + */ + cq->next_to_post = ntp - 
1; + else + /* Wrap to end of end ring since current ntp is 0 */ + cq->next_to_post = cq->ring_size - 1; + + wr32(hw, cq->reg.tail, cq->next_to_post); + } + + mutex_unlock(&cq->cq_lock); + + /* return the number of buffers that were not posted */ + *buff_count = *buff_count - i; + + return status; } /** @@ -192,5 +583,87 @@ enum iecm_status iecm_ctlq_post_rx_buffs(struct iecm_hw *hw, enum iecm_status iecm_ctlq_recv(struct iecm_ctlq_info *cq, u16 *num_q_msg, struct iecm_ctlq_msg *q_msg) { - /* stub */ + u16 num_to_clean, ntc, ret_val, flags; + enum iecm_status ret_code = 0; + struct iecm_ctlq_desc *desc; + u16 i = 0; + + if (!cq || !cq->ring_size) + return IECM_ERR_CTLQ_EMPTY; + + if (*num_q_msg == 0) + return 0; + else if (*num_q_msg > cq->ring_size) + return IECM_ERR_PARAM; + + /* take the lock before we start messing with the ring */ + mutex_lock(&cq->cq_lock); + + ntc = cq->next_to_clean; + + num_to_clean = *num_q_msg; + + for (i = 0; i < num_to_clean; i++) { + u64 msg_cookie; + + /* Fetch next descriptor and check if marked as done */ + desc = IECM_CTLQ_DESC(cq, ntc); + flags = le16_to_cpu(desc->flags); + + if (!(flags & IECM_CTLQ_FLAG_DD)) + break; + + ret_val = le16_to_cpu(desc->ret_val); + + q_msg[i].vmvf_type = (flags & + (IECM_CTLQ_FLAG_FTYPE_VM | + IECM_CTLQ_FLAG_FTYPE_PF)) >> + IECM_CTLQ_FLAG_FTYPE_S; + + if (flags & IECM_CTLQ_FLAG_ERR) + ret_code = IECM_ERR_CTLQ_ERROR; + + msg_cookie = (u64)le32_to_cpu(desc->cookie_high) << 32; + msg_cookie |= (u64)le32_to_cpu(desc->cookie_low); + memcpy(&q_msg[i].cookie, &msg_cookie, sizeof(u64)); + + q_msg[i].opcode = le16_to_cpu(desc->opcode); + q_msg[i].data_len = le16_to_cpu(desc->datalen); + q_msg[i].status = ret_val; + + if (desc->datalen) { + memcpy(q_msg[i].ctx.indirect.context, + &desc->params.indirect, IECM_INDIRECT_CTX_SIZE); + + /* Assign pointer to DMA buffer to ctlq_msg array + * to be given to upper layer + */ + q_msg[i].ctx.indirect.payload = cq->bi.rx_buff[ntc]; + + /* Zero out pointer to DMA buffer info; + * will be repopulated by post buffers API + */ + cq->bi.rx_buff[ntc] = NULL; + } else { + memcpy(q_msg[i].ctx.direct, desc->params.raw, + IECM_DIRECT_CTX_SIZE); + } + + /* Zero out stale data in descriptor */ + memset(desc, 0, sizeof(struct iecm_ctlq_desc)); + + ntc++; + if (ntc == cq->ring_size) + ntc = 0; + }; + + cq->next_to_clean = ntc; + + mutex_unlock(&cq->cq_lock); + + *num_q_msg = i; + if (*num_q_msg == 0) + ret_code = IECM_ERR_CTLQ_NO_WORK; + + return ret_code; } diff --git a/drivers/net/ethernet/intel/iecm/iecm_controlq_setup.c b/drivers/net/ethernet/intel/iecm/iecm_controlq_setup.c index 2fd6e3d15a1a..f230f2044a48 100644 --- a/drivers/net/ethernet/intel/iecm/iecm_controlq_setup.c +++ b/drivers/net/ethernet/intel/iecm/iecm_controlq_setup.c @@ -13,7 +13,13 @@ static enum iecm_status iecm_ctlq_alloc_desc_ring(struct iecm_hw *hw, struct iecm_ctlq_info *cq) { - /* stub */ + size_t size = cq->ring_size * sizeof(struct iecm_ctlq_desc); + + cq->desc_ring.va = iecm_alloc_dma_mem(hw, &cq->desc_ring, size); + if (!cq->desc_ring.va) + return IECM_ERR_NO_MEMORY; + + return 0; } /** @@ -27,7 +33,52 @@ iecm_ctlq_alloc_desc_ring(struct iecm_hw *hw, static enum iecm_status iecm_ctlq_alloc_bufs(struct iecm_hw *hw, struct iecm_ctlq_info *cq) { - /* stub */ + int i = 0; + + /* Do not allocate DMA buffers for transmit queues */ + if (cq->cq_type == IECM_CTLQ_TYPE_MAILBOX_TX) + return 0; + + /* We'll be allocating the buffer info memory first, then we can + * allocate the mapped buffers for the event processing + */ + cq->bi.rx_buff = 
kcalloc(cq->ring_size, sizeof(struct iecm_dma_mem *), + GFP_KERNEL); + if (!cq->bi.rx_buff) + return IECM_ERR_NO_MEMORY; + + /* allocate the mapped buffers (except for the last one) */ + for (i = 0; i < cq->ring_size - 1; i++) { + struct iecm_dma_mem *bi; + int num = 1; /* number of iecm_dma_mem to be allocated */ + + cq->bi.rx_buff[i] = kcalloc(num, sizeof(struct iecm_dma_mem), + GFP_KERNEL); + if (!cq->bi.rx_buff[i]) + goto unwind_alloc_cq_bufs; + + bi = cq->bi.rx_buff[i]; + + bi->va = iecm_alloc_dma_mem(hw, bi, cq->buf_size); + if (!bi->va) { + /* unwind will not free the failed entry */ + kfree(cq->bi.rx_buff[i]); + goto unwind_alloc_cq_bufs; + } + } + + return 0; + +unwind_alloc_cq_bufs: + /* don't try to free the one that failed... */ + i--; + for (; i >= 0; i--) { + iecm_free_dma_mem(hw, cq->bi.rx_buff[i]); + kfree(cq->bi.rx_buff[i]); + } + kfree(cq->bi.rx_buff); + + return IECM_ERR_NO_MEMORY; } /** @@ -41,7 +92,7 @@ static enum iecm_status iecm_ctlq_alloc_bufs(struct iecm_hw *hw, static void iecm_ctlq_free_desc_ring(struct iecm_hw *hw, struct iecm_ctlq_info *cq) { - /* stub */ + iecm_free_dma_mem(hw, &cq->desc_ring); } /** @@ -54,7 +105,26 @@ static void iecm_ctlq_free_desc_ring(struct iecm_hw *hw, */ static void iecm_ctlq_free_bufs(struct iecm_hw *hw, struct iecm_ctlq_info *cq) { - /* stub */ + void *bi; + + if (cq->cq_type == IECM_CTLQ_TYPE_MAILBOX_RX) { + int i; + + /* free DMA buffers for Rx queues*/ + for (i = 0; i < cq->ring_size; i++) { + if (cq->bi.rx_buff[i]) { + iecm_free_dma_mem(hw, cq->bi.rx_buff[i]); + kfree(cq->bi.rx_buff[i]); + } + } + + bi = (void *)cq->bi.rx_buff; + } else { + bi = (void *)cq->bi.tx_msg; + } + + /* free the buffer header */ + kfree(bi); } /** @@ -66,7 +136,9 @@ static void iecm_ctlq_free_bufs(struct iecm_hw *hw, struct iecm_ctlq_info *cq) */ void iecm_ctlq_dealloc_ring_res(struct iecm_hw *hw, struct iecm_ctlq_info *cq) { - /* stub */ + /* free ring buffers and the ring itself */ + iecm_ctlq_free_bufs(hw, cq); + iecm_ctlq_free_desc_ring(hw, cq); } /** @@ -80,5 +152,26 @@ void iecm_ctlq_dealloc_ring_res(struct iecm_hw *hw, struct iecm_ctlq_info *cq) enum iecm_status iecm_ctlq_alloc_ring_res(struct iecm_hw *hw, struct iecm_ctlq_info *cq) { - /* stub */ + enum iecm_status ret_code; + + /* verify input for valid configuration */ + if (!cq->ring_size || !cq->buf_size) + return IECM_ERR_CFG; + + /* allocate the ring memory */ + ret_code = iecm_ctlq_alloc_desc_ring(hw, cq); + if (ret_code) + return ret_code; + + /* allocate buffers in the rings */ + ret_code = iecm_ctlq_alloc_bufs(hw, cq); + if (ret_code) + goto iecm_init_cq_free_ring; + + /* success! 
*/ + return 0; + +iecm_init_cq_free_ring: + iecm_free_dma_mem(hw, &cq->desc_ring); + return ret_code; } diff --git a/drivers/net/ethernet/intel/iecm/iecm_lib.c b/drivers/net/ethernet/intel/iecm/iecm_lib.c index 9aa944430187..afceab24aeda 100644 --- a/drivers/net/ethernet/intel/iecm/iecm_lib.c +++ b/drivers/net/ethernet/intel/iecm/iecm_lib.c @@ -163,7 +163,76 @@ static int iecm_intr_req(struct iecm_adapter *adapter) */ static int iecm_cfg_netdev(struct iecm_vport *vport) { - /* stub */ + struct iecm_adapter *adapter = vport->adapter; + netdev_features_t dflt_features; + netdev_features_t offloads = 0; + struct iecm_netdev_priv *np; + struct net_device *netdev; + int err; + + netdev = alloc_etherdev_mqs(sizeof(struct iecm_netdev_priv), + IECM_MAX_Q, IECM_MAX_Q); + if (!netdev) + return -ENOMEM; + vport->netdev = netdev; + np = netdev_priv(netdev); + np->vport = vport; + + if (!is_valid_ether_addr(vport->default_mac_addr)) { + eth_hw_addr_random(netdev); + ether_addr_copy(vport->default_mac_addr, netdev->dev_addr); + } else { + ether_addr_copy(netdev->dev_addr, vport->default_mac_addr); + ether_addr_copy(netdev->perm_addr, vport->default_mac_addr); + } + + /* assign netdev_ops */ + if (iecm_is_queue_model_split(vport->txq_model)) + netdev->netdev_ops = &iecm_netdev_ops_splitq; + else + netdev->netdev_ops = &iecm_netdev_ops_singleq; + + /* setup watchdog timeout value to be 5 second */ + netdev->watchdog_timeo = 5 * HZ; + + /* configure default MTU size */ + netdev->min_mtu = ETH_MIN_MTU; + netdev->max_mtu = vport->max_mtu; + + dflt_features = NETIF_F_SG | + NETIF_F_HIGHDMA | + NETIF_F_RXHASH; + + if (iecm_is_cap_ena(adapter, VIRTCHNL_CAP_STATELESS_OFFLOADS)) { + dflt_features |= + NETIF_F_RXCSUM | + NETIF_F_IP_CSUM | + NETIF_F_IPV6_CSUM | + 0; + offloads |= NETIF_F_TSO | + NETIF_F_TSO6; + } + if (iecm_is_cap_ena(adapter, VIRTCHNL_CAP_UDP_SEG_OFFLOAD)) + offloads |= NETIF_F_GSO_UDP_L4; + + netdev->features |= dflt_features; + netdev->hw_features |= dflt_features | offloads; + netdev->hw_enc_features |= dflt_features | offloads; + + iecm_set_ethtool_ops(netdev); + SET_NETDEV_DEV(netdev, &adapter->pdev->dev); + + err = register_netdev(netdev); + if (err) + return err; + + /* carrier off on init to avoid Tx hangs */ + netif_carrier_off(netdev); + + /* make sure transmit queues start off as stopped */ + netif_tx_stop_all_queues(netdev); + + return 0; } /** diff --git a/drivers/net/ethernet/intel/iecm/iecm_osdep.c b/drivers/net/ethernet/intel/iecm/iecm_osdep.c index d0534df357d0..be5dbe1d23b2 100644 --- a/drivers/net/ethernet/intel/iecm/iecm_osdep.c +++ b/drivers/net/ethernet/intel/iecm/iecm_osdep.c @@ -6,10 +6,23 @@ void *iecm_alloc_dma_mem(struct iecm_hw *hw, struct iecm_dma_mem *mem, u64 size) { - /* stub */ + size_t sz = ALIGN(size, 4096); + struct iecm_adapter *pf = hw->back; + + mem->va = dma_alloc_coherent(&pf->pdev->dev, sz, + &mem->pa, GFP_KERNEL | __GFP_ZERO); + mem->size = size; + + return mem->va; } void iecm_free_dma_mem(struct iecm_hw *hw, struct iecm_dma_mem *mem) { - /* stub */ + struct iecm_adapter *pf = hw->back; + + dma_free_coherent(&pf->pdev->dev, mem->size, + mem->va, mem->pa); + mem->size = 0; + mem->va = NULL; + mem->pa = 0; } diff --git a/drivers/net/ethernet/intel/iecm/iecm_virtchnl.c b/drivers/net/ethernet/intel/iecm/iecm_virtchnl.c index 6fa441e34e21..c5061778ce74 100644 --- a/drivers/net/ethernet/intel/iecm/iecm_virtchnl.c +++ b/drivers/net/ethernet/intel/iecm/iecm_virtchnl.c @@ -39,7 +39,42 @@ struct iecm_rx_ptype_decoded iecm_rx_ptype_lkup[IECM_RX_SUPP_PTYPE] = { */ 
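/* iecm_recv_event_msg below handles asynchronous PF events delivered over the
 * mailbox: on a link change it updates the cached link state and toggles the
 * netdev carrier and Tx queues; on an impending reset it flags a core reset
 * and queues the vc_event_task to run the hard-reset path.
 */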
static void iecm_recv_event_msg(struct iecm_vport *vport) { - /* stub */ + struct iecm_adapter *adapter = vport->adapter; + enum virtchnl_link_speed link_speed; + struct virtchnl_pf_event *vpe; + + vpe = (struct virtchnl_pf_event *)vport->adapter->vc_msg; + + switch (vpe->event) { + case VIRTCHNL_EVENT_LINK_CHANGE: + link_speed = vpe->event_data.link_event.link_speed; + adapter->link_speed = link_speed; + if (adapter->link_up != + vpe->event_data.link_event.link_status) { + adapter->link_up = + vpe->event_data.link_event.link_status; + if (adapter->link_up) { + netif_tx_start_all_queues(vport->netdev); + netif_carrier_on(vport->netdev); + } else { + netif_tx_stop_all_queues(vport->netdev); + netif_carrier_off(vport->netdev); + } + } + break; + case VIRTCHNL_EVENT_RESET_IMPENDING: + mutex_lock(&adapter->reset_lock); + set_bit(__IECM_HR_CORE_RESET, adapter->flags); + queue_delayed_work(adapter->vc_event_wq, + &adapter->vc_event_task, + msecs_to_jiffies(10)); + break; + default: + dev_err(&vport->adapter->pdev->dev, + "Unknown event %d from PF\n", vpe->event); + break; + } + mutex_unlock(&vport->adapter->vc_msg_lock); } /** @@ -53,7 +88,34 @@ static void iecm_recv_event_msg(struct iecm_vport *vport) static enum iecm_status iecm_mb_clean(struct iecm_adapter *adapter) { - /* stub */ + enum iecm_status err = 0; + u16 num_q_msg = IECM_DFLT_MBX_Q_LEN; + struct iecm_ctlq_msg **q_msg; + struct iecm_dma_mem *dma_mem; + u16 i; + + q_msg = kcalloc(num_q_msg, sizeof(struct iecm_ctlq_msg *), GFP_KERNEL); + if (!q_msg) { + err = IECM_ERR_NO_MEMORY; + goto error; + } + + err = iecm_ctlq_clean_sq(adapter->hw.asq, &num_q_msg, + q_msg); + if (err) + goto error; + + for (i = 0; i < num_q_msg; i++) { + dma_mem = q_msg[i]->ctx.indirect.payload; + if (dma_mem) + dmam_free_coherent(&adapter->pdev->dev, dma_mem->size, + dma_mem->va, dma_mem->pa); + kfree(q_msg[i]); + } + kfree(q_msg); + +error: + return err; } /** @@ -71,7 +133,56 @@ enum iecm_status iecm_send_mb_msg(struct iecm_adapter *adapter, enum virtchnl_ops op, u16 msg_size, u8 *msg) { - /* stub */ + enum iecm_status err = 0; + struct iecm_ctlq_msg *ctlq_msg; + struct iecm_dma_mem *dma_mem; + + err = iecm_mb_clean(adapter); + if (err) + goto err; + + ctlq_msg = kzalloc(sizeof(struct iecm_ctlq_msg), GFP_KERNEL); + if (!ctlq_msg) { + err = IECM_ERR_NO_MEMORY; + goto err; + } + + dma_mem = kzalloc(sizeof(struct iecm_dma_mem), GFP_KERNEL); + if (!dma_mem) { + err = IECM_ERR_NO_MEMORY; + goto dma_mem_error; + } + + memset(ctlq_msg, 0, sizeof(struct iecm_ctlq_msg)); + ctlq_msg->opcode = iecm_mbq_opc_send_msg_to_cp; + ctlq_msg->func_id = 0; + ctlq_msg->data_len = msg_size; + ctlq_msg->cookie.mbx.chnl_opcode = op; + ctlq_msg->cookie.mbx.chnl_retval = VIRTCHNL_STATUS_SUCCESS; + dma_mem->size = IECM_DFLT_MBX_BUF_SIZE; + dma_mem->va = dmam_alloc_coherent(&adapter->pdev->dev, dma_mem->size, + &dma_mem->pa, GFP_KERNEL); + if (!dma_mem->va) { + err = IECM_ERR_NO_MEMORY; + goto dma_alloc_error; + } + memcpy(dma_mem->va, msg, msg_size); + ctlq_msg->ctx.indirect.payload = dma_mem; + + err = iecm_ctlq_send(&adapter->hw, adapter->hw.asq, 1, ctlq_msg); + if (err) + goto send_error; + + return err; +send_error: + dmam_free_coherent(&adapter->pdev->dev, dma_mem->size, dma_mem->va, + dma_mem->pa); +dma_alloc_error: + kfree(dma_mem); +dma_mem_error: + kfree(ctlq_msg); +err: + return err; } EXPORT_SYMBOL(iecm_send_mb_msg); @@ -88,7 +199,265 @@ enum iecm_status iecm_recv_mb_msg(struct iecm_adapter *adapter, enum virtchnl_ops op, void *msg, int msg_size) { - /* stub */ + enum 
iecm_status status = 0; + struct iecm_ctlq_msg ctlq_msg; + struct iecm_dma_mem *dma_mem; + struct iecm_vport *vport; + bool work_done = false; + int payload_size = 0; + int num_retry = 10; + u16 num_q_msg; + + vport = adapter->vports[0]; + while (1) { + /* Try to get one message */ + num_q_msg = 1; + dma_mem = NULL; + status = iecm_ctlq_recv(adapter->hw.arq, + &num_q_msg, &ctlq_msg); + /* If no message then decide if we have to retry based on + * opcode + */ + if (status || !num_q_msg) { + if (op && num_retry--) { + msleep(20); + continue; + } else { + break; + } + } + + /* If we are here a message is received. Check if we are looking + * for a specific message based on opcode. If it is different + * ignore and post buffers + */ + if (op && ctlq_msg.cookie.mbx.chnl_opcode != op) + goto post_buffs; + + if (ctlq_msg.data_len) + payload_size = ctlq_msg.ctx.indirect.payload->size; + + /* All conditions are met. Either a message requested is + * received or we received a message to be processed + */ + switch (ctlq_msg.cookie.mbx.chnl_opcode) { + case VIRTCHNL_OP_VERSION: + case VIRTCHNL_OP_GET_CAPS: + case VIRTCHNL_OP_CREATE_VPORT: + if (msg) + memcpy(msg, ctlq_msg.ctx.indirect.payload->va, + min_t(int, + payload_size, msg_size)); + work_done = true; + break; + case VIRTCHNL_OP_ENABLE_VPORT: + if (ctlq_msg.cookie.mbx.chnl_retval) + set_bit(IECM_VC_ENA_VPORT_ERR, + adapter->vc_state); + set_bit(IECM_VC_ENA_VPORT, adapter->vc_state); + wake_up(&adapter->vchnl_wq); + break; + case VIRTCHNL_OP_DISABLE_VPORT: + if (ctlq_msg.cookie.mbx.chnl_retval) + set_bit(IECM_VC_DIS_VPORT_ERR, + adapter->vc_state); + set_bit(IECM_VC_DIS_VPORT, adapter->vc_state); + wake_up(&adapter->vchnl_wq); + break; + case VIRTCHNL_OP_DESTROY_VPORT: + if (ctlq_msg.cookie.mbx.chnl_retval) + set_bit(IECM_VC_DESTROY_VPORT_ERR, + adapter->vc_state); + set_bit(IECM_VC_DESTROY_VPORT, adapter->vc_state); + wake_up(&adapter->vchnl_wq); + break; + case VIRTCHNL_OP_CONFIG_TX_QUEUES: + if (ctlq_msg.cookie.mbx.chnl_retval) + set_bit(IECM_VC_CONFIG_TXQ_ERR, + adapter->vc_state); + set_bit(IECM_VC_CONFIG_TXQ, adapter->vc_state); + wake_up(&adapter->vchnl_wq); + break; + case VIRTCHNL_OP_CONFIG_RX_QUEUES: + if (ctlq_msg.cookie.mbx.chnl_retval) + set_bit(IECM_VC_CONFIG_RXQ_ERR, + adapter->vc_state); + set_bit(IECM_VC_CONFIG_RXQ, adapter->vc_state); + wake_up(&adapter->vchnl_wq); + break; + case VIRTCHNL_OP_ENABLE_QUEUES_V2: + if (ctlq_msg.cookie.mbx.chnl_retval) + set_bit(IECM_VC_ENA_QUEUES_ERR, + adapter->vc_state); + set_bit(IECM_VC_ENA_QUEUES, adapter->vc_state); + wake_up(&adapter->vchnl_wq); + break; + case VIRTCHNL_OP_DISABLE_QUEUES_V2: + if (ctlq_msg.cookie.mbx.chnl_retval) + set_bit(IECM_VC_DIS_QUEUES_ERR, + adapter->vc_state); + set_bit(IECM_VC_DIS_QUEUES, adapter->vc_state); + wake_up(&adapter->vchnl_wq); + break; + case VIRTCHNL_OP_ADD_QUEUES: + if (ctlq_msg.cookie.mbx.chnl_retval) { + set_bit(IECM_VC_ADD_QUEUES_ERR, + adapter->vc_state); + } else { + mutex_lock(&adapter->vc_msg_lock); + memcpy(adapter->vc_msg, + ctlq_msg.ctx.indirect.payload->va, + min((int) + ctlq_msg.ctx.indirect.payload->size, + IECM_DFLT_MBX_BUF_SIZE)); + } + set_bit(IECM_VC_ADD_QUEUES, adapter->vc_state); + wake_up(&adapter->vchnl_wq); + break; + case VIRTCHNL_OP_DEL_QUEUES: + if (ctlq_msg.cookie.mbx.chnl_retval) + set_bit(IECM_VC_DEL_QUEUES_ERR, + adapter->vc_state); + set_bit(IECM_VC_DEL_QUEUES, adapter->vc_state); + wake_up(&adapter->vchnl_wq); + break; + case VIRTCHNL_OP_MAP_QUEUE_VECTOR: + if (ctlq_msg.cookie.mbx.chnl_retval) + 
set_bit(IECM_VC_MAP_IRQ_ERR, + adapter->vc_state); + set_bit(IECM_VC_MAP_IRQ, adapter->vc_state); + wake_up(&adapter->vchnl_wq); + break; + case VIRTCHNL_OP_UNMAP_QUEUE_VECTOR: + if (ctlq_msg.cookie.mbx.chnl_retval) + set_bit(IECM_VC_UNMAP_IRQ_ERR, + adapter->vc_state); + set_bit(IECM_VC_UNMAP_IRQ, adapter->vc_state); + wake_up(&adapter->vchnl_wq); + break; + case VIRTCHNL_OP_GET_STATS: + if (ctlq_msg.cookie.mbx.chnl_retval) { + set_bit(IECM_VC_GET_STATS_ERR, + adapter->vc_state); + } else { + mutex_lock(&adapter->vc_msg_lock); + memcpy(adapter->vc_msg, + ctlq_msg.ctx.indirect.payload->va, + min_t(int, + payload_size, + IECM_DFLT_MBX_BUF_SIZE)); + } + set_bit(IECM_VC_GET_STATS, adapter->vc_state); + wake_up(&adapter->vchnl_wq); + break; + case VIRTCHNL_OP_GET_RSS_HASH: + if (ctlq_msg.cookie.mbx.chnl_retval) { + set_bit(IECM_VC_GET_RSS_HASH_ERR, + adapter->vc_state); + } else { + mutex_lock(&adapter->vc_msg_lock); + memcpy(adapter->vc_msg, + ctlq_msg.ctx.indirect.payload->va, + min_t(int, + payload_size, + IECM_DFLT_MBX_BUF_SIZE)); + } + set_bit(IECM_VC_GET_RSS_HASH, adapter->vc_state); + wake_up(&adapter->vchnl_wq); + break; + case VIRTCHNL_OP_SET_RSS_HASH: + if (ctlq_msg.cookie.mbx.chnl_retval) + set_bit(IECM_VC_SET_RSS_HASH_ERR, + adapter->vc_state); + set_bit(IECM_VC_SET_RSS_HASH, adapter->vc_state); + wake_up(&adapter->vchnl_wq); + break; + case VIRTCHNL_OP_GET_RSS_LUT: + if (ctlq_msg.cookie.mbx.chnl_retval) { + set_bit(IECM_VC_GET_RSS_LUT_ERR, + adapter->vc_state); + } else { + mutex_lock(&adapter->vc_msg_lock); + memcpy(adapter->vc_msg, + ctlq_msg.ctx.indirect.payload->va, + min_t(int, + payload_size, + IECM_DFLT_MBX_BUF_SIZE)); + } + set_bit(IECM_VC_GET_RSS_LUT, adapter->vc_state); + wake_up(&adapter->vchnl_wq); + break; + case VIRTCHNL_OP_SET_RSS_LUT: + if (ctlq_msg.cookie.mbx.chnl_retval) + set_bit(IECM_VC_SET_RSS_LUT_ERR, + adapter->vc_state); + set_bit(IECM_VC_SET_RSS_LUT, adapter->vc_state); + wake_up(&adapter->vchnl_wq); + break; + case VIRTCHNL_OP_GET_RSS_KEY: + if (ctlq_msg.cookie.mbx.chnl_retval) { + set_bit(IECM_VC_GET_RSS_KEY_ERR, + adapter->vc_state); + } else { + mutex_lock(&adapter->vc_msg_lock); + memcpy(adapter->vc_msg, + ctlq_msg.ctx.indirect.payload->va, + min_t(int, + payload_size, + IECM_DFLT_MBX_BUF_SIZE)); + } + set_bit(IECM_VC_GET_RSS_KEY, adapter->vc_state); + wake_up(&adapter->vchnl_wq); + break; + case VIRTCHNL_OP_CONFIG_RSS_KEY: + if (ctlq_msg.cookie.mbx.chnl_retval) + set_bit(IECM_VC_CONFIG_RSS_KEY_ERR, + adapter->vc_state); + set_bit(IECM_VC_CONFIG_RSS_KEY, adapter->vc_state); + wake_up(&adapter->vchnl_wq); + break; + case VIRTCHNL_OP_EVENT: + mutex_lock(&adapter->vc_msg_lock); + memcpy(adapter->vc_msg, + ctlq_msg.ctx.indirect.payload->va, + min_t(int, + payload_size, + IECM_DFLT_MBX_BUF_SIZE)); + iecm_recv_event_msg(vport); + break; + default: + if (adapter->dev_ops.vc_ops.recv_mbx_msg) + status = + adapter->dev_ops.vc_ops.recv_mbx_msg(adapter, + msg, + msg_size, + &ctlq_msg, + &work_done); + else + dev_warn(&adapter->pdev->dev, + "Unhandled virtchnl response %d\n", + ctlq_msg.cookie.mbx.chnl_opcode); + break; + } /* switch v_opcode */ +post_buffs: + if (ctlq_msg.data_len) + dma_mem = ctlq_msg.ctx.indirect.payload; + else + num_q_msg = 0; + + status = iecm_ctlq_post_rx_buffs(&adapter->hw, adapter->hw.arq, + &num_q_msg, + &dma_mem); + /* If post failed clear the only buffer we supplied */ + if (status && dma_mem) + dmam_free_coherent(&adapter->pdev->dev, dma_mem->size, + dma_mem->va, dma_mem->pa); + /* Applies only if we are looking for a specific opcode 
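+ * (op != VIRTCHNL_OP_UNKNOWN); when polling with op == VIRTCHNL_OP_UNKNOWN
+ * the loop instead exits once the mailbox receive queue has been drained.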
*/ + if (work_done) + break; + } + + return status; } EXPORT_SYMBOL(iecm_recv_mb_msg); @@ -186,7 +555,27 @@ iecm_wait_for_event(struct iecm_adapter *adapter, enum iecm_vport_vc_state state, enum iecm_vport_vc_state err_check) { - /* stub */ + enum iecm_status status; + int event; + + event = wait_event_timeout(adapter->vchnl_wq, + test_and_clear_bit(state, adapter->vc_state), + msecs_to_jiffies(500)); + if (event) { + if (test_and_clear_bit(err_check, adapter->vc_state)) { + dev_err(&adapter->pdev->dev, + "VC response error %d", err_check); + status = IECM_ERR_CFG; + } else { + status = 0; + } + } else { + /* Timeout occurred */ + status = IECM_ERR_NOT_READY; + dev_err(&adapter->pdev->dev, "VC timeout, state = %u", state); + } + + return status; } EXPORT_SYMBOL(iecm_wait_for_event); @@ -426,7 +815,14 @@ enum iecm_status iecm_send_get_rx_ptype_msg(struct iecm_vport *vport) static struct iecm_ctlq_info *iecm_find_ctlq(struct iecm_hw *hw, enum iecm_ctlq_type type, int id) { - /* stub */ + struct iecm_ctlq_info *cq, *tmp; + + list_for_each_entry_safe(cq, tmp, &hw->cq_list_head, cq_list) { + if (cq->q_id == id && cq->cq_type == type) + return cq; + } + + return NULL; } /** @@ -435,7 +831,8 @@ static struct iecm_ctlq_info *iecm_find_ctlq(struct iecm_hw *hw, */ void iecm_deinit_dflt_mbx(struct iecm_adapter *adapter) { - /* stub */ + cancel_delayed_work_sync(&adapter->init_task); + iecm_ctlq_deinit(&adapter->hw); } /** @@ -497,7 +894,21 @@ enum iecm_status iecm_init_dflt_mbx(struct iecm_adapter *adapter) */ int iecm_vport_params_buf_alloc(struct iecm_adapter *adapter) { - /* stub */ + adapter->vport_params_reqd = kcalloc(IECM_MAX_NUM_VPORTS, + sizeof(*adapter->vport_params_reqd), + GFP_KERNEL); + if (!adapter->vport_params_reqd) + return -ENOMEM; + + adapter->vport_params_recvd = kcalloc(IECM_MAX_NUM_VPORTS, + sizeof(*adapter->vport_params_recvd), + GFP_KERNEL); + if (!adapter->vport_params_recvd) { + kfree(adapter->vport_params_reqd); + return -ENOMEM; + } + + return 0; } /** From patchwork Fri Jun 26 02:07:31 2020 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Jeff Kirsher X-Patchwork-Id: 217083 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org X-Spam-Level: X-Spam-Status: No, score=-7.2 required=3.0 tests=HEADER_FROM_DIFFERENT_DOMAINS, INCLUDES_PATCH, MAILING_LIST_MULTI, SIGNED_OFF_BY, SPF_HELO_NONE, SPF_PASS, UNWANTED_LANGUAGE_BODY,URIBL_BLOCKED,USER_AGENT_GIT autolearn=ham autolearn_force=no version=3.4.0 Received: from mail.kernel.org (mail.kernel.org [198.145.29.99]) by smtp.lore.kernel.org (Postfix) with ESMTP id 7C470C433E1 for ; Fri, 26 Jun 2020 02:08:13 +0000 (UTC) Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by mail.kernel.org (Postfix) with ESMTP id 536CF207D8 for ; Fri, 26 Jun 2020 02:08:13 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S1728048AbgFZCIM (ORCPT ); Thu, 25 Jun 2020 22:08:12 -0400 Received: from mga03.intel.com ([134.134.136.65]:45084 "EHLO mga03.intel.com" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S1728012AbgFZCIJ (ORCPT ); Thu, 25 Jun 2020 22:08:09 -0400 IronPort-SDR: lyQ3I/9dSU0FS8Mto3L/SJMtCQFx0W0Q6NyEKkN26fAXC83eocoQQwpSVm93sAcdeE7G/uDh78 PNp/X4Hg4u6w== X-IronPort-AV: E=McAfee;i="6000,8403,9663"; a="145209894" X-IronPort-AV: E=Sophos;i="5.75,281,1589266800"; d="scan'208";a="145209894" X-Amp-Result: SKIPPED(no attachment in message) X-Amp-File-Uploaded: False 
Received: from orsmga006.jf.intel.com ([10.7.209.51]) by orsmga103.jf.intel.com with ESMTP/TLS/ECDHE-RSA-AES256-GCM-SHA384; 25 Jun 2020 19:07:40 -0700 IronPort-SDR: usl0Qsw51DdH0xaiSgN3sptXbgfo8DlIaPUpJULKkQQqL4YOcd5F0xJpqwLnukHch6nCA5C/Uu LvlCxge2ErFw== X-ExtLoop1: 1 X-IronPort-AV: E=Sophos;i="5.75,281,1589266800"; d="scan'208";a="280011891" Received: from jtkirshe-desk1.jf.intel.com ([134.134.177.86]) by orsmga006.jf.intel.com with ESMTP; 25 Jun 2020 19:07:39 -0700 From: Jeff Kirsher To: davem@davemloft.net Cc: Alice Michael , netdev@vger.kernel.org, nhorman@redhat.com, sassmann@redhat.com, Alan Brady , Phani Burra , Joshua Hay , Madhu Chittim , Pavan Kumar Linga , Donald Skidmore , Jesse Brandeburg , Sridhar Samudrala , Jeff Kirsher Subject: [net-next v3 09/15] iecm: Init and allocate vport Date: Thu, 25 Jun 2020 19:07:31 -0700 Message-Id: <20200626020737.775377-10-jeffrey.t.kirsher@intel.com> X-Mailer: git-send-email 2.26.2 In-Reply-To: <20200626020737.775377-1-jeffrey.t.kirsher@intel.com> References: <20200626020737.775377-1-jeffrey.t.kirsher@intel.com> MIME-Version: 1.0 Sender: netdev-owner@vger.kernel.org Precedence: bulk List-ID: X-Mailing-List: netdev@vger.kernel.org From: Alice Michael Initialize vport and allocate queue resources. Signed-off-by: Alice Michael Signed-off-by: Alan Brady Signed-off-by: Phani Burra Signed-off-by: Joshua Hay Signed-off-by: Madhu Chittim Signed-off-by: Pavan Kumar Linga Reviewed-by: Donald Skidmore Reviewed-by: Jesse Brandeburg Reviewed-by: Sridhar Samudrala Signed-off-by: Jeff Kirsher --- drivers/net/ethernet/intel/iecm/iecm_lib.c | 87 +- drivers/net/ethernet/intel/iecm/iecm_txrx.c | 797 +++++++++++++++++- .../net/ethernet/intel/iecm/iecm_virtchnl.c | 37 +- 3 files changed, 890 insertions(+), 31 deletions(-) diff --git a/drivers/net/ethernet/intel/iecm/iecm_lib.c b/drivers/net/ethernet/intel/iecm/iecm_lib.c index efbdff575149..a51731feedb4 100644 --- a/drivers/net/ethernet/intel/iecm/iecm_lib.c +++ b/drivers/net/ethernet/intel/iecm/iecm_lib.c @@ -444,7 +444,15 @@ static void iecm_vport_rel_all(struct iecm_adapter *adapter) void iecm_vport_set_hsplit(struct iecm_vport *vport, struct bpf_prog __always_unused *prog) { - /* stub */ + if (prog) { + vport->rx_hsplit_en = IECM_RX_NO_HDR_SPLIT; + return; + } + if (iecm_is_cap_ena(vport->adapter, VIRTCHNL_CAP_HEADER_SPLIT) && + iecm_is_queue_model_split(vport->rxq_model)) + vport->rx_hsplit_en = IECM_RX_HDR_SPLIT; + else + vport->rx_hsplit_en = IECM_RX_NO_HDR_SPLIT; } /** @@ -532,7 +540,12 @@ static void iecm_service_task(struct work_struct *work) */ static void iecm_up_complete(struct iecm_vport *vport) { - /* stub */ + netif_set_real_num_rx_queues(vport->netdev, vport->num_txq); + netif_set_real_num_tx_queues(vport->netdev, vport->num_rxq); + netif_carrier_on(vport->netdev); + netif_tx_start_all_queues(vport->netdev); + + vport->adapter->state = __IECM_UP; } /** @@ -541,7 +554,71 @@ static void iecm_up_complete(struct iecm_vport *vport) */ static int iecm_vport_open(struct iecm_vport *vport) { - /* stub */ + struct iecm_adapter *adapter = vport->adapter; + int err; + + if (vport->adapter->state != __IECM_DOWN) + return -EBUSY; + + /* we do not allow interface up just yet */ + netif_carrier_off(vport->netdev); + + if (adapter->dev_ops.vc_ops.enable_vport) { + err = adapter->dev_ops.vc_ops.enable_vport(vport); + if (err) + return -EAGAIN; + } + + if (iecm_vport_queues_alloc(vport)) { + err = -ENOMEM; + goto unroll_queues_alloc; + } + + err = iecm_vport_intr_init(vport); + if (err) + goto unroll_intr_init; + + 
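+ /* With interrupts set up, finish bring-up over virtchnl: configure the
+  * queues, map them to vectors, enable them, fetch the Rx packet type
+  * table and set up RSS, then mark the vport up.
+  */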
err = vport->adapter->dev_ops.vc_ops.config_queues(vport); + if (err) + goto unroll_config_queues; + err = vport->adapter->dev_ops.vc_ops.irq_map_unmap(vport, true); + if (err) { + dev_err(&vport->adapter->pdev->dev, + "Call to irq_map_unmap returned %d\n", err); + goto unroll_config_queues; + } + err = vport->adapter->dev_ops.vc_ops.enable_queues(vport); + if (err) + goto unroll_enable_queues; + + err = vport->adapter->dev_ops.vc_ops.get_ptype(vport); + if (err) + goto unroll_get_ptype; + + if (adapter->rss_data.rss_lut) + err = iecm_config_rss(vport); + else + err = iecm_init_rss(vport); + if (err) + goto unroll_init_rss; + iecm_up_complete(vport); + + netif_info(vport->adapter, hw, vport->netdev, "%s\n", __func__); + + return 0; +unroll_init_rss: +unroll_get_ptype: + vport->adapter->dev_ops.vc_ops.disable_queues(vport); +unroll_enable_queues: + vport->adapter->dev_ops.vc_ops.irq_map_unmap(vport, false); +unroll_config_queues: + iecm_vport_intr_deinit(vport); +unroll_intr_init: + iecm_vport_queues_rel(vport); +unroll_queues_alloc: + adapter->dev_ops.vc_ops.disable_vport(vport); + + return err; } /** @@ -862,7 +939,9 @@ EXPORT_SYMBOL(iecm_shutdown); */ static int iecm_open(struct net_device *netdev) { - /* stub */ + struct iecm_netdev_priv *np = netdev_priv(netdev); + + return iecm_vport_open(np->vport); } /** diff --git a/drivers/net/ethernet/intel/iecm/iecm_txrx.c b/drivers/net/ethernet/intel/iecm/iecm_txrx.c index 6a853d1828cb..4e5e3d487795 100644 --- a/drivers/net/ethernet/intel/iecm/iecm_txrx.c +++ b/drivers/net/ethernet/intel/iecm/iecm_txrx.c @@ -82,7 +82,37 @@ static void iecm_tx_desc_rel_all(struct iecm_vport *vport) */ static enum iecm_status iecm_tx_buf_alloc_all(struct iecm_queue *tx_q) { - /* stub */ + int buf_size; + int i = 0; + + /* Allocate book keeping buffers only. 
Buffers to be supplied to HW + * are allocated by kernel network stack and received as part of skb + */ + buf_size = sizeof(struct iecm_tx_buf) * tx_q->desc_count; + tx_q->tx_buf = kzalloc(buf_size, GFP_KERNEL); + if (!tx_q->tx_buf) + return IECM_ERR_NO_MEMORY; + + /* Initialize Tx buf stack for out-of-order completions if + * flow scheduling offload is enabled + */ + tx_q->buf_stack.bufs = + kcalloc(tx_q->desc_count, sizeof(struct iecm_tx_buf *), + GFP_KERNEL); + if (!tx_q->buf_stack.bufs) + return IECM_ERR_NO_MEMORY; + + for (i = 0; i < tx_q->desc_count; i++) { + tx_q->buf_stack.bufs[i] = kzalloc(sizeof(struct iecm_tx_buf), + GFP_KERNEL); + if (!tx_q->buf_stack.bufs[i]) + return IECM_ERR_NO_MEMORY; + } + + tx_q->buf_stack.size = tx_q->desc_count; + tx_q->buf_stack.top = tx_q->desc_count; + + return 0; } /** @@ -92,7 +122,40 @@ static enum iecm_status iecm_tx_buf_alloc_all(struct iecm_queue *tx_q) */ static enum iecm_status iecm_tx_desc_alloc(struct iecm_queue *tx_q, bool bufq) { - /* stub */ + struct device *dev = tx_q->dev; + enum iecm_status err = 0; + + if (bufq) { + err = iecm_tx_buf_alloc_all(tx_q); + if (err) + goto err_alloc; + tx_q->size = tx_q->desc_count * + sizeof(struct iecm_base_tx_desc); + } else { + tx_q->size = tx_q->desc_count * + sizeof(struct iecm_splitq_tx_compl_desc); + } + + /* Allocate descriptors also round up to nearest 4K */ + tx_q->size = ALIGN(tx_q->size, 4096); + tx_q->desc_ring = dmam_alloc_coherent(dev, tx_q->size, &tx_q->dma, + GFP_KERNEL); + if (!tx_q->desc_ring) { + dev_info(dev, "Unable to allocate memory for the Tx descriptor ring, size=%d\n", + tx_q->size); + err = IECM_ERR_NO_MEMORY; + goto err_alloc; + } + + tx_q->next_to_alloc = 0; + tx_q->next_to_use = 0; + tx_q->next_to_clean = 0; + set_bit(__IECM_Q_GEN_CHK, tx_q->flags); + +err_alloc: + if (err) + iecm_tx_desc_rel(tx_q, bufq); + return err; } /** @@ -101,7 +164,41 @@ static enum iecm_status iecm_tx_desc_alloc(struct iecm_queue *tx_q, bool bufq) */ static enum iecm_status iecm_tx_desc_alloc_all(struct iecm_vport *vport) { - /* stub */ + struct pci_dev *pdev = vport->adapter->pdev; + enum iecm_status err = 0; + int i, j; + + /* Setup buffer queues. In single queue model buffer queues and + * completion queues will be same + */ + for (i = 0; i < vport->num_txq_grp; i++) { + for (j = 0; j < vport->txq_grps[i].num_txq; j++) { + err = iecm_tx_desc_alloc(&vport->txq_grps[i].txqs[j], + true); + if (err) { + dev_err(&pdev->dev, + "Allocation for Tx Queue %u failed\n", + i); + goto err_out; + } + } + + if (iecm_is_queue_model_split(vport->txq_model)) { + /* Setup completion queues */ + err = iecm_tx_desc_alloc(vport->txq_grps[i].complq, + false); + if (err) { + dev_err(&pdev->dev, + "Allocation for Tx Completion Queue %u failed\n", + i); + goto err_out; + } + } + } +err_out: + if (err) + iecm_tx_desc_rel_all(vport); + return err; } /** @@ -156,7 +253,17 @@ static void iecm_rx_desc_rel_all(struct iecm_vport *vport) */ void iecm_rx_buf_hw_update(struct iecm_queue *rxq, u32 val) { - /* stub */ + /* update next to alloc since we have filled the ring */ + rxq->next_to_alloc = val; + + rxq->next_to_use = val; + /* Force memory writes to complete before letting h/w + * know there are new descriptors to fetch. (Only + * applicable for weak-ordered memory model archs, + * such as IA-64). 
+ */ + wmb(); + writel_relaxed(val, rxq->tail); } /** @@ -169,7 +276,34 @@ void iecm_rx_buf_hw_update(struct iecm_queue *rxq, u32 val) */ bool iecm_rx_buf_hw_alloc(struct iecm_queue *rxq, struct iecm_rx_buf *buf) { - /* stub */ + struct page *page = buf->page; + dma_addr_t dma; + + /* since we are recycling buffers we should seldom need to alloc */ + if (likely(page)) + return true; + + /* alloc new page for storage */ + page = alloc_page(GFP_ATOMIC | __GFP_NOWARN); + if (unlikely(!page)) + return false; + + /* map page for use */ + dma = dma_map_page(rxq->dev, page, 0, PAGE_SIZE, DMA_FROM_DEVICE); + + /* if mapping failed free memory back to system since + * there isn't much point in holding memory we can't use + */ + if (dma_mapping_error(rxq->dev, dma)) { + __free_pages(page, 0); + return false; + } + + buf->dma = dma; + buf->page = page; + buf->page_offset = iecm_rx_offset(rxq); + + return true; } /** @@ -183,7 +317,34 @@ bool iecm_rx_buf_hw_alloc(struct iecm_queue *rxq, struct iecm_rx_buf *buf) static bool iecm_rx_hdr_buf_hw_alloc(struct iecm_queue *rxq, struct iecm_rx_buf *hdr_buf) { - /* stub */ + struct page *page = hdr_buf->page; + dma_addr_t dma; + + /* since we are recycling buffers we should seldom need to alloc */ + if (likely(page)) + return true; + + /* alloc new page for storage */ + page = alloc_page(GFP_ATOMIC | __GFP_NOWARN); + if (unlikely(!page)) + return false; + + /* map page for use */ + dma = dma_map_page(rxq->dev, page, 0, PAGE_SIZE, DMA_FROM_DEVICE); + + /* if mapping failed free memory back to system since + * there isn't much point in holding memory we can't use + */ + if (dma_mapping_error(rxq->dev, dma)) { + __free_pages(page, 0); + return false; + } + + hdr_buf->dma = dma; + hdr_buf->page = page; + hdr_buf->page_offset = 0; + + return true; } /** @@ -197,7 +358,59 @@ static bool iecm_rx_buf_hw_alloc_all(struct iecm_queue *rxq, u16 cleaned_count) { - /* stub */ + struct iecm_splitq_rx_buf_desc *splitq_rx_desc = NULL; + struct iecm_rx_buf *hdr_buf = NULL; + u16 nta = rxq->next_to_alloc; + struct iecm_rx_buf *buf; + + /* do nothing if no valid netdev defined */ + if (!rxq->vport->netdev || !cleaned_count) + return false; + + splitq_rx_desc = IECM_SPLITQ_RX_BUF_DESC(rxq, nta); + + buf = &rxq->rx_buf.buf[nta]; + if (rxq->rx_hsplit_en) + hdr_buf = &rxq->rx_buf.hdr_buf[nta]; + + do { + if (rxq->rx_hsplit_en) { + if (!iecm_rx_hdr_buf_hw_alloc(rxq, hdr_buf)) + break; + + splitq_rx_desc->hdr_addr = + cpu_to_le64(hdr_buf->dma + + hdr_buf->page_offset); + hdr_buf++; + } + + if (!iecm_rx_buf_hw_alloc(rxq, buf)) + break; + + /* Refresh the desc even if buffer_addrs didn't change + * because each write-back erases this info. 
+ */ + splitq_rx_desc->pkt_addr = + cpu_to_le64(buf->dma + buf->page_offset); + splitq_rx_desc->qword0.buf_id = cpu_to_le16(nta); + + splitq_rx_desc++; + buf++; + nta++; + if (unlikely(nta == rxq->desc_count)) { + splitq_rx_desc = IECM_SPLITQ_RX_BUF_DESC(rxq, 0); + buf = rxq->rx_buf.buf; + hdr_buf = rxq->rx_buf.hdr_buf; + nta = 0; + } + + cleaned_count--; + } while (cleaned_count); + + if (rxq->next_to_alloc != nta) + iecm_rx_buf_hw_update(rxq, nta); + + return !!cleaned_count; } /** @@ -206,7 +419,44 @@ iecm_rx_buf_hw_alloc_all(struct iecm_queue *rxq, */ static enum iecm_status iecm_rx_buf_alloc_all(struct iecm_queue *rxq) { - /* stub */ + enum iecm_status err = 0; + + /* Allocate book keeping buffers */ + rxq->rx_buf.buf = kcalloc(rxq->desc_count, sizeof(struct iecm_rx_buf), + GFP_KERNEL); + if (!rxq->rx_buf.buf) { + err = IECM_ERR_NO_MEMORY; + goto rx_buf_alloc_all_out; + } + + if (rxq->rx_hsplit_en) { + rxq->rx_buf.hdr_buf = + kcalloc(rxq->desc_count, sizeof(struct iecm_rx_buf), + GFP_KERNEL); + if (!rxq->rx_buf.hdr_buf) { + err = IECM_ERR_NO_MEMORY; + goto rx_buf_alloc_all_out; + } + } else { + rxq->rx_buf.hdr_buf = NULL; + } + + /* Allocate buffers to be given to HW. Allocate one less than + * total descriptor count as RX splits 4k buffers to 2K and recycles + */ + if (iecm_is_queue_model_split(rxq->vport->rxq_model)) { + if (iecm_rx_buf_hw_alloc_all(rxq, + rxq->desc_count - 1)) + err = IECM_ERR_NO_MEMORY; + } else if (iecm_rx_singleq_buf_hw_alloc_all(rxq, + rxq->desc_count - 1)) { + err = IECM_ERR_NO_MEMORY; + } + +rx_buf_alloc_all_out: + if (err) + iecm_rx_buf_rel_all(rxq); + return err; } /** @@ -218,7 +468,50 @@ static enum iecm_status iecm_rx_buf_alloc_all(struct iecm_queue *rxq) static enum iecm_status iecm_rx_desc_alloc(struct iecm_queue *rxq, bool bufq, enum virtchnl_queue_model q_model) { - /* stub */ + struct device *dev = rxq->dev; + enum iecm_status err = 0; + + /* As both single and split descriptors are 32 byte, memory size + * will be same for all three singleq_base Rx, buf., splitq_base + * Rx. 
So pick anyone of them for size + */ + if (bufq) { + rxq->size = rxq->desc_count * + sizeof(struct iecm_splitq_rx_buf_desc); + } else { + rxq->size = rxq->desc_count * + sizeof(union iecm_rx_desc); + } + + /* Allocate descriptors and also round up to nearest 4K */ + rxq->size = ALIGN(rxq->size, 4096); + rxq->desc_ring = dmam_alloc_coherent(dev, rxq->size, + &rxq->dma, GFP_KERNEL); + if (!rxq->desc_ring) { + dev_info(dev, "Unable to allocate memory for the Rx descriptor ring, size=%d\n", + rxq->size); + err = IECM_ERR_NO_MEMORY; + return err; + } + + rxq->next_to_alloc = 0; + rxq->next_to_clean = 0; + rxq->next_to_use = 0; + set_bit(__IECM_Q_GEN_CHK, rxq->flags); + + /* Allocate buffers for a Rx queue if the q_model is single OR if it + * is a buffer queue in split queue model + */ + if (bufq || !iecm_is_queue_model_split(q_model)) { + err = iecm_rx_buf_alloc_all(rxq); + if (err) + goto err_alloc; + } + +err_alloc: + if (err) + iecm_rx_desc_rel(rxq, bufq, q_model); + return err; } /** @@ -227,7 +520,48 @@ static enum iecm_status iecm_rx_desc_alloc(struct iecm_queue *rxq, bool bufq, */ static enum iecm_status iecm_rx_desc_alloc_all(struct iecm_vport *vport) { - /* stub */ + struct device *dev = &vport->adapter->pdev->dev; + enum iecm_status err = 0; + struct iecm_queue *q; + int i, j, num_rxq; + + for (i = 0; i < vport->num_rxq_grp; i++) { + if (iecm_is_queue_model_split(vport->rxq_model)) + num_rxq = vport->rxq_grps[i].splitq.num_rxq_sets; + else + num_rxq = vport->rxq_grps[i].singleq.num_rxq; + + for (j = 0; j < num_rxq; j++) { + if (iecm_is_queue_model_split(vport->rxq_model)) + q = &vport->rxq_grps[i].splitq.rxq_sets[j].rxq; + else + q = &vport->rxq_grps[i].singleq.rxqs[j]; + err = iecm_rx_desc_alloc(q, false, vport->rxq_model); + if (err) { + dev_err(dev, "Memory allocation for Rx Queue %u failed\n", + i); + goto err_out; + } + } + + if (iecm_is_queue_model_split(vport->rxq_model)) { + for (j = 0; j < IECM_BUFQS_PER_RXQ_SET; j++) { + q = + &vport->rxq_grps[i].splitq.bufq_sets[j].bufq; + err = iecm_rx_desc_alloc(q, true, + vport->rxq_model); + if (err) { + dev_err(dev, "Memory allocation for Rx Buffer Queue %u failed\n", + i); + goto err_out; + } + } + } + } +err_out: + if (err) + iecm_rx_desc_rel_all(vport); + return err; } /** @@ -279,7 +613,26 @@ void iecm_vport_queues_rel(struct iecm_vport *vport) static enum iecm_status iecm_vport_init_fast_path_txqs(struct iecm_vport *vport) { - /* stub */ + enum iecm_status err = 0; + int i, j, k = 0; + + vport->txqs = kcalloc(vport->num_txq, sizeof(struct iecm_queue *), + GFP_KERNEL); + + if (!vport->txqs) { + err = IECM_ERR_NO_MEMORY; + goto err_alloc; + } + for (i = 0; i < vport->num_txq_grp; i++) { + struct iecm_txq_group *tx_grp = &vport->txq_grps[i]; + + for (j = 0; j < tx_grp->num_txq; j++, k++) { + vport->txqs[k] = &tx_grp->txqs[j]; + vport->txqs[k]->idx = k; + } + } +err_alloc: + return err; } /** @@ -290,7 +643,12 @@ iecm_vport_init_fast_path_txqs(struct iecm_vport *vport) void iecm_vport_init_num_qs(struct iecm_vport *vport, struct virtchnl_create_vport *vport_msg) { - /* stub */ + vport->num_txq = vport_msg->num_tx_q; + vport->num_rxq = vport_msg->num_rx_q; + if (iecm_is_queue_model_split(vport->txq_model)) + vport->num_complq = vport_msg->num_tx_complq; + if (iecm_is_queue_model_split(vport->rxq_model)) + vport->num_bufq = vport_msg->num_rx_bufq; } /** @@ -299,7 +657,32 @@ void iecm_vport_init_num_qs(struct iecm_vport *vport, */ void iecm_vport_calc_num_q_desc(struct iecm_vport *vport) { - /* stub */ + int num_req_txq_desc = 
vport->adapter->config_data.num_req_txq_desc; + int num_req_rxq_desc = vport->adapter->config_data.num_req_rxq_desc; + + vport->complq_desc_count = 0; + vport->bufq_desc_count = 0; + if (num_req_txq_desc) { + vport->txq_desc_count = num_req_txq_desc; + if (iecm_is_queue_model_split(vport->txq_model)) + vport->complq_desc_count = num_req_txq_desc; + } else { + vport->txq_desc_count = + IECM_DFLT_TX_Q_DESC_COUNT; + if (iecm_is_queue_model_split(vport->txq_model)) { + vport->complq_desc_count = + IECM_DFLT_TX_COMPLQ_DESC_COUNT; + } + } + if (num_req_rxq_desc) { + vport->rxq_desc_count = num_req_rxq_desc; + if (iecm_is_queue_model_split(vport->rxq_model)) + vport->bufq_desc_count = num_req_rxq_desc; + } else { + vport->rxq_desc_count = IECM_DFLT_RX_Q_DESC_COUNT; + if (iecm_is_queue_model_split(vport->rxq_model)) + vport->bufq_desc_count = IECM_DFLT_RX_BUFQ_DESC_COUNT; + } } EXPORT_SYMBOL(iecm_vport_calc_num_q_desc); @@ -311,7 +694,51 @@ EXPORT_SYMBOL(iecm_vport_calc_num_q_desc); void iecm_vport_calc_total_qs(struct virtchnl_create_vport *vport_msg, int num_req_qs) { - /* stub */ + int dflt_splitq_txq_grps, dflt_singleq_txqs; + int dflt_splitq_rxq_grps, dflt_singleq_rxqs; + int num_txq_grps, num_rxq_grps; + int num_cpus; + + /* Restrict num of queues to cpus online as a default configuration to + * give best performance. User can always override to a max number + * of queues via ethtool. + */ + num_cpus = num_online_cpus(); + dflt_splitq_txq_grps = min_t(int, IECM_DFLT_SPLITQ_TX_Q_GROUPS, + num_cpus); + dflt_singleq_txqs = min_t(int, IECM_DFLT_SINGLEQ_TXQ_PER_GROUP, + num_cpus); + dflt_splitq_rxq_grps = min_t(int, IECM_DFLT_SPLITQ_RX_Q_GROUPS, + num_cpus); + dflt_singleq_rxqs = min_t(int, IECM_DFLT_SINGLEQ_RXQ_PER_GROUP, + num_cpus); + + if (iecm_is_queue_model_split(vport_msg->txq_model)) { + num_txq_grps = num_req_qs ? num_req_qs : dflt_splitq_txq_grps; + vport_msg->num_tx_complq = num_txq_grps * + IECM_COMPLQ_PER_GROUP; + vport_msg->num_tx_q = num_txq_grps * + IECM_DFLT_SPLITQ_TXQ_PER_GROUP; + } else { + num_txq_grps = IECM_DFLT_SINGLEQ_TX_Q_GROUPS; + vport_msg->num_tx_q = num_txq_grps * + (num_req_qs ? num_req_qs : + dflt_singleq_txqs); + vport_msg->num_tx_complq = 0; + } + if (iecm_is_queue_model_split(vport_msg->rxq_model)) { + num_rxq_grps = num_req_qs ? num_req_qs : dflt_splitq_rxq_grps; + vport_msg->num_rx_bufq = num_rxq_grps * + IECM_BUFQS_PER_RXQ_SET; + vport_msg->num_rx_q = num_rxq_grps * + IECM_DFLT_SPLITQ_RXQ_PER_GROUP; + } else { + num_rxq_grps = IECM_DFLT_SINGLEQ_RX_Q_GROUPS; + vport_msg->num_rx_bufq = 0; + vport_msg->num_rx_q = num_rxq_grps * + (num_req_qs ? 
num_req_qs : + dflt_singleq_rxqs); + } } /** @@ -320,7 +747,15 @@ void iecm_vport_calc_total_qs(struct virtchnl_create_vport *vport_msg, */ void iecm_vport_calc_num_q_groups(struct iecm_vport *vport) { - /* stub */ + if (iecm_is_queue_model_split(vport->txq_model)) + vport->num_txq_grp = vport->num_txq; + else + vport->num_txq_grp = IECM_DFLT_SINGLEQ_TX_Q_GROUPS; + + if (iecm_is_queue_model_split(vport->rxq_model)) + vport->num_rxq_grp = vport->num_rxq; + else + vport->num_rxq_grp = IECM_DFLT_SINGLEQ_RX_Q_GROUPS; } EXPORT_SYMBOL(iecm_vport_calc_num_q_groups); @@ -333,7 +768,15 @@ EXPORT_SYMBOL(iecm_vport_calc_num_q_groups); static void iecm_vport_calc_numq_per_grp(struct iecm_vport *vport, int *num_txq, int *num_rxq) { - /* stub */ + if (iecm_is_queue_model_split(vport->txq_model)) + *num_txq = IECM_DFLT_SPLITQ_TXQ_PER_GROUP; + else + *num_txq = vport->num_txq; + + if (iecm_is_queue_model_split(vport->rxq_model)) + *num_rxq = IECM_DFLT_SPLITQ_RXQ_PER_GROUP; + else + *num_rxq = vport->num_rxq; } /** @@ -344,7 +787,10 @@ static void iecm_vport_calc_numq_per_grp(struct iecm_vport *vport, */ void iecm_vport_calc_num_q_vec(struct iecm_vport *vport) { - /* stub */ + if (iecm_is_queue_model_split(vport->txq_model)) + vport->num_q_vectors = vport->num_txq_grp; + else + vport->num_q_vectors = vport->num_txq; } /** @@ -355,7 +801,68 @@ void iecm_vport_calc_num_q_vec(struct iecm_vport *vport) static enum iecm_status iecm_txq_group_alloc(struct iecm_vport *vport, int num_txq) { - /* stub */ + struct iecm_itr tx_itr = { 0 }; + enum iecm_status err = 0; + int i; + + vport->txq_grps = kcalloc(vport->num_txq_grp, + sizeof(*vport->txq_grps), GFP_KERNEL); + if (!vport->txq_grps) + return IECM_ERR_NO_MEMORY; + + tx_itr.target_itr = IECM_ITR_TX_DEF; + tx_itr.itr_idx = VIRTCHNL_ITR_IDX_1; + tx_itr.next_update = jiffies + 1; + + for (i = 0; i < vport->num_txq_grp; i++) { + struct iecm_txq_group *tx_qgrp = &vport->txq_grps[i]; + int j; + + tx_qgrp->vport = vport; + tx_qgrp->num_txq = num_txq; + tx_qgrp->txqs = kcalloc(num_txq, sizeof(*tx_qgrp->txqs), + GFP_KERNEL); + if (!tx_qgrp->txqs) { + err = IECM_ERR_NO_MEMORY; + goto err_alloc; + } + + for (j = 0; j < tx_qgrp->num_txq; j++) { + struct iecm_queue *q = &tx_qgrp->txqs[j]; + + q->dev = &vport->adapter->pdev->dev; + q->desc_count = vport->txq_desc_count; + q->vport = vport; + q->txq_grp = tx_qgrp; + hash_init(q->sched_buf_hash); + + if (!iecm_is_queue_model_split(vport->txq_model)) + q->itr = tx_itr; + } + + if (!iecm_is_queue_model_split(vport->txq_model)) + continue; + + tx_qgrp->complq = kcalloc(IECM_COMPLQ_PER_GROUP, + sizeof(*tx_qgrp->complq), + GFP_KERNEL); + if (!tx_qgrp->complq) { + err = IECM_ERR_NO_MEMORY; + goto err_alloc; + } + + tx_qgrp->complq->dev = &vport->adapter->pdev->dev; + tx_qgrp->complq->desc_count = vport->complq_desc_count; + tx_qgrp->complq->vport = vport; + tx_qgrp->complq->txq_grp = tx_qgrp; + + tx_qgrp->complq->itr = tx_itr; + } + +err_alloc: + if (err) + iecm_txq_group_rel(vport); + return err; } /** @@ -366,7 +873,118 @@ static enum iecm_status iecm_txq_group_alloc(struct iecm_vport *vport, static enum iecm_status iecm_rxq_group_alloc(struct iecm_vport *vport, int num_rxq) { - /* stub */ + enum iecm_status err = 0; + struct iecm_itr rx_itr = {0}; + struct iecm_queue *q; + int i; + + vport->rxq_grps = kcalloc(vport->num_rxq_grp, + sizeof(struct iecm_rxq_group), GFP_KERNEL); + if (!vport->rxq_grps) { + err = IECM_ERR_NO_MEMORY; + goto err_alloc; + } + + rx_itr.target_itr = IECM_ITR_RX_DEF; + rx_itr.itr_idx = VIRTCHNL_ITR_IDX_0; + 
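+ /* rx_itr is seeded as a template; every buffer queue and Rx queue
+ * allocated in the loop below copies this default interrupt
+ * throttling state via q->itr = rx_itr.
+ */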
rx_itr.next_update = jiffies + 1; + + for (i = 0; i < vport->num_rxq_grp; i++) { + struct iecm_rxq_group *rx_qgrp = &vport->rxq_grps[i]; + int j; + + rx_qgrp->vport = vport; + if (iecm_is_queue_model_split(vport->rxq_model)) { + rx_qgrp->splitq.num_rxq_sets = num_rxq; + rx_qgrp->splitq.rxq_sets = + kcalloc(num_rxq, + sizeof(struct iecm_rxq_set), + GFP_KERNEL); + if (!rx_qgrp->splitq.rxq_sets) { + err = IECM_ERR_NO_MEMORY; + goto err_alloc; + } + + rx_qgrp->splitq.bufq_sets = + kcalloc(IECM_BUFQS_PER_RXQ_SET, + sizeof(struct iecm_bufq_set), + GFP_KERNEL); + if (!rx_qgrp->splitq.bufq_sets) { + err = IECM_ERR_NO_MEMORY; + goto err_alloc; + } + + for (j = 0; j < IECM_BUFQS_PER_RXQ_SET; j++) { + int swq_size = sizeof(struct iecm_sw_queue); + + q = &rx_qgrp->splitq.bufq_sets[j].bufq; + q->dev = &vport->adapter->pdev->dev; + q->desc_count = vport->bufq_desc_count; + q->vport = vport; + q->rxq_grp = rx_qgrp; + q->idx = j; + q->rx_buf_size = IECM_RX_BUF_2048; + q->rsc_low_watermark = IECM_LOW_WATERMARK; + q->rx_buf_stride = IECM_RX_BUF_STRIDE; + q->itr = rx_itr; + + if (vport->rx_hsplit_en) { + q->rx_hsplit_en = vport->rx_hsplit_en; + q->rx_hbuf_size = IECM_HDR_BUF_SIZE; + } + + rx_qgrp->splitq.bufq_sets[j].num_refillqs = + num_rxq; + rx_qgrp->splitq.bufq_sets[j].refillqs = + kcalloc(num_rxq, swq_size, GFP_KERNEL); + if (!rx_qgrp->splitq.bufq_sets[j].refillqs) { + err = IECM_ERR_NO_MEMORY; + goto err_alloc; + } + } + } else { + rx_qgrp->singleq.num_rxq = num_rxq; + rx_qgrp->singleq.rxqs = kcalloc(num_rxq, + sizeof(struct iecm_queue), + GFP_KERNEL); + if (!rx_qgrp->singleq.rxqs) { + err = IECM_ERR_NO_MEMORY; + goto err_alloc; + } + } + + for (j = 0; j < num_rxq; j++) { + if (iecm_is_queue_model_split(vport->rxq_model)) { + q = &rx_qgrp->splitq.rxq_sets[j].rxq; + rx_qgrp->splitq.rxq_sets[j].refillq0 = + &rx_qgrp->splitq.bufq_sets[0].refillqs[j]; + rx_qgrp->splitq.rxq_sets[j].refillq1 = + &rx_qgrp->splitq.bufq_sets[1].refillqs[j]; + + if (vport->rx_hsplit_en) { + q->rx_hsplit_en = vport->rx_hsplit_en; + q->rx_hbuf_size = IECM_HDR_BUF_SIZE; + } + + } else { + q = &rx_qgrp->singleq.rxqs[j]; + } + q->dev = &vport->adapter->pdev->dev; + q->desc_count = vport->rxq_desc_count; + q->vport = vport; + q->rxq_grp = rx_qgrp; + q->idx = (i * num_rxq) + j; + q->rx_buf_size = IECM_RX_BUF_2048; + q->rsc_low_watermark = IECM_LOW_WATERMARK; + q->rx_max_pkt_size = vport->netdev->mtu + + IECM_PACKET_HDR_PAD; + q->itr = rx_itr; + } + } +err_alloc: + if (err) + iecm_rxq_group_rel(vport); + return err; } /** @@ -376,7 +994,20 @@ static enum iecm_status iecm_rxq_group_alloc(struct iecm_vport *vport, static enum iecm_status iecm_vport_queue_grp_alloc_all(struct iecm_vport *vport) { - /* stub */ + int num_txq, num_rxq; + enum iecm_status err; + + iecm_vport_calc_numq_per_grp(vport, &num_txq, &num_rxq); + + err = iecm_txq_group_alloc(vport, num_txq); + if (err) + goto err_out; + + err = iecm_rxq_group_alloc(vport, num_rxq); +err_out: + if (err) + iecm_vport_queue_grp_rel_all(vport); + return err; } /** @@ -387,7 +1018,35 @@ iecm_vport_queue_grp_alloc_all(struct iecm_vport *vport) */ enum iecm_status iecm_vport_queues_alloc(struct iecm_vport *vport) { - /* stub */ + struct iecm_adapter *adapter = vport->adapter; + enum iecm_status err; + + err = iecm_vport_queue_grp_alloc_all(vport); + if (err) + goto err_out; + + err = adapter->dev_ops.vc_ops.vport_queue_ids_init(vport); + if (err) + goto err_out; + + adapter->dev_ops.reg_ops.vportq_reg_init(vport); + + err = iecm_tx_desc_alloc_all(vport); + if (err) + goto err_out; + + err = 
iecm_rx_desc_alloc_all(vport); + if (err) + goto err_out; + + err = iecm_vport_init_fast_path_txqs(vport); + if (err) + goto err_out; + + return 0; +err_out: + iecm_vport_queues_rel(vport); + return err; } /** @@ -1786,7 +2445,16 @@ EXPORT_SYMBOL(iecm_vport_calc_num_q_vec); */ int iecm_config_rss(struct iecm_vport *vport) { - /* stub */ + int err = iecm_send_get_set_rss_key_msg(vport, false); + + if (!err) + err = vport->adapter->dev_ops.vc_ops.get_set_rss_lut(vport, + false); + if (!err) + err = vport->adapter->dev_ops.vc_ops.get_set_rss_hash(vport, + false); + + return err; } /** @@ -1798,7 +2466,20 @@ int iecm_config_rss(struct iecm_vport *vport) */ void iecm_get_rx_qid_list(struct iecm_vport *vport, u16 *qid_list) { - /* stub */ + int i, j, k = 0; + + for (i = 0; i < vport->num_rxq_grp; i++) { + struct iecm_rxq_group *rx_qgrp = &vport->rxq_grps[i]; + + if (iecm_is_queue_model_split(vport->rxq_model)) { + for (j = 0; j < rx_qgrp->splitq.num_rxq_sets; j++) + qid_list[k++] = + rx_qgrp->splitq.rxq_sets[j].rxq.q_id; + } else { + for (j = 0; j < rx_qgrp->singleq.num_rxq; j++) + qid_list[k++] = rx_qgrp->singleq.rxqs[j].q_id; + } + } } /** @@ -1810,7 +2491,13 @@ void iecm_get_rx_qid_list(struct iecm_vport *vport, u16 *qid_list) */ void iecm_fill_dflt_rss_lut(struct iecm_vport *vport, u16 *qid_list) { - /* stub */ + int num_lut_segs, lut_seg, i, k = 0; + + num_lut_segs = vport->adapter->rss_data.rss_lut_size / vport->num_rxq; + for (lut_seg = 0; lut_seg < num_lut_segs; lut_seg++) { + for (i = 0; i < vport->num_rxq; i++) + vport->adapter->rss_data.rss_lut[k++] = qid_list[i]; + } } /** @@ -1821,7 +2508,67 @@ void iecm_fill_dflt_rss_lut(struct iecm_vport *vport, u16 *qid_list) */ int iecm_init_rss(struct iecm_vport *vport) { - /* stub */ + struct iecm_adapter *adapter = vport->adapter; + u16 *qid_list; + int err; + + adapter->rss_data.rss_key = kzalloc(adapter->rss_data.rss_key_size, + GFP_KERNEL); + if (!adapter->rss_data.rss_key) + return IECM_ERR_NO_MEMORY; + adapter->rss_data.rss_lut = kzalloc(adapter->rss_data.rss_lut_size, + GFP_KERNEL); + if (!adapter->rss_data.rss_lut) { + kfree(adapter->rss_data.rss_key); + adapter->rss_data.rss_key = NULL; + return IECM_ERR_NO_MEMORY; + } + + /* Initialize default rss key */ + netdev_rss_key_fill((void *)adapter->rss_data.rss_key, + adapter->rss_data.rss_key_size); + + /* Initialize default rss lut */ + if (adapter->rss_data.rss_lut_size % vport->num_rxq) { + u16 dflt_qid; + int i; + + /* Set all entries to a default RX queue if the algorithm below + * won't fill all entries + */ + if (iecm_is_queue_model_split(vport->rxq_model)) + dflt_qid = + vport->rxq_grps[0].splitq.rxq_sets[0].rxq.q_id; + else + dflt_qid = + vport->rxq_grps[0].singleq.rxqs[0].q_id; + + for (i = 0; i < adapter->rss_data.rss_lut_size; i++) + adapter->rss_data.rss_lut[i] = dflt_qid; + } + + qid_list = kcalloc(vport->num_rxq, sizeof(u16), GFP_KERNEL); + if (!qid_list) { + kfree(adapter->rss_data.rss_lut); + adapter->rss_data.rss_lut = NULL; + kfree(adapter->rss_data.rss_key); + adapter->rss_data.rss_key = NULL; + return IECM_ERR_NO_MEMORY; + } + + iecm_get_rx_qid_list(vport, qid_list); + + /* Fill the default RSS lut values*/ + iecm_fill_dflt_rss_lut(vport, qid_list); + + kfree(qid_list); + + /* Initialize default rss HASH */ + adapter->rss_data.rss_hash = IECM_DEFAULT_RSS_HASH_EXPANDED; + + err = iecm_config_rss(vport); + + return err; } /** diff --git a/drivers/net/ethernet/intel/iecm/iecm_virtchnl.c b/drivers/net/ethernet/intel/iecm/iecm_virtchnl.c index 14a4f1149599..484f916fa1a0 
100644 --- a/drivers/net/ethernet/intel/iecm/iecm_virtchnl.c +++ b/drivers/net/ethernet/intel/iecm/iecm_virtchnl.c @@ -692,7 +692,20 @@ iecm_send_destroy_vport_msg(struct iecm_vport *vport) enum iecm_status iecm_send_enable_vport_msg(struct iecm_vport *vport) { - /* stub */ + struct iecm_adapter *adapter = vport->adapter; + struct virtchnl_vport v_id; + enum iecm_status err; + + v_id.vport_id = vport->vport_id; + + err = iecm_send_mb_msg(adapter, VIRTCHNL_OP_ENABLE_VPORT, + sizeof(v_id), (u8 *)&v_id); + + if (!err) + err = iecm_wait_for_event(adapter, IECM_VC_ENA_VPORT, + IECM_VC_ENA_VPORT_ERR); + + return err; } /** @@ -1887,7 +1900,27 @@ EXPORT_SYMBOL(iecm_vc_core_init); static void iecm_vport_init(struct iecm_vport *vport, __always_unused int vport_id) { - /* stub */ + struct virtchnl_create_vport *vport_msg; + + vport_msg = (struct virtchnl_create_vport *) + vport->adapter->vport_params_recvd[0]; + vport->txq_model = vport_msg->txq_model; + vport->rxq_model = vport_msg->rxq_model; + vport->vport_type = (u16)vport_msg->vport_type; + vport->vport_id = vport_msg->vport_id; + vport->adapter->rss_data.rss_key_size = min_t(u16, NETDEV_RSS_KEY_LEN, + vport_msg->rss_key_size); + vport->adapter->rss_data.rss_lut_size = vport_msg->rss_lut_size; + ether_addr_copy(vport->default_mac_addr, vport_msg->default_mac_addr); + vport->max_mtu = IECM_MAX_MTU; + + iecm_vport_set_hsplit(vport, NULL); + + init_waitqueue_head(&vport->sw_marker_wq); + iecm_vport_init_num_qs(vport, vport_msg); + iecm_vport_calc_num_q_desc(vport); + iecm_vport_calc_num_q_groups(vport); + iecm_vport_calc_num_q_vec(vport); } /** From patchwork Fri Jun 26 02:07:32 2020 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Jeff Kirsher X-Patchwork-Id: 217082 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org X-Spam-Level: X-Spam-Status: No, score=-7.2 required=3.0 tests=HEADER_FROM_DIFFERENT_DOMAINS, INCLUDES_PATCH, MAILING_LIST_MULTI, SIGNED_OFF_BY, SPF_HELO_NONE, SPF_PASS, UNWANTED_LANGUAGE_BODY,URIBL_BLOCKED,USER_AGENT_GIT autolearn=ham autolearn_force=no version=3.4.0 Received: from mail.kernel.org (mail.kernel.org [198.145.29.99]) by smtp.lore.kernel.org (Postfix) with ESMTP id 854A9C433E0 for ; Fri, 26 Jun 2020 02:09:23 +0000 (UTC) Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by mail.kernel.org (Postfix) with ESMTP id 6201A2075D for ; Fri, 26 Jun 2020 02:09:23 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S1728087AbgFZCJW (ORCPT ); Thu, 25 Jun 2020 22:09:22 -0400 Received: from mga03.intel.com ([134.134.136.65]:45084 "EHLO mga03.intel.com" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S1728044AbgFZCIJ (ORCPT ); Thu, 25 Jun 2020 22:08:09 -0400 IronPort-SDR: haA7ZqHxzaauDNd9EYSr4X1QOuzRbA99v52+KlxeZunw9JRGvnhsK4aRK9XoAH3v8pOm5LC+4U yBqt0UqcXN5g== X-IronPort-AV: E=McAfee;i="6000,8403,9663"; a="145209895" X-IronPort-AV: E=Sophos;i="5.75,281,1589266800"; d="scan'208";a="145209895" X-Amp-Result: SKIPPED(no attachment in message) X-Amp-File-Uploaded: False Received: from orsmga006.jf.intel.com ([10.7.209.51]) by orsmga103.jf.intel.com with ESMTP/TLS/ECDHE-RSA-AES256-GCM-SHA384; 25 Jun 2020 19:07:40 -0700 IronPort-SDR: hcNmtjYD9foXl9kxZk4DpFHhHCfQU2va/ZqrEyN/svIngSd/A1kYfbueDgYnQR+N5eFi0qmR0F wsAMk3mFAA+w== X-ExtLoop1: 1 X-IronPort-AV: E=Sophos;i="5.75,281,1589266800"; d="scan'208";a="280011895" Received: from jtkirshe-desk1.jf.intel.com 
([134.134.177.86]) by orsmga006.jf.intel.com with ESMTP; 25 Jun 2020 19:07:40 -0700 From: Jeff Kirsher To: davem@davemloft.net Cc: Alice Michael , netdev@vger.kernel.org, nhorman@redhat.com, sassmann@redhat.com, Alan Brady , Phani Burra , Joshua Hay , Madhu Chittim , Pavan Kumar Linga , Donald Skidmore , Jesse Brandeburg , Sridhar Samudrala , Jeff Kirsher Subject: [net-next v3 10/15] iecm: Deinit vport Date: Thu, 25 Jun 2020 19:07:32 -0700 Message-Id: <20200626020737.775377-11-jeffrey.t.kirsher@intel.com> X-Mailer: git-send-email 2.26.2 In-Reply-To: <20200626020737.775377-1-jeffrey.t.kirsher@intel.com> References: <20200626020737.775377-1-jeffrey.t.kirsher@intel.com> MIME-Version: 1.0 Sender: netdev-owner@vger.kernel.org Precedence: bulk List-ID: X-Mailing-List: netdev@vger.kernel.org From: Alice Michael Implement vport take down and release its queue resources. Signed-off-by: Alice Michael Signed-off-by: Alan Brady Signed-off-by: Phani Burra Signed-off-by: Joshua Hay Signed-off-by: Madhu Chittim Signed-off-by: Pavan Kumar Linga Reviewed-by: Donald Skidmore Reviewed-by: Jesse Brandeburg Reviewed-by: Sridhar Samudrala Signed-off-by: Jeff Kirsher --- drivers/net/ethernet/intel/iecm/iecm_lib.c | 29 ++- drivers/net/ethernet/intel/iecm/iecm_txrx.c | 218 ++++++++++++++++-- .../net/ethernet/intel/iecm/iecm_virtchnl.c | 15 +- 3 files changed, 246 insertions(+), 16 deletions(-) diff --git a/drivers/net/ethernet/intel/iecm/iecm_lib.c b/drivers/net/ethernet/intel/iecm/iecm_lib.c index a51731feedb4..d34834422049 100644 --- a/drivers/net/ethernet/intel/iecm/iecm_lib.c +++ b/drivers/net/ethernet/intel/iecm/iecm_lib.c @@ -366,7 +366,27 @@ struct iecm_adapter *iecm_netdev_to_adapter(struct net_device *netdev) */ static void iecm_vport_stop(struct iecm_vport *vport) { - /* stub */ + struct iecm_adapter *adapter = vport->adapter; + + if (adapter->state <= __IECM_DOWN) + return; + adapter->dev_ops.vc_ops.irq_map_unmap(vport, false); + adapter->dev_ops.vc_ops.disable_queues(vport); + /* Normally we ask for queues in create_vport, but if we're changing + * number of requested queues we do a delete then add instead of + * deleting and reallocating the vport. 
+ */ + if (test_and_clear_bit(__IECM_DEL_QUEUES, + vport->adapter->flags)) + iecm_send_delete_queues_msg(vport); + netif_carrier_off(vport->netdev); + netif_tx_disable(vport->netdev); + adapter->link_up = false; + iecm_vport_intr_deinit(vport); + iecm_vport_queues_rel(vport); + if (adapter->dev_ops.vc_ops.disable_vport) + adapter->dev_ops.vc_ops.disable_vport(vport); + adapter->state = __IECM_DOWN; } /** @@ -381,7 +401,11 @@ static void iecm_vport_stop(struct iecm_vport *vport) */ static int iecm_stop(struct net_device *netdev) { - /* stub */ + struct iecm_netdev_priv *np = netdev_priv(netdev); + + iecm_vport_stop(np->vport); + + return 0; } /** @@ -489,6 +513,7 @@ iecm_vport_alloc(struct iecm_adapter *adapter, int vport_id) /* fill vport slot in the adapter struct */ adapter->vports[adapter->next_vport] = vport; + if (iecm_cfg_netdev(vport)) goto cfg_netdev_fail; diff --git a/drivers/net/ethernet/intel/iecm/iecm_txrx.c b/drivers/net/ethernet/intel/iecm/iecm_txrx.c index 4e5e3d487795..bd358b121773 100644 --- a/drivers/net/ethernet/intel/iecm/iecm_txrx.c +++ b/drivers/net/ethernet/intel/iecm/iecm_txrx.c @@ -41,7 +41,23 @@ void iecm_get_stats64(struct net_device *netdev, */ void iecm_tx_buf_rel(struct iecm_queue *tx_q, struct iecm_tx_buf *tx_buf) { - /* stub */ + if (tx_buf->skb) { + dev_kfree_skb_any(tx_buf->skb); + if (dma_unmap_len(tx_buf, len)) + dma_unmap_single(tx_q->dev, + dma_unmap_addr(tx_buf, dma), + dma_unmap_len(tx_buf, len), + DMA_TO_DEVICE); + } else if (dma_unmap_len(tx_buf, len)) { + dma_unmap_page(tx_q->dev, + dma_unmap_addr(tx_buf, dma), + dma_unmap_len(tx_buf, len), + DMA_TO_DEVICE); + } + + tx_buf->next_to_watch = NULL; + tx_buf->skb = NULL; + dma_unmap_len_set(tx_buf, len, 0); } /** @@ -50,7 +66,26 @@ void iecm_tx_buf_rel(struct iecm_queue *tx_q, struct iecm_tx_buf *tx_buf) */ static void iecm_tx_buf_rel_all(struct iecm_queue *txq) { - /* stub */ + u16 i; + + /* Buffers already cleared, nothing to do */ + if (!txq->tx_buf) + return; + + /* Free all the Tx buffer sk_buffs */ + for (i = 0; i < txq->desc_count; i++) + iecm_tx_buf_rel(txq, &txq->tx_buf[i]); + + kfree(txq->tx_buf); + txq->tx_buf = NULL; + + if (txq->buf_stack.bufs) { + for (i = 0; i < txq->buf_stack.size; i++) { + iecm_tx_buf_rel(txq, txq->buf_stack.bufs[i]); + kfree(txq->buf_stack.bufs[i]); + } + kfree(txq->buf_stack.bufs); + } } /** @@ -62,7 +97,17 @@ static void iecm_tx_buf_rel_all(struct iecm_queue *txq) */ static void iecm_tx_desc_rel(struct iecm_queue *txq, bool bufq) { - /* stub */ + if (bufq) + iecm_tx_buf_rel_all(txq); + + if (txq->desc_ring) { + dmam_free_coherent(txq->dev, txq->size, + txq->desc_ring, txq->dma); + txq->desc_ring = NULL; + txq->next_to_alloc = 0; + txq->next_to_use = 0; + txq->next_to_clean = 0; + } } /** @@ -73,7 +118,24 @@ static void iecm_tx_desc_rel(struct iecm_queue *txq, bool bufq) */ static void iecm_tx_desc_rel_all(struct iecm_vport *vport) { - /* stub */ + struct iecm_queue *txq; + int i, j; + + if (!vport->txq_grps) + return; + + for (i = 0; i < vport->num_txq_grp; i++) { + for (j = 0; j < vport->txq_grps[i].num_txq; j++) { + if (vport->txq_grps[i].txqs) { + txq = &vport->txq_grps[i].txqs[j]; + iecm_tx_desc_rel(txq, true); + } + } + if (iecm_is_queue_model_split(vport->txq_model)) { + txq = vport->txq_grps[i].complq; + iecm_tx_desc_rel(txq, false); + } + } } /** @@ -209,7 +271,21 @@ static enum iecm_status iecm_tx_desc_alloc_all(struct iecm_vport *vport) static void iecm_rx_buf_rel(struct iecm_queue *rxq, struct iecm_rx_buf *rx_buf) { - /* stub */ + struct device *dev = 
rxq->dev; + + if (!rx_buf->page) + return; + + if (rx_buf->skb) { + dev_kfree_skb_any(rx_buf->skb); + rx_buf->skb = NULL; + } + + dma_unmap_page(dev, rx_buf->dma, PAGE_SIZE, DMA_FROM_DEVICE); + __free_pages(rx_buf->page, 0); + + rx_buf->page = NULL; + rx_buf->page_offset = 0; } /** @@ -218,7 +294,23 @@ static void iecm_rx_buf_rel(struct iecm_queue *rxq, */ static void iecm_rx_buf_rel_all(struct iecm_queue *rxq) { - /* stub */ + u16 i; + + /* queue already cleared, nothing to do */ + if (!rxq->rx_buf.buf) + return; + + /* Free all the bufs allocated and given to HW on Rx queue */ + for (i = 0; i < rxq->desc_count; i++) { + iecm_rx_buf_rel(rxq, &rxq->rx_buf.buf[i]); + if (rxq->rx_hsplit_en) + iecm_rx_buf_rel(rxq, &rxq->rx_buf.hdr_buf[i]); + } + + kfree(rxq->rx_buf.buf); + rxq->rx_buf.buf = NULL; + kfree(rxq->rx_buf.hdr_buf); + rxq->rx_buf.hdr_buf = NULL; } /** @@ -232,7 +324,25 @@ static void iecm_rx_buf_rel_all(struct iecm_queue *rxq) static void iecm_rx_desc_rel(struct iecm_queue *rxq, bool bufq, enum virtchnl_queue_model q_model) { - /* stub */ + if (!rxq) + return; + + if (!bufq && iecm_is_queue_model_split(q_model) && rxq->skb) { + dev_kfree_skb_any(rxq->skb); + rxq->skb = NULL; + } + + if (bufq || !iecm_is_queue_model_split(q_model)) + iecm_rx_buf_rel_all(rxq); + + if (rxq->desc_ring) { + dmam_free_coherent(rxq->dev, rxq->size, + rxq->desc_ring, rxq->dma); + rxq->desc_ring = NULL; + rxq->next_to_alloc = 0; + rxq->next_to_clean = 0; + rxq->next_to_use = 0; + } } /** @@ -243,7 +353,49 @@ static void iecm_rx_desc_rel(struct iecm_queue *rxq, bool bufq, */ static void iecm_rx_desc_rel_all(struct iecm_vport *vport) { - /* stub */ + struct iecm_rxq_group *rx_qgrp; + struct iecm_queue *q; + int i, j, num_rxq; + + if (!vport->rxq_grps) + return; + + for (i = 0; i < vport->num_rxq_grp; i++) { + rx_qgrp = &vport->rxq_grps[i]; + + if (iecm_is_queue_model_split(vport->rxq_model)) { + if (rx_qgrp->splitq.rxq_sets) { + num_rxq = rx_qgrp->splitq.num_rxq_sets; + for (j = 0; j < num_rxq; j++) { + q = &rx_qgrp->splitq.rxq_sets[j].rxq; + iecm_rx_desc_rel(q, false, + vport->rxq_model); + } + } + + if (!rx_qgrp->splitq.bufq_sets) + continue; + for (j = 0; j < IECM_BUFQS_PER_RXQ_SET; j++) { + struct iecm_bufq_set *bufq_set = + &rx_qgrp->splitq.bufq_sets[j]; + + q = &bufq_set->bufq; + iecm_rx_desc_rel(q, true, vport->rxq_model); + if (!bufq_set->refillqs) + continue; + kfree(bufq_set->refillqs); + bufq_set->refillqs = NULL; + } + } else { + if (rx_qgrp->singleq.rxqs) { + for (j = 0; j < rx_qgrp->singleq.num_rxq; j++) { + q = &rx_qgrp->singleq.rxqs[j]; + iecm_rx_desc_rel(q, false, + vport->rxq_model); + } + } + } + } } /** @@ -570,7 +722,18 @@ static enum iecm_status iecm_rx_desc_alloc_all(struct iecm_vport *vport) */ static void iecm_txq_group_rel(struct iecm_vport *vport) { - /* stub */ + if (vport->txq_grps) { + int i; + + for (i = 0; i < vport->num_txq_grp; i++) { + kfree(vport->txq_grps[i].txqs); + vport->txq_grps[i].txqs = NULL; + kfree(vport->txq_grps[i].complq); + vport->txq_grps[i].complq = NULL; + } + kfree(vport->txq_grps); + vport->txq_grps = NULL; + } } /** @@ -579,7 +742,25 @@ static void iecm_txq_group_rel(struct iecm_vport *vport) */ static void iecm_rxq_group_rel(struct iecm_vport *vport) { - /* stub */ + if (vport->rxq_grps) { + int i; + + for (i = 0; i < vport->num_rxq_grp; i++) { + struct iecm_rxq_group *rx_qgrp = &vport->rxq_grps[i]; + + if (iecm_is_queue_model_split(vport->rxq_model)) { + kfree(rx_qgrp->splitq.rxq_sets); + rx_qgrp->splitq.rxq_sets = NULL; + 
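+ /* The split-queue bufq_sets are freed next; the per-bufq refill
+ * queues hanging off them are released in iecm_rx_desc_rel_all(),
+ * not here.
+ */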
kfree(rx_qgrp->splitq.bufq_sets); + rx_qgrp->splitq.bufq_sets = NULL; + } else { + kfree(rx_qgrp->singleq.rxqs); + vport->rxq_grps[i].singleq.rxqs = NULL; + } + } + kfree(vport->rxq_grps); + vport->rxq_grps = NULL; + } } /** @@ -588,7 +769,8 @@ static void iecm_rxq_group_rel(struct iecm_vport *vport) */ static void iecm_vport_queue_grp_rel_all(struct iecm_vport *vport) { - /* stub */ + iecm_txq_group_rel(vport); + iecm_rxq_group_rel(vport); } /** @@ -599,7 +781,12 @@ static void iecm_vport_queue_grp_rel_all(struct iecm_vport *vport) */ void iecm_vport_queues_rel(struct iecm_vport *vport) { - /* stub */ + iecm_tx_desc_rel_all(vport); + iecm_rx_desc_rel_all(vport); + iecm_vport_queue_grp_rel_all(vport); + + kfree(vport->txqs); + vport->txqs = NULL; } /** @@ -2578,5 +2765,10 @@ int iecm_init_rss(struct iecm_vport *vport) */ void iecm_deinit_rss(struct iecm_vport *vport) { - /* stub */ + struct iecm_adapter *adapter = vport->adapter; + + kfree(adapter->rss_data.rss_key); + adapter->rss_data.rss_key = NULL; + kfree(adapter->rss_data.rss_lut); + adapter->rss_data.rss_lut = NULL; } diff --git a/drivers/net/ethernet/intel/iecm/iecm_virtchnl.c b/drivers/net/ethernet/intel/iecm/iecm_virtchnl.c index 484f916fa1a0..ba58b3fdc2e9 100644 --- a/drivers/net/ethernet/intel/iecm/iecm_virtchnl.c +++ b/drivers/net/ethernet/intel/iecm/iecm_virtchnl.c @@ -717,7 +717,20 @@ iecm_send_enable_vport_msg(struct iecm_vport *vport) enum iecm_status iecm_send_disable_vport_msg(struct iecm_vport *vport) { - /* stub */ + struct iecm_adapter *adapter = vport->adapter; + struct virtchnl_vport v_id; + enum iecm_status err; + + v_id.vport_id = vport->vport_id; + + err = iecm_send_mb_msg(adapter, VIRTCHNL_OP_DISABLE_VPORT, + sizeof(v_id), (u8 *)&v_id); + + if (!err) + err = iecm_wait_for_event(adapter, IECM_VC_DIS_VPORT, + IECM_VC_DIS_VPORT_ERR); + + return err; } /** From patchwork Fri Jun 26 02:07:33 2020 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Jeff Kirsher X-Patchwork-Id: 217081 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org X-Spam-Level: X-Spam-Status: No, score=-10.0 required=3.0 tests=HEADER_FROM_DIFFERENT_DOMAINS, INCLUDES_PATCH, MAILING_LIST_MULTI, SIGNED_OFF_BY,SPF_HELO_NONE,SPF_PASS,URIBL_BLOCKED,USER_AGENT_GIT autolearn=ham autolearn_force=no version=3.4.0 Received: from mail.kernel.org (mail.kernel.org [198.145.29.99]) by smtp.lore.kernel.org (Postfix) with ESMTP id B187CC433DF for ; Fri, 26 Jun 2020 02:09:24 +0000 (UTC) Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by mail.kernel.org (Postfix) with ESMTP id 77E6E2075D for ; Fri, 26 Jun 2020 02:09:24 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S1728079AbgFZCJS (ORCPT ); Thu, 25 Jun 2020 22:09:18 -0400 Received: from mga03.intel.com ([134.134.136.65]:45084 "EHLO mga03.intel.com" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S1728012AbgFZCIQ (ORCPT ); Thu, 25 Jun 2020 22:08:16 -0400 IronPort-SDR: 6Lydeb3rT9XKtyJ4zSDpn2nU03tTaMOX4tYc7PTaTWAgGL6aGAm2f01uM5ESJQHjLzzi8lCNXu EZTBYJKdWUuA== X-IronPort-AV: E=McAfee;i="6000,8403,9663"; a="145209896" X-IronPort-AV: E=Sophos;i="5.75,281,1589266800"; d="scan'208";a="145209896" X-Amp-Result: SKIPPED(no attachment in message) X-Amp-File-Uploaded: False Received: from orsmga006.jf.intel.com ([10.7.209.51]) by orsmga103.jf.intel.com with ESMTP/TLS/ECDHE-RSA-AES256-GCM-SHA384; 25 Jun 2020 19:07:40 -0700 IronPort-SDR: 
aj3B4aWljV0++JpaITK+qOFZwpMVZthKgM293CXWIgC0QvJb/ZMx5GmqC2ZlHAUd/6ODcIjQZq G+wor20DEInw== X-ExtLoop1: 1 X-IronPort-AV: E=Sophos;i="5.75,281,1589266800"; d="scan'208";a="280011898" Received: from jtkirshe-desk1.jf.intel.com ([134.134.177.86]) by orsmga006.jf.intel.com with ESMTP; 25 Jun 2020 19:07:40 -0700 From: Jeff Kirsher To: davem@davemloft.net Cc: Alice Michael , netdev@vger.kernel.org, nhorman@redhat.com, sassmann@redhat.com, Alan Brady , Phani Burra , Joshua Hay , Madhu Chittim , Pavan Kumar Linga , Donald Skidmore , Jesse Brandeburg , Sridhar Samudrala , Jeff Kirsher Subject: [net-next v3 11/15] iecm: Add splitq TX/RX Date: Thu, 25 Jun 2020 19:07:33 -0700 Message-Id: <20200626020737.775377-12-jeffrey.t.kirsher@intel.com> X-Mailer: git-send-email 2.26.2 In-Reply-To: <20200626020737.775377-1-jeffrey.t.kirsher@intel.com> References: <20200626020737.775377-1-jeffrey.t.kirsher@intel.com> MIME-Version: 1.0 Sender: netdev-owner@vger.kernel.org Precedence: bulk List-ID: X-Mailing-List: netdev@vger.kernel.org From: Alice Michael Implement main TX/RX flows for split queue model. Signed-off-by: Alice Michael Signed-off-by: Alan Brady Signed-off-by: Phani Burra Signed-off-by: Joshua Hay Signed-off-by: Madhu Chittim Signed-off-by: Pavan Kumar Linga Reviewed-by: Donald Skidmore Reviewed-by: Jesse Brandeburg Reviewed-by: Sridhar Samudrala Signed-off-by: Jeff Kirsher --- drivers/net/ethernet/intel/iecm/iecm_txrx.c | 1283 ++++++++++++++++++- 1 file changed, 1235 insertions(+), 48 deletions(-) diff --git a/drivers/net/ethernet/intel/iecm/iecm_txrx.c b/drivers/net/ethernet/intel/iecm/iecm_txrx.c index bd358b121773..c6cc7fb6ace9 100644 --- a/drivers/net/ethernet/intel/iecm/iecm_txrx.c +++ b/drivers/net/ethernet/intel/iecm/iecm_txrx.c @@ -11,7 +11,12 @@ static enum iecm_status iecm_buf_lifo_push(struct iecm_buf_lifo *stack, struct iecm_tx_buf *buf) { - /* stub */ + if (stack->top == stack->size) + return IECM_ERR_MAX_LIMIT; + + stack->bufs[stack->top++] = buf; + + return 0; } /** @@ -20,7 +25,10 @@ static enum iecm_status iecm_buf_lifo_push(struct iecm_buf_lifo *stack, **/ static struct iecm_tx_buf *iecm_buf_lifo_pop(struct iecm_buf_lifo *stack) { - /* stub */ + if (!stack->top) + return NULL; + + return stack->bufs[--stack->top]; } /** @@ -31,7 +39,16 @@ static struct iecm_tx_buf *iecm_buf_lifo_pop(struct iecm_buf_lifo *stack) void iecm_get_stats64(struct net_device *netdev, struct rtnl_link_stats64 *stats) { - /* stub */ + struct iecm_vport *vport = iecm_netdev_to_vport(netdev); + + iecm_send_get_stats_msg(vport); + stats->rx_packets = vport->netstats.rx_packets; + stats->tx_packets = vport->netstats.tx_packets; + stats->rx_bytes = vport->netstats.rx_bytes; + stats->tx_bytes = vport->netstats.tx_bytes; + stats->tx_errors = vport->netstats.tx_errors; + stats->rx_dropped = vport->netstats.rx_dropped; + stats->tx_dropped = vport->netstats.tx_dropped; } /** @@ -1246,7 +1263,16 @@ enum iecm_status iecm_vport_queues_alloc(struct iecm_vport *vport) static struct iecm_queue * iecm_tx_find_q(struct iecm_vport *vport, int q_id) { - /* stub */ + int i; + + for (i = 0; i < vport->num_txq; i++) { + struct iecm_queue *tx_q = vport->txqs[i]; + + if (tx_q->q_id == q_id) + return tx_q; + } + + return NULL; } /** @@ -1255,7 +1281,22 @@ iecm_tx_find_q(struct iecm_vport *vport, int q_id) */ static void iecm_tx_handle_sw_marker(struct iecm_queue *tx_q) { - /* stub */ + struct iecm_vport *vport = tx_q->vport; + bool drain_complete = true; + int i; + + clear_bit(__IECM_Q_SW_MARKER, tx_q->flags); + /* Hardware must write 
marker packets to all queues associated with + * completion queues. So check if all queues received marker packets + */ + for (i = 0; i < vport->num_txq; i++) { + if (test_bit(__IECM_Q_SW_MARKER, vport->txqs[i]->flags)) + drain_complete = false; + } + if (drain_complete) { + set_bit(__IECM_VPORT_SW_MARKER, vport->flags); + wake_up(&vport->sw_marker_wq); + } } /** @@ -1270,7 +1311,30 @@ static struct iecm_tx_queue_stats iecm_tx_splitq_clean_buf(struct iecm_queue *tx_q, struct iecm_tx_buf *tx_buf, int napi_budget) { - /* stub */ + struct iecm_tx_queue_stats cleaned = {0}; + struct netdev_queue *nq; + + /* update the statistics for this packet */ + cleaned.bytes = tx_buf->bytecount; + cleaned.packets = tx_buf->gso_segs; + + /* free the skb */ + napi_consume_skb(tx_buf->skb, napi_budget); + nq = netdev_get_tx_queue(tx_q->vport->netdev, tx_q->idx); + netdev_tx_completed_queue(nq, cleaned.packets, + cleaned.bytes); + + /* unmap skb header data */ + dma_unmap_single(tx_q->dev, + dma_unmap_addr(tx_buf, dma), + dma_unmap_len(tx_buf, len), + DMA_TO_DEVICE); + + /* clear tx_buf data */ + tx_buf->skb = NULL; + dma_unmap_len_set(tx_buf, len, 0); + + return cleaned; } /** @@ -1282,7 +1346,33 @@ iecm_tx_splitq_clean_buf(struct iecm_queue *tx_q, struct iecm_tx_buf *tx_buf, static int iecm_stash_flow_sch_buffers(struct iecm_queue *txq, struct iecm_tx_buf *tx_buf) { - /* stub */ + struct iecm_adapter *adapter = txq->vport->adapter; + struct iecm_tx_buf *shadow_buf; + + shadow_buf = iecm_buf_lifo_pop(&txq->buf_stack); + if (!shadow_buf) { + dev_err(&adapter->pdev->dev, + "No out-of-order TX buffers left!\n"); + return -ENOMEM; + } + + /* Store buffer params in shadow buffer */ + shadow_buf->skb = tx_buf->skb; + shadow_buf->bytecount = tx_buf->bytecount; + shadow_buf->gso_segs = tx_buf->gso_segs; + shadow_buf->dma = tx_buf->dma; + shadow_buf->len = tx_buf->len; + shadow_buf->compl_tag = tx_buf->compl_tag; + + /* Add buffer to buf_hash table to be freed + * later + */ + hash_add(txq->sched_buf_hash, &shadow_buf->hlist, + shadow_buf->compl_tag); + + memset(tx_buf, 0, sizeof(struct iecm_tx_buf)); + + return 0; } /** @@ -1305,7 +1395,91 @@ static struct iecm_tx_queue_stats iecm_tx_splitq_clean(struct iecm_queue *tx_q, u16 end, int napi_budget, bool descs_only) { - /* stub */ + union iecm_tx_flex_desc *next_pending_desc = NULL; + struct iecm_tx_queue_stats cleaned_stats = {0}; + union iecm_tx_flex_desc *tx_desc; + s16 ntc = tx_q->next_to_clean; + struct iecm_tx_buf *tx_buf; + + tx_desc = IECM_FLEX_TX_DESC(tx_q, ntc); + next_pending_desc = IECM_FLEX_TX_DESC(tx_q, end); + tx_buf = &tx_q->tx_buf[ntc]; + ntc -= tx_q->desc_count; + + while (tx_desc != next_pending_desc) { + union iecm_tx_flex_desc *eop_desc = + (union iecm_tx_flex_desc *)tx_buf->next_to_watch; + + /* clear next_to_watch to prevent false hangs */ + tx_buf->next_to_watch = NULL; + + if (descs_only) { + if (iecm_stash_flow_sch_buffers(tx_q, tx_buf)) + goto tx_splitq_clean_out; + + while (tx_desc != eop_desc) { + tx_buf++; + tx_desc++; + ntc++; + if (unlikely(!ntc)) { + ntc -= tx_q->desc_count; + tx_buf = tx_q->tx_buf; + tx_desc = IECM_FLEX_TX_DESC(tx_q, 0); + } + + if (dma_unmap_len(tx_buf, len)) { + if (iecm_stash_flow_sch_buffers(tx_q, + tx_buf)) + goto tx_splitq_clean_out; + } + } + } else { + struct iecm_tx_queue_stats buf_stats = {0}; + + buf_stats = iecm_tx_splitq_clean_buf(tx_q, tx_buf, + napi_budget); + + /* update the statistics for this packet */ + cleaned_stats.bytes += buf_stats.bytes; + cleaned_stats.packets += buf_stats.packets; + + /* unmap 
remaining buffers */ + while (tx_desc != eop_desc) { + tx_buf++; + tx_desc++; + ntc++; + if (unlikely(!ntc)) { + ntc -= tx_q->desc_count; + tx_buf = tx_q->tx_buf; + tx_desc = IECM_FLEX_TX_DESC(tx_q, 0); + } + + /* unmap any remaining paged data */ + if (dma_unmap_len(tx_buf, len)) { + dma_unmap_page(tx_q->dev, + dma_unmap_addr(tx_buf, dma), + dma_unmap_len(tx_buf, len), + DMA_TO_DEVICE); + dma_unmap_len_set(tx_buf, len, 0); + } + } + } + + tx_buf++; + tx_desc++; + ntc++; + if (unlikely(!ntc)) { + ntc -= tx_q->desc_count; + tx_buf = tx_q->tx_buf; + tx_desc = IECM_FLEX_TX_DESC(tx_q, 0); + } + } + +tx_splitq_clean_out: + ntc += tx_q->desc_count; + tx_q->next_to_clean = ntc; + + return cleaned_stats; } /** @@ -1315,7 +1489,18 @@ iecm_tx_splitq_clean(struct iecm_queue *tx_q, u16 end, int napi_budget, */ static inline void iecm_tx_hw_tstamp(struct sk_buff *skb, u8 *desc_ts) { - /* stub */ + struct skb_shared_hwtstamps hwtstamps; + u64 tstamp; + + /* Only report timestamp to stack if requested */ + if (!likely(skb_shinfo(skb)->tx_flags & SKBTX_HW_TSTAMP)) + return; + + tstamp = (desc_ts[0] | (desc_ts[1] << 8) | (desc_ts[2] & 0x3F) << 16); + hwtstamps.hwtstamp = + ns_to_ktime(tstamp << IECM_TW_TIME_STAMP_GRAN_512_DIV_S); + + skb_tstamp_tx(skb, &hwtstamps); } /** @@ -1330,7 +1515,39 @@ static struct iecm_tx_queue_stats iecm_tx_clean_flow_sch_bufs(struct iecm_queue *txq, u16 compl_tag, u8 *desc_ts, int budget) { - /* stub */ + struct iecm_tx_queue_stats cleaned_stats = {0}; + struct hlist_node *tmp_buf = NULL; + struct iecm_tx_buf *tx_buf = NULL; + + /* Buffer completion */ + hash_for_each_possible_safe(txq->sched_buf_hash, tx_buf, tmp_buf, + hlist, compl_tag) { + if (tx_buf->compl_tag != compl_tag) + continue; + + if (likely(tx_buf->skb)) { + /* fetch timestamp from completion + * descriptor to report to stack + */ + iecm_tx_hw_tstamp(tx_buf->skb, desc_ts); + + cleaned_stats = iecm_tx_splitq_clean_buf(txq, tx_buf, + budget); + } else if (dma_unmap_len(tx_buf, len)) { + dma_unmap_page(txq->dev, + dma_unmap_addr(tx_buf, dma), + dma_unmap_len(tx_buf, len), + DMA_TO_DEVICE); + dma_unmap_len_set(tx_buf, len, 0); + } + + /* Push shadow buf back onto stack */ + iecm_buf_lifo_push(&txq->buf_stack, tx_buf); + + hash_del(&tx_buf->hlist); + } + + return cleaned_stats; } /** @@ -1343,7 +1560,109 @@ iecm_tx_clean_flow_sch_bufs(struct iecm_queue *txq, u16 compl_tag, static bool iecm_tx_clean_complq(struct iecm_queue *complq, int budget) { - /* stub */ + struct iecm_splitq_tx_compl_desc *tx_desc; + struct iecm_vport *vport = complq->vport; + s16 ntc = complq->next_to_clean; + unsigned int complq_budget; + + complq_budget = vport->compln_clean_budget; + tx_desc = IECM_SPLITQ_TX_COMPLQ_DESC(complq, ntc); + ntc -= complq->desc_count; + + do { + struct iecm_tx_queue_stats cleaned_stats = {0}; + bool descs_only = false; + struct iecm_queue *tx_q; + u16 compl_tag, hw_head; + int tx_qid; + u8 ctype; /* completion type */ + u16 gen; + + /* if the descriptor isn't done, no work yet to do */ + gen = (le16_to_cpu(tx_desc->qid_comptype_gen) & + IECM_TXD_COMPLQ_GEN_M) >> IECM_TXD_COMPLQ_GEN_S; + if (test_bit(__IECM_Q_GEN_CHK, complq->flags) != gen) + break; + + /* Find necessary info of TX queue to clean buffers */ + tx_qid = (le16_to_cpu(tx_desc->qid_comptype_gen) & + IECM_TXD_COMPLQ_QID_M) >> IECM_TXD_COMPLQ_QID_S; + tx_q = iecm_tx_find_q(vport, tx_qid); + if (!tx_q) { + dev_err(&complq->vport->adapter->pdev->dev, + "TxQ #%d not found\n", tx_qid); + goto fetch_next_desc; + } + + /* Determine completion type */ + ctype = 
(le16_to_cpu(tx_desc->qid_comptype_gen) & + IECM_TXD_COMPLQ_COMPL_TYPE_M) >> + IECM_TXD_COMPLQ_COMPL_TYPE_S; + switch (ctype) { + case IECM_TXD_COMPLT_RE: + hw_head = le16_to_cpu(tx_desc->q_head_compl_tag.q_head); + + cleaned_stats = iecm_tx_splitq_clean(tx_q, hw_head, + budget, + descs_only); + break; + case IECM_TXD_COMPLT_RS: + if (test_bit(__IECM_Q_FLOW_SCH_EN, tx_q->flags)) { + compl_tag = + le16_to_cpu(tx_desc->q_head_compl_tag.compl_tag); + + cleaned_stats = + iecm_tx_clean_flow_sch_bufs(tx_q, + compl_tag, + tx_desc->ts, + budget); + } else { + hw_head = + le16_to_cpu(tx_desc->q_head_compl_tag.q_head); + + cleaned_stats = iecm_tx_splitq_clean(tx_q, + hw_head, + budget, + false); + } + + break; + case IECM_TXD_COMPLT_SW_MARKER: + iecm_tx_handle_sw_marker(tx_q); + break; + default: + dev_err(&tx_q->vport->adapter->pdev->dev, + "Unknown TX completion type: %d\n", + ctype); + goto fetch_next_desc; + } + + tx_q->itr.stats.tx.packets += cleaned_stats.packets; + tx_q->itr.stats.tx.bytes += cleaned_stats.bytes; + u64_stats_update_begin(&tx_q->stats_sync); + tx_q->q_stats.tx.packets += cleaned_stats.packets; + tx_q->q_stats.tx.bytes += cleaned_stats.bytes; + u64_stats_update_end(&tx_q->stats_sync); + +fetch_next_desc: + tx_desc++; + ntc++; + if (unlikely(!ntc)) { + ntc -= complq->desc_count; + tx_desc = IECM_SPLITQ_TX_COMPLQ_DESC(complq, 0); + change_bit(__IECM_Q_GEN_CHK, complq->flags); + } + + prefetch(tx_desc); + + /* update budget accounting */ + complq_budget--; + } while (likely(complq_budget)); + + ntc += complq->desc_count; + complq->next_to_clean = ntc; + + return !!complq_budget; } /** @@ -1359,7 +1678,12 @@ iecm_tx_splitq_build_ctb(union iecm_tx_flex_desc *desc, struct iecm_tx_splitq_params *parms, u16 td_cmd, u16 size) { - /* stub */ + desc->q.qw1.cmd_dtype = + cpu_to_le16(parms->dtype & IECM_FLEX_TXD_QW1_DTYPE_M); + desc->q.qw1.cmd_dtype |= + cpu_to_le16((td_cmd << IECM_FLEX_TXD_QW1_CMD_S) & + IECM_FLEX_TXD_QW1_CMD_M); + desc->q.qw1.buf_size = cpu_to_le16((u16)size); } /** @@ -1375,7 +1699,13 @@ iecm_tx_splitq_build_flow_desc(union iecm_tx_flex_desc *desc, struct iecm_tx_splitq_params *parms, u16 td_cmd, u16 size) { - /* stub */ + desc->flow.qw1.cmd_dtype = (u16)parms->dtype | td_cmd; + desc->flow.qw1.rxr_bufsize = cpu_to_le16((u16)size); + desc->flow.qw1.compl_tag = cpu_to_le16(parms->compl_tag); + + desc->flow.qw1.ts[0] = parms->offload.desc_ts & 0xff; + desc->flow.qw1.ts[1] = (parms->offload.desc_ts >> 8) & 0xff; + desc->flow.qw1.ts[2] = (parms->offload.desc_ts >> 16) & 0xff; } /** @@ -1388,7 +1718,19 @@ iecm_tx_splitq_build_flow_desc(union iecm_tx_flex_desc *desc, static int __iecm_tx_maybe_stop(struct iecm_queue *tx_q, unsigned int size) { - /* stub */ + netif_stop_subqueue(tx_q->vport->netdev, tx_q->idx); + + /* Memory barrier before checking head and tail */ + smp_mb(); + + /* Check again in a case another CPU has just made room available. */ + if (likely(IECM_DESC_UNUSED(tx_q) < size)) + return -EBUSY; + + /* A reprieve! 
- use start_subqueue because it doesn't call schedule */ + netif_start_subqueue(tx_q->vport->netdev, tx_q->idx); + + return 0; } /** @@ -1400,7 +1742,10 @@ __iecm_tx_maybe_stop(struct iecm_queue *tx_q, unsigned int size) */ int iecm_tx_maybe_stop(struct iecm_queue *tx_q, unsigned int size) { - /* stub */ + if (likely(IECM_DESC_UNUSED(tx_q) >= size)) + return 0; + + return __iecm_tx_maybe_stop(tx_q, size); } /** @@ -1410,7 +1755,23 @@ int iecm_tx_maybe_stop(struct iecm_queue *tx_q, unsigned int size) */ void iecm_tx_buf_hw_update(struct iecm_queue *tx_q, u32 val) { - /* stub */ + struct netdev_queue *nq; + + nq = netdev_get_tx_queue(tx_q->vport->netdev, tx_q->idx); + tx_q->next_to_use = val; + + iecm_tx_maybe_stop(tx_q, IECM_TX_DESC_NEEDED); + + /* Force memory writes to complete before letting h/w + * know there are new descriptors to fetch. (Only + * applicable for weak-ordered memory model archs, + * such as IA-64). + */ + wmb(); + + /* notify HW of packet */ + if (netif_xmit_stopped(nq) || !netdev_xmit_more()) + writel_relaxed(val, tx_q->tail); } /** @@ -1443,7 +1804,7 @@ void iecm_tx_buf_hw_update(struct iecm_queue *tx_q, u32 val) */ static unsigned int __iecm_tx_desc_count_required(unsigned int size) { - /* stub */ + return ((size * 85) >> 20) + IECM_TX_DESCS_FOR_SKB_DATA_PTR; } /** @@ -1454,13 +1815,26 @@ static unsigned int __iecm_tx_desc_count_required(unsigned int size) */ unsigned int iecm_tx_desc_count_required(struct sk_buff *skb) { - /* stub */ + const skb_frag_t *frag = &skb_shinfo(skb)->frags[0]; + unsigned int nr_frags = skb_shinfo(skb)->nr_frags; + unsigned int count = 0, size = skb_headlen(skb); + + for (;;) { + count += __iecm_tx_desc_count_required(size); + + if (!nr_frags--) + break; + + size = skb_frag_size(frag++); + } + + return count; } /** * iecm_tx_splitq_map - Build the Tx flex descriptor * @tx_q: queue to send buffer on - * @off: pointer to offload params struct + * @parms: pointer to splitq params struct * @first: first buffer info buffer to use * * This function loops over the skb data pointed to by *first @@ -1469,10 +1843,130 @@ unsigned int iecm_tx_desc_count_required(struct sk_buff *skb) */ static void iecm_tx_splitq_map(struct iecm_queue *tx_q, - struct iecm_tx_offload_params *off, + struct iecm_tx_splitq_params *parms, struct iecm_tx_buf *first) { - /* stub */ + union iecm_tx_flex_desc *tx_desc; + unsigned int data_len, size; + struct iecm_tx_buf *tx_buf; + u16 i = tx_q->next_to_use; + struct netdev_queue *nq; + struct sk_buff *skb; + skb_frag_t *frag; + u16 td_cmd = 0; + dma_addr_t dma; + + skb = first->skb; + + td_cmd = parms->offload.td_cmd; + parms->compl_tag = tx_q->tx_buf_key; + + data_len = skb->data_len; + size = skb_headlen(skb); + + tx_desc = IECM_FLEX_TX_DESC(tx_q, i); + + dma = dma_map_single(tx_q->dev, skb->data, size, DMA_TO_DEVICE); + + tx_buf = first; + + for (frag = &skb_shinfo(skb)->frags[0];; frag++) { + unsigned int max_data = IECM_TX_MAX_DESC_DATA_ALIGNED; + + if (dma_mapping_error(tx_q->dev, dma)) + goto dma_error; + + /* record length, and DMA address */ + dma_unmap_len_set(tx_buf, len, size); + dma_unmap_addr_set(tx_buf, dma, dma); + + /* align size to end of page */ + max_data += -dma & (IECM_TX_MAX_READ_REQ_SIZE - 1); + + /* buf_addr is in same location for both desc types */ + tx_desc->q.buf_addr = cpu_to_le64(dma); + + /* account for data chunks larger than the hardware + * can handle + */ + while (unlikely(size > IECM_TX_MAX_DESC_DATA)) { + parms->splitq_build_ctb(tx_desc, parms, td_cmd, size); + + tx_desc++; + i++; + + if 
(i == tx_q->desc_count) { + tx_desc = IECM_FLEX_TX_DESC(tx_q, 0); + i = 0; + } + + dma += max_data; + size -= max_data; + + max_data = IECM_TX_MAX_DESC_DATA_ALIGNED; + /* buf_addr is in same location for both desc types */ + tx_desc->q.buf_addr = cpu_to_le64(dma); + } + + if (likely(!data_len)) + break; + parms->splitq_build_ctb(tx_desc, parms, td_cmd, size); + tx_desc++; + i++; + + if (i == tx_q->desc_count) { + tx_desc = IECM_FLEX_TX_DESC(tx_q, 0); + i = 0; + } + + size = skb_frag_size(frag); + data_len -= size; + + dma = skb_frag_dma_map(tx_q->dev, frag, 0, size, + DMA_TO_DEVICE); + + tx_buf->compl_tag = parms->compl_tag; + tx_buf = &tx_q->tx_buf[i]; + } + + /* record bytecount for BQL */ + nq = netdev_get_tx_queue(tx_q->vport->netdev, tx_q->idx); + netdev_tx_sent_queue(nq, first->bytecount); + + /* record SW timestamp if HW timestamp is not available */ + skb_tx_timestamp(first->skb); + + /* write last descriptor with RS and EOP bits */ + td_cmd |= parms->eop_cmd; + parms->splitq_build_ctb(tx_desc, parms, td_cmd, size); + i++; + if (i == tx_q->desc_count) + i = 0; + + /* set next_to_watch value indicating a packet is present */ + first->next_to_watch = tx_desc; + tx_buf->compl_tag = parms->compl_tag++; + + iecm_tx_buf_hw_update(tx_q, i); + + /* Update TXQ Completion Tag key for next buffer */ + tx_q->tx_buf_key = parms->compl_tag; + + return; + +dma_error: + /* clear DMA mappings for failed tx_buf map */ + for (;;) { + tx_buf = &tx_q->tx_buf[i]; + iecm_tx_buf_rel(tx_q, tx_buf); + if (tx_buf == first) + break; + if (i == 0) + i = tx_q->desc_count; + i--; + } + + tx_q->next_to_use = i; } /** @@ -1488,7 +1982,79 @@ iecm_tx_splitq_map(struct iecm_queue *tx_q, static int iecm_tso(struct iecm_tx_buf *first, struct iecm_tx_offload_params *off) { - /* stub */ + struct sk_buff *skb = first->skb; + union { + struct iphdr *v4; + struct ipv6hdr *v6; + unsigned char *hdr; + } ip; + union { + struct tcphdr *tcp; + struct udphdr *udp; + unsigned char *hdr; + } l4; + u32 paylen, l4_start; + int err; + + if (skb->ip_summed != CHECKSUM_PARTIAL) + return 0; + + if (!skb_is_gso(skb)) + return 0; + + err = skb_cow_head(skb, 0); + if (err < 0) + return err; + + ip.hdr = skb_network_header(skb); + l4.hdr = skb_transport_header(skb); + + /* initialize outer IP header fields */ + if (ip.v4->version == 4) { + ip.v4->tot_len = 0; + ip.v4->check = 0; + } else { + ip.v6->payload_len = 0; + } + + /* determine offset of transport header */ + l4_start = l4.hdr - skb->data; + + /* remove payload length from checksum */ + paylen = skb->len - l4_start; + + switch (skb_shinfo(skb)->gso_type) { + case SKB_GSO_TCPV4: + case SKB_GSO_TCPV6: + csum_replace_by_diff(&l4.tcp->check, + (__force __wsum)htonl(paylen)); + + /* compute length of segmentation header */ + off->tso_hdr_len = (l4.tcp->doff * 4) + l4_start; + break; + case SKB_GSO_UDP_L4: + csum_replace_by_diff(&l4.udp->check, + (__force __wsum)htonl(paylen)); + /* compute length of segmentation header */ + off->tso_hdr_len = sizeof(struct udphdr) + l4_start; + l4.udp->len = + htons(skb_shinfo(skb)->gso_size + + sizeof(struct udphdr)); + break; + default: + return -EINVAL; + } + + off->tso_len = skb->len - off->tso_hdr_len; + off->mss = skb_shinfo(skb)->gso_size; + + /* update gso_segs and bytecount */ + first->gso_segs = skb_shinfo(skb)->gso_segs; + first->bytecount += (first->gso_segs - 1) * off->tso_hdr_len; + + first->tx_flags |= IECM_TX_FLAGS_TSO; + + return 0; } /** @@ -1501,7 +2067,84 @@ static int iecm_tso(struct iecm_tx_buf *first, static netdev_tx_t 
iecm_tx_splitq_frame(struct sk_buff *skb, struct iecm_queue *tx_q) { - /* stub */ + struct iecm_tx_splitq_params tx_parms = { NULL, 0, 0, 0, {0} }; + struct iecm_tx_buf *first; + unsigned int count; + + count = iecm_tx_desc_count_required(skb); + + /* need: 1 descriptor per page * PAGE_SIZE/IECM_MAX_DATA_PER_TXD, + * + 1 desc for skb_head_len/IECM_MAX_DATA_PER_TXD, + * + 4 desc gap to avoid the cache line where head is, + * + 1 desc for context descriptor, + * otherwise try next time + */ + if (iecm_tx_maybe_stop(tx_q, count + IECM_TX_DESCS_PER_CACHE_LINE + + IECM_TX_DESCS_FOR_CTX)) { + return NETDEV_TX_BUSY; + } + + /* record the location of the first descriptor for this packet */ + first = &tx_q->tx_buf[tx_q->next_to_use]; + first->skb = skb; + first->bytecount = max_t(unsigned int, skb->len, ETH_ZLEN); + first->gso_segs = 1; + first->tx_flags = 0; + + if (iecm_tso(first, &tx_parms.offload) < 0) { + /* If tso returns an error, drop the packet */ + dev_kfree_skb_any(skb); + return NETDEV_TX_OK; + } + + if (first->tx_flags & IECM_TX_FLAGS_TSO) { + /* If TSO is needed, set up context desc */ + union iecm_flex_tx_ctx_desc *ctx_desc; + int i = tx_q->next_to_use; + + /* grab the next descriptor */ + ctx_desc = IECM_FLEX_TX_CTX_DESC(tx_q, i); + i++; + tx_q->next_to_use = (i < tx_q->desc_count) ? i : 0; + + ctx_desc->tso.qw1.cmd_dtype |= + cpu_to_le16(IECM_TX_DESC_DTYPE_FLEX_TSO_CTX | + IECM_TX_FLEX_CTX_DESC_CMD_TSO); + ctx_desc->tso.qw0.flex_tlen = + cpu_to_le32(tx_parms.offload.tso_len & + IECM_TXD_FLEX_CTX_TLEN_M); + ctx_desc->tso.qw0.mss_rt = + cpu_to_le16(tx_parms.offload.mss & + IECM_TXD_FLEX_CTX_MSS_RT_M); + ctx_desc->tso.qw0.hdr_len = tx_parms.offload.tso_hdr_len; + } + + if (test_bit(__IECM_Q_FLOW_SCH_EN, tx_q->flags)) { + s64 ts_ns = first->skb->skb_mstamp_ns; + + tx_parms.offload.desc_ts = + ts_ns >> IECM_TW_TIME_STAMP_GRAN_512_DIV_S; + + tx_parms.dtype = IECM_TX_DESC_DTYPE_FLEX_FLOW_SCHE; + tx_parms.splitq_build_ctb = iecm_tx_splitq_build_flow_desc; + tx_parms.eop_cmd = + IECM_TXD_FLEX_FLOW_CMD_EOP | IECM_TXD_FLEX_FLOW_CMD_RE; + + if (skb->ip_summed == CHECKSUM_PARTIAL) + tx_parms.offload.td_cmd |= IECM_TXD_FLEX_FLOW_CMD_CS_EN; + + } else { + tx_parms.dtype = IECM_TX_DESC_DTYPE_FLEX_DATA; + tx_parms.splitq_build_ctb = iecm_tx_splitq_build_ctb; + tx_parms.eop_cmd = IECM_TX_DESC_CMD_EOP | IECM_TX_DESC_CMD_RS; + + if (skb->ip_summed == CHECKSUM_PARTIAL) + tx_parms.offload.td_cmd |= IECM_TX_FLEX_DESC_CMD_CS_EN; + } + + iecm_tx_splitq_map(tx_q, &tx_parms, first); + + return NETDEV_TX_OK; } /** @@ -1514,7 +2157,18 @@ iecm_tx_splitq_frame(struct sk_buff *skb, struct iecm_queue *tx_q) netdev_tx_t iecm_tx_splitq_start(struct sk_buff *skb, struct net_device *netdev) { - /* stub */ + struct iecm_vport *vport = iecm_netdev_to_vport(netdev); + struct iecm_queue *tx_q; + + tx_q = vport->txqs[skb->queue_mapping]; + + /* hardware can't handle really short frames, hardware padding works + * beyond this point + */ + if (skb_put_padto(skb, IECM_TX_MIN_LEN)) + return NETDEV_TX_OK; + + return iecm_tx_splitq_frame(skb, tx_q); } /** @@ -1529,7 +2183,18 @@ netdev_tx_t iecm_tx_splitq_start(struct sk_buff *skb, static enum pkt_hash_types iecm_ptype_to_htype(struct iecm_vport *vport, u16 ptype) { - /* stub */ + struct iecm_rx_ptype_decoded decoded = vport->rx_ptype_lkup[ptype]; + + if (!decoded.known) + return PKT_HASH_TYPE_NONE; + if (decoded.payload_layer == IECM_RX_PTYPE_PAYLOAD_LAYER_PAY4) + return PKT_HASH_TYPE_L4; + if (decoded.payload_layer == IECM_RX_PTYPE_PAYLOAD_LAYER_PAY3) + return PKT_HASH_TYPE_L3; 
+ if (decoded.outer_ip == IECM_RX_PTYPE_OUTER_L2) + return PKT_HASH_TYPE_L2; + + return PKT_HASH_TYPE_NONE; } /** @@ -1543,7 +2208,17 @@ static void iecm_rx_hash(struct iecm_queue *rxq, struct sk_buff *skb, struct iecm_flex_rx_desc *rx_desc, u16 ptype) { - /* stub */ + u32 hash; + + if (!iecm_is_feature_ena(rxq->vport, NETIF_F_RXHASH)) + return; + + hash = rx_desc->status_err1 | + (rx_desc->fflags1 << 8) | + (rx_desc->ts_low << 16) | + (rx_desc->ff2_mirrid_hash2.hash2 << 24); + + skb_set_hash(skb, hash, iecm_ptype_to_htype(rxq->vport, ptype)); } /** @@ -1559,7 +2234,63 @@ static void iecm_rx_csum(struct iecm_queue *rxq, struct sk_buff *skb, struct iecm_flex_rx_desc *rx_desc, u16 ptype) { - /* stub */ + struct iecm_rx_ptype_decoded decoded; + u8 rx_status_0_qw1, rx_status_0_qw0; + bool ipv4, ipv6; + + /* Start with CHECKSUM_NONE and by default csum_level = 0 */ + skb->ip_summed = CHECKSUM_NONE; + + /* check if Rx checksum is enabled */ + if (!iecm_is_feature_ena(rxq->vport, NETIF_F_RXCSUM)) + return; + + rx_status_0_qw1 = rx_desc->status_err0_qw1; + /* check if HW has decoded the packet and checksum */ + if (!(rx_status_0_qw1 & BIT(IECM_RX_FLEX_DESC_STATUS0_L3L4P_S))) + return; + + decoded = rxq->vport->rx_ptype_lkup[ptype]; + if (!(decoded.known && decoded.outer_ip)) + return; + + ipv4 = (decoded.outer_ip == IECM_RX_PTYPE_OUTER_IP) && + (decoded.outer_ip_ver == IECM_RX_PTYPE_OUTER_IPV4); + ipv6 = (decoded.outer_ip == IECM_RX_PTYPE_OUTER_IP) && + (decoded.outer_ip_ver == IECM_RX_PTYPE_OUTER_IPV6); + + if (ipv4 && (rx_status_0_qw1 & + (BIT(IECM_RX_FLEX_DESC_STATUS0_XSUM_IPE_S) | + BIT(IECM_RX_FLEX_DESC_STATUS0_XSUM_EIPE_S)))) + goto checksum_fail; + + rx_status_0_qw0 = rx_desc->status_err0_qw0; + if (ipv6 && (rx_status_0_qw0 & + (BIT(IECM_RX_FLEX_DESC_STATUS0_IPV6EXADD_S)))) + return; + + /* check for L4 errors and handle packets that were not able to be + * checksummed + */ + if (rx_status_0_qw1 & BIT(IECM_RX_FLEX_DESC_STATUS0_XSUM_L4E_S)) + goto checksum_fail; + + /* Only report checksum unnecessary for ICMP, TCP, UDP, or SCTP */ + switch (decoded.inner_prot) { + case IECM_RX_PTYPE_INNER_PROT_ICMP: + case IECM_RX_PTYPE_INNER_PROT_TCP: + case IECM_RX_PTYPE_INNER_PROT_UDP: + case IECM_RX_PTYPE_INNER_PROT_SCTP: + skb->ip_summed = CHECKSUM_UNNECESSARY; + rxq->q_stats.rx.basic_csum++; + default: + break; + } + return; + +checksum_fail: + rxq->q_stats.rx.csum_err++; + dev_dbg(rxq->dev, "RX Checksum not available\n"); } /** @@ -1575,7 +2306,74 @@ iecm_rx_csum(struct iecm_queue *rxq, struct sk_buff *skb, static bool iecm_rx_rsc(struct iecm_queue *rxq, struct sk_buff *skb, struct iecm_flex_rx_desc *rx_desc, u16 ptype) { - /* stub */ + struct iecm_rx_ptype_decoded decoded; + u16 rsc_segments, rsc_payload_len; + struct ipv6hdr *ipv6h; + struct tcphdr *tcph; + struct iphdr *ipv4h; + bool ipv4, ipv6; + u16 hdr_len; + + rsc_payload_len = le16_to_cpu(rx_desc->fmd1_misc.rscseglen); + if (!rsc_payload_len) + goto rsc_err; + + decoded = rxq->vport->rx_ptype_lkup[ptype]; + if (!(decoded.known && decoded.outer_ip)) + goto rsc_err; + + ipv4 = (decoded.outer_ip == IECM_RX_PTYPE_OUTER_IP) && + (decoded.outer_ip_ver == IECM_RX_PTYPE_OUTER_IPV4); + ipv6 = (decoded.outer_ip == IECM_RX_PTYPE_OUTER_IP) && + (decoded.outer_ip_ver == IECM_RX_PTYPE_OUTER_IPV6); + + if (!(ipv4 ^ ipv6)) + goto rsc_err; + + if (ipv4) + hdr_len = ETH_HLEN + sizeof(struct tcphdr) + + sizeof(struct iphdr); + else + hdr_len = ETH_HLEN + sizeof(struct tcphdr) + + sizeof(struct ipv6hdr); + + rsc_segments = DIV_ROUND_UP(skb->len - hdr_len, 
rsc_payload_len); + + NAPI_GRO_CB(skb)->count = rsc_segments; + skb_shinfo(skb)->gso_size = rsc_payload_len; + + skb_reset_network_header(skb); + + if (ipv4) { + ipv4h = ip_hdr(skb); + skb_shinfo(skb)->gso_type = SKB_GSO_TCPV4; + + /* Reset and set transport header offset in skb */ + skb_set_transport_header(skb, sizeof(struct iphdr)); + tcph = tcp_hdr(skb); + + /* Compute the TCP pseudo header checksum*/ + tcph->check = + ~tcp_v4_check(skb->len - skb_transport_offset(skb), + ipv4h->saddr, ipv4h->daddr, 0); + } else { + ipv6h = ipv6_hdr(skb); + skb_shinfo(skb)->gso_type = SKB_GSO_TCPV6; + skb_set_transport_header(skb, sizeof(struct ipv6hdr)); + tcph = tcp_hdr(skb); + tcph->check = + ~tcp_v6_check(skb->len - skb_transport_offset(skb), + &ipv6h->saddr, &ipv6h->daddr, 0); + } + + tcp_gro_complete(skb); + + /* Map Rx qid to the skb*/ + skb_record_rx_queue(skb, rxq->q_id); + + return true; +rsc_err: + return false; } /** @@ -1587,7 +2385,19 @@ static bool iecm_rx_rsc(struct iecm_queue *rxq, struct sk_buff *skb, static void iecm_rx_hwtstamp(struct iecm_flex_rx_desc *rx_desc, struct sk_buff __maybe_unused *skb) { - /* stub */ + u8 ts_lo = rx_desc->ts_low; + u32 ts_hi = 0; + u64 ts_ns = 0; + + ts_hi = le32_to_cpu(rx_desc->flex_ts.ts_high); + + ts_ns |= ts_lo | ((u64)ts_hi << 8); + + if (ts_ns) { + memset(skb_hwtstamps(skb), 0, + sizeof(struct skb_shared_hwtstamps)); + skb_hwtstamps(skb)->hwtstamp = ns_to_ktime(ts_ns); + } } /** @@ -1604,7 +2414,26 @@ static bool iecm_rx_process_skb_fields(struct iecm_queue *rxq, struct sk_buff *skb, struct iecm_flex_rx_desc *rx_desc) { - /* stub */ + bool err = false; + u16 rx_ptype; + bool rsc; + + rx_ptype = le16_to_cpu(rx_desc->ptype_err_fflags0) & + IECM_RXD_FLEX_PTYPE_M; + + /* modifies the skb - consumes the enet header */ + skb->protocol = eth_type_trans(skb, rxq->vport->netdev); + iecm_rx_csum(rxq, skb, rx_desc, rx_ptype); + /* process RSS/hash */ + iecm_rx_hash(rxq, skb, rx_desc, rx_ptype); + + rsc = le16_to_cpu(rx_desc->hdrlen_flags) & IECM_RXD_FLEX_RSC_M; + if (rsc) + err = iecm_rx_rsc(rxq, skb, rx_desc, rx_ptype); + + iecm_rx_hwtstamp(rx_desc, skb); + + return err; } /** @@ -1617,7 +2446,7 @@ iecm_rx_process_skb_fields(struct iecm_queue *rxq, struct sk_buff *skb, */ void iecm_rx_skb(struct iecm_queue *rxq, struct sk_buff *skb) { - /* stub */ + napi_gro_receive(&rxq->q_vector->napi, skb); } /** @@ -1626,7 +2455,7 @@ void iecm_rx_skb(struct iecm_queue *rxq, struct sk_buff *skb) */ static bool iecm_rx_page_is_reserved(struct page *page) { - /* stub */ + return (page_to_nid(page) != numa_mem_id()) || page_is_pfmemalloc(page); } /** @@ -1642,7 +2471,13 @@ static bool iecm_rx_page_is_reserved(struct page *page) static void iecm_rx_buf_adjust_pg_offset(struct iecm_rx_buf *rx_buf, unsigned int size) { - /* stub */ +#if (PAGE_SIZE < 8192) + /* flip page offset to other buffer */ + rx_buf->page_offset ^= size; +#else + /* move offset up to the next cache line */ + rx_buf->page_offset += size; +#endif } /** @@ -1656,7 +2491,34 @@ iecm_rx_buf_adjust_pg_offset(struct iecm_rx_buf *rx_buf, unsigned int size) */ static bool iecm_rx_can_reuse_page(struct iecm_rx_buf *rx_buf) { - /* stub */ +#if (PAGE_SIZE >= 8192) +#endif + unsigned int pagecnt_bias = rx_buf->pagecnt_bias; + struct page *page = rx_buf->page; + + /* avoid re-using remote pages */ + if (unlikely(iecm_rx_page_is_reserved(page))) + return false; + +#if (PAGE_SIZE < 8192) + /* if we are only owner of page we can reuse it */ + if (unlikely((page_count(page) - pagecnt_bias) > 1)) + return false; +#else + if 
(rx_buf->page_offset > last_offset) + return false; +#endif /* PAGE_SIZE < 8192) */ + + /* If we have drained the page fragment pool we need to update + * the pagecnt_bias and page count so that we fully restock the + * number of references the driver holds. + */ + if (unlikely(pagecnt_bias == 1)) { + page_ref_add(page, USHRT_MAX - 1); + rx_buf->pagecnt_bias = USHRT_MAX; + } + + return true; } /** @@ -1672,7 +2534,17 @@ static bool iecm_rx_can_reuse_page(struct iecm_rx_buf *rx_buf) void iecm_rx_add_frag(struct iecm_rx_buf *rx_buf, struct sk_buff *skb, unsigned int size) { - /* stub */ +#if (PAGE_SIZE >= 8192) + unsigned int truesize = SKB_DATA_ALIGN(size); +#else + unsigned int truesize = IECM_RX_BUF_2048; +#endif + + skb_add_rx_frag(skb, skb_shinfo(skb)->nr_frags, rx_buf->page, + rx_buf->page_offset, size, truesize); + + /* page is being used so we must update the page offset */ + iecm_rx_buf_adjust_pg_offset(rx_buf, truesize); } /** @@ -1687,7 +2559,22 @@ void iecm_rx_reuse_page(struct iecm_queue *rx_bufq, bool hsplit, struct iecm_rx_buf *old_buf) { - /* stub */ + u16 ntu = rx_bufq->next_to_use; + struct iecm_rx_buf *new_buf; + + if (hsplit) + new_buf = &rx_bufq->rx_buf.hdr_buf[ntu]; + else + new_buf = &rx_bufq->rx_buf.buf[ntu]; + + /* Transfer page from old buffer to new buffer. + * Move each member individually to avoid possible store + * forwarding stalls and unnecessary copy of skb. + */ + new_buf->dma = old_buf->dma; + new_buf->page = old_buf->page; + new_buf->page_offset = old_buf->page_offset; + new_buf->pagecnt_bias = old_buf->pagecnt_bias; } /** @@ -1704,7 +2591,15 @@ static void iecm_rx_get_buf_page(struct device *dev, struct iecm_rx_buf *rx_buf, const unsigned int size) { - /* stub */ + prefetch(rx_buf->page); + + /* we are reusing so sync this buffer for CPU use */ + dma_sync_single_range_for_cpu(dev, rx_buf->dma, + rx_buf->page_offset, size, + DMA_FROM_DEVICE); + + /* We have pulled a buffer for use, so decrement pagecnt_bias */ + rx_buf->pagecnt_bias--; } /** @@ -1721,7 +2616,52 @@ struct sk_buff * iecm_rx_construct_skb(struct iecm_queue *rxq, struct iecm_rx_buf *rx_buf, unsigned int size) { - /* stub */ + void *va = page_address(rx_buf->page) + rx_buf->page_offset; + unsigned int headlen; + struct sk_buff *skb; + + /* prefetch first cache line of first page */ + prefetch(va); +#if L1_CACHE_BYTES < 128 + prefetch((u8 *)va + L1_CACHE_BYTES); +#endif /* L1_CACHE_BYTES */ + /* allocate a skb to store the frags */ + skb = __napi_alloc_skb(&rxq->q_vector->napi, IECM_RX_HDR_SIZE, + GFP_ATOMIC | __GFP_NOWARN); + if (unlikely(!skb)) + return NULL; + + skb_record_rx_queue(skb, rxq->idx); + + /* Determine available headroom for copy */ + headlen = size; + if (headlen > IECM_RX_HDR_SIZE) + headlen = eth_get_headlen(skb->dev, va, IECM_RX_HDR_SIZE); + + /* align pull length to size of long to optimize memcpy performance */ + memcpy(__skb_put(skb, headlen), va, ALIGN(headlen, sizeof(long))); + + /* if we exhaust the linear part then add what is left as a frag */ + size -= headlen; + if (size) { +#if (PAGE_SIZE >= 8192) + unsigned int truesize = SKB_DATA_ALIGN(size); +#else + unsigned int truesize = IECM_RX_BUF_2048; +#endif + skb_add_rx_frag(skb, 0, rx_buf->page, + rx_buf->page_offset + headlen, size, truesize); + /* buffer is used by skb, update page_offset */ + iecm_rx_buf_adjust_pg_offset(rx_buf, truesize); + } else { + /* buffer is unused, reset bias back to rx_buf; data was copied + * onto skb's linear part so there's no need for adjusting + * page offset and we can reuse this 
buffer as-is + */ + rx_buf->pagecnt_bias++; + } + + return skb; } /** @@ -1738,7 +2678,11 @@ iecm_rx_construct_skb(struct iecm_queue *rxq, struct iecm_rx_buf *rx_buf, */ bool iecm_rx_cleanup_headers(struct sk_buff *skb) { - /* stub */ + /* if eth_skb_pad returns an error the skb was freed */ + if (eth_skb_pad(skb)) + return true; + + return false; } /** @@ -1751,7 +2695,7 @@ bool iecm_rx_cleanup_headers(struct sk_buff *skb) static bool iecm_rx_splitq_test_staterr(u8 stat_err_field, const u8 stat_err_bits) { - /* stub */ + return !!(stat_err_field & stat_err_bits); } /** @@ -1764,7 +2708,13 @@ iecm_rx_splitq_test_staterr(u8 stat_err_field, const u8 stat_err_bits) static bool iecm_rx_splitq_is_non_eop(struct iecm_flex_rx_desc *rx_desc) { - /* stub */ + /* if we are the last buffer then there is nothing else to do */ +#define IECM_RXD_EOF BIT(IECM_RX_FLEX_DESC_STATUS0_EOF_S) + if (likely(iecm_rx_splitq_test_staterr(rx_desc->status_err0_qw1, + IECM_RXD_EOF))) + return false; + + return true; } /** @@ -1781,7 +2731,24 @@ iecm_rx_splitq_is_non_eop(struct iecm_flex_rx_desc *rx_desc) bool iecm_rx_recycle_buf(struct iecm_queue *rx_bufq, bool hsplit, struct iecm_rx_buf *rx_buf) { - /* stub */ + bool recycled = false; + + if (iecm_rx_can_reuse_page(rx_buf)) { + /* hand second half of page back to the queue */ + iecm_rx_reuse_page(rx_bufq, hsplit, rx_buf); + recycled = true; + } else { + /* we are not reusing the buffer so unmap it */ + dma_unmap_page_attrs(rx_bufq->dev, rx_buf->dma, PAGE_SIZE, + DMA_FROM_DEVICE, IECM_RX_DMA_ATTR); + __page_frag_cache_drain(rx_buf->page, rx_buf->pagecnt_bias); + } + + /* clear contents of buffer_info */ + rx_buf->page = NULL; + rx_buf->skb = NULL; + + return recycled; } /** @@ -1797,7 +2764,19 @@ static void iecm_rx_splitq_put_bufs(struct iecm_queue *rx_bufq, struct iecm_rx_buf *hdr_buf, struct iecm_rx_buf *rx_buf) { - /* stub */ + u16 ntu = rx_bufq->next_to_use; + bool recycled = false; + + if (likely(hdr_buf)) + recycled = iecm_rx_recycle_buf(rx_bufq, true, hdr_buf); + if (likely(rx_buf)) + recycled = iecm_rx_recycle_buf(rx_bufq, false, rx_buf); + + /* update, and store next to alloc if the buffer was recycled */ + if (recycled) { + ntu++; + rx_bufq->next_to_use = (ntu < rx_bufq->desc_count) ? 
ntu : 0; + } } /** @@ -1806,7 +2785,14 @@ static void iecm_rx_splitq_put_bufs(struct iecm_queue *rx_bufq, */ static void iecm_rx_bump_ntc(struct iecm_queue *q) { - /* stub */ + u16 ntc = q->next_to_clean + 1; + /* fetch, update, and store next to clean */ + if (ntc < q->desc_count) { + q->next_to_clean = ntc; + } else { + q->next_to_clean = 0; + change_bit(__IECM_Q_GEN_CHK, q->flags); + } } /** @@ -1823,7 +2809,158 @@ static void iecm_rx_bump_ntc(struct iecm_queue *q) */ static int iecm_rx_splitq_clean(struct iecm_queue *rxq, int budget) { - /* stub */ + unsigned int total_rx_bytes = 0, total_rx_pkts = 0; + u16 cleaned_count[IECM_BUFQS_PER_RXQ_SET] = {0}; + struct iecm_queue *rx_bufq = NULL; + struct sk_buff *skb = rxq->skb; + bool failure = false; + int i; + + /* Process Rx packets bounded by budget */ + while (likely(total_rx_pkts < (unsigned int)budget)) { + struct iecm_flex_rx_desc *splitq_flex_rx_desc; + union iecm_rx_desc *rx_desc; + struct iecm_rx_buf *hdr_buf = NULL; + struct iecm_rx_buf *rx_buf = NULL; + unsigned int pkt_len = 0; + unsigned int hdr_len = 0; + u16 gen_id, buf_id; + u8 stat_err0_qw0; + u8 stat_err_bits; + /* Header buffer overflow only valid for header split */ + bool hbo = false; + int bufq_id; + + /* get the Rx desc from Rx queue based on 'next_to_clean' */ + rx_desc = IECM_RX_DESC(rxq, rxq->next_to_clean); + splitq_flex_rx_desc = (struct iecm_flex_rx_desc *)rx_desc; + + /* This memory barrier is needed to keep us from reading + * any other fields out of the rx_desc + */ + dma_rmb(); + + /* if the descriptor isn't done, no work yet to do */ + gen_id = le16_to_cpu(splitq_flex_rx_desc->pktlen_gen_bufq_id); + gen_id = (gen_id & IECM_RXD_FLEX_GEN_M) >> IECM_RXD_FLEX_GEN_S; + if (test_bit(__IECM_Q_GEN_CHK, rxq->flags) != gen_id) + break; + + pkt_len = le16_to_cpu(splitq_flex_rx_desc->pktlen_gen_bufq_id) & + IECM_RXD_FLEX_LEN_PBUF_M; + + hbo = splitq_flex_rx_desc->status_err0_qw1 & + BIT(IECM_RX_FLEX_DESC_STATUS0_HBO_S); + + if (unlikely(hbo)) { + rxq->q_stats.rx.hsplit_hbo++; + goto bypass_hsplit; + } + + hdr_len = + le16_to_cpu(splitq_flex_rx_desc->hdrlen_flags) & + IECM_RXD_FLEX_LEN_HDR_M; + +bypass_hsplit: + bufq_id = le16_to_cpu(splitq_flex_rx_desc->pktlen_gen_bufq_id); + bufq_id = (bufq_id & IECM_RXD_FLEX_BUFQ_ID_M) >> + IECM_RXD_FLEX_BUFQ_ID_S; + /* retrieve buffer from the rxq */ + rx_bufq = &rxq->rxq_grp->splitq.bufq_sets[bufq_id].bufq; + + buf_id = le16_to_cpu(splitq_flex_rx_desc->fmd0_bufid.buf_id); + + if (pkt_len) { + rx_buf = &rx_bufq->rx_buf.buf[buf_id]; + iecm_rx_get_buf_page(rx_bufq->dev, rx_buf, pkt_len); + } + + if (hdr_len) { + hdr_buf = &rx_bufq->rx_buf.hdr_buf[buf_id]; + iecm_rx_get_buf_page(rx_bufq->dev, hdr_buf, + hdr_len); + + skb = iecm_rx_construct_skb(rxq, hdr_buf, hdr_len); + } + + if (skb && pkt_len) + iecm_rx_add_frag(rx_buf, skb, pkt_len); + else if (pkt_len) + skb = iecm_rx_construct_skb(rxq, rx_buf, pkt_len); + + /* exit if we failed to retrieve a buffer */ + if (!skb) { + /* If we fetched a buffer, but didn't use it + * undo pagecnt_bias decrement + */ + if (rx_buf) + rx_buf->pagecnt_bias++; + break; + } + + iecm_rx_splitq_put_bufs(rx_bufq, hdr_buf, rx_buf); + iecm_rx_bump_ntc(rxq); + cleaned_count[bufq_id]++; + + /* skip if it is non EOP desc */ + if (iecm_rx_splitq_is_non_eop(splitq_flex_rx_desc)) + continue; + + stat_err_bits = BIT(IECM_RX_FLEX_DESC_STATUS0_RXE_S); + stat_err0_qw0 = splitq_flex_rx_desc->status_err0_qw0; + if (unlikely(iecm_rx_splitq_test_staterr(stat_err0_qw0, + stat_err_bits))) { + dev_kfree_skb_any(skb); + skb = 
NULL; + continue; + } + + /* correct empty headers and pad skb if needed (to make valid + * Ethernet frame + */ + if (iecm_rx_cleanup_headers(skb)) { + skb = NULL; + continue; + } + + /* probably a little skewed due to removing CRC */ + total_rx_bytes += skb->len; + + /* protocol */ + if (unlikely(iecm_rx_process_skb_fields(rxq, skb, + splitq_flex_rx_desc))) { + dev_kfree_skb_any(skb); + skb = NULL; + continue; + } + + /* send completed skb up the stack */ + iecm_rx_skb(rxq, skb); + skb = NULL; + + /* update budget accounting */ + total_rx_pkts++; + } + for (i = 0; i < IECM_BUFQS_PER_RXQ_SET; i++) { + if (cleaned_count[i]) { + rx_bufq = &rxq->rxq_grp->splitq.bufq_sets[i].bufq; + failure = iecm_rx_buf_hw_alloc_all(rx_bufq, + cleaned_count[i]) || + failure; + } + } + + rxq->skb = skb; + u64_stats_update_begin(&rxq->stats_sync); + rxq->q_stats.rx.packets += total_rx_pkts; + rxq->q_stats.rx.bytes += total_rx_bytes; + u64_stats_update_end(&rxq->stats_sync); + + rxq->itr.stats.rx.packets += total_rx_pkts; + rxq->itr.stats.rx.bytes += total_rx_bytes; + + /* guarantee a trip back through this routine if there was a failure */ + return failure ? budget : (int)total_rx_pkts; } /** @@ -2379,7 +3516,15 @@ iecm_vport_intr_napi_ena_all(struct iecm_vport *vport) static inline bool iecm_tx_splitq_clean_all(struct iecm_q_vector *q_vec, int budget) { - /* stub */ + bool clean_complete = true; + int i, budget_per_q; + + budget_per_q = max(budget / q_vec->num_txq, 1); + for (i = 0; i < q_vec->num_txq; i++) { + if (!iecm_tx_clean_complq(q_vec->tx[i], budget_per_q)) + clean_complete = false; + } + return clean_complete; } /** @@ -2394,7 +3539,22 @@ static inline bool iecm_rx_splitq_clean_all(struct iecm_q_vector *q_vec, int budget, int *cleaned) { - /* stub */ + bool clean_complete = true; + int pkts_cleaned_per_q; + int i, budget_per_q; + + budget_per_q = max(budget / q_vec->num_rxq, 1); + for (i = 0; i < q_vec->num_rxq; i++) { + pkts_cleaned_per_q = iecm_rx_splitq_clean(q_vec->rx[i], + budget_per_q); + /* if we clean as many as budgeted, we must not + * be done + */ + if (pkts_cleaned_per_q >= budget_per_q) + clean_complete = false; + *cleaned += pkts_cleaned_per_q; + } + return clean_complete; } /** @@ -2404,7 +3564,34 @@ iecm_rx_splitq_clean_all(struct iecm_q_vector *q_vec, int budget, */ static int iecm_vport_splitq_napi_poll(struct napi_struct *napi, int budget) { - /* stub */ + struct iecm_q_vector *q_vector = + container_of(napi, struct iecm_q_vector, napi); + bool clean_complete; + int work_done = 0; + + clean_complete = iecm_tx_splitq_clean_all(q_vector, budget); + + /* Handle case where we are called by netpoll with a budget of 0 */ + if (budget <= 0) + return budget; + + /* We attempt to distribute budget to each Rx queue fairly, but don't + * allow the budget to go below 1 because that would exit polling early. 
+ */ + clean_complete |= iecm_rx_splitq_clean_all(q_vector, budget, + &work_done); + + /* If work not completed, return budget and polling will return */ + if (!clean_complete) + return budget; + + /* Exit the polling mode, but don't re-enable interrupts if stack might + * poll us due to busy-polling + */ + if (likely(napi_complete_done(napi, work_done))) + iecm_vport_intr_update_itr_ena_irq(q_vector); + + return min_t(int, work_done, budget - 1); } /** From patchwork Fri Jun 26 02:07:35 2020 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Jeff Kirsher X-Patchwork-Id: 217080 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org X-Spam-Level: X-Spam-Status: No, score=-10.0 required=3.0 tests=HEADER_FROM_DIFFERENT_DOMAINS, INCLUDES_PATCH, MAILING_LIST_MULTI, SIGNED_OFF_BY,SPF_HELO_NONE,SPF_PASS,URIBL_BLOCKED,USER_AGENT_GIT autolearn=ham autolearn_force=no version=3.4.0 Received: from mail.kernel.org (mail.kernel.org [198.145.29.99]) by smtp.lore.kernel.org (Postfix) with ESMTP id 2F539C433E1 for ; Fri, 26 Jun 2020 02:09:30 +0000 (UTC) Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by mail.kernel.org (Postfix) with ESMTP id 01D9B2075D for ; Fri, 26 Jun 2020 02:09:30 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S1728070AbgFZCJR (ORCPT ); Thu, 25 Jun 2020 22:09:17 -0400 Received: from mga03.intel.com ([134.134.136.65]:45084 "EHLO mga03.intel.com" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S1725306AbgFZCIe (ORCPT ); Thu, 25 Jun 2020 22:08:34 -0400 IronPort-SDR: JAMmVA9qO360PDQkNQBdAUvgJmqNVIdkP9R1RCbschbGWR7rdYi9agWMLIuaNb0vnOjbB/iAp2 rBXzd7hY/oCQ== X-IronPort-AV: E=McAfee;i="6000,8403,9663"; a="145209898" X-IronPort-AV: E=Sophos;i="5.75,281,1589266800"; d="scan'208";a="145209898" X-Amp-Result: SKIPPED(no attachment in message) X-Amp-File-Uploaded: False Received: from orsmga006.jf.intel.com ([10.7.209.51]) by orsmga103.jf.intel.com with ESMTP/TLS/ECDHE-RSA-AES256-GCM-SHA384; 25 Jun 2020 19:07:40 -0700 IronPort-SDR: I8TrKSz+rxue5dWCTHX1eC9Dh/zjLa7F2TrNfWcRsK19PSEq5t+QnFsRRKXMztjXEmR6JGR9Eb azW7M5vqNqOA== X-ExtLoop1: 1 X-IronPort-AV: E=Sophos;i="5.75,281,1589266800"; d="scan'208";a="280011907" Received: from jtkirshe-desk1.jf.intel.com ([134.134.177.86]) by orsmga006.jf.intel.com with ESMTP; 25 Jun 2020 19:07:40 -0700 From: Jeff Kirsher To: davem@davemloft.net Cc: Alice Michael , netdev@vger.kernel.org, nhorman@redhat.com, sassmann@redhat.com, Alan Brady , Phani Burra , Joshua Hay , Madhu Chittim , Pavan Kumar Linga , Donald Skidmore , Jesse Brandeburg , Sridhar Samudrala , Jeff Kirsher Subject: [net-next v3 13/15] iecm: Add ethtool Date: Thu, 25 Jun 2020 19:07:35 -0700 Message-Id: <20200626020737.775377-14-jeffrey.t.kirsher@intel.com> X-Mailer: git-send-email 2.26.2 In-Reply-To: <20200626020737.775377-1-jeffrey.t.kirsher@intel.com> References: <20200626020737.775377-1-jeffrey.t.kirsher@intel.com> MIME-Version: 1.0 Sender: netdev-owner@vger.kernel.org Precedence: bulk List-ID: X-Mailing-List: netdev@vger.kernel.org From: Alice Michael Implement ethtool interface for the common module. 
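For reviewers unfamiliar with the pattern used by the stats code in this patch: the driver builds constant tables of { name, sizeof, offsetof } entries and widens each member to u64 when filling the ethtool buffer. The stand-alone sketch below is purely illustrative (names such as demo_queue_stats and DEMO_STAT are made up for this note and are not part of the patch); it shows the idea in plain C. The patch's IECM_STAT macro and iecm_add_one_ethtool_stat() follow the same shape, with u64_stats syncp protection added around the copy.

#include <stddef.h>
#include <stdint.h>
#include <stdio.h>

/* Hypothetical stand-ins for the driver's per-queue stats -- illustration only */
struct demo_queue_stats {
	uint64_t packets;
	uint64_t bytes;
	uint32_t csum_err;
};

struct demo_stat {
	const char *name;
	size_t size;	/* sizeof() the member */
	size_t offset;	/* offsetof() the member from the stats base */
};

#define DEMO_STAT(_type, _name, _member) {		\
	.name	= _name,				\
	.size	= sizeof(((_type *)0)->_member),	\
	.offset	= offsetof(_type, _member),		\
}

static const struct demo_stat demo_stats[] = {
	DEMO_STAT(struct demo_queue_stats, "packets", packets),
	DEMO_STAT(struct demo_queue_stats, "bytes", bytes),
	DEMO_STAT(struct demo_queue_stats, "csum_err", csum_err),
};

/* Widen any member size to u64, as the ethtool stats buffer expects */
static uint64_t demo_read_stat(const void *base, const struct demo_stat *s)
{
	const char *p = (const char *)base + s->offset;

	switch (s->size) {
	case sizeof(uint64_t):
		return *(const uint64_t *)p;
	case sizeof(uint32_t):
		return *(const uint32_t *)p;
	case sizeof(uint16_t):
		return *(const uint16_t *)p;
	case sizeof(uint8_t):
		return *(const uint8_t *)p;
	default:
		return 0;
	}
}

int main(void)
{
	struct demo_queue_stats q = { .packets = 10, .bytes = 1500, .csum_err = 2 };
	size_t i;

	for (i = 0; i < sizeof(demo_stats) / sizeof(demo_stats[0]); i++)
		printf("%s: %llu\n", demo_stats[i].name,
		       (unsigned long long)demo_read_stat(&q, &demo_stats[i]));
	return 0;
}

Because the table entries are plain data, the same loop can report a fixed-size stats layout even for queues that do not currently exist, which is the property the ethtool string/stat callbacks below rely on.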
Signed-off-by: Alice Michael Signed-off-by: Alan Brady Signed-off-by: Phani Burra Signed-off-by: Joshua Hay Signed-off-by: Madhu Chittim Signed-off-by: Pavan Kumar Linga Reviewed-by: Donald Skidmore Reviewed-by: Jesse Brandeburg Reviewed-by: Sridhar Samudrala Signed-off-by: Jeff Kirsher --- .../net/ethernet/intel/iecm/iecm_ethtool.c | 1050 ++++++++++++++++- drivers/net/ethernet/intel/iecm/iecm_lib.c | 100 +- 2 files changed, 1146 insertions(+), 4 deletions(-) diff --git a/drivers/net/ethernet/intel/iecm/iecm_ethtool.c b/drivers/net/ethernet/intel/iecm/iecm_ethtool.c index a6532592f2f4..98ba3a801ea1 100644 --- a/drivers/net/ethernet/intel/iecm/iecm_ethtool.c +++ b/drivers/net/ethernet/intel/iecm/iecm_ethtool.c @@ -3,6 +3,1054 @@ #include +/** + * iecm_get_rxnfc - command to get RX flow classification rules + * @netdev: network interface device structure + * @cmd: ethtool rxnfc command + * @rule_locs: pointer to store rule locations + * + * Returns Success if the command is supported. + */ +static int iecm_get_rxnfc(struct net_device *netdev, struct ethtool_rxnfc *cmd, + u32 __always_unused *rule_locs) +{ + struct iecm_vport *vport = iecm_netdev_to_vport(netdev); + int ret = -EOPNOTSUPP; + + switch (cmd->cmd) { + case ETHTOOL_GRXRINGS: + cmd->data = vport->num_rxq; + ret = 0; + break; + case ETHTOOL_GRXFH: + netdev_info(netdev, "RSS hash info is not available\n"); + break; + default: + break; + } + + return ret; +} + +/** + * iecm_get_rxfh_key_size - get the RSS hash key size + * @netdev: network interface device structure + * + * Returns the table size. + */ +static u32 iecm_get_rxfh_key_size(struct net_device *netdev) +{ + struct iecm_vport *vport = iecm_netdev_to_vport(netdev); + + if (!iecm_is_cap_ena(vport->adapter, VIRTCHNL_CAP_RSS)) { + dev_info(&vport->adapter->pdev->dev, "RSS is not supported on this device\n"); + return 0; + } + + return vport->adapter->rss_data.rss_key_size; +} + +/** + * iecm_get_rxfh_indir_size - get the Rx flow hash indirection table size + * @netdev: network interface device structure + * + * Returns the table size. + */ +static u32 iecm_get_rxfh_indir_size(struct net_device *netdev) +{ + struct iecm_vport *vport = iecm_netdev_to_vport(netdev); + + if (!iecm_is_cap_ena(vport->adapter, VIRTCHNL_CAP_RSS)) { + dev_info(&vport->adapter->pdev->dev, "RSS is not supported on this device\n"); + return 0; + } + + return vport->adapter->rss_data.rss_lut_size; +} + +/** + * iecm_find_virtual_qid - Finds the virtual RX qid from the absolute RX qid + * @vport: virtual port structure + * @qid_list: List of the RX qid's + * @abs_rx_qid: absolute RX qid + * + * Returns the virtual RX QID. + */ +static u32 iecm_find_virtual_qid(struct iecm_vport *vport, u16 *qid_list, + u32 abs_rx_qid) +{ + u32 i; + + for (i = 0; i < vport->num_rxq; i++) + if ((u32)qid_list[i] == abs_rx_qid) + break; + return i; +} + +/** + * iecm_get_rxfh - get the Rx flow hash indirection table + * @netdev: network interface device structure + * @indir: indirection table + * @key: hash key + * @hfunc: hash function in use + * + * Reads the indirection table directly from the hardware. Always returns 0. 
+ */ +static int iecm_get_rxfh(struct net_device *netdev, u32 *indir, u8 *key, + u8 *hfunc) +{ + struct iecm_vport *vport = iecm_netdev_to_vport(netdev); + struct iecm_adapter *adapter; + u16 i, *qid_list; + u32 abs_qid; + + adapter = vport->adapter; + + if (!iecm_is_cap_ena(adapter, VIRTCHNL_CAP_RSS)) { + dev_info(&vport->adapter->pdev->dev, "RSS is not supported on this device\n"); + return 0; + } + + if (adapter->state != __IECM_UP) + return 0; + + if (hfunc) + *hfunc = ETH_RSS_HASH_TOP; + + if (key) + memcpy(key, adapter->rss_data.rss_key, + adapter->rss_data.rss_key_size); + + qid_list = kcalloc(vport->num_rxq, sizeof(u16), GFP_KERNEL); + if (!qid_list) + return -ENOMEM; + + iecm_get_rx_qid_list(vport, qid_list); + + if (indir) + /* Each 32 bits pointed by 'indir' is stored with a lut entry */ + for (i = 0; i < adapter->rss_data.rss_lut_size; i++) { + abs_qid = (u32)adapter->rss_data.rss_lut[i]; + indir[i] = iecm_find_virtual_qid(vport, qid_list, + abs_qid); + } + + kfree(qid_list); + + return 0; +} + +/** + * iecm_set_rxfh - set the Rx flow hash indirection table + * @netdev: network interface device structure + * @indir: indirection table + * @key: hash key + * @hfunc: hash function to use + * + * Returns -EINVAL if the table specifies an invalid queue id, otherwise + * returns 0 after programming the table. + */ +static int iecm_set_rxfh(struct net_device *netdev, const u32 *indir, + const u8 *key, const u8 hfunc) +{ + struct iecm_vport *vport = iecm_netdev_to_vport(netdev); + struct iecm_adapter *adapter; + u16 *qid_list; + u16 lut; + + adapter = vport->adapter; + + if (!iecm_is_cap_ena(adapter, VIRTCHNL_CAP_RSS)) { + dev_info(&adapter->pdev->dev, "RSS is not supported on this device.\n"); + return 0; + } + if (adapter->state != __IECM_UP) + return 0; + + if (hfunc != ETH_RSS_HASH_NO_CHANGE && hfunc != ETH_RSS_HASH_TOP) + return -EOPNOTSUPP; + + if (key) + memcpy(adapter->rss_data.rss_key, key, + adapter->rss_data.rss_key_size); + + qid_list = kcalloc(vport->num_rxq, sizeof(u16), GFP_KERNEL); + if (!qid_list) + return -ENOMEM; + + iecm_get_rx_qid_list(vport, qid_list); + + if (indir) { + for (lut = 0; lut < adapter->rss_data.rss_lut_size; lut++) { + int index = indir[lut]; + + if (index >= vport->num_rxq) { + kfree(qid_list); + return -EINVAL; + } + adapter->rss_data.rss_lut[lut] = qid_list[index]; + } + } else { + iecm_fill_dflt_rss_lut(vport, qid_list); + } + + kfree(qid_list); + + return iecm_config_rss(vport); +} + +/** + * iecm_get_channels: get the number of channels supported by the device + * @netdev: network interface device structure + * @ch: channel information structure + * + * Report maximum of TX and RX. Report one extra channel to match our mailbox + * Queue. + */ +static void iecm_get_channels(struct net_device *netdev, + struct ethtool_channels *ch) +{ + struct iecm_vport *vport = iecm_netdev_to_vport(netdev); + + /* Report maximum channels */ + ch->max_combined = IECM_MAX_Q; + + ch->max_other = IECM_MAX_NONQ; + ch->other_count = IECM_MAX_NONQ; + + ch->combined_count = max(vport->num_txq, vport->num_rxq); +} + +/** + * iecm_set_channels: set the new channel count + * @netdev: network interface device structure + * @ch: channel information structure + * + * Negotiate a new number of channels with CP. Returns 0 on success, negative + * on failure. 
+ */ +static int iecm_set_channels(struct net_device *netdev, + struct ethtool_channels *ch) +{ + struct iecm_vport *vport = iecm_netdev_to_vport(netdev); + int num_req_q = ch->combined_count; + + if (num_req_q == max(vport->num_txq, vport->num_rxq)) + return 0; + + vport->adapter->config_data.num_req_qs = num_req_q; + + return iecm_initiate_soft_reset(vport, __IECM_SR_Q_CHANGE); +} + +/** + * iecm_get_ringparam - Get ring parameters + * @netdev: network interface device structure + * @ring: ethtool ringparam structure + * + * Returns current ring parameters. TX and RX rings are reported separately, + * but the number of rings is not reported. + */ +static void iecm_get_ringparam(struct net_device *netdev, + struct ethtool_ringparam *ring) +{ + struct iecm_vport *vport = iecm_netdev_to_vport(netdev); + + ring->rx_max_pending = IECM_MAX_RXQ_DESC; + ring->tx_max_pending = IECM_MAX_TXQ_DESC; + ring->rx_pending = vport->rxq_desc_count; + ring->tx_pending = vport->txq_desc_count; +} + +/** + * iecm_set_ringparam - Set ring parameters + * @netdev: network interface device structure + * @ring: ethtool ringparam structure + * + * Sets ring parameters. TX and RX rings are controlled separately, but the + * number of rings is not specified, so all rings get the same settings. + */ +static int iecm_set_ringparam(struct net_device *netdev, + struct ethtool_ringparam *ring) +{ + struct iecm_vport *vport = iecm_netdev_to_vport(netdev); + u32 new_rx_count, new_tx_count; + + if (ring->rx_mini_pending || ring->rx_jumbo_pending) + return -EINVAL; + + new_tx_count = ALIGN(ring->tx_pending, IECM_REQ_DESC_MULTIPLE); + new_rx_count = ALIGN(ring->rx_pending, IECM_REQ_DESC_MULTIPLE); + + /* if nothing to do return success */ + if (new_tx_count == vport->txq_desc_count && + new_rx_count == vport->rxq_desc_count) + return 0; + + vport->adapter->config_data.num_req_txq_desc = new_tx_count; + vport->adapter->config_data.num_req_rxq_desc = new_rx_count; + + return iecm_initiate_soft_reset(vport, __IECM_SR_Q_DESC_CHANGE); +} + +/** + * struct iecm_stats - definition for an ethtool statistic + * @stat_string: statistic name to display in ethtool -S output + * @sizeof_stat: the sizeof() the stat, must be no greater than sizeof(u64) + * @stat_offset: offsetof() the stat from a base pointer + * + * This structure defines a statistic to be added to the ethtool stats buffer. + * It defines a statistic as offset from a common base pointer. Stats should + * be defined in constant arrays using the IECM_STAT macro, with every element + * of the array using the same _type for calculating the sizeof_stat and + * stat_offset. + * + * The @sizeof_stat is expected to be sizeof(u8), sizeof(u16), sizeof(u32) or + * sizeof(u64). Other sizes are not expected and will produce a WARN_ONCE from + * the iecm_add_ethtool_stat() helper function. + * + * The @stat_string is interpreted as a format string, allowing formatted + * values to be inserted while looping over multiple structures for a given + * statistics array. Thus, every statistic string in an array should have the + * same type and number of format specifiers, to be formatted by variadic + * arguments to the iecm_add_stat_string() helper function. + */ +struct iecm_stats { + char stat_string[ETH_GSTRING_LEN]; + int sizeof_stat; + int stat_offset; +}; + +/* Helper macro to define an iecm_stat structure with proper size and type. + * Use this when defining constant statistics arrays. Note that @_type expects + * only a type name and is used multiple times. 
+ */ +#define IECM_STAT(_type, _name, _stat) { \ + .stat_string = _name, \ + .sizeof_stat = sizeof_field(_type, _stat), \ + .stat_offset = offsetof(_type, _stat) \ +} + +/* Helper macro for defining some statistics related to queues */ +#define IECM_QUEUE_STAT(_name, _stat) \ + IECM_STAT(struct iecm_queue, _name, _stat) + +/* Stats associated with a Tx queue */ +static const struct iecm_stats iecm_gstrings_tx_queue_stats[] = { + IECM_QUEUE_STAT("%s-%u.packets", q_stats.tx.packets), + IECM_QUEUE_STAT("%s-%u.bytes", q_stats.tx.bytes), +}; + +/* Stats associated with an Rx queue */ +static const struct iecm_stats iecm_gstrings_rx_queue_stats[] = { + IECM_QUEUE_STAT("%s-%u.packets", q_stats.rx.packets), + IECM_QUEUE_STAT("%s-%u.bytes", q_stats.rx.bytes), + IECM_QUEUE_STAT("%s-%u.generic_csum", q_stats.rx.generic_csum), + IECM_QUEUE_STAT("%s-%u.basic_csum", q_stats.rx.basic_csum), + IECM_QUEUE_STAT("%s-%u.csum_err", q_stats.rx.csum_err), + IECM_QUEUE_STAT("%s-%u.hsplit_buf_overflow", q_stats.rx.hsplit_hbo), +}; + +#define IECM_TX_QUEUE_STATS_LEN ARRAY_SIZE(iecm_gstrings_tx_queue_stats) +#define IECM_RX_QUEUE_STATS_LEN ARRAY_SIZE(iecm_gstrings_rx_queue_stats) + +/** + * __iecm_add_stat_strings - copy stat strings into ethtool buffer + * @p: ethtool supplied buffer + * @stats: stat definitions array + * @size: size of the stats array + * + * Format and copy the strings described by stats into the buffer pointed at + * by p. + */ +static void __iecm_add_stat_strings(u8 **p, const struct iecm_stats stats[], + const unsigned int size, ...) +{ + unsigned int i; + + for (i = 0; i < size; i++) { + va_list args; + + va_start(args, size); + vsnprintf((char *)*p, ETH_GSTRING_LEN, + stats[i].stat_string, args); + *p += ETH_GSTRING_LEN; + va_end(args); + } +} + +/** + * iecm_add_stat_strings - copy stat strings into ethtool buffer + * @p: ethtool supplied buffer + * @stats: stat definitions array + * + * Format and copy the strings described by the const static stats value into + * the buffer pointed at by p. + * + * The parameter @stats is evaluated twice, so parameters with side effects + * should be avoided. Additionally, stats must be an array such that + * ARRAY_SIZE can be called on it. + */ +#define iecm_add_stat_strings(p, stats, ...) \ + __iecm_add_stat_strings(p, stats, ARRAY_SIZE(stats), ## __VA_ARGS__) + +/** + * iecm_get_stat_strings - Get stat strings + * @netdev: network interface device structure + * @data: buffer for string data + * + * Builds the statistics string table + */ +static void iecm_get_stat_strings(struct net_device __always_unused *netdev, + u8 *data) +{ + unsigned int i; + + /* It's critical that we always report a constant number of strings and + * that the strings are reported in the same order regardless of how + * many queues are actually in use. 
+ */ + for (i = 0; i < IECM_MAX_Q; i++) + iecm_add_stat_strings(&data, iecm_gstrings_tx_queue_stats, + "tx", i); + for (i = 0; i < IECM_MAX_Q; i++) + iecm_add_stat_strings(&data, iecm_gstrings_rx_queue_stats, + "rx", i); +} + +/** + * iecm_get_strings - Get string set + * @netdev: network interface device structure + * @sset: id of string set + * @data: buffer for string data + * + * Builds string tables for various string sets + */ +static void iecm_get_strings(struct net_device *netdev, u32 sset, u8 *data) +{ + switch (sset) { + case ETH_SS_STATS: + iecm_get_stat_strings(netdev, data); + break; + default: + break; + } +} + +/** + * iecm_get_sset_count - Get length of string set + * @netdev: network interface device structure + * @sset: id of string set + * + * Reports size of various string tables. + */ +static int iecm_get_sset_count(struct net_device __always_unused *netdev, int sset) +{ + if (sset == ETH_SS_STATS) + /* This size reported back here *must* be constant throughout + * the lifecycle of the netdevice, i.e. we must report the + * maximum length even for queues that don't technically exist. + * This is due to the fact that this userspace API uses three + * separate ioctl calls to get stats data but has no way to + * communicate back to userspace when that size has changed, + * which can typically happen as a result of changing number of + * queues. If the number/order of stats change in the middle of + * this call chain it will lead to userspace crashing/accessing + * bad data through buffer under/overflow. + */ + return (IECM_TX_QUEUE_STATS_LEN * IECM_MAX_Q) + + (IECM_RX_QUEUE_STATS_LEN * IECM_MAX_Q); + else + return -EINVAL; +} + +/** + * iecm_add_one_ethtool_stat - copy the stat into the supplied buffer + * @data: location to store the stat value + * @pstat: old stat pointer to copy from + * @stat: the stat definition + * + * Copies the stat data defined by the pointer and stat structure pair into + * the memory supplied as data. Used to implement iecm_add_ethtool_stats and + * iecm_add_queue_stats. If the pointer is null, data will be zero'd. + */ +static void +iecm_add_one_ethtool_stat(u64 *data, void *pstat, + const struct iecm_stats *stat) +{ + char *p; + + if (!pstat) { + /* ensure that the ethtool data buffer is zero'd for any stats + * which don't have a valid pointer. + */ + *data = 0; + return; + } + + p = (char *)pstat + stat->stat_offset; + switch (stat->sizeof_stat) { + case sizeof(u64): + *data = *((u64 *)p); + break; + case sizeof(u32): + *data = *((u32 *)p); + break; + case sizeof(u16): + *data = *((u16 *)p); + break; + case sizeof(u8): + *data = *((u8 *)p); + break; + default: + WARN_ONCE(1, "unexpected stat size for %s", + stat->stat_string); + *data = 0; + } +} + +/** + * iecm_add_queue_stats - copy queue statistics into supplied buffer + * @data: ethtool stats buffer + * @q: the queue to copy + * + * Queue statistics must be copied while protected by + * u64_stats_fetch_begin_irq, so we can't directly use iecm_add_ethtool_stats. + * Assumes that queue stats are defined in iecm_gstrings_queue_stats. If the + * queue pointer is null, zero out the queue stat values and update the data + * pointer. Otherwise safely copy the stats from the queue into the supplied + * buffer and update the data pointer when finished. + * + * This function expects to be called while under rcu_read_lock(). 
+ */ +static void +iecm_add_queue_stats(u64 **data, struct iecm_queue *q) +{ + const struct iecm_stats *stats; + unsigned int start; + unsigned int size; + unsigned int i; + + if (q->q_type == VIRTCHNL_QUEUE_TYPE_RX) { + size = IECM_RX_QUEUE_STATS_LEN; + stats = iecm_gstrings_rx_queue_stats; + } else { + size = IECM_TX_QUEUE_STATS_LEN; + stats = iecm_gstrings_tx_queue_stats; + } + + /* To avoid invalid statistics values, ensure that we keep retrying + * the copy until we get a consistent value according to + * u64_stats_fetch_retry_irq. But first, make sure our queue is + * non-null before attempting to access its syncp. + */ + do { + start = u64_stats_fetch_begin_irq(&q->stats_sync); + for (i = 0; i < size; i++) + iecm_add_one_ethtool_stat(&(*data)[i], q, &stats[i]); + } while (u64_stats_fetch_retry_irq(&q->stats_sync, start)); + + /* Once we successfully copy the stats in, update the data pointer */ + *data += size; +} + +/** + * iecm_add_empty_queue_stats - Add stats for a non-existent queue + * @data: pointer to data buffer + * @qtype: type of data queue + * + * We must report a constant length of stats back to userspace regardless of + * how many queues are actually in use because stats collection happens over + * three separate ioctls and there's no way to notify userspace the size + * changed between those calls. This adds empty to data to the stats since we + * don't have a real queue to refer to for this stats slot. + */ +static void +iecm_add_empty_queue_stats(u64 **data, enum virtchnl_queue_type qtype) +{ + unsigned int i; + int stats_len; + + if (qtype == VIRTCHNL_QUEUE_TYPE_RX) + stats_len = IECM_RX_QUEUE_STATS_LEN; + else + stats_len = IECM_TX_QUEUE_STATS_LEN; + + for (i = 0; i < stats_len; i++) + (*data)[i] = 0; + *data += stats_len; +} + +/** + * iecm_get_ethtool_stats - report device statistics + * @netdev: network interface device structure + * @stats: ethtool statistics structure + * @data: pointer to data buffer + * + * All statistics are added to the data buffer as an array of u64. + */ +static void iecm_get_ethtool_stats(struct net_device *netdev, + struct ethtool_stats __always_unused *stats, + u64 *data) +{ + struct iecm_vport *vport = iecm_netdev_to_vport(netdev); + enum virtchnl_queue_type qtype; + unsigned int total = 0; + unsigned int i, j; + + if (vport->adapter->state != __IECM_UP) + return; + + rcu_read_lock(); + for (i = 0; i < vport->num_txq_grp; i++) { + struct iecm_txq_group *txq_grp = &vport->txq_grps[i]; + + qtype = VIRTCHNL_QUEUE_TYPE_TX; + + for (j = 0; j < txq_grp->num_txq; j++, total++) { + struct iecm_queue *txq = &txq_grp->txqs[j]; + + if (!txq) + iecm_add_empty_queue_stats(&data, qtype); + else + iecm_add_queue_stats(&data, txq); + } + } + /* It is critical we provide a constant number of stats back to + * userspace regardless of how many queues are actually in use because + * there is no way to inform userspace the size has changed between + * ioctl calls. This will fill in any missing stats with zero. 
+ */ + for (; total < IECM_MAX_Q; total++) + iecm_add_empty_queue_stats(&data, VIRTCHNL_QUEUE_TYPE_TX); + total = 0; + + for (i = 0; i < vport->num_rxq_grp; i++) { + struct iecm_rxq_group *rxq_grp = &vport->rxq_grps[i]; + int num_rxq; + + qtype = VIRTCHNL_QUEUE_TYPE_RX; + + if (iecm_is_queue_model_split(vport->rxq_model)) + num_rxq = rxq_grp->splitq.num_rxq_sets; + else + num_rxq = rxq_grp->singleq.num_rxq; + + for (j = 0; j < num_rxq; j++, total++) { + struct iecm_queue *rxq; + + if (iecm_is_queue_model_split(vport->rxq_model)) + rxq = &rxq_grp->splitq.rxq_sets[j].rxq; + else + rxq = &rxq_grp->singleq.rxqs[j]; + if (!rxq) + iecm_add_empty_queue_stats(&data, qtype); + else + iecm_add_queue_stats(&data, rxq); + } + } + for (; total < IECM_MAX_Q; total++) + iecm_add_empty_queue_stats(&data, VIRTCHNL_QUEUE_TYPE_RX); + rcu_read_unlock(); +} + +/** + * iecm_find_rxq - find rxq from q index + * @vport: virtual port associated to queue + * @q_num: q index used to find queue + * + * returns pointer to Rx queue + */ +static struct iecm_queue * +iecm_find_rxq(struct iecm_vport *vport, int q_num) +{ + struct iecm_queue *rxq; + int q_grp, q_idx; + + if (iecm_is_queue_model_split(vport->rxq_model)) { + q_grp = q_num / IECM_DFLT_SPLITQ_RXQ_PER_GROUP; + q_idx = q_num % IECM_DFLT_SPLITQ_RXQ_PER_GROUP; + + rxq = &vport->rxq_grps[q_grp].splitq.rxq_sets[q_idx].rxq; + } else { + q_grp = q_num / IECM_DFLT_SINGLEQ_RXQ_PER_GROUP; + q_idx = q_num % IECM_DFLT_SINGLEQ_RXQ_PER_GROUP; + + rxq = &vport->rxq_grps[q_grp].singleq.rxqs[q_idx]; + } + + return rxq; +} + +/** + * iecm_find_txq - find txq from q index + * @vport: virtual port associated to queue + * @q_num: q index used to find queue + * + * returns pointer to Tx queue + */ +static struct iecm_queue * +iecm_find_txq(struct iecm_vport *vport, int q_num) +{ + struct iecm_queue *txq; + + if (iecm_is_queue_model_split(vport->txq_model)) { + int q_grp = q_num / IECM_DFLT_SPLITQ_TXQ_PER_GROUP; + + txq = vport->txq_grps[q_grp].complq; + } else { + txq = vport->txqs[q_num]; + } + + return txq; +} + +/** + * __iecm_get_q_coalesce - get ITR values for specific queue + * @ec: ethtool structure to fill with driver's coalesce settings + * @q: queue of Rx or Tx + */ +static void +__iecm_get_q_coalesce(struct ethtool_coalesce *ec, struct iecm_queue *q) +{ + u16 itr_setting; + bool dyn_ena; + + itr_setting = IECM_ITR_SETTING(q->itr.target_itr); + dyn_ena = IECM_ITR_IS_DYNAMIC(q->itr.target_itr); + if (q->q_type == VIRTCHNL_QUEUE_TYPE_RX) { + ec->use_adaptive_rx_coalesce = dyn_ena; + ec->rx_coalesce_usecs = itr_setting; + } else { + ec->use_adaptive_tx_coalesce = dyn_ena; + ec->tx_coalesce_usecs = itr_setting; + } +} + +/** + * iecm_get_q_coalesce - get ITR values for specific queue + * @netdev: pointer to the netdev associated with this query + * @ec: coalesce settings to program the device with + * @q_num: update ITR/INTRL (coalesce) settings for this queue number/index + * + * Return 0 on success, and negative on failure + */ +static int +iecm_get_q_coalesce(struct net_device *netdev, struct ethtool_coalesce *ec, + u32 q_num) +{ + struct iecm_vport *vport = iecm_netdev_to_vport(netdev); + + if (vport->adapter->state != __IECM_UP) + return 0; + + if (q_num >= vport->num_rxq && q_num >= vport->num_txq) + return -EINVAL; + + if (q_num < vport->num_rxq) { + struct iecm_queue *rxq = iecm_find_rxq(vport, q_num); + + __iecm_get_q_coalesce(ec, rxq); + } + + if (q_num < vport->num_txq) { + struct iecm_queue *txq = iecm_find_txq(vport, q_num); + + __iecm_get_q_coalesce(ec, txq); + } 
+ + return 0; +} + +/** + * iecm_get_coalesce - get ITR values as requested by user + * @netdev: pointer to the netdev associated with this query + * @ec: coalesce settings to be filled + * + * Return 0 on success, and negative on failure + */ +static int +iecm_get_coalesce(struct net_device *netdev, struct ethtool_coalesce *ec) +{ + /* Return coalesce based on queue number zero */ + return iecm_get_q_coalesce(netdev, ec, 0); +} + +/** + * iecm_get_per_q_coalesce - get ITR values as requested by user + * @netdev: pointer to the netdev associated with this query + * @q_num: queue for which the ITR values has to retrieved + * @ec: coalesce settings to be filled + * + * Return 0 on success, and negative on failure + */ + +static int +iecm_get_per_q_coalesce(struct net_device *netdev, u32 q_num, + struct ethtool_coalesce *ec) +{ + return iecm_get_q_coalesce(netdev, ec, q_num); +} + +/** + * __iecm_set_q_coalesce - set ITR values for specific queue + * @ec: ethtool structure from user to update ITR settings + * @q: queue for which ITR values has to be set + * + * Returns 0 on success, negative otherwise. + */ +static int +__iecm_set_q_coalesce(struct ethtool_coalesce *ec, struct iecm_queue *q) +{ + const char *q_type_str = (q->q_type == VIRTCHNL_QUEUE_TYPE_RX) + ? "Rx" : "Tx"; + u32 use_adaptive_coalesce, coalesce_usecs; + struct iecm_vport *vport; + u16 itr_setting; + + itr_setting = IECM_ITR_SETTING(q->itr.target_itr); + vport = q->vport; + if (q->q_type == VIRTCHNL_QUEUE_TYPE_RX) { + use_adaptive_coalesce = ec->use_adaptive_rx_coalesce; + coalesce_usecs = ec->rx_coalesce_usecs; + } else { + use_adaptive_coalesce = ec->use_adaptive_tx_coalesce; + coalesce_usecs = ec->tx_coalesce_usecs; + } + + if (itr_setting != coalesce_usecs && use_adaptive_coalesce) { + netdev_info(vport->netdev, "%s ITR cannot be changed if adaptive-%s is enabled\n", + q_type_str, q_type_str); + return -EINVAL; + } + + if (coalesce_usecs > IECM_ITR_MAX) { + netdev_info(vport->netdev, + "Invalid value, %d-usecs range is 0-%d\n", + coalesce_usecs, IECM_ITR_MAX); + return -EINVAL; + } + + /* hardware only supports an ITR granularity of 2us */ + if (coalesce_usecs % 2 != 0) { + netdev_info(vport->netdev, + "Invalid value, %d-usecs must be even\n", + coalesce_usecs); + return -EINVAL; + } + + q->itr.target_itr = coalesce_usecs; + if (use_adaptive_coalesce) + q->itr.target_itr |= IECM_ITR_DYNAMIC; + /* Update of static/dynamic ITR will be taken care when interrupt is + * fired + */ + return 0; +} + +/** + * iecm_set_q_coalesce - set ITR values for specific queue + * @vport: vport associated to the queue that need updating + * @ec: coalesce settings to program the device with + * @q_num: update ITR/INTRL (coalesce) settings for this queue number/index + * @is_rxq: is queue type Rx + * + * Return 0 on success, and negative on failure + */ +static int +iecm_set_q_coalesce(struct iecm_vport *vport, struct ethtool_coalesce *ec, + int q_num, bool is_rxq) +{ + if (is_rxq) { + struct iecm_queue *rxq = iecm_find_rxq(vport, q_num); + + if (rxq && __iecm_set_q_coalesce(ec, rxq)) + return -EINVAL; + } else { + struct iecm_queue *txq = iecm_find_txq(vport, q_num); + + if (txq && __iecm_set_q_coalesce(ec, txq)) + return -EINVAL; + } + + return 0; +} + +/** + * iecm_set_coalesce - set ITR values as requested by user + * @netdev: pointer to the netdev associated with this query + * @ec: coalesce settings to program the device with + * + * Return 0 on success, and negative on failure + */ +static int +iecm_set_coalesce(struct net_device 
*netdev, struct ethtool_coalesce *ec) +{ + struct iecm_vport *vport = iecm_netdev_to_vport(netdev); + int i, err = 0; + + if (vport->adapter->state != __IECM_UP) + return 0; + + for (i = 0; i < vport->num_txq; i++) { + err = iecm_set_q_coalesce(vport, ec, i, false); + if (err) + goto set_coalesce_err; + } + + for (i = 0; i < vport->num_rxq; i++) { + err = iecm_set_q_coalesce(vport, ec, i, true); + if (err) + goto set_coalesce_err; + } +set_coalesce_err: + return err; +} + +/** + * iecm_set_per_q_coalesce - set ITR values as requested by user + * @netdev: pointer to the netdev associated with this query + * @q_num: queue for which the ITR values has to be set + * @ec: coalesce settings to program the device with + * + * Return 0 on success, and negative on failure + */ +static int +iecm_set_per_q_coalesce(struct net_device *netdev, u32 q_num, + struct ethtool_coalesce *ec) +{ + struct iecm_vport *vport = iecm_netdev_to_vport(netdev); + int err; + + if (vport->adapter->state != __IECM_UP) + return 0; + + err = iecm_set_q_coalesce(vport, ec, q_num, false); + if (!err) + err = iecm_set_q_coalesce(vport, ec, q_num, true); + + return err; +} + +/** + * iecm_get_msglevel - Get debug message level + * @netdev: network interface device structure + * + * Returns current debug message level. + */ +static u32 iecm_get_msglevel(struct net_device *netdev) +{ + struct iecm_adapter *adapter = iecm_netdev_to_adapter(netdev); + + return adapter->msg_enable; +} + +/** + * iecm_set_msglevel - Set debug message level + * @netdev: network interface device structure + * @data: message level + * + * Set current debug message level. Higher values cause the driver to + * be noisier. + */ +static void iecm_set_msglevel(struct net_device *netdev, u32 data) +{ + struct iecm_adapter *adapter = iecm_netdev_to_adapter(netdev); + + adapter->msg_enable = data; +} + +/** + * iecm_get_link_ksettings - Get Link Speed and Duplex settings + * @netdev: network interface device structure + * @cmd: ethtool command + * + * Reports speed/duplex settings. + **/ +static int iecm_get_link_ksettings(struct net_device *netdev, + struct ethtool_link_ksettings *cmd) +{ + struct iecm_netdev_priv *np = netdev_priv(netdev); + struct iecm_adapter *adapter = np->vport->adapter; + + ethtool_link_ksettings_zero_link_mode(cmd, supported); + cmd->base.autoneg = AUTONEG_DISABLE; + cmd->base.port = PORT_NONE; + /* Set speed and duplex */ + switch (adapter->link_speed) { + case VIRTCHNL_LINK_SPEED_40GB: + cmd->base.speed = SPEED_40000; + break; + case VIRTCHNL_LINK_SPEED_25GB: +#ifdef SPEED_25000 + cmd->base.speed = SPEED_25000; +#else + netdev_info(netdev, + "Speed is 25G, display not supported by this version of ethtool.\n"); +#endif + break; + case VIRTCHNL_LINK_SPEED_20GB: + cmd->base.speed = SPEED_20000; + break; + case VIRTCHNL_LINK_SPEED_10GB: + cmd->base.speed = SPEED_10000; + break; + case VIRTCHNL_LINK_SPEED_1GB: + cmd->base.speed = SPEED_1000; + break; + case VIRTCHNL_LINK_SPEED_100MB: + cmd->base.speed = SPEED_100; + break; + default: + break; + } + cmd->base.duplex = DUPLEX_FULL; + + return 0; +} + +/** + * iecm_get_drvinfo - Get driver info + * @netdev: network interface device structure + * @drvinfo: ethtool driver info structure + * + * Returns information about the driver and device for display to the user. 
diff --git a/drivers/net/ethernet/intel/iecm/iecm_lib.c b/drivers/net/ethernet/intel/iecm/iecm_lib.c
index d34834422049..a55151495e18 100644
--- a/drivers/net/ethernet/intel/iecm/iecm_lib.c
+++ b/drivers/net/ethernet/intel/iecm/iecm_lib.c
@@ -765,7 +765,37 @@ static void iecm_deinit_task(struct iecm_adapter *adapter)
 
 static enum iecm_status iecm_init_hard_reset(struct iecm_adapter *adapter)
 {
-	/* stub */
+	enum iecm_status err;
+
+	/* Prepare for reset */
+	if (test_bit(__IECM_HR_FUNC_RESET, adapter->flags)) {
+		iecm_deinit_task(adapter);
+		adapter->dev_ops.reg_ops.trigger_reset(adapter,
+						       __IECM_HR_FUNC_RESET);
+		set_bit(__IECM_UP_REQUESTED, adapter->flags);
+		clear_bit(__IECM_HR_FUNC_RESET, adapter->flags);
+	} else if (test_bit(__IECM_HR_CORE_RESET, adapter->flags)) {
+		if (adapter->state == __IECM_UP)
+			set_bit(__IECM_UP_REQUESTED, adapter->flags);
+		iecm_deinit_task(adapter);
+		clear_bit(__IECM_HR_CORE_RESET, adapter->flags);
+	} else if (test_and_clear_bit(__IECM_HR_DRV_LOAD, adapter->flags)) {
+		/* Trigger reset */
+	} else {
+		dev_err(&adapter->pdev->dev, "Unhandled hard reset cause\n");
+		err = IECM_ERR_PARAM;
+		goto handle_err;
+	}
+
+	/* Reset is complete, so start rebuilding the driver resources */
+	err = iecm_init_dflt_mbx(adapter);
+	if (err) {
+		dev_err(&adapter->pdev->dev, "Failed to initialize default mailbox: %d\n",
+			err);
+	}
+handle_err:
+	mutex_unlock(&adapter->reset_lock);
+	return err;
 }
 
 /**
@@ -794,7 +824,57 @@ static void iecm_vc_event_task(struct work_struct *work)
 
 int iecm_initiate_soft_reset(struct iecm_vport *vport, enum iecm_flags reset_cause)
 {
-	/* stub */
+	struct iecm_adapter *adapter = vport->adapter;
+	enum iecm_state current_state;
+	enum iecm_status status;
+	int err = 0;
+
+	/* Make sure we do not end up initiating multiple resets */
+	mutex_lock(&adapter->reset_lock);
+
+	current_state = vport->adapter->state;
+	switch (reset_cause) {
+	case __IECM_SR_Q_CHANGE:
+		/* If we're changing the number of queues requested, we need to
+		 * send a 'delete' message before freeing the queue resources.
+		 * We'll send an 'add' message in adjust_qs, which doesn't
+		 * require the queue resources to be reallocated yet.
+		 */
+		if (current_state <= __IECM_DOWN) {
+			iecm_send_delete_queues_msg(vport);
+		} else {
+			set_bit(__IECM_DEL_QUEUES, adapter->flags);
+			iecm_vport_stop(vport);
+		}
+		iecm_deinit_rss(vport);
+		status = adapter->dev_ops.vc_ops.adjust_qs(vport);
+		if (status) {
+			err = -EFAULT;
+			goto reset_failure;
+		}
+		iecm_intr_rel(adapter);
+		iecm_vport_calc_num_q_vec(vport);
+		iecm_intr_req(adapter);
+		break;
+	case __IECM_SR_Q_DESC_CHANGE:
+		iecm_vport_stop(vport);
+		iecm_vport_calc_num_q_desc(vport);
+		break;
+	case __IECM_SR_Q_SCH_CHANGE:
+	case __IECM_SR_MTU_CHANGE:
+		iecm_vport_stop(vport);
+		break;
+	default:
+		dev_err(&adapter->pdev->dev, "Unhandled soft reset cause\n");
+		err = -EINVAL;
+		goto reset_failure;
+	}
+
+	if (current_state == __IECM_UP)
+		err = iecm_vport_open(vport);
+reset_failure:
+	mutex_unlock(&adapter->reset_lock);
+	return err;
 }
 
 /**
@@ -885,6 +965,7 @@ int iecm_probe(struct pci_dev *pdev,
 	INIT_DELAYED_WORK(&adapter->init_task, iecm_init_task);
 	INIT_DELAYED_WORK(&adapter->vc_event_task, iecm_vc_event_task);
 
+	adapter->dev_ops.reg_ops.reset_reg_init(&adapter->reset_reg);
 	mutex_lock(&adapter->reset_lock);
 	set_bit(__IECM_HR_DRV_LOAD, adapter->flags);
 	err = iecm_init_hard_reset(adapter);
@@ -978,7 +1059,20 @@ static int iecm_open(struct net_device *netdev)
  */
 static int iecm_change_mtu(struct net_device *netdev, int new_mtu)
 {
-	/* stub */
+	struct iecm_vport *vport = iecm_netdev_to_vport(netdev);
+
+	if (new_mtu < netdev->min_mtu) {
+		netdev_err(netdev, "new MTU invalid. min_mtu is %d\n",
+			   netdev->min_mtu);
+		return -EINVAL;
+	} else if (new_mtu > netdev->max_mtu) {
+		netdev_err(netdev, "new MTU invalid. max_mtu is %d\n",
+			   netdev->max_mtu);
+		return -EINVAL;
+	}
+	netdev->mtu = new_mtu;
+
+	return iecm_initiate_soft_reset(vport, __IECM_SR_MTU_CHANGE);
 }
 
 static const struct net_device_ops iecm_netdev_ops_splitq = {