From patchwork Thu Jun 18 05:13:33 2020
X-Patchwork-Submitter: Jeff Kirsher
X-Patchwork-Id: 217629
From: Jeff Kirsher
To: davem@davemloft.net
Cc: Alice Michael, netdev@vger.kernel.org, nhorman@redhat.com, sassmann@redhat.com, Alan Brady, Phani Burra, Joshua Hay, Madhu Chittim, Pavan Kumar Linga, Donald Skidmore, Jesse Brandeburg, Sridhar Samudrala, Jeff Kirsher
Subject: [net-next 04/15] iecm: Common module introduction and function stubs
Date: Wed, 17 Jun 2020 22:13:33 -0700
Message-Id: <20200618051344.516587-5-jeffrey.t.kirsher@intel.com>
In-Reply-To: <20200618051344.516587-1-jeffrey.t.kirsher@intel.com>
References: <20200618051344.516587-1-jeffrey.t.kirsher@intel.com>
X-Mailing-List: netdev@vger.kernel.org

From: Alice Michael

This introduces function stubs for the framework of the common module.
Signed-off-by: Alice Michael Signed-off-by: Alan Brady Signed-off-by: Phani Burra Signed-off-by: Joshua Hay Signed-off-by: Madhu Chittim Signed-off-by: Pavan Kumar Linga Reviewed-by: Donald Skidmore Reviewed-by: Jesse Brandeburg Reviewed-by: Sridhar Samudrala Signed-off-by: Jeff Kirsher --- .../net/ethernet/intel/iecm/iecm_controlq.c | 200 +++ .../ethernet/intel/iecm/iecm_controlq_setup.c | 84 ++ .../net/ethernet/intel/iecm/iecm_ethtool.c | 16 + drivers/net/ethernet/intel/iecm/iecm_lib.c | 406 ++++++ drivers/net/ethernet/intel/iecm/iecm_main.c | 47 + drivers/net/ethernet/intel/iecm/iecm_osdep.c | 15 + .../ethernet/intel/iecm/iecm_singleq_txrx.c | 255 ++++ drivers/net/ethernet/intel/iecm/iecm_txrx.c | 1256 +++++++++++++++++ .../net/ethernet/intel/iecm/iecm_virtchnl.c | 570 ++++++++ 9 files changed, 2849 insertions(+) create mode 100644 drivers/net/ethernet/intel/iecm/iecm_controlq.c create mode 100644 drivers/net/ethernet/intel/iecm/iecm_controlq_setup.c create mode 100644 drivers/net/ethernet/intel/iecm/iecm_ethtool.c create mode 100644 drivers/net/ethernet/intel/iecm/iecm_lib.c create mode 100644 drivers/net/ethernet/intel/iecm/iecm_main.c create mode 100644 drivers/net/ethernet/intel/iecm/iecm_osdep.c create mode 100644 drivers/net/ethernet/intel/iecm/iecm_singleq_txrx.c create mode 100644 drivers/net/ethernet/intel/iecm/iecm_txrx.c create mode 100644 drivers/net/ethernet/intel/iecm/iecm_virtchnl.c diff --git a/drivers/net/ethernet/intel/iecm/iecm_controlq.c b/drivers/net/ethernet/intel/iecm/iecm_controlq.c new file mode 100644 index 000000000000..390c499d9eb5 --- /dev/null +++ b/drivers/net/ethernet/intel/iecm/iecm_controlq.c @@ -0,0 +1,200 @@ +// SPDX-License-Identifier: GPL-2.0-only +/* Copyright (c) 2020, Intel Corporation. */ + +#include +#include +#include + +/** + * iecm_ctlq_setup_regs - initialize control queue registers + * @cq: pointer to the specific control queue + * @q_create_info: structs containing info for each queue to be initialized + */ +static void +iecm_ctlq_setup_regs(struct iecm_ctlq_info *cq, + struct iecm_ctlq_create_info *q_create_info) +{ + /* stub */ +} + +/** + * iecm_ctlq_init_regs - Initialize control queue registers + * @hw: pointer to hw struct + * @cq: pointer to the specific Control queue + * @is_rxq: true if receive control queue, false otherwise + * + * Initialize registers. The caller is expected to have already initialized the + * descriptor ring memory and buffer memory + */ +static enum iecm_status iecm_ctlq_init_regs(struct iecm_hw *hw, + struct iecm_ctlq_info *cq, + bool is_rxq) +{ + /* stub */ +} + +/** + * iecm_ctlq_init_rxq_bufs - populate receive queue descriptors with buf + * @cq: pointer to the specific Control queue + * + * Record the address of the receive queue DMA buffers in the descriptors. + * The buffers must have been previously allocated. + */ +static void iecm_ctlq_init_rxq_bufs(struct iecm_ctlq_info *cq) +{ + /* stub */ +} + +/** + * iecm_ctlq_shutdown - shutdown the CQ + * @hw: pointer to hw struct + * @cq: pointer to the specific Control queue + * + * The main shutdown routine for any controq queue + */ +static void iecm_ctlq_shutdown(struct iecm_hw *hw, struct iecm_ctlq_info *cq) +{ + /* stub */ +} + +/** + * iecm_ctlq_add - add one control queue + * @hw: pointer to hardware struct + * @q_info: info for queue to be created + * @cq: (output) double pointer to control queue to be created + * + * Allocate and initialize a control queue and add it to the control queue list. 
+ * The cq parameter will be allocated/initialized and passed back to the caller + * if no errors occur. + * + * Note: iecm_ctlq_init must be called prior to any calls to iecm_ctlq_add + */ +enum iecm_status iecm_ctlq_add(struct iecm_hw *hw, + struct iecm_ctlq_create_info *qinfo, + struct iecm_ctlq_info **cq) +{ + /* stub */ +} + +/** + * iecm_ctlq_remove - deallocate and remove specified control queue + * @hw: pointer to hardware struct + * @cq: pointer to control queue to be removed + */ +void iecm_ctlq_remove(struct iecm_hw *hw, + struct iecm_ctlq_info *cq) +{ + /* stub */ +} + +/** + * iecm_ctlq_init - main initialization routine for all control queues + * @hw: pointer to hardware struct + * @num_q: number of queues to initialize + * @q_info: array of structs containing info for each queue to be initialized + * + * This initializes any number and any type of control queues. This is an all + * or nothing routine; if one fails, all previously allocated queues will be + * destroyed. This must be called prior to using the individual add/remove + * APIs. + */ +enum iecm_status iecm_ctlq_init(struct iecm_hw *hw, u8 num_q, + struct iecm_ctlq_create_info *q_info) +{ + /* stub */ +} + +/** + * iecm_ctlq_deinit - destroy all control queues + * @hw: pointer to hw struct + */ +enum iecm_status iecm_ctlq_deinit(struct iecm_hw *hw) +{ + /* stub */ +} + +/** + * iecm_ctlq_send - send command to Control Queue (CTQ) + * @hw: pointer to hw struct + * @cq: handle to control queue struct to send on + * @num_q_msg: number of messages to send on control queue + * @q_msg: pointer to array of queue messages to be sent + * + * The caller is expected to allocate DMAable buffers and pass them to the + * send routine via the q_msg struct / control queue specific data struct. + * The control queue will hold a reference to each send message until + * the completion for that message has been cleaned. + */ +enum iecm_status iecm_ctlq_send(struct iecm_hw *hw, + struct iecm_ctlq_info *cq, + u16 num_q_msg, + struct iecm_ctlq_msg q_msg[]) +{ + /* stub */ +} + +/** + * iecm_ctlq_clean_sq - reclaim send descriptors on HW write back for the + * requested queue + * @hw: pointer to hw struct + * @cq: pointer to the specific Control queue + * @clean_count: (input|output) number of descriptors to clean as input, and + * number of descriptors actually cleaned as output + * @msg_status: (output) pointer to msg pointer array to be populated; needs + * to be allocated by caller + * + * Returns an an array of message pointers associated with the cleaned + * descriptors. The pointers are to the original ctlq_msgs sent on the cleaned + * descriptors. The status will be returned for each; any messages that failed + * to send will have a non-zero status. The caller is expected to free original + * ctlq_msgs and free or reuse the DMA buffers. + */ +enum iecm_status iecm_ctlq_clean_sq(struct iecm_hw *hw, + struct iecm_ctlq_info *cq, + u16 *clean_count, + struct iecm_ctlq_msg *msg_status[]) +{ + /* stub */ +} + +/** + * iecm_ctlq_post_rx_buffs - post buffers to descriptor ring + * @hw: pointer to hw struct + * @cq: pointer to control queue handle + * @buff_count: (input|output) input is number of buffers caller is trying to + * return; output is number of buffers that were not posted + * @buffs: array of pointers to DMA mem structs to be given to hardware + * + * Caller uses this function to return DMA buffers to the descriptor ring after + * consuming them; buff_count will be the number of buffers. 
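+ *
+ * A caller-side sketch of the intended cycle (the names used here are
+ * illustrative assumptions, not part of this patch): pull messages off the
+ * queue, process them, then hand their DMA buffers back, e.g.
+ *
+ *	u16 num = cq->ring_size;
+ *	iecm_ctlq_recv(hw, cq, &num, msgs);
+ *	(process the num returned messages)
+ *	iecm_ctlq_post_rx_buffs(hw, cq, &num, bufs);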
+ * + * Note: this function needs to be called after a receive call even + * if there are no DMA buffers to be returned, i.e. buff_count = 0, + * buffs = NULL to support direct commands + */ +enum iecm_status iecm_ctlq_post_rx_buffs(struct iecm_hw *hw, + struct iecm_ctlq_info *cq, + u16 *buff_count, + struct iecm_dma_mem **buffs) +{ + /* stub */ +} + +/** + * iecm_ctlq_recv - receive control queue message call back + * @hw: pointer to hw struct + * @cq: pointer to control queue handle to receive on + * @num_q_msg: (input|output) input number of messages that should be received; + * output number of messages actually received + * @q_msg: (output) array of received control queue messages on this q; + * needs to be pre-allocated by caller for as many messages as requested + * + * Called by interrupt handler or polling mechanism. Caller is expected + * to free buffers + */ +enum iecm_status iecm_ctlq_recv(struct iecm_hw *hw, + struct iecm_ctlq_info *cq, + u16 *num_q_msg, struct iecm_ctlq_msg *q_msg) +{ + /* stub */ +} diff --git a/drivers/net/ethernet/intel/iecm/iecm_controlq_setup.c b/drivers/net/ethernet/intel/iecm/iecm_controlq_setup.c new file mode 100644 index 000000000000..2fd6e3d15a1a --- /dev/null +++ b/drivers/net/ethernet/intel/iecm/iecm_controlq_setup.c @@ -0,0 +1,84 @@ +// SPDX-License-Identifier: GPL-2.0-only +/* Copyright (c) 2020, Intel Corporation. */ + +#include +#include + +/** + * iecm_ctlq_alloc_desc_ring - Allocate Control Queue (CQ) rings + * @hw: pointer to hw struct + * @cq: pointer to the specific Control queue + */ +static enum iecm_status +iecm_ctlq_alloc_desc_ring(struct iecm_hw *hw, + struct iecm_ctlq_info *cq) +{ + /* stub */ +} + +/** + * iecm_ctlq_alloc_bufs - Allocate Control Queue (CQ) buffers + * @hw: pointer to hw struct + * @cq: pointer to the specific Control queue + * + * Allocate the buffer head for all control queues, and if it's a receive + * queue, allocate DMA buffers + */ +static enum iecm_status iecm_ctlq_alloc_bufs(struct iecm_hw *hw, + struct iecm_ctlq_info *cq) +{ + /* stub */ +} + +/** + * iecm_ctlq_free_desc_ring - Free Control Queue (CQ) rings + * @hw: pointer to hw struct + * @cq: pointer to the specific Control queue + * + * This assumes the posted send buffers have already been cleaned + * and de-allocated + */ +static void iecm_ctlq_free_desc_ring(struct iecm_hw *hw, + struct iecm_ctlq_info *cq) +{ + /* stub */ +} + +/** + * iecm_ctlq_free_bufs - Free CQ buffer info elements + * @hw: pointer to hw struct + * @cq: pointer to the specific Control queue + * + * Free the DMA buffers for RX queues, and DMA buffer header for both RX and TX + * queues. 
The upper layers are expected to manage freeing of TX DMA buffers + */ +static void iecm_ctlq_free_bufs(struct iecm_hw *hw, struct iecm_ctlq_info *cq) +{ + /* stub */ +} + +/** + * iecm_ctlq_dealloc_ring_res - Free memory allocated for control queue + * @hw: pointer to hw struct + * @cq: pointer to the specific Control queue + * + * Free the memory used by the ring, buffers and other related structures + */ +void iecm_ctlq_dealloc_ring_res(struct iecm_hw *hw, struct iecm_ctlq_info *cq) +{ + /* stub */ +} + +/** + * iecm_ctlq_alloc_ring_res - allocate memory for descriptor ring and bufs + * @hw: pointer to hw struct + * @cq: pointer to control queue struct + * + * Do *NOT* hold the lock when calling this as the memory allocation routines + * called are not going to be atomic context safe + */ +enum iecm_status iecm_ctlq_alloc_ring_res(struct iecm_hw *hw, + struct iecm_ctlq_info *cq) +{ + /* stub */ +} diff --git a/drivers/net/ethernet/intel/iecm/iecm_ethtool.c b/drivers/net/ethernet/intel/iecm/iecm_ethtool.c new file mode 100644 index 000000000000..a6532592f2f4 --- /dev/null +++ b/drivers/net/ethernet/intel/iecm/iecm_ethtool.c @@ -0,0 +1,16 @@ +// SPDX-License-Identifier: GPL-2.0-only +/* Copyright (C) 2020 Intel Corporation */ + +#include + +/** + * iecm_set_ethtool_ops - Initialize ethtool ops struct + * @netdev: network interface device structure + * + * Sets ethtool ops struct in our netdev so that ethtool can call + * our functions. + */ +void iecm_set_ethtool_ops(struct net_device *netdev) +{ + /* stub */ +} diff --git a/drivers/net/ethernet/intel/iecm/iecm_lib.c b/drivers/net/ethernet/intel/iecm/iecm_lib.c new file mode 100644 index 000000000000..57a20204a7c8 --- /dev/null +++ b/drivers/net/ethernet/intel/iecm/iecm_lib.c @@ -0,0 +1,406 @@ +// SPDX-License-Identifier: GPL-2.0-only +/* Copyright (C) 2020 Intel Corporation */ + +#define pr_fmt(fmt) KBUILD_MODNAME ": " fmt + +#include + +static const struct net_device_ops iecm_netdev_ops_splitq; +static const struct net_device_ops iecm_netdev_ops_singleq; +extern int debug; + +/** + * iecm_mb_intr_rel_irq - Free the IRQ association with the OS + * @adapter: adapter structure + */ +static void iecm_mb_intr_rel_irq(struct iecm_adapter *adapter) +{ + /* stub */ +} + +/** + * iecm_intr_rel - Release interrupt capabilities and free memory + * @adapter: adapter to disable interrupts on + */ +static void iecm_intr_rel(struct iecm_adapter *adapter) +{ + /* stub */ +} + +/** + * iecm_mb_intr_clean - Interrupt handler for the mailbox + * @irq: interrupt number + * @data: pointer to the adapter structure + */ +irqreturn_t iecm_mb_intr_clean(int __always_unused irq, void *data) +{ + /* stub */ +} + +/** + * iecm_mb_irq_enable - Enable MSIX interrupt for the mailbox + * @adapter: adapter to get the hardware address for register write + */ +void iecm_mb_irq_enable(struct iecm_adapter *adapter) +{ + /* stub */ +} + +/** + * iecm_mb_intr_req_irq - Request IRQ for the mailbox interrupt + * @adapter: adapter structure to pass to the mailbox IRQ handler + */ +int iecm_mb_intr_req_irq(struct iecm_adapter *adapter) +{ + /* stub */ +} + +/** + * iecm_get_mb_vec_id - Get vector index for mailbox + * @adapter: adapter structure to access the vector chunks + * + * The first vector id in the requested vector chunks from the CP is for + * the mailbox + */ +void iecm_get_mb_vec_id(struct iecm_adapter *adapter) +{ + /* stub */ +} + +/** + * iecm_mb_intr_init - Initialize the mailbox interrupt + * @adapter: adapter structure to store the mailbox vector + */ +int 
iecm_mb_intr_init(struct iecm_adapter *adapter)
+{
+	/* stub */
+}
+
+/**
+ * iecm_intr_distribute - Distribute MSIX vectors
+ * @adapter: adapter structure to get the vports
+ *
+ * Distribute the MSIX vectors acquired from the OS to the vports based on the
+ * number of vectors requested by each vport
+ */
+void iecm_intr_distribute(struct iecm_adapter *adapter)
+{
+	/* stub */
+}
+
+/**
+ * iecm_intr_req - Request interrupt capabilities
+ * @adapter: adapter to enable interrupts on
+ *
+ * Returns 0 on success, negative on failure
+ */
+static int iecm_intr_req(struct iecm_adapter *adapter)
+{
+	/* stub */
+}
+
+/**
+ * iecm_cfg_netdev - Allocate, configure and register a netdev
+ * @vport: main vport structure
+ *
+ * Returns 0 on success, negative value on failure
+ */
+static int iecm_cfg_netdev(struct iecm_vport *vport)
+{
+	/* stub */
+}
+
+/**
+ * iecm_cfg_hw - Initialize HW struct
+ * @adapter: adapter to setup hw struct for
+ *
+ * Returns 0 on success, negative on failure
+ */
+static int iecm_cfg_hw(struct iecm_adapter *adapter)
+{
+	/* stub */
+}
+
+/**
+ * iecm_get_free_slot - get the next non-NULL location index in array
+ * @array: array to search
+ * @size: size of the array
+ * @curr: last known occupied index to be used as a search hint
+ *
+ * void * is being used to keep the functionality generic. This lets us use this
+ * function on any array of pointers.
+ */
+static int iecm_get_free_slot(void *array, int size, int curr)
+{
+	/* stub */
+}
+
+/**
+ * iecm_netdev_to_vport - get a vport handle from a netdev
+ * @netdev: network interface device structure
+ */
+struct iecm_vport *iecm_netdev_to_vport(struct net_device *netdev)
+{
+	/* stub */
+}
+
+/**
+ * iecm_netdev_to_adapter - get an adapter handle from a netdev
+ * @netdev: network interface device structure
+ */
+struct iecm_adapter *iecm_netdev_to_adapter(struct net_device *netdev)
+{
+	/* stub */
+}
+
+/**
+ * iecm_vport_stop - Disable a vport
+ * @vport: vport to disable
+ */
+static void iecm_vport_stop(struct iecm_vport *vport)
+{
+	/* stub */
+}
+
+/**
+ * iecm_stop - Disables a network interface
+ * @netdev: network interface device structure
+ *
+ * The stop entry point is called when an interface is de-activated by the OS,
+ * and the netdevice enters the DOWN state. The hardware is still under the
+ * driver's control, but the netdev interface is disabled.
+ *
+ * Returns success only - not allowed to fail
+ */
+static int iecm_stop(struct net_device *netdev)
+{
+	/* stub */
+}
+
+/**
+ * iecm_vport_rel - Delete a vport and free its resources
+ * @vport: the vport being removed
+ *
+ * Returns 0 on success or < 0 on error
+ */
+int iecm_vport_rel(struct iecm_vport *vport)
+{
+	/* stub */
+}
+
+/**
+ * iecm_vport_rel_all - Delete all vports
+ * @adapter: adapter from which all vports are being removed
+ */
+static void iecm_vport_rel_all(struct iecm_adapter *adapter)
+{
+	/* stub */
+}
+
+/**
+ * iecm_vport_set_hsplit - enable or disable header split on a given vport
+ * @vport: virtual port
+ * @prog: bpf_program attached to an interface or NULL
+ */
+void iecm_vport_set_hsplit(struct iecm_vport *vport, struct bpf_prog *prog)
+{
+	/* stub */
+}
+
+/**
+ * iecm_vport_alloc - Allocates the next available struct vport in the adapter
+ * @adapter: board private structure
+ * @vport_id: vport identifier
+ *
+ * Returns a pointer to a vport on success, NULL on failure.
+ */ +static struct iecm_vport * +iecm_vport_alloc(struct iecm_adapter *adapter, int vport_id) +{ + /* stub */ +} + +/** + * iecm_service_task - Delayed task for handling mailbox responses + * @work: work_struct handle to our data + * + */ +static void iecm_service_task(struct work_struct *work) +{ + /* stub */ +} + +/** + * iecm_up_complete - Complete interface up sequence + * @vport: virtual port structure + * + */ +static void iecm_up_complete(struct iecm_vport *vport) +{ + /* stub */ +} + +/** + * iecm_vport_open - Bring up a vport + * @vport: vport to bring up + */ +static int iecm_vport_open(struct iecm_vport *vport) +{ + /* stub */ +} + +/** + * iecm_init_task - Delayed initialization task + * @work: work_struct handle to our data + * + * Init task finishes up pending work started in probe. Due to the asynchronous + * nature in which the device communicates with hardware, we may have to wait + * several milliseconds to get a response. Instead of busy polling in probe, + * pulling it out into a delayed work task prevents us from bogging down the + * whole system waiting for a response from hardware. + */ +static void iecm_init_task(struct work_struct *work) +{ + /* stub */ +} + +/** + * iecm_api_init - Initialize and verify device API + * @adapter: driver specific private structure + * + * Returns 0 on success, negative on failure + */ +static int iecm_api_init(struct iecm_adapter *adapter) +{ + /* stub */ +} + +/** + * iecm_deinit_task - Device deinit routine + * @adapter: Driver specific private structure + * + * Extended remove logic which will be used for + * hard reset as well + */ +void iecm_deinit_task(struct iecm_adapter *adapter) +{ + /* stub */ +} + +/** + * iecm_init_hard_reset - Initiate a hardware reset + * @adapter: Driver specific private structure + * + * Deallocate the vports and all the resources associated with them and + * reallocate. Also reinitialize the mailbox + */ +static enum iecm_status +iecm_init_hard_reset(struct iecm_adapter *adapter) +{ + /* stub */ +} + +/** + * iecm_vc_event_task - Handle virtchannel event logic + * @work: work queue struct + */ +static void iecm_vc_event_task(struct work_struct *work) +{ + /* stub */ +} + +/** + * iecm_initiate_soft_reset - Initiate a software reset + * @vport: virtual port data struct + * @reset_cause: reason for the soft reset + * + * Soft reset does not involve bringing down the mailbox queue and also we do + * not destroy vport. 
Only queue resources are touched + */ +int iecm_initiate_soft_reset(struct iecm_vport *vport, + enum iecm_flags reset_cause) +{ + /* stub */ +} + +/** + * iecm_probe - Device initialization routine + * @pdev: PCI device information struct + * @ent: entry in iecm_pci_tbl + * @adapter: driver specific private structure + * + * Returns 0 on success, negative on failure + */ +int iecm_probe(struct pci_dev *pdev, + const struct pci_device_id __always_unused *ent, + struct iecm_adapter *adapter) +{ + /* stub */ +} +EXPORT_SYMBOL(iecm_probe); + +/** + * iecm_remove - Device removal routine + * @pdev: PCI device information struct + */ +void iecm_remove(struct pci_dev *pdev) +{ + /* stub */ +} +EXPORT_SYMBOL(iecm_remove); + +/** + * iecm_shutdown - PCI callback for shutting down device + * @pdev: PCI device information struct + */ +void iecm_shutdown(struct pci_dev *pdev) +{ + /* stub */ +} +EXPORT_SYMBOL(iecm_shutdown); + +/** + * iecm_open - Called when a network interface becomes active + * @netdev: network interface device structure + * + * The open entry point is called when a network interface is made + * active by the system (IFF_UP). At this point all resources needed + * for transmit and receive operations are allocated, the interrupt + * handler is registered with the OS, the netdev watchdog is enabled, + * and the stack is notified that the interface is ready. + * + * Returns 0 on success, negative value on failure + */ +static int iecm_open(struct net_device *netdev) +{ + /* stub */ +} + +/** + * iecm_change_mtu - NDO callback to change the MTU + * @netdev: network interface device structure + * @new_mtu: new value for maximum frame size + * + * Returns 0 on success, negative on failure + */ +static int iecm_change_mtu(struct net_device *netdev, int new_mtu) +{ + /* stub */ +} + +static const struct net_device_ops iecm_netdev_ops_splitq = { + .ndo_open = iecm_open, + .ndo_stop = iecm_stop, + .ndo_start_xmit = iecm_tx_splitq_start, + .ndo_validate_addr = eth_validate_addr, + .ndo_get_stats64 = iecm_get_stats64, +}; + +static const struct net_device_ops iecm_netdev_ops_singleq = { + .ndo_open = iecm_open, + .ndo_stop = iecm_stop, + .ndo_start_xmit = iecm_tx_singleq_start, + .ndo_validate_addr = eth_validate_addr, + .ndo_change_mtu = iecm_change_mtu, + .ndo_get_stats64 = iecm_get_stats64, +}; diff --git a/drivers/net/ethernet/intel/iecm/iecm_main.c b/drivers/net/ethernet/intel/iecm/iecm_main.c new file mode 100644 index 000000000000..0644581fc746 --- /dev/null +++ b/drivers/net/ethernet/intel/iecm/iecm_main.c @@ -0,0 +1,47 @@ +// SPDX-License-Identifier: GPL-2.0-only +/* Copyright (C) 2020 Intel Corporation */ + +#define pr_fmt(fmt) KBUILD_MODNAME ": " fmt + +#include + +char iecm_drv_name[] = "iecm"; +#define DRV_SUMMARY "Intel(R) Data Plane Function Linux Driver" +static const char iecm_driver_string[] = DRV_SUMMARY; +static const char iecm_copyright[] = "Copyright (c) 2020, Intel Corporation."; + +MODULE_AUTHOR("Intel Corporation, "); +MODULE_DESCRIPTION(DRV_SUMMARY); +MODULE_LICENSE("GPL v2"); + +int debug = -1; +module_param(debug, int, 0644); +#ifndef CONFIG_DYNAMIC_DEBUG +MODULE_PARM_DESC(debug, "netif level (0=none,...,16=all), hw debug_mask (0x8XXXXXXX)"); +#else +MODULE_PARM_DESC(debug, "netif level (0=none,...,16=all)"); +#endif /* !CONFIG_DYNAMIC_DEBUG */ + +/** + * iecm_module_init - Driver registration routine + * + * iecm_module_init is the first routine called when the driver is + * loaded. All it does is register with the PCI subsystem. 
+ */ +static int __init iecm_module_init(void) +{ + /* stub */ +} +module_init(iecm_module_init); + +/** + * iecm_module_exit - Driver exit cleanup routine + * + * iecm_module_exit is called just before the driver is removed + * from memory. + */ +static void __exit iecm_module_exit(void) +{ + /* stub */ +} +module_exit(iecm_module_exit); diff --git a/drivers/net/ethernet/intel/iecm/iecm_osdep.c b/drivers/net/ethernet/intel/iecm/iecm_osdep.c new file mode 100644 index 000000000000..d0534df357d0 --- /dev/null +++ b/drivers/net/ethernet/intel/iecm/iecm_osdep.c @@ -0,0 +1,15 @@ +// SPDX-License-Identifier: GPL-2.0-only +/* Copyright (c) 2020 Intel Corporation. */ + +#include +#include + +void *iecm_alloc_dma_mem(struct iecm_hw *hw, struct iecm_dma_mem *mem, u64 size) +{ + /* stub */ +} + +void iecm_free_dma_mem(struct iecm_hw *hw, struct iecm_dma_mem *mem) +{ + /* stub */ +} diff --git a/drivers/net/ethernet/intel/iecm/iecm_singleq_txrx.c b/drivers/net/ethernet/intel/iecm/iecm_singleq_txrx.c new file mode 100644 index 000000000000..a85471e72d66 --- /dev/null +++ b/drivers/net/ethernet/intel/iecm/iecm_singleq_txrx.c @@ -0,0 +1,255 @@ +// SPDX-License-Identifier: GPL-2.0-only +/* Copyright (C) 2020 Intel Corporation */ + +#include +#include + +/** + * iecm_tx_singleq_build_ctob - populate command tag offset and size + * @td_cmd: Command to be filled in desc + * @td_offset: Offset to be filled in desc + * @size: Size of the buffer + * @td_tag: VLAN tag to be filled + * + * Returns the 64 bit value populated with the input parameters + */ +static __le64 +iecm_tx_singleq_build_ctob(u64 td_cmd, u64 td_offset, unsigned int size, + u64 td_tag) +{ + /* stub */ +} + +/** + * iecm_tx_singleq_csum - Enable Tx checksum offloads + * @first: pointer to first descriptor + * @off: pointer to struct that holds offload parameters + * + * Returns 0 or error (negative) if checksum offload + */ +static +int iecm_tx_singleq_csum(struct iecm_tx_buf *first, + struct iecm_tx_offload_params *off) +{ + /* stub */ +} + +/** + * iecm_tx_singleq_map - Build the Tx base descriptor + * @tx_q: queue to send buffer on + * @first: first buffer info buffer to use + * @offloads: pointer to struct that holds offload parameters + * + * This function loops over the skb data pointed to by *first + * and gets a physical address for each memory location and programs + * it and the length into the transmit base mode descriptor. 
+ */ +static void +iecm_tx_singleq_map(struct iecm_queue *tx_q, struct iecm_tx_buf *first, + struct iecm_tx_offload_params *offloads) +{ + /* stub */ +} + +/** + * iecm_tx_singleq_frame - Sends buffer on Tx ring using base descriptors + * @skb: send buffer + * @tx_q: queue to send buffer on + * + * Returns NETDEV_TX_OK if sent, else an error code + */ +static netdev_tx_t +iecm_tx_singleq_frame(struct sk_buff *skb, struct iecm_queue *tx_q) +{ + /* stub */ +} + +/** + * iecm_tx_singleq_start - Selects the right Tx queue to send buffer + * @skb: send buffer + * @netdev: network interface device structure + * + * Returns NETDEV_TX_OK if sent, else an error code + */ +netdev_tx_t iecm_tx_singleq_start(struct sk_buff *skb, + struct net_device *netdev) +{ + /* stub */ +} + +/** + * iecm_tx_singleq_clean - Reclaim resources from queue + * @tx_q: Tx queue to clean + * @napi_budget: Used to determine if we are in netpoll + * + */ +static bool iecm_tx_singleq_clean(struct iecm_queue *tx_q, int napi_budget) +{ + /* stub */ +} + +/** + * iecm_tx_singleq_clean_all - Clean all Tx queues + * @q_vec: queue vector + * @budget: Used to determine if we are in netpoll + * + * Returns false if clean is not complete else returns true + */ +static inline bool +iecm_tx_singleq_clean_all(struct iecm_q_vector *q_vec, int budget) +{ + /* stub */ +} + +/** + * iecm_rx_singleq_test_staterr - tests bits in Rx descriptor + * status and error fields + * @rx_desc: pointer to receive descriptor (in le64 format) + * @stat_err_bits: value to mask + * + * This function does some fast chicanery in order to return the + * value of the mask which is really only used for boolean tests. + * The status_error_ptype_len doesn't need to be shifted because it begins + * at offset zero. + */ +static bool +iecm_rx_singleq_test_staterr(struct iecm_singleq_base_rx_desc *rx_desc, + const u64 stat_err_bits) +{ + /* stub */ +} + +/** + * iecm_rx_singleq_is_non_eop - process handling of non-EOP buffers + * @rxq: Rx ring being processed + * @rx_desc: Rx descriptor for current buffer + * @skb: Current socket buffer containing buffer in progress + */ +static bool iecm_rx_singleq_is_non_eop(struct iecm_queue *rxq, + struct iecm_singleq_base_rx_desc + *rx_desc, struct sk_buff *skb) +{ + /* stub */ +} + +/** + * iecm_rx_singleq_csum - Indicate in skb if checksum is good + * @rxq: Rx descriptor ring packet is being transacted on + * @skb: skb currently being received and modified + * @rx_desc: the receive descriptor + * @ptype: the packet type decoded by hardware + * + * skb->protocol must be set before this function is called + */ +static void iecm_rx_singleq_csum(struct iecm_queue *rxq, struct sk_buff *skb, + struct iecm_singleq_base_rx_desc *rx_desc, + u8 ptype) +{ + /* stub */ +} + +/** + * iecm_rx_singleq_process_skb_fields - Populate skb header fields from Rx + * descriptor + * @rxq: Rx descriptor ring packet is being transacted on + * @skb: pointer to current skb being populated + * + * This function checks the ring, descriptor, and packet information in + * order to populate the hash, checksum, VLAN, protocol, and + * other fields within the skb. 
+ */
+static void
+iecm_rx_singleq_process_skb_fields(struct iecm_queue *rxq, struct sk_buff *skb,
+				   struct iecm_singleq_base_rx_desc *rx_desc,
+				   u8 ptype)
+{
+	/* stub */
+}
+
+/**
+ * iecm_rx_singleq_buf_hw_alloc_all - Replace used receive buffers
+ * @rx_q: queue for which the hw buffers are allocated
+ * @cleaned_count: number of buffers to replace
+ *
+ * Returns false if all allocations were successful, true if any fail
+ */
+bool iecm_rx_singleq_buf_hw_alloc_all(struct iecm_queue *rx_q,
+				      u16 cleaned_count)
+{
+	/* stub */
+}
+
+/**
+ * iecm_singleq_rx_put_buf - wrapper function to clean and recycle buffers
+ * @rx_bufq: Rx descriptor queue to transact packets on
+ * @rx_buf: Rx buffer to pull data from
+ *
+ * This function will update the next_to_use/next_to_alloc if the current
+ * buffer is recycled.
+ */
+static void iecm_singleq_rx_put_buf(struct iecm_queue *rx_bufq,
+				    struct iecm_rx_buf *rx_buf)
+{
+	/* stub */
+}
+
+/**
+ * iecm_singleq_rx_bump_ntc - Bump and wrap q->next_to_clean value
+ * @q: queue to bump
+ */
+static void iecm_singleq_rx_bump_ntc(struct iecm_queue *q)
+{
+	/* stub */
+}
+
+/**
+ * iecm_singleq_rx_get_buf_page - Fetch Rx buffer page and synchronize data
+ * @dev: device struct
+ * @rx_buf: Rx buf to fetch page for
+ * @size: size of buffer to add to skb
+ *
+ * This function will pull an Rx buffer page from the ring and synchronize it
+ * for use by the CPU.
+ */
+static struct sk_buff *
+iecm_singleq_rx_get_buf_page(struct device *dev, struct iecm_rx_buf *rx_buf,
+			     const unsigned int size)
+{
+	/* stub */
+}
+
+/**
+ * iecm_rx_singleq_clean - Reclaim resources after receive completes
+ * @rx_q: Rx queue to clean
+ * @budget: Total limit on number of packets to process
+ *
+ * Returns the amount of work completed
+ */
+static int iecm_rx_singleq_clean(struct iecm_queue *rx_q, int budget)
+{
+	/* stub */
+}
+
+/**
+ * iecm_rx_singleq_clean_all - Clean all Rx queues
+ * @q_vec: queue vector
+ * @budget: Used to determine if we are in netpoll
+ * @cleaned: returns number of packets cleaned
+ *
+ * Returns false if clean is not complete else returns true
+ */
+static inline bool
+iecm_rx_singleq_clean_all(struct iecm_q_vector *q_vec, int budget,
+			  int *cleaned)
+{
+	/* stub */
+}
+
+/**
+ * iecm_vport_singleq_napi_poll - NAPI handler
+ * @napi: struct from which you get q_vector
+ * @budget: budget provided by stack
+ */
+int iecm_vport_singleq_napi_poll(struct napi_struct *napi, int budget)
+{
+	/* stub */
+}
diff --git a/drivers/net/ethernet/intel/iecm/iecm_txrx.c b/drivers/net/ethernet/intel/iecm/iecm_txrx.c
new file mode 100644
index 000000000000..b4688daa744d
--- /dev/null
+++ b/drivers/net/ethernet/intel/iecm/iecm_txrx.c
@@ -0,0 +1,1256 @@
+// SPDX-License-Identifier: GPL-2.0-only
+/* Copyright (C) 2020 Intel Corporation */
+
+#include
+
+/**
+ * iecm_buf_lifo_push - push a buffer pointer onto stack
+ * @stack: pointer to stack struct
+ * @buf: pointer to buf to push
+ **/
+static enum iecm_status iecm_buf_lifo_push(struct iecm_buf_lifo *stack,
+					   struct iecm_tx_buf *buf)
+{
+	/* stub */
+}
+
+/**
+ * iecm_buf_lifo_pop - pop a buffer pointer from stack
+ * @stack: pointer to stack struct
+ **/
+static struct iecm_tx_buf *iecm_buf_lifo_pop(struct iecm_buf_lifo *stack)
+{
+	/* stub */
+}
+
+/**
+ * iecm_get_stats64 - get statistics for network device structure
+ * @netdev: network interface device structure
+ * @stats: main device statistics structure
+ */
+void iecm_get_stats64(struct net_device *netdev,
+		      struct rtnl_link_stats64 *stats)
+{
+	/* stub */
+}
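The iecm_buf_lifo_push()/iecm_buf_lifo_pop() stubs above describe a simple LIFO stack of Tx buffer pointers. A minimal sketch of such a structure follows; the field names (top, size, bufs) and the -ENOSPC/NULL return conventions are illustrative assumptions, not definitions taken from this patch.

struct iecm_buf_lifo_sketch {
	u16 top;			/* number of entries currently stored */
	u16 size;			/* capacity of the bufs array */
	struct iecm_tx_buf **bufs;	/* backing array of buffer pointers */
};

/* Push fails when the stack is already full. */
static int buf_lifo_push_sketch(struct iecm_buf_lifo_sketch *stack,
				struct iecm_tx_buf *buf)
{
	if (stack->top == stack->size)
		return -ENOSPC;
	stack->bufs[stack->top++] = buf;
	return 0;
}

/* Pop returns NULL when the stack is empty. */
static struct iecm_tx_buf *buf_lifo_pop_sketch(struct iecm_buf_lifo_sketch *stack)
{
	if (!stack->top)
		return NULL;
	return stack->bufs[--stack->top];
}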
+
+/**
+ * iecm_tx_buf_rel - Release a Tx buffer
+ * @tx_q: the queue that owns the buffer
+ * @tx_buf: the buffer to free
+ */
+void iecm_tx_buf_rel(struct iecm_queue *tx_q, struct iecm_tx_buf *tx_buf)
+{
+	/* stub */
+}
+
+/**
+ * iecm_tx_buf_rel_all - Free any empty Tx buffers
+ * @txq: queue to be cleaned
+ */
+void iecm_tx_buf_rel_all(struct iecm_queue *txq)
+{
+	/* stub */
+}
+
+/**
+ * iecm_tx_desc_rel - Free Tx resources per queue
+ * @txq: Tx descriptor ring for a specific queue
+ * @bufq: buffer q or completion q
+ *
+ * Free all transmit software resources
+ */
+void iecm_tx_desc_rel(struct iecm_queue *txq, bool bufq)
+{
+	/* stub */
+}
+
+/**
+ * iecm_tx_desc_rel_all - Free Tx Resources for All Queues
+ * @vport: virtual port structure
+ *
+ * Free all transmit software resources
+ */
+void iecm_tx_desc_rel_all(struct iecm_vport *vport)
+{
+	/* stub */
+}
+
+/**
+ * iecm_tx_buf_alloc_all - Allocate memory for all buffer resources
+ * @tx_q: queue for which the buffers are allocated
+ */
+static enum iecm_status iecm_tx_buf_alloc_all(struct iecm_queue *tx_q)
+{
+	/* stub */
+}
+
+/**
+ * iecm_tx_desc_alloc - Allocate the Tx descriptors
+ * @tx_q: the Tx ring to set up
+ * @bufq: buffer or completion queue
+ */
+static enum iecm_status iecm_tx_desc_alloc(struct iecm_queue *tx_q, bool bufq)
+{
+	/* stub */
+}
+
+/**
+ * iecm_tx_desc_alloc_all - allocate all queues Tx resources
+ * @vport: virtual port private structure
+ */
+static enum iecm_status iecm_tx_desc_alloc_all(struct iecm_vport *vport)
+{
+	/* stub */
+}
+
+/**
+ * iecm_rx_buf_rel - Release a Rx buffer
+ * @rxq: the queue that owns the buffer
+ * @rx_buf: the buffer to free
+ */
+static void iecm_rx_buf_rel(struct iecm_queue *rxq,
+			    struct iecm_rx_buf *rx_buf)
+{
+	/* stub */
+}
+
+/**
+ * iecm_rx_buf_rel_all - Free all Rx buffer resources for a queue
+ * @rxq: queue to be cleaned
+ */
+void iecm_rx_buf_rel_all(struct iecm_queue *rxq)
+{
+	/* stub */
+}
+
+/**
+ * iecm_rx_desc_rel - Free a specific Rx q resources
+ * @rxq: queue to clean the resources from
+ * @bufq: buffer q or completion q
+ * @q_model: single or split q model
+ *
+ * Free a specific Rx queue resources
+ */
+void iecm_rx_desc_rel(struct iecm_queue *rxq, bool bufq,
+		      enum virtchnl_queue_model q_model)
+{
+	/* stub */
+}
+
+/**
+ * iecm_rx_desc_rel_all - Free Rx Resources for All Queues
+ * @vport: virtual port structure
+ *
+ * Free all Rx queues resources
+ */
+void iecm_rx_desc_rel_all(struct iecm_vport *vport)
+{
+	/* stub */
+}
+
+/**
+ * iecm_rx_buf_hw_update - Store the new tail and head values
+ * @rxq: queue to bump
+ * @val: new head index
+ */
+void iecm_rx_buf_hw_update(struct iecm_queue *rxq, u32 val)
+{
+	/* stub */
+}
+
+/**
+ * iecm_rx_buf_hw_alloc - recycle or make a new page
+ * @rxq: ring to use
+ * @buf: rx_buffer struct to modify
+ *
+ * Returns true if the page was successfully allocated or
+ * reused.
+ */
+bool iecm_rx_buf_hw_alloc(struct iecm_queue *rxq, struct iecm_rx_buf *buf)
+{
+	/* stub */
+}
+
+/**
+ * iecm_rx_hdr_buf_hw_alloc - recycle or make a new page for header buffer
+ * @rxq: ring to use
+ * @hdr_buf: rx_buffer struct to modify
+ *
+ * Returns true if the page was successfully allocated or
+ * reused.
+ */ +bool iecm_rx_hdr_buf_hw_alloc(struct iecm_queue *rxq, + struct iecm_rx_buf *hdr_buf) +{ + /* stub */ +} + +/** + * iecm_rx_buf_hw_alloc_all - Replace used receive buffers + * @rxq: queue for which the hw buffers are allocated + * @cleaned_count: number of buffers to replace + * + * Returns false if all allocations were successful, true if any fail + */ +static bool +iecm_rx_buf_hw_alloc_all(struct iecm_queue *rxq, + u16 cleaned_count) +{ + /* stub */ +} + +/** + * iecm_rx_buf_alloc_all - Allocate memory for all buffer resources + * @rxq: queue for which the buffers are allocated + */ +static enum iecm_status iecm_rx_buf_alloc_all(struct iecm_queue *rxq) +{ + /* stub */ +} + +/** + * iecm_rx_desc_alloc - Allocate queue Rx resources + * @rxq: Rx queue for which the resources are setup + * @bufq: buffer or completion queue + * @q_model: single or split queue model + */ +static enum iecm_status iecm_rx_desc_alloc(struct iecm_queue *rxq, bool bufq, + enum virtchnl_queue_model q_model) +{ + /* stub */ +} + +/** + * iecm_rx_desc_alloc_all - allocate all RX queues resources + * @vport: virtual port structure + */ +static enum iecm_status iecm_rx_desc_alloc_all(struct iecm_vport *vport) +{ + /* stub */ +} + +/** + * iecm_txq_group_rel - Release all resources for txq groups + * @vport: vport to release txq groups on + */ +static void iecm_txq_group_rel(struct iecm_vport *vport) +{ + /* stub */ +} + +/** + * iecm_rxq_group_rel - Release all resources for rxq groups + * @vport: vport to release rxq groups on + */ +static void iecm_rxq_group_rel(struct iecm_vport *vport) +{ + /* stub */ +} + +/** + * iecm_vport_queue_grp_rel_all - Release all queue groups + * @vport: vport to release queue groups for + */ +static void iecm_vport_queue_grp_rel_all(struct iecm_vport *vport) +{ + /* stub */ +} + +/** + * iecm_vport_queues_rel - Free memory for all queues + * @vport: virtual port + * + * Free the memory allocated for queues associated to a vport + */ +void iecm_vport_queues_rel(struct iecm_vport *vport) +{ + /* stub */ +} + +/** + * iecm_vport_init_fast_path_txqs - Initialize fast path txq array + * @vport: vport to init txqs on + * + * We get a queue index from skb->queue_mapping and we need a fast way to + * dereference the queue from queue groups. This allows us to quickly pull a + * txq based on a queue index. 
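+ *
+ * As a rough illustration (the txqs array and txq_grps names below are
+ * assumptions, not definitions from this patch), init time would build a
+ * flat array of queue pointers,
+ *
+ *	vport->txqs[i] = &vport->txq_grps[grp].txqs[q];
+ *
+ * so the transmit hot path can index it directly:
+ *
+ *	struct iecm_queue *tx_q = vport->txqs[skb->queue_mapping];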
+ */ +static enum iecm_status +iecm_vport_init_fast_path_txqs(struct iecm_vport *vport) +{ + /* stub */ +} + +/** + * iecm_vport_init_num_qs - Initialize number of queues + * @vport: vport to initialize qs + * @vport_msg: data to be filled into vport + */ +void iecm_vport_init_num_qs(struct iecm_vport *vport, + struct virtchnl_create_vport *vport_msg) +{ + /* stub */ +} + +/** + * iecm_vport_calc_num_q_desc - Calculate number of queue groups + * @vport: vport to calculate q groups for + */ +void iecm_vport_calc_num_q_desc(struct iecm_vport *vport) +{ + /* stub */ +} +EXPORT_SYMBOL(iecm_vport_calc_num_q_desc); + +/** + * iecm_vport_calc_total_qs - Calculate total number of queues + * @vport_msg: message to fill with data + * @num_req_qs: user requested queues + */ +void iecm_vport_calc_total_qs(struct virtchnl_create_vport *vport_msg, + int num_req_qs) +{ + /* stub */ +} + +/** + * iecm_vport_calc_num_q_groups - Calculate number of queue groups + * @vport: vport to calculate q groups for + */ +void iecm_vport_calc_num_q_groups(struct iecm_vport *vport) +{ + /* stub */ +} +EXPORT_SYMBOL(iecm_vport_calc_num_q_groups); + +/** + * iecm_vport_calc_numq_per_grp - Calculate number of queues per group + * @vport: vport to calculate queues for + * @num_txq: int return parameter + * @num_rxq: int return parameter + */ +static void iecm_vport_calc_numq_per_grp(struct iecm_vport *vport, + int *num_txq, int *num_rxq) +{ + /* stub */ +} + +/** + * iecm_vport_calc_num_q_vec - Calculate total number of vectors required for + * this vport + * @vport: virtual port + * + */ +void iecm_vport_calc_num_q_vec(struct iecm_vport *vport) +{ + /* stub */ +} + +/** + * iecm_txq_group_alloc - Allocate all txq group resources + * @vport: vport to allocate txq groups for + * @num_txq: number of txqs to allocate for each group + */ +static enum iecm_status iecm_txq_group_alloc(struct iecm_vport *vport, + int num_txq) +{ + /* stub */ +} + +/** + * iecm_rxq_group_alloc - Allocate all rxq group resources + * @vport: vport to allocate rxq groups for + * @num_rxq: number of rxqs to allocate for each group + */ +static enum iecm_status iecm_rxq_group_alloc(struct iecm_vport *vport, + int num_rxq) +{ + /* stub */ +} + +/** + * iecm_vport_queue_grp_alloc_all - Allocate all queue groups/resources + * @vport: vport with qgrps to allocate + */ +static enum iecm_status +iecm_vport_queue_grp_alloc_all(struct iecm_vport *vport) +{ + /* stub */ +} + +/** + * iecm_vport_queues_alloc - Allocate memory for all queues + * @vport: virtual port + * + * Allocate memory for queues associated with a vport + */ +enum iecm_status iecm_vport_queues_alloc(struct iecm_vport *vport) +{ + /* stub */ +} + +/** + * iecm_tx_find_q - Find the Tx q based on q id + * @vport: the vport we care about + * @q_id: Id of the queue + * + * Returns queue ptr if found else returns NULL + */ +static struct iecm_queue * +iecm_tx_find_q(struct iecm_vport *vport, int q_id) +{ + /* stub */ +} + +/** + * iecm_tx_handle_sw_marker - Handle queue marker packet + * @tx_q: Tx queue to handle software marker + */ +static void iecm_tx_handle_sw_marker(struct iecm_queue *tx_q) +{ + /* stub */ +} + +/** + * iecm_tx_splitq_clean_buf - Clean TX buffer resources + * @tx_q: Tx queue to clean buffer from + * @tx_buf: buffer to be cleaned + * @napi_budget: Used to determine if we are in netpoll + * + * Returns the stats (bytes/packets) cleaned from this buffer + */ +static struct iecm_tx_queue_stats +iecm_tx_splitq_clean_buf(struct iecm_queue *tx_q, struct iecm_tx_buf *tx_buf, + int 
napi_budget)
+{
+	/* stub */
+}
+
+/**
+ * iecm_stash_flow_sch_buffers - store buffer parameter info to be freed at a
+ * later time (only relevant for flow scheduling mode)
+ * @txq: Tx queue to clean
+ * @tx_buf: buffer to store
+ */
+static int
+iecm_stash_flow_sch_buffers(struct iecm_queue *txq, struct iecm_tx_buf *tx_buf)
+{
+	/* stub */
+}
+
+/**
+ * iecm_tx_splitq_clean - Reclaim resources from buffer queue
+ * @tx_q: Tx queue to clean
+ * @end: queue index until which it should be cleaned
+ * @napi_budget: Used to determine if we are in netpoll
+ * @descs_only: true if queue is using flow-based scheduling and should
+ * not clean buffers at this time
+ *
+ * Cleans the queue descriptor ring. If the queue is using queue-based
+ * scheduling, the buffers will be cleaned as well and this function will
+ * return the number of bytes/packets cleaned. If the queue is using flow-based
+ * scheduling, only the descriptors are cleaned at this time. Separate packet
+ * completion events will be reported on the completion queue, and the
+ * buffers will be cleaned separately. The stats returned from this function
+ * when using flow-based scheduling are irrelevant.
+ */
+static struct iecm_tx_queue_stats
+iecm_tx_splitq_clean(struct iecm_queue *tx_q, u16 end, int napi_budget,
+		     bool descs_only)
+{
+	/* stub */
+}
+
+/**
+ * iecm_tx_hw_tstamp - report hw timestamp from completion desc to stack
+ * @skb: original skb
+ * @desc_ts: pointer to 3 byte timestamp from descriptor
+ */
+static inline void iecm_tx_hw_tstamp(struct sk_buff *skb, u8 *desc_ts)
+{
+	/* stub */
+}
+
+/**
+ * iecm_tx_clean_flow_sch_bufs - clean bufs that were stored for
+ * out of order completions
+ * @txq: queue to clean
+ * @compl_tag: completion tag of packet to clean (from completion descriptor)
+ * @desc_ts: pointer to 3 byte timestamp from descriptor
+ * @budget: Used to determine if we are in netpoll
+ */
+static struct iecm_tx_queue_stats
+iecm_tx_clean_flow_sch_bufs(struct iecm_queue *txq, u16 compl_tag,
+			    u8 *desc_ts, int budget)
+{
+	/* stub */
+}
+
+/**
+ * iecm_tx_clean_complq - Reclaim resources on completion queue
+ * @complq: Tx ring to clean
+ * @budget: Used to determine if we are in netpoll
+ *
+ * Returns true if there's any budget left (e.g. the clean is finished)
+ */
+static bool
+iecm_tx_clean_complq(struct iecm_queue *complq, int budget)
+{
+	/* stub */
+}
+
+/**
+ * iecm_tx_splitq_build_ctb - populate command tag and size for queue
+ * based scheduling descriptors
+ * @desc: descriptor to populate
+ * @parms: pointer to Tx params struct
+ * @td_cmd: command to be filled in desc
+ * @size: size of buffer
+ */
+static inline void
+iecm_tx_splitq_build_ctb(union iecm_tx_flex_desc *desc,
+			 struct iecm_tx_splitq_params *parms,
+			 u16 td_cmd, u16 size)
+{
+	/* stub */
+}
+
+/**
+ * iecm_tx_splitq_build_flow_desc - populate command tag and size for flow
+ * scheduling descriptors
+ * @desc: descriptor to populate
+ * @parms: pointer to Tx params struct
+ * @td_cmd: command to be filled in desc
+ * @size: size of buffer
+ */
+static inline void
+iecm_tx_splitq_build_flow_desc(union iecm_tx_flex_desc *desc,
+			       struct iecm_tx_splitq_params *parms,
+			       u16 td_cmd, u16 size)
+{
+	/* stub */
+}
+
+/**
+ * __iecm_tx_maybe_stop - 2nd level check for Tx stop conditions
+ * @tx_q: the queue to be checked
+ * @size: the size buffer we want to assure is available
+ *
+ * Returns -EBUSY if a stop is needed, else 0
+ */
+static int
+__iecm_tx_maybe_stop(struct iecm_queue *tx_q, unsigned int size)
+{
+	/* stub */
+}
+
+/**
+ * iecm_tx_maybe_stop - 1st level check for Tx stop conditions
+ * @tx_q: the queue to be checked
+ * @size: number of descriptors we want to assure is available
+ *
+ * Returns 0 if stop is not needed
+ */
+int iecm_tx_maybe_stop(struct iecm_queue *tx_q, unsigned int size)
+{
+	/* stub */
+}
+
+/**
+ * iecm_tx_buf_hw_update - Store the new tail and head values
+ * @tx_q: queue to bump
+ * @val: new head index
+ * @skb: skb for which the descriptors are updated
+ */
+void iecm_tx_buf_hw_update(struct iecm_queue *tx_q, u32 val,
+			   struct sk_buff *skb)
+{
+	/* stub */
+}
+
+/**
+ * __iecm_tx_desc_count_required - Get the number of descriptors needed for Tx
+ * @size: transmit request size in bytes
+ *
+ * Due to hardware alignment restrictions (4K alignment), we need to
+ * assume that we can have no more than 12K of data per descriptor, even
+ * though each descriptor can take up to 16K - 1 bytes of aligned memory.
+ * Thus, we need to divide by 12K. But division is slow! Instead,
+ * we decompose the operation into shifts and one relatively cheap
+ * multiply operation.
+ *
+ * To divide by 12K, we first divide by 4K, then divide by 3:
+ * To divide by 4K, shift right by 12 bits
+ * To divide by 3, multiply by 85, then divide by 256
+ * (Divide by 256 is done by shifting right by 8 bits)
+ * Finally, we add one to round up. Because 256 isn't an exact multiple of
+ * 3, we'll underestimate near each multiple of 12K. This is actually more
+ * accurate as we have 4K - 1 of wiggle room that we can fit into the last
+ * segment. For our purposes this is accurate out to 1M which is orders of
+ * magnitude greater than our largest possible GSO size.
+ *
+ * This would then be implemented as:
+ * return (((size >> 12) * 85) >> 8) + IECM_TX_DESCS_FOR_SKB_DATA_PTR;
+ *
+ * Since multiplication and division are commutative, we can reorder
+ * operations into:
+ * return ((size * 85) >> 20) + IECM_TX_DESCS_FOR_SKB_DATA_PTR;
+ */
+static unsigned int __iecm_tx_desc_count_required(unsigned int size)
+{
+	/* stub */
+}
+
+/**
+ * iecm_tx_desc_count_required - calculate number of Tx descriptors needed
+ * @skb: send buffer
+ *
+ * Returns number of data descriptors needed for this skb.
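+ *
+ * Worked example of the division trick described above for
+ * __iecm_tx_desc_count_required(), assuming IECM_TX_DESCS_FOR_SKB_DATA_PTR
+ * is 1 (as in similar Intel drivers): size = 24576 (2 * 12K) gives
+ * (24576 * 85) >> 20 = 1, plus the constant = 2 descriptors; size = 32768
+ * gives (32768 * 85) >> 20 = 2, i.e. 3 descriptors, matching
+ * DIV_ROUND_UP(32768, 12288).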
+ */ +unsigned int iecm_tx_desc_count_required(struct sk_buff *skb) +{ + /* stub */ +} + +/** + * iecm_tx_splitq_map - Build the Tx flex descriptor + * @tx_q: queue to send buffer on + * @off: pointer to offload params struct + * @first: first buffer info buffer to use + * + * This function loops over the skb data pointed to by *first + * and gets a physical address for each memory location and programs + * it and the length into the transmit flex descriptor. + */ +static void +iecm_tx_splitq_map(struct iecm_queue *tx_q, + struct iecm_tx_offload_params *off, + struct iecm_tx_buf *first) +{ + /* stub */ +} + +/** + * iecm_tso - computes mss and TSO length to prepare for TSO + * @first: pointer to struct iecm_tx_buf + * @off: pointer to struct that holds offload parameters + * + * Returns error (negative) if TSO doesn't apply to the given skb, + * 0 otherwise. + * + * Note: this function can be used in the splitq and singleq paths + */ +static int iecm_tso(struct iecm_tx_buf *first, + struct iecm_tx_offload_params *off) +{ + /* stub */ +} + +/** + * iecm_tx_splitq_frame - Sends buffer on Tx ring using flex descriptors + * @skb: send buffer + * @tx_q: queue to send buffer on + * + * Returns NETDEV_TX_OK if sent, else an error code + */ +static netdev_tx_t +iecm_tx_splitq_frame(struct sk_buff *skb, struct iecm_queue *tx_q) +{ + /* stub */ +} + +/** + * iecm_tx_splitq_start - Selects the right Tx queue to send buffer + * @skb: send buffer + * @netdev: network interface device structure + * + * Returns NETDEV_TX_OK if sent, else an error code + */ +netdev_tx_t iecm_tx_splitq_start(struct sk_buff *skb, + struct net_device *netdev) +{ + /* stub */ +} + +/** + * iecm_ptype_to_htype - get a hash type + * @vport: virtual port data + * @ptype: the ptype value from the descriptor + * + * Returns appropriate hash type (such as PKT_HASH_TYPE_L2/L3/L4) to be used by + * skb_set_hash based on PTYPE as parsed by HW Rx pipeline and is part of + * Rx desc. + */ +static enum pkt_hash_types iecm_ptype_to_htype(struct iecm_vport *vport, + u16 ptype) +{ + /* stub */ +} + +/** + * iecm_rx_hash - set the hash value in the skb + * @rxq: Rx descriptor ring packet is being transacted on + * @skb: pointer to current skb being populated + * @rx_desc: Receive descriptor + * @ptype: the packet type decoded by hardware + */ +static void +iecm_rx_hash(struct iecm_queue *rxq, struct sk_buff *skb, + struct iecm_flex_rx_desc *rx_desc, u16 ptype) +{ + /* stub */ +} + +/** + * iecm_rx_csum - Indicate in skb if checksum is good + * @rxq: Rx descriptor ring packet is being transacted on + * @skb: pointer to current skb being populated + * @rx_desc: Receive descriptor + * @ptype: the packet type decoded by hardware + * + * skb->protocol must be set before this function is called + */ +static void +iecm_rx_csum(struct iecm_queue *rxq, struct sk_buff *skb, + struct iecm_flex_rx_desc *rx_desc, u16 ptype) +{ + /* stub */ +} + +/** + * iecm_rx_rsc - Set the RSC fields in the skb + * @rxq : Rx descriptor ring packet is being transacted on + * @skb : pointer to current skb being populated + * @rx_desc: Receive descriptor + * @ptype: the packet type decoded by hardware + * + * Populate the skb fields with the total number of RSC segments, RSC payload + * length and packet type. 
+ */ +static bool iecm_rx_rsc(struct iecm_queue *rxq, struct sk_buff *skb, + struct iecm_flex_rx_desc *rx_desc, u16 ptype) +{ + /* stub */ +} + +/** + * iecm_rx_hwtstamp - check for an RX timestamp and pass up + * the stack + * @rx_desc: pointer to Rx descriptor containing timestamp + * @skb: skb to put timestamp in + */ +static void iecm_rx_hwtstamp(struct iecm_flex_rx_desc *rx_desc, + struct sk_buff __maybe_unused *skb) +{ + /* stub */ +} + +/** + * iecm_rx_process_skb_fields - Populate skb header fields from Rx descriptor + * @rxq: Rx descriptor ring packet is being transacted on + * @skb: pointer to current skb being populated + * @rx_desc: Receive descriptor + * + * This function checks the ring, descriptor, and packet information in + * order to populate the hash, checksum, VLAN, protocol, and + * other fields within the skb. + */ +static bool +iecm_rx_process_skb_fields(struct iecm_queue *rxq, struct sk_buff *skb, + struct iecm_flex_rx_desc *rx_desc) +{ + /* stub */ +} + +/** + * iecm_rx_skb - Send a completed packet up the stack + * @rxq: Rx ring in play + * @skb: packet to send up + * + * This function sends the completed packet (via. skb) up the stack using + * GRO receive functions + */ +void iecm_rx_skb(struct iecm_queue *rxq, struct sk_buff *skb) +{ + /* stub */ +} + +/** + * iecm_rx_page_is_reserved - check if reuse is possible + * @page: page struct to check + */ +static bool iecm_rx_page_is_reserved(struct page *page) +{ + /* stub */ +} + +/** + * iecm_rx_buf_adjust_pg_offset - Prepare Rx buffer for reuse + * @rx_buf: Rx buffer to adjust + * @size: Size of adjustment + * + * Update the offset within page so that Rx buf will be ready to be reused. + * For systems with PAGE_SIZE < 8192 this function will flip the page offset + * so the second half of page assigned to Rx buffer will be used, otherwise + * the offset is moved by the @size bytes + */ +static void +iecm_rx_buf_adjust_pg_offset(struct iecm_rx_buf *rx_buf, unsigned int size) +{ + /* stub */ +} + +/** + * iecm_rx_can_reuse_page - Determine if page can be reused for another Rx + * @rx_buf: buffer containing the page + * + * If page is reusable, we have a green light for calling iecm_reuse_rx_page, + * which will assign the current buffer to the buffer that next_to_alloc is + * pointing to; otherwise, the DMA mapping needs to be destroyed and + * page freed + */ +static bool iecm_rx_can_reuse_page(struct iecm_rx_buf *rx_buf) +{ + /* stub */ +} + +/** + * iecm_rx_add_frag - Add contents of Rx buffer to sk_buff as a frag + * @rx_buf: buffer containing page to add + * @skb: sk_buff to place the data into + * @size: packet length from rx_desc + * + * This function will add the data contained in rx_buf->page to the skb. + * It will just attach the page as a frag to the skb. + * The function will then update the page offset. 
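+ *
+ * For illustration only (the field names below are assumptions, not taken
+ * from this patch), the page offset update described for
+ * iecm_rx_buf_adjust_pg_offset() above amounts to:
+ *
+ *	if (PAGE_SIZE < 8192)
+ *		rx_buf->page_offset ^= truesize;	(flip to the other half page)
+ *	else
+ *		rx_buf->page_offset += size;	(advance past the used bytes)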
+ */ +void iecm_rx_add_frag(struct iecm_rx_buf *rx_buf, struct sk_buff *skb, + unsigned int size) +{ + /* stub */ +} + +/** + * iecm_rx_reuse_page - page flip buffer and store it back on the queue + * @rx_bufq: Rx descriptor ring to store buffers on + * @hsplit: true if header buffer, false otherwise + * @old_buf: donor buffer to have page reused + * + * Synchronizes page for reuse by the adapter + */ +void iecm_rx_reuse_page(struct iecm_queue *rx_bufq, + bool hsplit, + struct iecm_rx_buf *old_buf) +{ + /* stub */ +} + +/** + * iecm_rx_get_buf_page - Fetch Rx buffer page and synchronize data for use + * @rx_buf: Rx buf to fetch page for + * @size: size of buffer to add to skb + * + * This function will pull an Rx buffer page from the ring and synchronize it + * for use by the CPU. + */ +static void +iecm_rx_get_buf_page(struct device *dev, struct iecm_rx_buf *rx_buf, + const unsigned int size) +{ + /* stub */ +} + +/** + * iecm_rx_construct_skb - Allocate skb and populate it + * @rxq: Rx descriptor queue + * @rx_buf: Rx buffer to pull data from + * @size: the length of the packet + * + * This function allocates an skb. It then populates it with the page + * data from the current receive descriptor, taking care to set up the + * skb correctly. + */ +struct sk_buff * +iecm_rx_construct_skb(struct iecm_queue *rxq, struct iecm_rx_buf *rx_buf, + unsigned int size) +{ + /* stub */ +} + +/** + * iecm_rx_cleanup_headers - Correct empty headers + * @skb: pointer to current skb being fixed + * + * Also address the case where we are pulling data in on pages only + * and as such no data is present in the skb header. + * + * In addition if skb is not at least 60 bytes we need to pad it so that + * it is large enough to qualify as a valid Ethernet frame. + * + * Returns true if an error was encountered and skb was freed. + */ +bool iecm_rx_cleanup_headers(struct sk_buff *skb) +{ + /* stub */ +} + +/** + * iecm_rx_splitq_test_staterr - tests bits in Rx descriptor + * status and error fields + * @stat_err_field: field from descriptor to test bits in + * @stat_err_bits: value to mask + * + */ +static bool +iecm_rx_splitq_test_staterr(u8 stat_err_field, const u8 stat_err_bits) +{ + /* stub */ +} + +/** + * iecm_rx_splitq_is_non_eop - process handling of non-EOP buffers + * @rx_desc: Rx descriptor for current buffer + * + * If the buffer is an EOP buffer, this function exits returning false, + * otherwise return true indicating that this is in fact a non-EOP buffer. + */ +static bool +iecm_rx_splitq_is_non_eop(struct iecm_flex_rx_desc *rx_desc) +{ + /* stub */ +} + +/** + * iecm_rx_recycle_buf - Clean up used buffer and either recycle or free + * @rx_bufq: Rx descriptor queue to transact packets on + * @hsplit: true if buffer is a header buffer + * @rx_buf: Rx buffer to pull data from + * + * This function will clean up the contents of the rx_buf. It will either + * recycle the buffer or unmap it and free the associated resources. + * + * Returns true if the buffer is reused, false if the buffer is freed. + */ +bool iecm_rx_recycle_buf(struct iecm_queue *rx_bufq, bool hsplit, + struct iecm_rx_buf *rx_buf) +{ + /* stub */ +} + +/** + * iecm_rx_splitq_put_bufs - wrapper function to clean and recycle buffers + * @rx_bufq: Rx descriptor queue to transact packets on + * @hdr_buf: Rx header buffer to pull data from + * @rx_buf: Rx buffer to pull data from + * + * This function will update the next_to_use/next_to_alloc if the current + * buffer is recycled. 
+ */ +static void iecm_rx_splitq_put_bufs(struct iecm_queue *rx_bufq, + struct iecm_rx_buf *hdr_buf, + struct iecm_rx_buf *rx_buf) +{ + /* stub */ +} + +/** + * iecm_rx_bump_ntc - Bump and wrap q->next_to_clean value + * @q: queue to bump + */ +static void iecm_rx_bump_ntc(struct iecm_queue *q) +{ + /* stub */ +} + +/** + * iecm_rx_splitq_clean - Clean completed descriptors from Rx queue + * @rxq: Rx descriptor queue to retrieve receive buffer queue + * @budget: Total limit on number of packets to process + * + * This function provides a "bounce buffer" approach to Rx interrupt + * processing. The advantage to this is that on systems that have + * expensive overhead for IOMMU access this provides a means of avoiding + * it by maintaining the mapping of the page to the system. + * + * Returns amount of work completed + */ +static int iecm_rx_splitq_clean(struct iecm_queue *rxq, int budget) +{ + /* stub */ +} + +/** + * iecm_vport_intr_clean_queues - MSIX mode Interrupt Handler + * @irq: interrupt number + * @data: pointer to a q_vector + * + */ +irqreturn_t +iecm_vport_intr_clean_queues(int __always_unused irq, void *data) +{ + /* stub */ +} + +/** + * iecm_vport_intr_napi_dis_all - Disable NAPI for all q_vectors in the vport + * @vport: main vport structure + */ +static void iecm_vport_intr_napi_dis_all(struct iecm_vport *vport) +{ + /* stub */ +} + +/** + * iecm_vport_intr_rel - Free memory allocated for interrupt vectors + * @vport: virtual port + * + * Free the memory allocated for interrupt vectors associated to a vport + */ +static void iecm_vport_intr_rel(struct iecm_vport *vport) +{ + /* stub */ +} + +/** + * iecm_vport_intr_rel_irq - Free the IRQ association with the OS + * @vport: main vport structure + */ +static void iecm_vport_intr_rel_irq(struct iecm_vport *vport) +{ + /* stub */ +} + +/** + * iecm_vport_intr_dis_irq_all - Disable each interrupt + * @vport: main vport structure + */ +void iecm_vport_intr_dis_irq_all(struct iecm_vport *vport) +{ + /* stub */ +} + +/** + * iecm_vport_intr_buildreg_itr - Enable default interrupt generation settings + * @q_vector: pointer to q_vector + * @type: ITR index + * @itr: ITR value + */ +static u32 iecm_vport_intr_buildreg_itr(struct iecm_q_vector *q_vector, + const int type, u16 itr) +{ + /* stub */ +} + +static inline unsigned int iecm_itr_divisor(struct iecm_q_vector *q_vector) +{ + /* stub */ +} + +/** + * iecm_vport_intr_set_new_itr - update the ITR value based on statistics + * @q_vector: structure containing interrupt and ring information + * @itr: structure containing queue performance data + * @q_type: queue type + * + * Stores a new ITR value based on packets and byte + * counts during the last interrupt. The advantage of per interrupt + * computation is faster updates and more accurate ITR for the current + * traffic pattern. Constants in this function were computed + * based on theoretical maximum wire speed and thresholds were set based + * on testing data as well as attempting to minimize response time + * while increasing bulk throughput. 
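+ * + * One plausible scheme (illustrative, not the final algorithm): compare the + * packet and byte counts accumulated since the last update against latency and + * bulk thresholds, step the target ITR down for latency-sensitive traffic and + * up for bulk transfers, and let iecm_vport_intr_update_itr_ena_irq() program + * the new value.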
+ */ +static void iecm_vport_intr_set_new_itr(struct iecm_q_vector *q_vector, + struct iecm_itr *itr, + enum virtchnl_queue_type q_type) +{ + /* stub */ +} + +/** + * iecm_vport_intr_update_itr_ena_irq - Update ITR and re-enable MSIX interrupt + * @q_vector: q_vector for which ITR is being updated and interrupt enabled + */ +void iecm_vport_intr_update_itr_ena_irq(struct iecm_q_vector *q_vector) +{ + /* stub */ +} + +/** + * iecm_vport_intr_req_irq - get MSI-X vectors from the OS for the vport + * @vport: main vport structure + * @basename: name for the vector + */ +static int +iecm_vport_intr_req_irq(struct iecm_vport *vport, char *basename) +{ + /* stub */ +} + +/** + * iecm_vport_intr_ena_irq_all - Enable IRQ for the given vport + * @vport: main vport structure + */ +void iecm_vport_intr_ena_irq_all(struct iecm_vport *vport) +{ + /* stub */ +} + +/** + * iecm_vport_intr_deinit - Release all vector associations for the vport + * @vport: main vport structure + */ +void iecm_vport_intr_deinit(struct iecm_vport *vport) +{ + /* stub */ +} + +/** + * iecm_vport_intr_napi_ena_all - Enable NAPI for all q_vectors in the vport + * @vport: main vport structure + */ +static void +iecm_vport_intr_napi_ena_all(struct iecm_vport *vport) +{ + /* stub */ +} + +/** + * iecm_tx_splitq_clean_all- Clean completion queues + * @q_vec: queue vector + * @budget: Used to determine if we are in netpoll + * + * Returns false if clean is not complete else returns true + */ +static inline bool +iecm_tx_splitq_clean_all(struct iecm_q_vector *q_vec, int budget) +{ + /* stub */ +} + +/** + * iecm_rx_splitq_clean_all- Clean completion queues + * @q_vec: queue vector + * @budget: Used to determine if we are in netpoll + * @cleaned: returns number of packets cleaned + * + * Returns false if clean is not complete else returns true + */ +static inline bool +iecm_rx_splitq_clean_all(struct iecm_q_vector *q_vec, int budget, + int *cleaned) +{ + /* stub */ +} + +/** + * iecm_vport_splitq_napi_poll - NAPI handler + * @napi: struct from which you get q_vector + * @budget: budget provided by stack + */ +int iecm_vport_splitq_napi_poll(struct napi_struct *napi, int budget) +{ + /* stub */ +} + +/** + * iecm_vport_intr_map_vector_to_qs - Map vectors to queues + * @vport: virtual port + * + * Mapping for vectors to queues + */ +void iecm_vport_intr_map_vector_to_qs(struct iecm_vport *vport) +{ + /* stub */ +} + +/** + * iecm_vport_intr_init_vec_idx - Initialize the vector indexes + * @vport: virtual port + * + * Initialize vector indexes with values returned over mailbox + */ +static int iecm_vport_intr_init_vec_idx(struct iecm_vport *vport) +{ + /* stub */ +} + +/** + * iecm_vport_intr_alloc - Allocate memory for interrupt vectors + * @vport: virtual port + * + * We allocate one q_vector per queue interrupt. If allocation fails we + * return -ENOMEM. 
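+ * + * Minimal sketch (assumptions, not the final code): kcalloc() the q_vector + * array, set each vector's index and default ITR settings, and register the + * NAPI poll routine (e.g. iecm_vport_splitq_napi_poll) for every q_vector with + * netif_napi_add().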
+ */ +int iecm_vport_intr_alloc(struct iecm_vport *vport) +{ + /* stub */ +} + +/** + * iecm_vport_intr_init - Setup all vectors for the given vport + * @vport: virtual port + * + * Returns 0 on success or negative on failure + */ +int iecm_vport_intr_init(struct iecm_vport *vport) +{ + /* stub */ +} +EXPORT_SYMBOL(iecm_vport_calc_num_q_vec); + +/** + * iecm_config_rss - Prepare for RSS + * @vport: virtual port + * + * Return 0 on success, negative on failure + */ +int iecm_config_rss(struct iecm_vport *vport) +{ + /* stub */ +} + +/** + * iecm_get_rx_qid_list - Create a list of RX QIDs + * @vport: virtual port + * + * qid_list is created and freed by the caller + */ +void iecm_get_rx_qid_list(struct iecm_vport *vport, u16 *qid_list) +{ + /* stub */ +} + +/** + * iecm_fill_dflt_rss_lut - Fill the indirection table with the default values + * @vport: virtual port structure + * @qid_list: List of the RX qid's + * + * qid_list is created and freed by the caller + */ +void iecm_fill_dflt_rss_lut(struct iecm_vport *vport, u16 *qid_list) +{ + /* stub */ +} + +/** + * iecm_init_rss - Prepare for RSS + * @vport: virtual port + * + * Return 0 on success, negative on failure + */ +int iecm_init_rss(struct iecm_vport *vport) +{ + /* stub */ +} + +/** + * iecm_deinit_rss - Prepare for RSS + * @vport: virtual port + * + */ +void iecm_deinit_rss(struct iecm_vport *vport) +{ + /* stub */ +} diff --git a/drivers/net/ethernet/intel/iecm/iecm_virtchnl.c b/drivers/net/ethernet/intel/iecm/iecm_virtchnl.c new file mode 100644 index 000000000000..271009350503 --- /dev/null +++ b/drivers/net/ethernet/intel/iecm/iecm_virtchnl.c @@ -0,0 +1,570 @@ +// SPDX-License-Identifier: GPL-2.0-only +/* Copyright (C) 2020 Intel Corporation */ + +#include + +/** + * iecm_recv_event_msg - Receive virtchnl event message + * @vport: virtual port structure + * + * Receive virtchnl event message + */ +void iecm_recv_event_msg(struct iecm_vport *vport) +{ + /* stub */ +} + +/** + * iecm_mb_clean - Reclaim the send mailbox queue entries + * @adapter: Driver specific private structure + * + * Reclaim the send mailbox queue entries to be used to send further messages + * + * Returns success or failure + */ +enum iecm_status +iecm_mb_clean(struct iecm_adapter *adapter) +{ + /* stub */ +} + +/** + * iecm_send_mb_msg - Send message over mailbox + * @adapter: Driver specific private structure + * @op: virtchnl opcode + * @msg_size: size of the payload + * @msg: pointer to buffer holding the payload + * + * Will prepare the control queue message and initiates the send API + * + * Returns success or failure + */ +enum iecm_status +iecm_send_mb_msg(struct iecm_adapter *adapter, enum virtchnl_ops op, + u16 msg_size, u8 *msg) +{ + /* stub */ +} +EXPORT_SYMBOL(iecm_send_mb_msg); + +/** + * iecm_recv_mb_msg - Receive message over mailbox + * @adapter: Driver specific private structure + * @op: virtchnl operation code + * @msg: Received message holding buffer + * @msg_size: message size + * + * Will receive control queue message and posts the receive buffer + */ +enum iecm_status +iecm_recv_mb_msg(struct iecm_adapter *adapter, enum virtchnl_ops op, + void *msg, int msg_size) +{ + /* stub */ +} +EXPORT_SYMBOL(iecm_recv_mb_msg); + +/** + * iecm_send_ver_msg - send virtchnl version message + * @adapter: Driver specific private structure + * + * Send virtchnl version message + */ +static enum iecm_status +iecm_send_ver_msg(struct iecm_adapter *adapter) +{ + /* stub */ +} + +/** + * iecm_recv_ver_msg - Receive virtchnl version message + * @adapter: 
Driver specific private structure + * + * Receive virtchnl version message + */ +static enum iecm_status +iecm_recv_ver_msg(struct iecm_adapter *adapter) +{ + /* stub */ +} + +/** + * iecm_send_get_caps_msg - Send virtchnl get capabilities message + * @adapter: Driver specific private structure + * + * send virtchnl get capabilities message + */ +enum iecm_status +iecm_send_get_caps_msg(struct iecm_adapter *adapter) +{ + /* stub */ +} +EXPORT_SYMBOL(iecm_send_get_caps_msg); + +/** + * iecm_recv_get_caps_msg - Receive virtchnl get capabilities message + * @adapter: Driver specific private structure + * + * Receive virtchnl get capabilities message + */ +static enum iecm_status +iecm_recv_get_caps_msg(struct iecm_adapter *adapter) +{ + /* stub */ +} + +/** + * iecm_send_create_vport_msg - Send virtchnl create vport message + * @adapter: Driver specific private structure + * + * send virtchnl create vport message + * + * Returns success or failure + */ +static enum iecm_status +iecm_send_create_vport_msg(struct iecm_adapter *adapter) +{ + /* stub */ +} + +/** + * iecm_recv_create_vport_msg - Receive virtchnl create vport message + * @adapter: Driver specific private structure + * @vport_id: Virtual port identifier + * + * Receive virtchnl create vport message + * + * Returns success or failure + */ +static enum iecm_status +iecm_recv_create_vport_msg(struct iecm_adapter *adapter, + int *vport_id) +{ + /* stub */ +} + +/** + * iecm_wait_for_event - wait for virtchnl response + * @adapter: Driver private data structure + * @state: check on state upon timeout after 500ms + * @err_check: check if this specific error bit is set + * + * checks if state is set upon expiry of timeout + * + * Returns success or failure + */ +enum iecm_status +iecm_wait_for_event(struct iecm_adapter *adapter, + enum iecm_vport_vc_state state, + enum iecm_vport_vc_state err_check) +{ + /* stub */ +} +EXPORT_SYMBOL(iecm_wait_for_event); + +/** + * iecm_send_destroy_vport_msg - Send virtchnl destroy vport message + * @vport: virtual port data structure + * + * send virtchnl destroy vport message + */ +enum iecm_status +iecm_send_destroy_vport_msg(struct iecm_vport *vport) +{ + /* stub */ +} + +/** + * iecm_send_enable_vport_msg - Send virtchnl enable vport message + * @vport: virtual port data structure + * + * send enable vport virtchnl message + */ +enum iecm_status +iecm_send_enable_vport_msg(struct iecm_vport *vport) +{ + /* stub */ +} + +/** + * iecm_send_disable_vport_msg - Send virtchnl disable vport message + * @vport: virtual port data structure + * + * send disable vport virtchnl message + */ +enum iecm_status +iecm_send_disable_vport_msg(struct iecm_vport *vport) +{ + /* stub */ +} + +/** + * iecm_send_config_tx_queues_msg - Send virtchnl config Tx queues message + * @vport: virtual port data structure + * + * send config Tx queues virtchnl message + * + * Returns success or failure + */ +enum iecm_status +iecm_send_config_tx_queues_msg(struct iecm_vport *vport) +{ + /* stub */ +} + +/** + * iecm_send_config_rx_queues_msg - Send virtchnl config Rx queues message + * @vport: virtual port data structure + * + * send config Rx queues virtchnl message + * + * Returns success or failure + */ +enum iecm_status +iecm_send_config_rx_queues_msg(struct iecm_vport *vport) +{ + /* stub */ +} + +/** + * iecm_send_ena_dis_queues_msg - Send virtchnl enable or disable + * queues message + * @vport: virtual port data structure + * @vc_op: virtchnl op code to send + * + * send enable or disable queues virtchnl message + * + * 
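Expected flow (sketch, consistent with the full implementation added later in + * this series): build a virtchnl_del_ena_dis_queues message with one + * virtchnl_queue_chunk per Tx, completion, Rx and buffer queue, send it with + * iecm_send_mb_msg() using the requested opcode, and let the callers wait for + * the completion event. + * + *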
Returns success or failure + */ +static enum iecm_status +iecm_send_ena_dis_queues_msg(struct iecm_vport *vport, + enum virtchnl_ops vc_op) +{ + /* stub */ +} + +/** + * iecm_send_map_unmap_queue_vector_msg - Send virtchnl map or unmap queue + * vector message + * @vport: virtual port data structure + * @map: true for map and false for unmap + * + * send map or unmap queue vector virtchnl message + * + * Returns success or failure + */ +static enum iecm_status +iecm_send_map_unmap_queue_vector_msg(struct iecm_vport *vport, + bool map) +{ + /* stub */ +} + +/** + * iecm_send_enable_queues_msg - send enable queues virtchnl message + * @vport: Virtual port private data structure + * + * Will send enable queues virtchnl message + */ +static enum iecm_status +iecm_send_enable_queues_msg(struct iecm_vport *vport) +{ + /* stub */ +} + +/** + * iecm_send_disable_queues_msg - send disable queues virtchnl message + * @vport: Virtual port private data structure + * + * Will send disable queues virtchnl message + */ +static enum iecm_status +iecm_send_disable_queues_msg(struct iecm_vport *vport) +{ + /* stub */ +} + +/** + * iecm_send_delete_queues_msg - send delete queues virtchnl message + * @vport: Virtual port private data structure + * + * Will send delete queues virtchnl message + */ +enum iecm_status +iecm_send_delete_queues_msg(struct iecm_vport *vport) +{ + /* stub */ +} + +/** + * iecm_send_config_queues_msg - Send config queues virtchnl message + * @vport: Virtual port private data structure + * + * Will send config queues virtchnl message + */ +static enum iecm_status +iecm_send_config_queues_msg(struct iecm_vport *vport) +{ + /* stub */ +} + +/** + * iecm_send_add_queues_msg - Send virtchnl add queues message + * @vport: Virtual port private data structure + * @num_tx_q: number of transmit queues + * @num_complq: number of transmit completion queues + * @num_rx_q: number of receive queues + * @num_rx_bufq: number of receive buffer queues + * + * Returns success or failure + */ +enum iecm_status +iecm_send_add_queues_msg(struct iecm_vport *vport, u16 num_tx_q, + u16 num_complq, u16 num_rx_q, u16 num_rx_bufq) +{ + /* stub */ +} + +/** + * iecm_send_get_stats_msg - Send virtchnl get statistics message + * @vport: virtual port data structure + * + * Returns success or failure + */ +enum iecm_status +iecm_send_get_stats_msg(struct iecm_vport *vport) +{ + /* stub */ +} + +/** + * iecm_send_get_set_rss_hash_msg - Send set or get RSS hash message + * @vport: virtual port data structure + * @get: flag to get or set RSS hash + * + * Returns success or failure + */ +enum iecm_status +iecm_send_get_set_rss_hash_msg(struct iecm_vport *vport, bool get) +{ + /* stub */ +} + +/** + * iecm_send_get_set_rss_lut_msg - Send virtchnl get or set RSS lut message + * @vport: virtual port data structure + * @get: flag to set or get RSS look up table + * + * Returns success or failure + */ +enum iecm_status +iecm_send_get_set_rss_lut_msg(struct iecm_vport *vport, bool get) +{ + /* stub */ +} + +/** + * iecm_send_get_set_rss_key_msg - Send virtchnl get or set RSS key message + * @vport: virtual port data structure + * @get: flag to set or get RSS key + * + * Returns success or failure + */ +enum iecm_status +iecm_send_get_set_rss_key_msg(struct iecm_vport *vport, bool get) +{ + /* stub */ +} + +/** + * iecm_send_get_rx_ptype_msg - Send virtchnl get Rx ptype message + * @vport: virtual port data structure + * + * Returns success or failure + */ +enum iecm_status
iecm_send_get_rx_ptype_msg(struct iecm_vport *vport) +{ + /* stub */ +} + +/** + * iecm_find_ctlq - Given a type and id, find ctlq info + * @hw: pointer to hw struct + * @type: type of ctrlq to find + * @id: ctlq id to find + * + * Returns pointer to found ctlq info struct, NULL otherwise. + */ +static struct iecm_ctlq_info *iecm_find_ctlq(struct iecm_hw *hw, + enum iecm_ctlq_type type, int id) +{ + /* stub */ +} + +/** + * iecm_deinit_dflt_mbx - De-initialize mailbox + * @adapter: adapter info struct + */ +void iecm_deinit_dflt_mbx(struct iecm_adapter *adapter) +{ + /* stub */ +} + +/** + * iecm_init_dflt_mbx - Setup default mailbox parameters and make request + * @adapter: adapter info struct + * + * Returns 0 on success, negative otherwise + */ +enum iecm_status iecm_init_dflt_mbx(struct iecm_adapter *adapter) +{ + /* stub */ +} + +/** + * iecm_vport_params_buf_alloc - Allocate memory for mailbox resources + * @adapter: Driver specific private data structure + * + * Will alloc memory to hold the vport parameters received on mailbox + */ +int iecm_vport_params_buf_alloc(struct iecm_adapter *adapter) +{ + /* stub */ +} + +/** + * iecm_vport_params_buf_rel - Release memory for mailbox resources + * @adapter: Driver specific private data structure + * + * Will release memory to hold the vport parameters received on mailbox + */ +void iecm_vport_params_buf_rel(struct iecm_adapter *adapter) +{ + /* stub */ +} + +/** + * iecm_vc_core_init - Initialize mailbox and get resources + * @adapter: Driver specific private structure + * @vport_id: Virtual port identifier + * + * Will check if HW is ready with reset complete. Initializes the mailbox and + * communicates with the master to get all the default vport parameters. + */ +int iecm_vc_core_init(struct iecm_adapter *adapter, int *vport_id) +{ + /* stub */ +} +EXPORT_SYMBOL(iecm_vc_core_init); + +/** + * iecm_vport_init - Initialize virtual port + * @vport: virtual port to be initialized + * @vport_id: Unique identification number of vport + * + * Will initialize vport with the info received through MB earlier + */ +static void iecm_vport_init(struct iecm_vport *vport, int vport_id) +{ + /* stub */ +} + +/** + * iecm_vport_get_vec_ids - Initialize vector id from Mailbox parameters + * @vecids: Array of vector ids + * @num_vecids: number of vector ids + * @chunks: vector ids received over mailbox + * + * Will initialize all vector ids with ids received as mailbox parameters + * Returns number of ids filled + */ +int +iecm_vport_get_vec_ids(u16 *vecids, int num_vecids, + struct virtchnl_vector_chunks *chunks) +{ + /* stub */ +} + +/** + * iecm_vport_get_queue_ids - Initialize queue id from Mailbox parameters + * @qids: Array of queue ids + * @num_qids: number of queue ids + * @q_type: queue type + * @chunks: queue ids received over mailbox + * + * Will initialize all queue ids with ids received as mailbox parameters + * Returns number of ids filled + */ +static int +iecm_vport_get_queue_ids(u16 *qids, int num_qids, + enum virtchnl_queue_type q_type, + struct virtchnl_queue_chunks *chunks) +{ + /* stub */ +} + +/** + * __iecm_vport_queue_ids_init - Initialize queue ids from Mailbox parameters + * @vport: virtual port for which the queue ids are initialized + * @qids: queue ids + * @num_qids: number of queue ids + * @q_type: type of queue + * + * Will initialize all queue ids with ids received as mailbox + * parameters. Returns number of queue ids initialized.
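+ * + * Illustrative behaviour: walk the vport's queue groups that carry queues of + * @q_type and copy qids[0..num_qids - 1] into each queue's q_id field, stopping + * once either the queues or the supplied ids are exhausted.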
+ */ +static int +__iecm_vport_queue_ids_init(struct iecm_vport *vport, u16 *qids, + int num_qids, enum virtchnl_queue_type q_type) +{ + /* stub */ +} + +/** + * iecm_vport_queue_ids_init - Initialize queue ids from Mailbox parameters + * @vport: virtual port for which the queues ids are initialized + * + * Will initialize all queue ids with ids received as mailbox + * parameters. Returns error if all the queues are not initialized + */ +static +enum iecm_status iecm_vport_queue_ids_init(struct iecm_vport *vport) +{ + /* stub */ +} + +/** + * iecm_vport_adjust_qs - Adjust to new requested queues + * @vport: virtual port data struct + * + * Renegotiate queues + */ +enum iecm_status iecm_vport_adjust_qs(struct iecm_vport *vport) +{ + /* stub */ +} + +/** + * iecm_is_capability_ena - Default implementation of capability checking + * @adapter: Private data struct + * @flag: flag to check + * + * Return true if capability is supported, false otherwise + */ +static bool iecm_is_capability_ena(struct iecm_adapter *adapter, u64 flag) +{ + /* stub */ +} + +/** + * iecm_vc_ops_init - Initialize virtchnl common API + * @adapter: Driver specific private structure + * + * Initialize the function pointers with the extended feature set functions + * as APF will deal only with new set of opcodes. + */ +void iecm_vc_ops_init(struct iecm_adapter *adapter) +{ + /* stub */ +} +EXPORT_SYMBOL(iecm_vc_ops_init); From patchwork Thu Jun 18 05:13:34 2020 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Jeff Kirsher X-Patchwork-Id: 217624 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org X-Spam-Level: X-Spam-Status: No, score=-9.7 required=3.0 tests=HEADER_FROM_DIFFERENT_DOMAINS, INCLUDES_PATCH, MAILING_LIST_MULTI, SIGNED_OFF_BY, SPF_HELO_NONE, SPF_PASS, URIBL_BLOCKED, USER_AGENT_GIT autolearn=ham autolearn_force=no version=3.4.0 Received: from mail.kernel.org (mail.kernel.org [198.145.29.99]) by smtp.lore.kernel.org (Postfix) with ESMTP id 9F570C433DF for ; Thu, 18 Jun 2020 05:20:55 +0000 (UTC) Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by mail.kernel.org (Postfix) with ESMTP id 7AC8020899 for ; Thu, 18 Jun 2020 05:20:55 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S1726909AbgFRFUy (ORCPT ); Thu, 18 Jun 2020 01:20:54 -0400 Received: from mga01.intel.com ([192.55.52.88]:59432 "EHLO mga01.intel.com" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S1725912AbgFRFUx (ORCPT ); Thu, 18 Jun 2020 01:20:53 -0400 IronPort-SDR: rGu7ZLDOQu//kT+4gyhLiC3fNeGuDVJJcAMw2u3NW2flcaidkZ/5CKOlQ+UjYoltBbRur2nsfL RJRxpRbHQ23Q== X-IronPort-AV: E=McAfee;i="6000,8403,9655"; a="160505313" X-IronPort-AV: E=Sophos;i="5.73,525,1583222400"; d="scan'208";a="160505313" X-Amp-Result: SKIPPED(no attachment in message) X-Amp-File-Uploaded: False Received: from fmsmga008.fm.intel.com ([10.253.24.58]) by fmsmga101.fm.intel.com with ESMTP/TLS/ECDHE-RSA-AES256-GCM-SHA384; 17 Jun 2020 22:13:50 -0700 IronPort-SDR: yjOK5C92848D/udjI2964taELtKbXD5JYxEFUYK7TN8QG0UUYhIBuBB2iObA+EZm7AjmnRzi85 gC8ehNUdndxQ== X-ExtLoop1: 1 X-IronPort-AV: E=Sophos;i="5.73,525,1583222400"; d="scan'208";a="263495590" Received: from jtkirshe-desk1.jf.intel.com ([134.134.177.86]) by fmsmga008.fm.intel.com with ESMTP; 17 Jun 2020 22:13:50 -0700 From: Jeff Kirsher To: davem@davemloft.net Cc: Alice Michael , netdev@vger.kernel.org, nhorman@redhat.com, sassmann@redhat.com, Alan Brady , 
Phani Burra , Joshua Hay , Madhu Chittim , Pavan Kumar Linga , Donald Skidmore , Jesse Brandeburg , Sridhar Samudrala , Jeff Kirsher Subject: [net-next 05/15] iecm: Add basic netdevice functionality Date: Wed, 17 Jun 2020 22:13:34 -0700 Message-Id: <20200618051344.516587-6-jeffrey.t.kirsher@intel.com> X-Mailer: git-send-email 2.26.2 In-Reply-To: <20200618051344.516587-1-jeffrey.t.kirsher@intel.com> References: <20200618051344.516587-1-jeffrey.t.kirsher@intel.com> MIME-Version: 1.0 Sender: netdev-owner@vger.kernel.org Precedence: bulk List-ID: X-Mailing-List: netdev@vger.kernel.org From: Alice Michael This implements probe, interface up/down, and netdev_ops. Signed-off-by: Alice Michael Signed-off-by: Alan Brady Signed-off-by: Phani Burra Signed-off-by: Joshua Hay Signed-off-by: Madhu Chittim Signed-off-by: Pavan Kumar Linga Reviewed-by: Donald Skidmore Reviewed-by: Jesse Brandeburg Reviewed-by: Sridhar Samudrala Signed-off-by: Jeff Kirsher --- drivers/net/ethernet/intel/iecm/iecm_lib.c | 404 +++++++++++++++++- drivers/net/ethernet/intel/iecm/iecm_main.c | 7 +- drivers/net/ethernet/intel/iecm/iecm_txrx.c | 6 +- .../net/ethernet/intel/iecm/iecm_virtchnl.c | 73 +++- 4 files changed, 467 insertions(+), 23 deletions(-) diff --git a/drivers/net/ethernet/intel/iecm/iecm_lib.c b/drivers/net/ethernet/intel/iecm/iecm_lib.c index 57a20204a7c8..6023d0c727fb 100644 --- a/drivers/net/ethernet/intel/iecm/iecm_lib.c +++ b/drivers/net/ethernet/intel/iecm/iecm_lib.c @@ -24,7 +24,17 @@ static void iecm_mb_intr_rel_irq(struct iecm_adapter *adapter) */ static void iecm_intr_rel(struct iecm_adapter *adapter) { - /* stub */ + if (!adapter->msix_entries) + return; + clear_bit(__IECM_MB_INTR_MODE, adapter->flags); + clear_bit(__IECM_MB_INTR_TRIGGER, adapter->flags); + iecm_mb_intr_rel_irq(adapter); + + pci_free_irq_vectors(adapter->pdev); + kfree(adapter->msix_entries); + adapter->msix_entries = NULL; + kfree(adapter->req_vec_chunks); + adapter->req_vec_chunks = NULL; } /** @@ -96,7 +106,53 @@ void iecm_intr_distribute(struct iecm_adapter *adapter) */ static int iecm_intr_req(struct iecm_adapter *adapter) { - /* stub */ + int min_vectors, max_vectors, err = 0; + unsigned int vector; + int num_vecs; + int v_actual; + + num_vecs = adapter->vports[0]->num_q_vectors + + IECM_MAX_NONQ_VEC + IECM_MAX_RDMA_VEC; + + min_vectors = IECM_MIN_VEC; +#define IECM_MAX_EVV_MAPPED_VEC 16 + max_vectors = min(num_vecs, IECM_MAX_EVV_MAPPED_VEC); + + v_actual = pci_alloc_irq_vectors(adapter->pdev, min_vectors, + max_vectors, PCI_IRQ_MSIX); + if (v_actual < 0) { + dev_err(&adapter->pdev->dev, "Failed to allocate MSIX vectors: %d\n", + v_actual); + return v_actual; + } + + adapter->msix_entries = kcalloc(v_actual, sizeof(struct msix_entry), + GFP_KERNEL); + + if (!adapter->msix_entries) { + pci_free_irq_vectors(adapter->pdev); + return -ENOMEM; + } + + for (vector = 0; vector < v_actual; vector++) { + adapter->msix_entries[vector].entry = vector; + adapter->msix_entries[vector].vector = + pci_irq_vector(adapter->pdev, vector); + } + adapter->num_msix_entries = v_actual; + adapter->num_req_msix = num_vecs; + + iecm_intr_distribute(adapter); + + err = iecm_mb_intr_init(adapter); + if (err) + goto intr_rel; + iecm_mb_irq_enable(adapter); + return err; + +intr_rel: + iecm_intr_rel(adapter); + return err; } /** @@ -118,7 +174,21 @@ static int iecm_cfg_netdev(struct iecm_vport *vport) */ static int iecm_cfg_hw(struct iecm_adapter *adapter) { - /* stub */ + struct pci_dev *pdev = adapter->pdev; + struct iecm_hw *hw = &adapter->hw; + + 
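/* cache the BAR 0 length and map the register space into hw->hw_addr */ +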
hw->hw_addr_len = pci_resource_len(pdev, 0); + hw->hw_addr = ioremap(pci_resource_start(pdev, 0), hw->hw_addr_len); + + if (!hw->hw_addr) + return -EIO; + + hw->back = adapter; + hw->bus.device = PCI_SLOT(pdev->devfn); + hw->bus.func = PCI_FUNC(pdev->devfn); + hw->bus.bus_id = pdev->bus->number; + + return 0; } /** @@ -132,7 +202,22 @@ static int iecm_cfg_hw(struct iecm_adapter *adapter) */ static int iecm_get_free_slot(void *array, int size, int curr) { - /* stub */ + int **tmp_array = (int **)array; + int next; + + if (curr < (size - 1) && !tmp_array[curr + 1]) { + next = curr + 1; + } else { + int i = 0; + + while ((i < size) && (tmp_array[i])) + i++; + if (i == size) + next = IECM_NO_FREE_SLOT; + else + next = i; + } + return next; } /** @@ -141,7 +226,9 @@ static int iecm_get_free_slot(void *array, int size, int curr) */ struct iecm_vport *iecm_netdev_to_vport(struct net_device *netdev) { - /* stub */ + struct iecm_netdev_priv *np = netdev_priv(netdev); + + return np->vport; } /** @@ -150,7 +237,9 @@ struct iecm_vport *iecm_netdev_to_vport(struct net_device *netdev) */ struct iecm_adapter *iecm_netdev_to_adapter(struct net_device *netdev) { - /* stub */ + struct iecm_netdev_priv *np = netdev_priv(netdev); + + return np->vport->adapter; } /** @@ -185,7 +274,22 @@ static int iecm_stop(struct net_device *netdev) */ int iecm_vport_rel(struct iecm_vport *vport) { - /* stub */ + struct iecm_adapter *adapter; + + if (!vport->adapter) + return -ENODEV; + adapter = vport->adapter; + + iecm_vport_stop(vport); + iecm_deinit_rss(vport); + unregister_netdev(vport->netdev); + free_netdev(vport->netdev); + vport->netdev = NULL; + if (adapter->dev_ops.vc_ops.destroy_vport) + adapter->dev_ops.vc_ops.destroy_vport(vport); + kfree(vport); + + return 0; } /** @@ -194,7 +298,24 @@ int iecm_vport_rel(struct iecm_vport *vport) */ static void iecm_vport_rel_all(struct iecm_adapter *adapter) { - /* stub */ + int err, i; + + if (!adapter->vports) + return; + + for (i = 0; i < adapter->num_alloc_vport; i++) { + if (!adapter->vports[i]) + continue; + + err = iecm_vport_rel(adapter->vports[i]); + if (err) + dev_dbg(&adapter->pdev->dev, + "Failed to release adapter->vport[%d], err %d,\n", + i, err); + else + adapter->vports[i] = NULL; + } + adapter->num_alloc_vport = 0; } /** @@ -217,7 +338,47 @@ void iecm_vport_set_hsplit(struct iecm_vport *vport, struct bpf_prog *prog) static struct iecm_vport * iecm_vport_alloc(struct iecm_adapter *adapter, int vport_id) { - /* stub */ + struct iecm_vport *vport = NULL; + + if (adapter->next_vport == IECM_NO_FREE_SLOT) + return vport; + + /* Need to protect the allocation of the vports at the adapter level */ + mutex_lock(&adapter->sw_mutex); + + vport = kzalloc(sizeof(*vport), GFP_KERNEL); + if (!vport) + goto unlock_adapter; + + vport->adapter = adapter; + vport->idx = adapter->next_vport; + vport->compln_clean_budget = IECM_TX_COMPLQ_CLEAN_BUDGET; + adapter->num_alloc_vport++; + adapter->dev_ops.vc_ops.vport_init(vport, vport_id); + + /* Setup default MSIX irq handler for the vport */ + vport->irq_q_handler = iecm_vport_intr_clean_queues; + vport->q_vector_base = IECM_MAX_NONQ_VEC; + + /* fill vport slot in the adapter struct */ + adapter->vports[adapter->next_vport] = vport; + if (iecm_cfg_netdev(vport)) + goto cfg_netdev_fail; + + /* prepare adapter->next_vport for next use */ + adapter->next_vport = iecm_get_free_slot(adapter->vports, + adapter->num_alloc_vport, + adapter->next_vport); + + goto unlock_adapter; + +cfg_netdev_fail: + adapter->vports[adapter->next_vport] = 
NULL; + kfree(vport); + vport = NULL; +unlock_adapter: + mutex_unlock(&adapter->sw_mutex); + return vport; } /** @@ -227,7 +388,22 @@ iecm_vport_alloc(struct iecm_adapter *adapter, int vport_id) */ static void iecm_service_task(struct work_struct *work) { - /* stub */ + struct iecm_adapter *adapter = container_of(work, + struct iecm_adapter, + serv_task.work); + + if (test_bit(__IECM_MB_INTR_MODE, adapter->flags)) { + if (test_and_clear_bit(__IECM_MB_INTR_TRIGGER, + adapter->flags)) { + iecm_recv_mb_msg(adapter, VIRTCHNL_OP_UNKNOWN, NULL, 0); + iecm_mb_irq_enable(adapter); + } + } else { + iecm_recv_mb_msg(adapter, VIRTCHNL_OP_UNKNOWN, NULL, 0); + } + + queue_delayed_work(adapter->serv_wq, &adapter->serv_task, + msecs_to_jiffies(300)); } /** @@ -261,7 +437,41 @@ static int iecm_vport_open(struct iecm_vport *vport) */ static void iecm_init_task(struct work_struct *work) { - /* stub */ + struct iecm_adapter *adapter = container_of(work, + struct iecm_adapter, + init_task.work); + struct iecm_vport *vport; + struct pci_dev *pdev; + int vport_id, err; + + err = adapter->dev_ops.vc_ops.core_init(adapter, &vport_id); + if (err) + return; + + pdev = adapter->pdev; + vport = iecm_vport_alloc(adapter, vport_id); + if (!vport) { + err = -EFAULT; + dev_err(&pdev->dev, "probe failed on vport setup:%d\n", + err); + return; + } + /* Start the service task before requesting vectors. This will ensure + * vector information response from mailbox is handled + */ + queue_delayed_work(adapter->serv_wq, &adapter->serv_task, + msecs_to_jiffies(5 * (pdev->devfn & 0x07))); + err = iecm_intr_req(adapter); + if (err) { + dev_err(&pdev->dev, "failed to enable interrupt vectors: %d\n", + err); + iecm_vport_rel(vport); + return; + } + /* Once state is put into DOWN, driver is ready for dev_open */ + adapter->state = __IECM_DOWN; + if (test_and_clear_bit(__IECM_UP_REQUESTED, adapter->flags)) + iecm_vport_open(vport); } /** @@ -272,7 +482,40 @@ static void iecm_init_task(struct work_struct *work) */ static int iecm_api_init(struct iecm_adapter *adapter) { - /* stub */ + struct iecm_reg_ops *reg_ops = &adapter->dev_ops.reg_ops; + struct pci_dev *pdev = adapter->pdev; + + if (!adapter->dev_ops.reg_ops_init) { + dev_err(&pdev->dev, "Invalid device, register API init not defined.\n"); + return -EINVAL; + } + adapter->dev_ops.reg_ops_init(adapter); + if (!(reg_ops->ctlq_reg_init && reg_ops->vportq_reg_init && + reg_ops->intr_reg_init && reg_ops->mb_intr_reg_init && + reg_ops->reset_reg_init && reg_ops->trigger_reset)) { + dev_err(&pdev->dev, "Invalid device, missing one or more register functions\n"); + return -EINVAL; + } + + if (adapter->dev_ops.vc_ops_init) { + struct iecm_virtchnl_ops *vc_ops; + + adapter->dev_ops.vc_ops_init(adapter); + vc_ops = &adapter->dev_ops.vc_ops; + if (!(vc_ops->core_init && vc_ops->vport_init && + vc_ops->vport_queue_ids_init && vc_ops->get_caps && + vc_ops->config_queues && vc_ops->enable_queues && + vc_ops->disable_queues && vc_ops->irq_map_unmap && + vc_ops->get_set_rss_lut && vc_ops->get_set_rss_hash && + vc_ops->adjust_qs && vc_ops->get_ptype)) { + dev_err(&pdev->dev, "Invalid device, missing one or more virtchnl functions\n"); + return -EINVAL; + } + } else { + iecm_vc_ops_init(adapter); + } + + return 0; } /** @@ -284,7 +527,11 @@ static int iecm_api_init(struct iecm_adapter *adapter) */ void iecm_deinit_task(struct iecm_adapter *adapter) { - /* stub */ + iecm_vport_rel_all(adapter); + cancel_delayed_work_sync(&adapter->serv_task); + iecm_deinit_dflt_mbx(adapter); + 
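/* mailbox is torn down; release the vport parameter buffers and the + * interrupt vectors that were requested at init time + */ +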
iecm_vport_params_buf_rel(adapter); + iecm_intr_rel(adapter); } /** @@ -306,7 +553,13 @@ iecm_init_hard_reset(struct iecm_adapter *adapter) */ static void iecm_vc_event_task(struct work_struct *work) { - /* stub */ + struct iecm_adapter *adapter = container_of(work, + struct iecm_adapter, + vc_event_task.work); + + if (test_bit(__IECM_HR_CORE_RESET, adapter->flags) || + test_bit(__IECM_HR_FUNC_RESET, adapter->flags)) + iecm_init_hard_reset(adapter); } /** @@ -335,7 +588,103 @@ int iecm_probe(struct pci_dev *pdev, const struct pci_device_id __always_unused *ent, struct iecm_adapter *adapter) { - /* stub */ + int err; + + adapter->pdev = pdev; + err = iecm_api_init(adapter); + if (err) { + dev_err(&pdev->dev, "Device API is incorrectly configured\n"); + return err; + } + + err = pcim_iomap_regions(pdev, BIT(IECM_BAR0), pci_name(pdev)); + if (err) { + dev_err(&pdev->dev, "BAR0 I/O map error %d\n", err); + return err; + } + + /* set up for high or low DMA */ + err = dma_set_mask_and_coherent(&pdev->dev, DMA_BIT_MASK(64)); + if (err) + err = dma_set_mask_and_coherent(&pdev->dev, DMA_BIT_MASK(32)); + if (err) { + dev_err(&pdev->dev, "DMA configuration failed: 0x%x\n", err); + return err; + } + + pci_enable_pcie_error_reporting(pdev); + pci_set_master(pdev); + pci_set_drvdata(pdev, adapter); + + adapter->init_wq = + alloc_workqueue("%s", WQ_MEM_RECLAIM, 0, KBUILD_MODNAME); + if (!adapter->init_wq) { + dev_err(&pdev->dev, "Failed to allocate workqueue\n"); + err = -ENOMEM; + goto err_wq_alloc; + } + + adapter->serv_wq = + alloc_workqueue("%s", WQ_MEM_RECLAIM, 0, KBUILD_MODNAME); + if (!adapter->serv_wq) { + dev_err(&pdev->dev, "Failed to allocate workqueue\n"); + err = -ENOMEM; + goto err_mbx_wq_alloc; + } + /* setup msglvl */ + adapter->msg_enable = netif_msg_init(debug, IECM_DFLT_NETIF_M); + + adapter->vports = kcalloc(IECM_MAX_NUM_VPORTS, + sizeof(*adapter->vports), GFP_KERNEL); + if (!adapter->vports) { + err = -ENOMEM; + goto err_vport_alloc; + } + + err = iecm_vport_params_buf_alloc(adapter); + if (err) { + dev_err(&pdev->dev, "Failed to alloc vport params buffer: %d\n", + err); + goto err_mb_res; + } + + err = iecm_cfg_hw(adapter); + if (err) { + dev_err(&pdev->dev, "Failed to configure HW structure for adapter: %d\n", + err); + goto err_cfg_hw; + } + + mutex_init(&adapter->sw_mutex); + mutex_init(&adapter->vc_msg_lock); + mutex_init(&adapter->reset_lock); + init_waitqueue_head(&adapter->vchnl_wq); + + INIT_DELAYED_WORK(&adapter->serv_task, iecm_service_task); + INIT_DELAYED_WORK(&adapter->init_task, iecm_init_task); + INIT_DELAYED_WORK(&adapter->vc_event_task, iecm_vc_event_task); + + mutex_lock(&adapter->reset_lock); + set_bit(__IECM_HR_DRV_LOAD, adapter->flags); + err = iecm_init_hard_reset(adapter); + if (err) { + dev_err(&pdev->dev, "Failed to reset device: %d\n", err); + goto err_mb_init; + } + + return 0; +err_mb_init: +err_cfg_hw: + iecm_vport_params_buf_rel(adapter); +err_mb_res: + kfree(adapter->vports); +err_vport_alloc: + destroy_workqueue(adapter->serv_wq); +err_mbx_wq_alloc: + destroy_workqueue(adapter->init_wq); +err_wq_alloc: + pci_disable_pcie_error_reporting(pdev); + return err; } EXPORT_SYMBOL(iecm_probe); @@ -345,7 +694,22 @@ EXPORT_SYMBOL(iecm_probe); */ void iecm_remove(struct pci_dev *pdev) { - /* stub */ + struct iecm_adapter *adapter = pci_get_drvdata(pdev); + + if (!adapter) + return; + + iecm_deinit_task(adapter); + cancel_delayed_work_sync(&adapter->vc_event_task); + destroy_workqueue(adapter->serv_wq); + destroy_workqueue(adapter->init_wq); + 
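/* all delayed work has been cancelled; free adapter level allocations and + * destroy the locks + */ +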
kfree(adapter->vports); + kfree(adapter->vport_params_recvd); + kfree(adapter->vport_params_reqd); + mutex_destroy(&adapter->sw_mutex); + mutex_destroy(&adapter->vc_msg_lock); + mutex_destroy(&adapter->reset_lock); + pci_disable_pcie_error_reporting(pdev); } EXPORT_SYMBOL(iecm_remove); @@ -355,7 +719,13 @@ EXPORT_SYMBOL(iecm_remove); */ void iecm_shutdown(struct pci_dev *pdev) { - /* stub */ + struct iecm_adapter *adapter; + + adapter = pci_get_drvdata(pdev); + adapter->state = __IECM_REMOVE; + + if (system_state == SYSTEM_POWER_OFF) + pci_set_power_state(pdev, PCI_D3hot); } EXPORT_SYMBOL(iecm_shutdown); diff --git a/drivers/net/ethernet/intel/iecm/iecm_main.c b/drivers/net/ethernet/intel/iecm/iecm_main.c index 0644581fc746..3b6eb44643de 100644 --- a/drivers/net/ethernet/intel/iecm/iecm_main.c +++ b/drivers/net/ethernet/intel/iecm/iecm_main.c @@ -30,7 +30,10 @@ MODULE_PARM_DESC(debug, "netif level (0=none,...,16=all)"); */ static int __init iecm_module_init(void) { - /* stub */ + pr_info("%s\n", iecm_driver_string); + pr_info("%s\n", iecm_copyright); + + return 0; } module_init(iecm_module_init); @@ -42,6 +45,6 @@ module_init(iecm_module_init); */ static void __exit iecm_module_exit(void) { - /* stub */ + pr_info("module unloaded\n"); } module_exit(iecm_module_exit); diff --git a/drivers/net/ethernet/intel/iecm/iecm_txrx.c b/drivers/net/ethernet/intel/iecm/iecm_txrx.c index b4688daa744d..0d684adc15e5 100644 --- a/drivers/net/ethernet/intel/iecm/iecm_txrx.c +++ b/drivers/net/ethernet/intel/iecm/iecm_txrx.c @@ -989,7 +989,11 @@ static int iecm_rx_splitq_clean(struct iecm_queue *rxq, int budget) irqreturn_t iecm_vport_intr_clean_queues(int __always_unused irq, void *data) { - /* stub */ + struct iecm_q_vector *q_vector = (struct iecm_q_vector *)data; + + napi_schedule(&q_vector->napi); + + return IRQ_HANDLED; } /** diff --git a/drivers/net/ethernet/intel/iecm/iecm_virtchnl.c b/drivers/net/ethernet/intel/iecm/iecm_virtchnl.c index 271009350503..7bf7c02f2d6f 100644 --- a/drivers/net/ethernet/intel/iecm/iecm_virtchnl.c +++ b/drivers/net/ethernet/intel/iecm/iecm_virtchnl.c @@ -418,7 +418,47 @@ void iecm_deinit_dflt_mbx(struct iecm_adapter *adapter) */ enum iecm_status iecm_init_dflt_mbx(struct iecm_adapter *adapter) { - /* stub */ + struct iecm_ctlq_create_info ctlq_info[] = { + { + .type = IECM_CTLQ_TYPE_MAILBOX_TX, + .id = IECM_DFLT_MBX_ID, + .len = IECM_DFLT_MBX_Q_LEN, + .buf_size = IECM_DFLT_MBX_BUF_SIZE + }, + { + .type = IECM_CTLQ_TYPE_MAILBOX_RX, + .id = IECM_DFLT_MBX_ID, + .len = IECM_DFLT_MBX_Q_LEN, + .buf_size = IECM_DFLT_MBX_BUF_SIZE + } + }; + struct iecm_hw *hw = &adapter->hw; + enum iecm_status ret; + + adapter->dev_ops.reg_ops.ctlq_reg_init(ctlq_info); + +#define NUM_Q 2 + ret = iecm_ctlq_init(hw, NUM_Q, ctlq_info); + if (ret) + goto init_mbx_done; + + hw->asq = iecm_find_ctlq(hw, IECM_CTLQ_TYPE_MAILBOX_TX, + IECM_DFLT_MBX_ID); + hw->arq = iecm_find_ctlq(hw, IECM_CTLQ_TYPE_MAILBOX_RX, + IECM_DFLT_MBX_ID); + + if (!hw->asq || !hw->arq) { + iecm_ctlq_deinit(hw); + ret = IECM_ERR_CTLQ_ERROR; + } + adapter->state = __IECM_STARTUP; + /* Skew the delay for init tasks for each function based on fn number + * to prevent every function from making the same call simultaneously. 
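+ * For example, with the 5 ms per-function skew below, PCI function 0 queues + * its init task immediately while function 7 waits 35 ms.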
+ */ + queue_delayed_work(adapter->init_wq, &adapter->init_task, + msecs_to_jiffies(5 * (adapter->pdev->devfn & 0x07))); +init_mbx_done: + return ret; } /** @@ -440,7 +480,15 @@ int iecm_vport_params_buf_alloc(struct iecm_adapter *adapter) */ void iecm_vport_params_buf_rel(struct iecm_adapter *adapter) { - /* stub */ + int i = 0; + + for (i = 0; i < IECM_MAX_NUM_VPORTS; i++) { + kfree(adapter->vport_params_recvd[i]); + kfree(adapter->vport_params_reqd[i]); + } + + kfree(adapter->caps); + kfree(adapter->config_data.req_qs_chunks); } /** @@ -565,6 +613,25 @@ static bool iecm_is_capability_ena(struct iecm_adapter *adapter, u64 flag) */ void iecm_vc_ops_init(struct iecm_adapter *adapter) { - /* stub */ + adapter->dev_ops.vc_ops.core_init = iecm_vc_core_init; + adapter->dev_ops.vc_ops.vport_init = iecm_vport_init; + adapter->dev_ops.vc_ops.vport_queue_ids_init = + iecm_vport_queue_ids_init; + adapter->dev_ops.vc_ops.get_caps = iecm_send_get_caps_msg; + adapter->dev_ops.vc_ops.is_cap_ena = iecm_is_capability_ena; + adapter->dev_ops.vc_ops.config_queues = iecm_send_config_queues_msg; + adapter->dev_ops.vc_ops.enable_queues = iecm_send_enable_queues_msg; + adapter->dev_ops.vc_ops.disable_queues = iecm_send_disable_queues_msg; + adapter->dev_ops.vc_ops.irq_map_unmap = + iecm_send_map_unmap_queue_vector_msg; + adapter->dev_ops.vc_ops.enable_vport = iecm_send_enable_vport_msg; + adapter->dev_ops.vc_ops.disable_vport = iecm_send_disable_vport_msg; + adapter->dev_ops.vc_ops.destroy_vport = iecm_send_destroy_vport_msg; + adapter->dev_ops.vc_ops.get_ptype = iecm_send_get_rx_ptype_msg; + adapter->dev_ops.vc_ops.get_set_rss_lut = iecm_send_get_set_rss_lut_msg; + adapter->dev_ops.vc_ops.get_set_rss_hash = + iecm_send_get_set_rss_hash_msg; + adapter->dev_ops.vc_ops.adjust_qs = iecm_vport_adjust_qs; + adapter->dev_ops.vc_ops.recv_mbx_msg = NULL; } EXPORT_SYMBOL(iecm_vc_ops_init); From patchwork Thu Jun 18 05:13:36 2020 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Jeff Kirsher X-Patchwork-Id: 217630 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org X-Spam-Level: X-Spam-Status: No, score=-9.7 required=3.0 tests=HEADER_FROM_DIFFERENT_DOMAINS, INCLUDES_PATCH, MAILING_LIST_MULTI, SIGNED_OFF_BY, SPF_HELO_NONE, SPF_PASS, URIBL_BLOCKED, USER_AGENT_GIT autolearn=ham autolearn_force=no version=3.4.0 Received: from mail.kernel.org (mail.kernel.org [198.145.29.99]) by smtp.lore.kernel.org (Postfix) with ESMTP id CDA86C433DF for ; Thu, 18 Jun 2020 05:14:01 +0000 (UTC) Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by mail.kernel.org (Postfix) with ESMTP id 8C5C721852 for ; Thu, 18 Jun 2020 05:14:01 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S1727010AbgFRFOA (ORCPT ); Thu, 18 Jun 2020 01:14:00 -0400 Received: from mga03.intel.com ([134.134.136.65]:25340 "EHLO mga03.intel.com" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S1726971AbgFRFOA (ORCPT ); Thu, 18 Jun 2020 01:14:00 -0400 IronPort-SDR: JUgpbx7EdxhDzVaVO+AcW256LdU+uKYj1lPwggUUrw7Df0NR0znphROW3lV7pODGPcFOy3bc6y a6dfUBaRssQw== X-IronPort-AV: E=McAfee;i="6000,8403,9655"; a="142378052" X-IronPort-AV: E=Sophos;i="5.73,525,1583222400"; d="scan'208";a="142378052" X-Amp-Result: SKIPPED(no attachment in message) X-Amp-File-Uploaded: False Received: from fmsmga008.fm.intel.com ([10.253.24.58]) by orsmga103.jf.intel.com with ESMTP/TLS/ECDHE-RSA-AES256-GCM-SHA384; 17 Jun 
2020 22:13:51 -0700 IronPort-SDR: KLFGwplpSnSKZmP8oMIAy3SZyjRg2HYeNdjC9yua+uEIuwiqLBnQJgQ4Mq//1mdcbXTwx+YymX mS4ttNso1bTw== X-ExtLoop1: 1 X-IronPort-AV: E=Sophos;i="5.73,525,1583222400"; d="scan'208";a="263495596" Received: from jtkirshe-desk1.jf.intel.com ([134.134.177.86]) by fmsmga008.fm.intel.com with ESMTP; 17 Jun 2020 22:13:50 -0700 From: Jeff Kirsher To: davem@davemloft.net Cc: Alice Michael , netdev@vger.kernel.org, nhorman@redhat.com, sassmann@redhat.com, Alan Brady , Phani Burra , Joshua Hay , Madhu Chittim , Pavan Kumar Linga , Donald Skidmore , Jesse Brandeburg , Sridhar Samudrala , Jeff Kirsher Subject: [net-next 07/15] iecm: Implement virtchnl commands Date: Wed, 17 Jun 2020 22:13:36 -0700 Message-Id: <20200618051344.516587-8-jeffrey.t.kirsher@intel.com> X-Mailer: git-send-email 2.26.2 In-Reply-To: <20200618051344.516587-1-jeffrey.t.kirsher@intel.com> References: <20200618051344.516587-1-jeffrey.t.kirsher@intel.com> MIME-Version: 1.0 Sender: netdev-owner@vger.kernel.org Precedence: bulk List-ID: X-Mailing-List: netdev@vger.kernel.org From: Alice Michael Implement various virtchnl commands that enable communication with hardware. Signed-off-by: Alice Michael Signed-off-by: Alan Brady Signed-off-by: Phani Burra Signed-off-by: Joshua Hay Signed-off-by: Madhu Chittim Signed-off-by: Pavan Kumar Linga Reviewed-by: Donald Skidmore Reviewed-by: Jesse Brandeburg Reviewed-by: Sridhar Samudrala Signed-off-by: Jeff Kirsher --- .../net/ethernet/intel/iecm/iecm_virtchnl.c | 1171 ++++++++++++++++- 1 file changed, 1144 insertions(+), 27 deletions(-) diff --git a/drivers/net/ethernet/intel/iecm/iecm_virtchnl.c b/drivers/net/ethernet/intel/iecm/iecm_virtchnl.c index 0fdf87d6e98f..57862fbfdb9b 100644 --- a/drivers/net/ethernet/intel/iecm/iecm_virtchnl.c +++ b/drivers/net/ethernet/intel/iecm/iecm_virtchnl.c @@ -442,7 +442,13 @@ EXPORT_SYMBOL(iecm_recv_mb_msg); static enum iecm_status iecm_send_ver_msg(struct iecm_adapter *adapter) { - /* stub */ + struct virtchnl_version_info vvi; + + vvi.major = VIRTCHNL_VERSION_MAJOR; + vvi.minor = VIRTCHNL_VERSION_MINOR; + + return iecm_send_mb_msg(adapter, VIRTCHNL_OP_VERSION, sizeof(vvi), + (u8 *)&vvi); } /** @@ -454,7 +460,19 @@ iecm_send_ver_msg(struct iecm_adapter *adapter) static enum iecm_status iecm_recv_ver_msg(struct iecm_adapter *adapter) { - /* stub */ + enum iecm_status err = 0; + struct virtchnl_version_info vvi; + + err = iecm_recv_mb_msg(adapter, VIRTCHNL_OP_VERSION, &vvi, sizeof(vvi)); + if (err) + goto error; + + if (vvi.major > VIRTCHNL_VERSION_MAJOR || + (vvi.major == VIRTCHNL_VERSION_MAJOR && + vvi.minor > VIRTCHNL_VERSION_MINOR)) + dev_warn(&adapter->pdev->dev, "Virtchnl version not matched\n"); +error: + return err; } /** @@ -466,7 +484,25 @@ iecm_recv_ver_msg(struct iecm_adapter *adapter) enum iecm_status iecm_send_get_caps_msg(struct iecm_adapter *adapter) { - /* stub */ + struct virtchnl_get_capabilities caps = {0}; + int buf_size; + + buf_size = sizeof(struct virtchnl_get_capabilities); + adapter->caps = kzalloc(buf_size, GFP_KERNEL); + if (!adapter->caps) + return IECM_ERR_NO_MEMORY; + + caps.cap_flags = VIRTCHNL_CAP_STATELESS_OFFLOADS | + VIRTCHNL_CAP_UDP_SEG_OFFLOAD | + VIRTCHNL_CAP_RSS | + VIRTCHNL_CAP_TCP_RSC | + VIRTCHNL_CAP_HEADER_SPLIT | + VIRTCHNL_CAP_RDMA | + VIRTCHNL_CAP_SRIOV | + VIRTCHNL_CAP_EDT; + + return iecm_send_mb_msg(adapter, VIRTCHNL_OP_GET_CAPS, sizeof(caps), + (u8 *)&caps); } EXPORT_SYMBOL(iecm_send_get_caps_msg); @@ -479,7 +515,8 @@ EXPORT_SYMBOL(iecm_send_get_caps_msg); static enum iecm_status 
iecm_recv_get_caps_msg(struct iecm_adapter *adapter) { - /* stub */ + return iecm_recv_mb_msg(adapter, VIRTCHNL_OP_GET_CAPS, adapter->caps, + sizeof(struct virtchnl_get_capabilities)); } /** @@ -493,7 +530,30 @@ iecm_recv_get_caps_msg(struct iecm_adapter *adapter) static enum iecm_status iecm_send_create_vport_msg(struct iecm_adapter *adapter) { - /* stub */ + struct virtchnl_create_vport *vport_msg; + enum iecm_status err = 0; + int buf_size; + + buf_size = sizeof(struct virtchnl_create_vport); + if (!adapter->vport_params_reqd[0]) { + adapter->vport_params_reqd[0] = kzalloc(buf_size, GFP_KERNEL); + if (!adapter->vport_params_reqd[0]) { + err = IECM_ERR_NO_MEMORY; + goto error; + } + } + + vport_msg = (struct virtchnl_create_vport *) + adapter->vport_params_reqd[0]; + vport_msg->vport_type = VIRTCHNL_VPORT_TYPE_DEFAULT; + vport_msg->txq_model = VIRTCHNL_QUEUE_MODEL_SPLIT; + vport_msg->rxq_model = VIRTCHNL_QUEUE_MODEL_SPLIT; + iecm_vport_calc_total_qs(vport_msg, 0); + + err = iecm_send_mb_msg(adapter, VIRTCHNL_OP_CREATE_VPORT, buf_size, + (u8 *)vport_msg); +error: + return err; } /** @@ -509,7 +569,26 @@ static enum iecm_status iecm_recv_create_vport_msg(struct iecm_adapter *adapter, int *vport_id) { - /* stub */ + struct virtchnl_create_vport *vport_msg; + enum iecm_status err = 0; + + if (!adapter->vport_params_recvd[0]) { + adapter->vport_params_recvd[0] = kzalloc(IECM_DFLT_MBX_BUF_SIZE, + GFP_KERNEL); + if (!adapter->vport_params_recvd[0]) { + err = IECM_ERR_NO_MEMORY; + goto error; + } + } + + vport_msg = (struct virtchnl_create_vport *) + adapter->vport_params_recvd[0]; + + err = iecm_recv_mb_msg(adapter, VIRTCHNL_OP_CREATE_VPORT, vport_msg, + IECM_DFLT_MBX_BUF_SIZE); + *vport_id = vport_msg->vport_id; +error: + return err; } /** @@ -560,7 +639,20 @@ EXPORT_SYMBOL(iecm_wait_for_event); enum iecm_status iecm_send_destroy_vport_msg(struct iecm_vport *vport) { - /* stub */ + struct iecm_adapter *adapter = vport->adapter; + struct virtchnl_vport v_id; + enum iecm_status err; + + v_id.vport_id = vport->vport_id; + + err = iecm_send_mb_msg(adapter, VIRTCHNL_OP_DESTROY_VPORT, + sizeof(v_id), (u8 *)&v_id); + + if (!err) + err = iecm_wait_for_event(adapter, IECM_VC_DESTROY_VPORT, + IECM_VC_DESTROY_VPORT_ERR); + + return err; } /** @@ -598,7 +690,121 @@ iecm_send_disable_vport_msg(struct iecm_vport *vport) enum iecm_status iecm_send_config_tx_queues_msg(struct iecm_vport *vport) { - /* stub */ + struct virtchnl_config_tx_queues *ctq = NULL; + struct virtchnl_txq_info_v2 *qi; + enum iecm_status err = 0; + int totqs, num_msgs; + int i, k = 0; + int num_qs; + + totqs = vport->num_txq + vport->num_complq; + qi = kcalloc(totqs, sizeof(struct virtchnl_txq_info_v2), GFP_KERNEL); + if (!qi) { + err = IECM_ERR_NO_MEMORY; + goto error; + } + + /* Populate the queue info buffer with all queue context info */ + for (i = 0; i < vport->num_txq_grp; i++) { + struct iecm_txq_group *tx_qgrp = &vport->txq_grps[i]; + int j; + + for (j = 0; j < tx_qgrp->num_txq; j++, k++) { + qi[k].queue_id = tx_qgrp->txqs[j].q_id; + qi[k].model = vport->txq_model; + qi[k].type = tx_qgrp->txqs[j].q_type; + qi[k].ring_len = tx_qgrp->txqs[j].desc_count; + qi[k].dma_ring_addr = tx_qgrp->txqs[j].dma; + if (iecm_is_queue_model_split(vport->txq_model)) { + struct iecm_queue *q = &tx_qgrp->txqs[j]; + + qi[k].tx_compl_queue_id = tx_qgrp->complq->q_id; + qi[k].desc_profile = + VIRTCHNL_TXQ_DESC_PROFILE_NATIVE; + + if (test_bit(__IECM_Q_FLOW_SCH_EN, q->flags)) + qi[k].sched_mode = + VIRTCHNL_TXQ_SCHED_MODE_FLOW; + else + qi[k].sched_mode = 
+ VIRTCHNL_TXQ_SCHED_MODE_QUEUE; + } else { + qi[k].sched_mode = + VIRTCHNL_TXQ_SCHED_MODE_QUEUE; + qi[k].desc_profile = + VIRTCHNL_TXQ_DESC_PROFILE_BASE; + } + } + + if (iecm_is_queue_model_split(vport->txq_model)) { + qi[k].queue_id = tx_qgrp->complq->q_id; + qi[k].model = vport->txq_model; + qi[k].type = tx_qgrp->complq->q_type; + qi[k].desc_profile = VIRTCHNL_TXQ_DESC_PROFILE_NATIVE; + qi[k].ring_len = tx_qgrp->complq->desc_count; + qi[k].dma_ring_addr = tx_qgrp->complq->dma; + k++; + } + } + + if (k != totqs) { + err = IECM_ERR_CFG; + goto error; + } + + /* Chunk up the queue contexts into multiple messages to avoid + * sending a control queue message buffer that is too large + */ + if (totqs < IECM_NUM_QCTX_PER_MSG) + num_qs = totqs; + else + num_qs = IECM_NUM_QCTX_PER_MSG; + + num_msgs = totqs / IECM_NUM_QCTX_PER_MSG; + if (totqs % IECM_NUM_QCTX_PER_MSG) + num_msgs++; + + for (i = 0, k = 0; i < num_msgs || num_qs; i++) { + int buf_size = sizeof(struct virtchnl_config_tx_queues) + + (sizeof(struct virtchnl_txq_info_v2) * (num_qs - 1)); + if (!ctq || num_qs != IECM_NUM_QCTX_PER_MSG) { + kfree(ctq); + ctq = kzalloc(buf_size, GFP_KERNEL); + if (!ctq) { + err = IECM_ERR_NO_MEMORY; + goto error; + } + } else { + memset(ctq, 0, buf_size); + } + + ctq->vport_id = vport->vport_id; + ctq->num_qinfo = num_qs; + memcpy(ctq->qinfo, &qi[k], + sizeof(struct virtchnl_txq_info_v2) * num_qs); + + err = iecm_send_mb_msg(vport->adapter, + VIRTCHNL_OP_CONFIG_TX_QUEUES, + buf_size, (u8 *)ctq); + + if (!err) + err = iecm_wait_for_event(vport->adapter, + IECM_VC_CONFIG_TXQ, + IECM_VC_CONFIG_TXQ_ERR); + if (err) + goto mbx_error; + + k += num_qs; + totqs -= num_qs; + if (totqs < IECM_NUM_QCTX_PER_MSG) + num_qs = totqs; + } + +mbx_error: + kfree(ctq); +error: + kfree(qi); + return err; } /** @@ -612,7 +818,148 @@ iecm_send_config_tx_queues_msg(struct iecm_vport *vport) enum iecm_status iecm_send_config_rx_queues_msg(struct iecm_vport *vport) { - /* stub */ + struct virtchnl_config_rx_queues *crq = NULL; + struct virtchnl_rxq_info_v2 *qi; + enum iecm_status err = 0; + int totqs, num_msgs; + int i, k = 0; + int num_qs; + + totqs = vport->num_rxq + vport->num_bufq; + qi = kcalloc(totqs, sizeof(struct virtchnl_rxq_info_v2), GFP_KERNEL); + if (!qi) { + err = IECM_ERR_NO_MEMORY; + goto error; + } + + /* Populate the queue info buffer with all queue context info */ + for (i = 0; i < vport->num_rxq_grp; i++) { + struct iecm_rxq_group *rx_qgrp = &vport->rxq_grps[i]; + int num_rxq; + int j; + + if (iecm_is_queue_model_split(vport->rxq_model)) + num_rxq = rx_qgrp->splitq.num_rxq_sets; + else + num_rxq = rx_qgrp->singleq.num_rxq; + + for (j = 0; j < num_rxq; j++, k++) { + struct iecm_queue *rxq; + + if (iecm_is_queue_model_split(vport->rxq_model)) { + rxq = &rx_qgrp->splitq.rxq_sets[j].rxq; + qi[k].rx_bufq1_id = + rxq->rxq_grp->splitq.bufq_sets[0].bufq.q_id; + qi[k].rx_bufq2_id = + rxq->rxq_grp->splitq.bufq_sets[1].bufq.q_id; + qi[k].hdr_buffer_size = rxq->rx_hbuf_size; + qi[k].rsc_low_watermark = + rxq->rsc_low_watermark; + + if (rxq->rx_hsplit_en) { + qi[k].queue_flags = + VIRTCHNL_RXQ_HDR_SPLIT; + qi[k].hdr_buffer_size = + rxq->rx_hbuf_size; + } + if (iecm_is_feature_ena(vport, NETIF_F_GRO_HW)) + qi[k].queue_flags |= VIRTCHNL_RXQ_RSC; + } else { + rxq = &rx_qgrp->singleq.rxqs[j]; + } + + qi[k].queue_id = rxq->q_id; + qi[k].model = vport->rxq_model; + qi[k].type = rxq->q_type; + qi[k].desc_profile = VIRTCHNL_TXQ_DESC_PROFILE_BASE; + qi[k].desc_size = VIRTCHNL_RXQ_DESC_SIZE_32BYTE; + qi[k].ring_len = rxq->desc_count; + 
qi[k].dma_ring_addr = rxq->dma; + qi[k].max_pkt_size = rxq->rx_max_pkt_size; + qi[k].data_buffer_size = rxq->rx_buf_size; + } + + if (iecm_is_queue_model_split(vport->rxq_model)) { + for (j = 0; j < IECM_BUFQS_PER_RXQ_SET; j++, k++) { + struct iecm_queue *bufq = + &rx_qgrp->splitq.bufq_sets[j].bufq; + + qi[k].queue_id = bufq->q_id; + qi[k].model = vport->rxq_model; + qi[k].type = bufq->q_type; + qi[k].desc_profile = + VIRTCHNL_TXQ_DESC_PROFILE_NATIVE; + qi[k].desc_size = + VIRTCHNL_RXQ_DESC_SIZE_32BYTE; + qi[k].ring_len = bufq->desc_count; + qi[k].dma_ring_addr = bufq->dma; + qi[k].data_buffer_size = bufq->rx_buf_size; + qi[k].buffer_notif_stride = + bufq->rx_buf_stride; + qi[k].rsc_low_watermark = + bufq->rsc_low_watermark; + } + } + } + + if (k != totqs) { + err = IECM_ERR_CFG; + goto error; + } + + /* Chunk up the queue contexts into multiple messages to avoid + * sending a control queue message buffer that is too large + */ + if (totqs < IECM_NUM_QCTX_PER_MSG) + num_qs = totqs; + else + num_qs = IECM_NUM_QCTX_PER_MSG; + + num_msgs = totqs / IECM_NUM_QCTX_PER_MSG; + if (totqs % IECM_NUM_QCTX_PER_MSG) + num_msgs++; + + for (i = 0, k = 0; i < num_msgs || num_qs; i++) { + int buf_size = sizeof(struct virtchnl_config_rx_queues) + + (sizeof(struct virtchnl_rxq_info_v2) * (num_qs - 1)); + if (!crq || num_qs != IECM_NUM_QCTX_PER_MSG) { + kfree(crq); + crq = kzalloc(buf_size, GFP_KERNEL); + if (!crq) { + err = IECM_ERR_NO_MEMORY; + goto error; + } + } else { + memset(crq, 0, buf_size); + } + + crq->vport_id = vport->vport_id; + crq->num_qinfo = num_qs; + memcpy(crq->qinfo, &qi[k], + sizeof(struct virtchnl_rxq_info_v2) * num_qs); + + err = iecm_send_mb_msg(vport->adapter, + VIRTCHNL_OP_CONFIG_RX_QUEUES, + buf_size, (u8 *)crq); + + if (!err) + err = iecm_wait_for_event(vport->adapter, + IECM_VC_CONFIG_RXQ, + IECM_VC_CONFIG_RXQ_ERR); + if (err) + goto mbx_error; + + k += num_qs; + totqs -= num_qs; + if (totqs < IECM_NUM_QCTX_PER_MSG) + num_qs = totqs; + } + +mbx_error: + kfree(crq); +error: + kfree(qi); + return err; } /** @@ -629,7 +976,118 @@ static enum iecm_status iecm_send_ena_dis_queues_msg(struct iecm_vport *vport, enum virtchnl_ops vc_op) { - /* stub */ + struct virtchnl_del_ena_dis_queues *eq; + struct virtchnl_queue_chunk *qc; + int num_txq, num_rxq, num_q; + int i, j, k = 0, buf_size; + enum iecm_status err = 0; + + /* validate virtchnl op */ + switch (vc_op) { + case VIRTCHNL_OP_ENABLE_QUEUES_V2: + case VIRTCHNL_OP_DISABLE_QUEUES_V2: + break; + default: + err = IECM_ERR_CFG; + goto error; + } + + num_txq = vport->num_txq + vport->num_complq; + num_rxq = vport->num_rxq + vport->num_bufq; + num_q = num_txq + num_rxq; + buf_size = sizeof(struct virtchnl_del_ena_dis_queues) + + (sizeof(struct virtchnl_queue_chunk) * (num_q - 1)); + eq = kzalloc(buf_size, GFP_KERNEL); + if (!eq) { + err = IECM_ERR_NO_MEMORY; + goto error; + } + + eq->vport_id = vport->vport_id; + eq->chunks.num_chunks = num_q; + qc = eq->chunks.chunks; + + for (i = 0; i < vport->num_txq_grp; i++) { + struct iecm_txq_group *tx_qgrp = &vport->txq_grps[i]; + + for (j = 0; j < tx_qgrp->num_txq; j++, k++) { + qc[k].type = tx_qgrp->txqs[j].q_type; + qc[k].start_queue_id = tx_qgrp->txqs[j].q_id; + qc[k].num_queues = 1; + } + } + if (vport->num_txq != k) { + err = IECM_ERR_CFG; + goto err_cfg; + } + + if (iecm_is_queue_model_split(vport->txq_model)) { + for (i = 0; i < vport->num_txq_grp; i++, k++) { + struct iecm_txq_group *tx_qgrp = &vport->txq_grps[i]; + + qc[k].type = tx_qgrp->complq->q_type; + qc[k].start_queue_id = 
tx_qgrp->complq->q_id; + qc[k].num_queues = 1; + } + if (vport->num_complq != (k - vport->num_txq)) { + err = IECM_ERR_CFG; + goto err_cfg; + } + } + + for (i = 0; i < vport->num_rxq_grp; i++) { + struct iecm_rxq_group *rx_qgrp = &vport->rxq_grps[i]; + + if (iecm_is_queue_model_split(vport->rxq_model)) + num_rxq = rx_qgrp->splitq.num_rxq_sets; + else + num_rxq = rx_qgrp->singleq.num_rxq; + + for (j = 0; j < num_rxq; j++, k++) { + if (iecm_is_queue_model_split(vport->rxq_model)) { + qc[k].start_queue_id = + rx_qgrp->splitq.rxq_sets[j].rxq.q_id; + qc[k].type = + rx_qgrp->splitq.rxq_sets[j].rxq.q_type; + } else { + qc[k].start_queue_id = + rx_qgrp->singleq.rxqs[j].q_id; + qc[k].type = + rx_qgrp->singleq.rxqs[j].q_type; + } + qc[k].num_queues = 1; + } + } + if (vport->num_rxq != k - (vport->num_txq + vport->num_complq)) { + err = IECM_ERR_CFG; + goto err_cfg; + } + + if (iecm_is_queue_model_split(vport->rxq_model)) { + for (i = 0; i < vport->num_rxq_grp; i++) { + struct iecm_rxq_group *rx_qgrp = &vport->rxq_grps[i]; + struct iecm_queue *q; + + for (j = 0; j < IECM_BUFQS_PER_RXQ_SET; j++, k++) { + q = &rx_qgrp->splitq.bufq_sets[j].bufq; + qc[k].type = q->q_type; + qc[k].start_queue_id = q->q_id; + qc[k].num_queues = 1; + } + } + if (vport->num_bufq != k - (vport->num_txq + + vport->num_complq + + vport->num_rxq)) { + err = IECM_ERR_CFG; + goto err_cfg; + } + } + + err = iecm_send_mb_msg(vport->adapter, vc_op, buf_size, (u8 *)eq); +err_cfg: + kfree(eq); +error: + return err; } /** @@ -646,7 +1104,107 @@ static enum iecm_status iecm_send_map_unmap_queue_vector_msg(struct iecm_vport *vport, bool map) { - /* stub */ + struct iecm_adapter *adapter = vport->adapter; + struct virtchnl_queue_vector_maps *vqvm; + struct virtchnl_queue_vector *vqv; + int buf_size, num_q, i, j, k = 0; + enum iecm_status err = 0; + + num_q = vport->num_txq + vport->num_rxq; + + buf_size = sizeof(struct virtchnl_queue_vector_maps) + + (sizeof(struct virtchnl_queue_vector) * (num_q - 1)); + vqvm = kzalloc(buf_size, GFP_KERNEL); + if (!vqvm) { + err = IECM_ERR_NO_MEMORY; + goto error; + } + + vqvm->vport_id = vport->vport_id; + vqvm->num_maps = num_q; + vqv = vqvm->qv_maps; + + for (i = 0; i < vport->num_txq_grp; i++) { + struct iecm_txq_group *tx_qgrp = &vport->txq_grps[i]; + + for (j = 0; j < tx_qgrp->num_txq; j++, k++) { + vqv[k].queue_type = tx_qgrp->txqs[j].q_type; + vqv[k].queue_id = tx_qgrp->txqs[j].q_id; + + if (iecm_is_queue_model_split(vport->txq_model)) { + vqv[k].vector_id = + tx_qgrp->complq->q_vector->v_idx; + vqv[k].itr_idx = + tx_qgrp->complq->itr.itr_idx; + } else { + vqv[k].vector_id = + tx_qgrp->txqs[j].q_vector->v_idx; + vqv[k].itr_idx = + tx_qgrp->txqs[j].itr.itr_idx; + } + } + } + + if (vport->num_txq != k) { + err = IECM_ERR_CFG; + goto err_cfg; + } + + for (i = 0; i < vport->num_rxq_grp; i++) { + struct iecm_rxq_group *rx_qgrp = &vport->rxq_grps[i]; + int num_rxq; + + if (iecm_is_queue_model_split(vport->rxq_model)) + num_rxq = rx_qgrp->splitq.num_rxq_sets; + else + num_rxq = rx_qgrp->singleq.num_rxq; + + for (j = 0; j < num_rxq; j++, k++) { + struct iecm_queue *rxq; + + if (iecm_is_queue_model_split(vport->rxq_model)) + rxq = &rx_qgrp->splitq.rxq_sets[j].rxq; + else + rxq = &rx_qgrp->singleq.rxqs[j]; + + vqv[k].queue_type = rxq->q_type; + vqv[k].queue_id = rxq->q_id; + vqv[k].vector_id = rxq->q_vector->v_idx; + vqv[k].itr_idx = rxq->itr.itr_idx; + } + } + + if (iecm_is_queue_model_split(vport->txq_model)) { + if (vport->num_rxq != k - vport->num_complq) { + err = IECM_ERR_CFG; + goto err_cfg; + } + } 
else { + if (vport->num_rxq != k - vport->num_txq) { + err = IECM_ERR_CFG; + goto err_cfg; + } + } + + if (map) { + err = iecm_send_mb_msg(adapter, + VIRTCHNL_OP_MAP_QUEUE_VECTOR, + buf_size, (u8 *)vqvm); + if (!err) + err = iecm_wait_for_event(adapter, IECM_VC_MAP_IRQ, + IECM_VC_MAP_IRQ_ERR); + } else { + err = iecm_send_mb_msg(adapter, + VIRTCHNL_OP_UNMAP_QUEUE_VECTOR, + buf_size, (u8 *)vqvm); + if (!err) + err = iecm_wait_for_event(adapter, IECM_VC_UNMAP_IRQ, + IECM_VC_UNMAP_IRQ_ERR); + } +err_cfg: + kfree(vqvm); +error: + return err; } /** @@ -658,7 +1216,16 @@ iecm_send_map_unmap_queue_vector_msg(struct iecm_vport *vport, static enum iecm_status iecm_send_enable_queues_msg(struct iecm_vport *vport) { - /* stub */ + struct iecm_adapter *adapter = vport->adapter; + enum iecm_status err; + + err = iecm_send_ena_dis_queues_msg(vport, + VIRTCHNL_OP_ENABLE_QUEUES_V2); + if (!err) + err = iecm_wait_for_event(adapter, IECM_VC_ENA_QUEUES, + IECM_VC_ENA_QUEUES_ERR); + + return err; } /** @@ -670,7 +1237,15 @@ iecm_send_enable_queues_msg(struct iecm_vport *vport) static enum iecm_status iecm_send_disable_queues_msg(struct iecm_vport *vport) { - /* stub */ + struct iecm_adapter *adapter = vport->adapter; + enum iecm_status err; + + err = iecm_send_ena_dis_queues_msg(vport, + VIRTCHNL_OP_DISABLE_QUEUES_V2); + if (!err) + err = iecm_wait_for_event(adapter, IECM_VC_DIS_QUEUES, + IECM_VC_DIS_QUEUES_ERR); + return err; } /** @@ -682,7 +1257,48 @@ iecm_send_disable_queues_msg(struct iecm_vport *vport) enum iecm_status iecm_send_delete_queues_msg(struct iecm_vport *vport) { - /* stub */ + struct iecm_adapter *adapter = vport->adapter; + struct virtchnl_create_vport *vport_params; + struct virtchnl_del_ena_dis_queues *eq; + struct virtchnl_queue_chunks *chunks; + int buf_size, num_chunks; + enum iecm_status err; + + if (vport->adapter->config_data.req_qs_chunks) { + struct virtchnl_add_queues *vc_aq = + (struct virtchnl_add_queues *) + vport->adapter->config_data.req_qs_chunks; + chunks = &vc_aq->chunks; + } else { + vport_params = (struct virtchnl_create_vport *) + vport->adapter->vport_params_recvd[0]; + chunks = &vport_params->chunks; + } + + num_chunks = chunks->num_chunks; + buf_size = sizeof(struct virtchnl_del_ena_dis_queues) + + (sizeof(struct virtchnl_queue_chunk) * + (num_chunks - 1)); + + eq = kzalloc(buf_size, GFP_KERNEL); + if (!eq) { + err = IECM_ERR_NO_MEMORY; + return err; + } + eq->vport_id = vport->vport_id; + eq->chunks.num_chunks = num_chunks; + + memcpy(eq->chunks.chunks, chunks->chunks, num_chunks * + sizeof(struct virtchnl_queue_chunk)); + + err = iecm_send_mb_msg(vport->adapter, VIRTCHNL_OP_DEL_QUEUES, + buf_size, (u8 *)eq); + if (!err) + err = iecm_wait_for_event(adapter, IECM_VC_DEL_QUEUES, + IECM_VC_DEL_QUEUES_ERR); + + kfree(eq); + return err; } /** @@ -694,7 +1310,14 @@ iecm_send_delete_queues_msg(struct iecm_vport *vport) static enum iecm_status iecm_send_config_queues_msg(struct iecm_vport *vport) { - /* stub */ + enum iecm_status err; + + err = iecm_send_config_tx_queues_msg(vport); + + if (!err) + err = iecm_send_config_rx_queues_msg(vport); + + return err; } /** @@ -711,7 +1334,55 @@ enum iecm_status iecm_send_add_queues_msg(struct iecm_vport *vport, u16 num_tx_q, u16 num_complq, u16 num_rx_q, u16 num_rx_bufq) { - /* stub */ + struct iecm_adapter *adapter = vport->adapter; + struct virtchnl_add_queues aq = {0}; + enum iecm_status err; + int size; + + aq.vport_id = vport->vport_id; + aq.num_tx_q = num_tx_q; + aq.num_tx_complq = num_complq; + aq.num_rx_q = num_rx_q; + 
aq.num_rx_bufq = num_rx_bufq; + + err = iecm_send_mb_msg(adapter, + VIRTCHNL_OP_ADD_QUEUES, + sizeof(struct virtchnl_add_queues), (u8 *)&aq); + + if (!err) + err = iecm_wait_for_event(adapter, IECM_VC_ADD_QUEUES, + IECM_VC_ADD_QUEUES_ERR); + + if (!err) { + struct virtchnl_add_queues *vc_msg = + (struct virtchnl_add_queues *)adapter->vc_msg; + + kfree(adapter->config_data.req_qs_chunks); + adapter->config_data.req_qs_chunks = NULL; + + /* compare vc_msg num queues with vport num queues */ + if (vc_msg->num_tx_q != num_tx_q || + vc_msg->num_rx_q != num_rx_q || + vc_msg->num_tx_complq != num_complq || + vc_msg->num_rx_bufq != num_rx_bufq) + return IECM_ERR_CFG; + + size = sizeof(struct virtchnl_add_queues) + + ((vc_msg->chunks.num_chunks - 1) * + sizeof(struct virtchnl_queue_chunk)); + adapter->config_data.req_qs_chunks = + kzalloc(size, GFP_KERNEL); + if (!adapter->config_data.req_qs_chunks) { + err = IECM_ERR_NO_MEMORY; + mutex_unlock(&adapter->vc_msg_lock); + goto mem_err; + } + memcpy(adapter->config_data.req_qs_chunks, + adapter->vc_msg, size); + mutex_unlock(&adapter->vc_msg_lock); + } +mem_err: + return err; } /** @@ -723,7 +1394,44 @@ iecm_send_add_queues_msg(struct iecm_vport *vport, u16 num_tx_q, enum iecm_status iecm_send_get_stats_msg(struct iecm_vport *vport) { - /* stub */ + struct iecm_adapter *adapter = vport->adapter; + struct virtchnl_queue_select vqs; + enum iecm_status err; + + /* Don't send get_stats message if one is pending or the + * link is down + */ + if (test_bit(IECM_VC_GET_STATS, adapter->vc_state) || + adapter->state <= __IECM_DOWN) + return 0; + + vqs.vsi_id = vport->vport_id; + + err = iecm_send_mb_msg(adapter, VIRTCHNL_OP_GET_STATS, + sizeof(vqs), (u8 *)&vqs); + + if (!err) + err = iecm_wait_for_event(adapter, IECM_VC_GET_STATS, + IECM_VC_GET_STATS_ERR); + + if (!err) { + struct virtchnl_eth_stats *stats = + (struct virtchnl_eth_stats *)adapter->vc_msg; + vport->netstats.rx_packets = stats->rx_unicast + + stats->rx_multicast + + stats->rx_broadcast; + vport->netstats.tx_packets = stats->tx_unicast + + stats->tx_multicast + + stats->tx_broadcast; + vport->netstats.rx_bytes = stats->rx_bytes; + vport->netstats.tx_bytes = stats->tx_bytes; + vport->netstats.tx_errors = stats->tx_errors; + vport->netstats.rx_dropped = stats->rx_discards; + vport->netstats.tx_dropped = stats->tx_discards; + mutex_unlock(&adapter->vc_msg_lock); + } + + return err; } /** @@ -736,7 +1444,36 @@ iecm_send_get_stats_msg(struct iecm_vport *vport) enum iecm_status iecm_send_get_set_rss_hash_msg(struct iecm_vport *vport, bool get) { - /* stub */ + struct iecm_adapter *adapter = vport->adapter; + struct virtchnl_rss_hash rh = {0}; + enum iecm_status err; + + rh.vport_id = vport->vport_id; + rh.hash = adapter->rss_data.rss_hash; + + if (get) { + err = iecm_send_mb_msg(adapter, VIRTCHNL_OP_GET_RSS_HASH, + sizeof(rh), (u8 *)&rh); + if (!err) { + err = iecm_wait_for_event(adapter, IECM_VC_GET_RSS_HASH, + IECM_VC_GET_RSS_HASH_ERR); + if (!err) { + memcpy(&rh, adapter->vc_msg, sizeof(rh)); + adapter->rss_data.rss_hash = rh.hash; + /* Leave the buffer clean for next message */ + memset(adapter->vc_msg, 0, + IECM_DFLT_MBX_BUF_SIZE); + mutex_unlock(&adapter->vc_msg_lock); + } + } + } else { + err = iecm_send_mb_msg(adapter, VIRTCHNL_OP_SET_RSS_HASH, + sizeof(rh), (u8 *)&rh); + if (!err) + err = iecm_wait_for_event(adapter, IECM_VC_SET_RSS_HASH, + IECM_VC_SET_RSS_HASH_ERR); + } + return err; } /** @@ -749,7 +1486,72 @@ iecm_send_get_set_rss_hash_msg(struct iecm_vport *vport, bool get) enum iecm_status 
iecm_send_get_set_rss_lut_msg(struct iecm_vport *vport, bool get) { - /* stub */ + struct iecm_adapter *adapter = vport->adapter; + struct virtchnl_rss_lut_v2 *recv_rl; + struct virtchnl_rss_lut_v2 *rl; + enum iecm_status err; + int i, buf_size; + + buf_size = sizeof(struct virtchnl_rss_lut_v2) + + (sizeof(u16) * (adapter->rss_data.rss_lut_size - 1)); + rl = kzalloc(buf_size, GFP_KERNEL); + if (!rl) { + err = IECM_ERR_NO_MEMORY; + goto error; + } + + if (!get) { + rl->lut_entries = adapter->rss_data.rss_lut_size; + for (i = 0; i < adapter->rss_data.rss_lut_size; i++) + rl->lut[i] = adapter->rss_data.rss_lut[i]; + } + rl->vport_id = vport->vport_id; + + if (get) { + err = iecm_send_mb_msg(vport->adapter, VIRTCHNL_OP_GET_RSS_LUT, + buf_size, (u8 *)rl); + if (!err) { + err = iecm_wait_for_event(adapter, IECM_VC_GET_RSS_LUT, + IECM_VC_GET_RSS_LUT_ERR); + if (err) + goto get_lut_err; + recv_rl = (struct virtchnl_rss_lut_v2 *)adapter->vc_msg; + if (adapter->rss_data.rss_lut_size != + recv_rl->lut_entries) { + adapter->rss_data.rss_lut_size = + recv_rl->lut_entries; + kfree(adapter->rss_data.rss_lut); + adapter->rss_data.rss_lut = + kzalloc(adapter->rss_data.rss_lut_size, + GFP_KERNEL); + if (!adapter->rss_data.rss_lut) { + adapter->rss_data.rss_lut_size = 0; + err = IECM_ERR_NO_MEMORY; + /* Leave the buffer clean */ + memset(adapter->vc_msg, 0, + IECM_DFLT_MBX_BUF_SIZE); + mutex_unlock(&adapter->vc_msg_lock); + goto mem_alloc_err; + } + } + memcpy(adapter->rss_data.rss_lut, adapter->vc_msg, + adapter->rss_data.rss_lut_size); + /* Leave the buffer clean for next message */ + memset(adapter->vc_msg, 0, IECM_DFLT_MBX_BUF_SIZE); + mutex_unlock(&adapter->vc_msg_lock); + } + } else { + err = iecm_send_mb_msg(adapter, VIRTCHNL_OP_SET_RSS_LUT, + buf_size, (u8 *)rl); + if (!err) + err = iecm_wait_for_event(adapter, IECM_VC_SET_RSS_LUT, + IECM_VC_SET_RSS_LUT_ERR); + } +mem_alloc_err: +get_lut_err: + kfree(rl); +error: + return err; } /** @@ -762,7 +1564,74 @@ iecm_send_get_set_rss_lut_msg(struct iecm_vport *vport, bool get) enum iecm_status iecm_send_get_set_rss_key_msg(struct iecm_vport *vport, bool get) { - /* stub */ + struct iecm_adapter *adapter = vport->adapter; + struct virtchnl_rss_key *recv_rk; + struct virtchnl_rss_key *rk; + enum iecm_status err; + int i, buf_size; + + buf_size = sizeof(struct virtchnl_rss_key) + + (sizeof(u8) * (adapter->rss_data.rss_key_size - 1)); + rk = kzalloc(buf_size, GFP_KERNEL); + if (!rk) { + err = IECM_ERR_NO_MEMORY; + goto error; + } + + if (!get) { + rk->key_len = adapter->rss_data.rss_key_size; + for (i = 0; i < adapter->rss_data.rss_key_size; i++) + rk->key[i] = adapter->rss_data.rss_key[i]; + } + rk->vsi_id = vport->vport_id; + + if (get) { + err = iecm_send_mb_msg(adapter, VIRTCHNL_OP_GET_RSS_KEY, + buf_size, (u8 *)rk); + if (!err) { + err = iecm_wait_for_event(adapter, IECM_VC_GET_RSS_KEY, + IECM_VC_GET_RSS_KEY_ERR); + if (err) + goto get_key_err; + recv_rk = (struct virtchnl_rss_key *)adapter->vc_msg; + if (adapter->rss_data.rss_key_size != + recv_rk->key_len) { + adapter->rss_data.rss_key_size = + min_t(u16, NETDEV_RSS_KEY_LEN, + recv_rk->key_len); + kfree(adapter->rss_data.rss_key); + adapter->rss_data.rss_key = + kzalloc(adapter->rss_data.rss_key_size, + GFP_KERNEL); + if (!adapter->rss_data.rss_key) { + adapter->rss_data.rss_key_size = 0; + err = IECM_ERR_NO_MEMORY; + /* Leave the buffer clean */ + memset(adapter->vc_msg, 0, + IECM_DFLT_MBX_BUF_SIZE); + mutex_unlock(&adapter->vc_msg_lock); + goto mem_alloc_err; + } + } + memcpy(adapter->rss_data.rss_key, 
adapter->vc_msg, + adapter->rss_data.rss_key_size); + /* Leave the buffer clean for next message */ + memset(adapter->vc_msg, 0, IECM_DFLT_MBX_BUF_SIZE); + mutex_unlock(&adapter->vc_msg_lock); + } + } else { + err = iecm_send_mb_msg(adapter, VIRTCHNL_OP_CONFIG_RSS_KEY, + buf_size, (u8 *)rk); + if (!err) + err = iecm_wait_for_event(adapter, + IECM_VC_CONFIG_RSS_KEY, + IECM_VC_CONFIG_RSS_KEY_ERR); + } +mem_alloc_err: +get_key_err: + kfree(rk); +error: + return err; } /** @@ -773,7 +1642,24 @@ iecm_send_get_set_rss_key_msg(struct iecm_vport *vport, bool get) */ enum iecm_status iecm_send_get_rx_ptype_msg(struct iecm_vport *vport) { - /* stub */ + struct iecm_rx_ptype_decoded *rx_ptype_lkup = vport->rx_ptype_lkup; + int ptype_list[IECM_RX_SUPP_PTYPE] = { 0, 1, 11, 12, 22, 23, 24, 25, 26, + 27, 28, 88, 89, 90, 91, 92, 93, + 94 }; + enum iecm_status err = 0; + int i; + + for (i = 0; i < IECM_RX_MAX_PTYPE; i++) + rx_ptype_lkup[i] = iecm_rx_ptype_lkup[0]; + + for (i = 0; i < IECM_RX_SUPP_PTYPE; i++) { + int j = ptype_list[i]; + + rx_ptype_lkup[j] = iecm_rx_ptype_lkup[i]; + rx_ptype_lkup[j].ptype = ptype_list[i]; + }; + + return err; } /** @@ -912,7 +1798,54 @@ void iecm_vport_params_buf_rel(struct iecm_adapter *adapter) */ int iecm_vc_core_init(struct iecm_adapter *adapter, int *vport_id) { - /* stub */ + switch (adapter->state) { + case __IECM_STARTUP: + if (iecm_send_ver_msg(adapter)) + goto init_failed; + adapter->state = __IECM_VER_CHECK; + goto restart; + case __IECM_VER_CHECK: + if (iecm_recv_ver_msg(adapter)) + goto init_failed; + adapter->state = __IECM_GET_CAPS; + if (adapter->dev_ops.vc_ops.get_caps(adapter)) + goto init_failed; + goto restart; + case __IECM_GET_CAPS: + if (iecm_recv_get_caps_msg(adapter)) + goto init_failed; + if (iecm_send_create_vport_msg(adapter)) + goto init_failed; + adapter->state = __IECM_GET_DFLT_VPORT_PARAMS; + goto restart; + case __IECM_GET_DFLT_VPORT_PARAMS: + if (iecm_recv_create_vport_msg(adapter, vport_id)) + goto init_failed; + adapter->state = __IECM_INIT_SW; + break; + case __IECM_INIT_SW: + break; + default: + dev_err(&adapter->pdev->dev, "Device is in bad state: %d\n", + adapter->state); + goto init_failed; + } + + return 0; + +restart: + queue_delayed_work(adapter->init_wq, &adapter->init_task, + msecs_to_jiffies(30)); + /* Not an error. 
Using try again to continue with state machine */ + return -EAGAIN; +init_failed: + if (++adapter->mb_wait_count > IECM_MB_MAX_ERR) { + dev_err(&adapter->pdev->dev, "Failed to establish mailbox communications with hardware\n"); + return -EFAULT; + } + adapter->state = __IECM_STARTUP; + queue_delayed_work(adapter->init_wq, &adapter->init_task, HZ); + return -EAGAIN; } EXPORT_SYMBOL(iecm_vc_core_init); @@ -959,7 +1892,32 @@ iecm_vport_get_queue_ids(u16 *qids, int num_qids, enum virtchnl_queue_type q_type, struct virtchnl_queue_chunks *chunks) { - /* stub */ + int num_chunks = chunks->num_chunks; + struct virtchnl_queue_chunk *chunk; + int num_q_id_filled = 0; + int start_q_id; + int num_q; + int i; + + while (num_chunks) { + chunk = &chunks->chunks[num_chunks - 1]; + if (chunk->type == q_type) { + num_q = chunk->num_queues; + start_q_id = chunk->start_queue_id; + for (i = 0; i < num_q; i++) { + if ((num_q_id_filled + i) < num_qids) { + qids[num_q_id_filled + i] = start_q_id; + start_q_id++; + } else { + break; + } + } + num_q_id_filled = num_q_id_filled + i; + } + num_chunks--; + } + + return num_q_id_filled; } /** @@ -976,7 +1934,80 @@ static int __iecm_vport_queue_ids_init(struct iecm_vport *vport, u16 *qids, int num_qids, enum virtchnl_queue_type q_type) { - /* stub */ + struct iecm_queue *q; + int i, j, k = 0; + + switch (q_type) { + case VIRTCHNL_QUEUE_TYPE_TX: + for (i = 0; i < vport->num_txq_grp; i++) { + struct iecm_txq_group *tx_qgrp = &vport->txq_grps[i]; + + for (j = 0; j < tx_qgrp->num_txq; j++) { + if (k < num_qids) { + tx_qgrp->txqs[j].q_id = qids[k]; + tx_qgrp->txqs[j].q_type = + VIRTCHNL_QUEUE_TYPE_TX; + k++; + } else { + break; + } + } + } + break; + case VIRTCHNL_QUEUE_TYPE_RX: + for (i = 0; i < vport->num_rxq_grp; i++) { + struct iecm_rxq_group *rx_qgrp = &vport->rxq_grps[i]; + int num_rxq; + + if (iecm_is_queue_model_split(vport->rxq_model)) + num_rxq = rx_qgrp->splitq.num_rxq_sets; + else + num_rxq = rx_qgrp->singleq.num_rxq; + + for (j = 0; j < num_rxq && k < num_qids; j++, k++) { + if (iecm_is_queue_model_split(vport->rxq_model)) + q = &rx_qgrp->splitq.rxq_sets[j].rxq; + else + q = &rx_qgrp->singleq.rxqs[j]; + q->q_id = qids[k]; + q->q_type = VIRTCHNL_QUEUE_TYPE_RX; + } + } + break; + case VIRTCHNL_QUEUE_TYPE_TX_COMPLETION: + for (i = 0; i < vport->num_txq_grp; i++) { + struct iecm_txq_group *tx_qgrp = &vport->txq_grps[i]; + + if (k < num_qids) { + tx_qgrp->complq->q_id = qids[k]; + tx_qgrp->complq->q_type = + VIRTCHNL_QUEUE_TYPE_TX_COMPLETION; + k++; + } else { + break; + } + } + break; + case VIRTCHNL_QUEUE_TYPE_RX_BUFFER: + for (i = 0; i < vport->num_rxq_grp; i++) { + struct iecm_rxq_group *rx_qgrp = &vport->rxq_grps[i]; + + for (j = 0; j < IECM_BUFQS_PER_RXQ_SET; j++) { + if (k < num_qids) { + q = &rx_qgrp->splitq.bufq_sets[j].bufq; + q->q_id = qids[k]; + q->q_type = + VIRTCHNL_QUEUE_TYPE_RX_BUFFER; + k++; + } else { + break; + } + } + } + break; + } + + return k; } /** @@ -989,7 +2020,76 @@ __iecm_vport_queue_ids_init(struct iecm_vport *vport, u16 *qids, static enum iecm_status iecm_vport_queue_ids_init(struct iecm_vport *vport) { - /* stub */ + struct virtchnl_create_vport *vport_params; + struct virtchnl_queue_chunks *chunks; + enum virtchnl_queue_type q_type; + /* We may never deal with more that 256 same type of queues */ +#define IECM_MAX_QIDS 256 + u16 qids[IECM_MAX_QIDS]; + int num_ids; + + if (vport->adapter->config_data.num_req_qs) { + struct virtchnl_add_queues *vc_aq = + (struct virtchnl_add_queues *) + vport->adapter->config_data.req_qs_chunks; + 
chunks = &vc_aq->chunks; + } else { + vport_params = (struct virtchnl_create_vport *) + vport->adapter->vport_params_recvd[0]; + chunks = &vport_params->chunks; + /* compare vport_params num queues with vport num queues */ + if (vport_params->num_tx_q != vport->num_txq || + vport_params->num_rx_q != vport->num_rxq || + vport_params->num_tx_complq != vport->num_complq || + vport_params->num_rx_bufq != vport->num_bufq) + return IECM_ERR_CFG; + } + + num_ids = iecm_vport_get_queue_ids(qids, IECM_MAX_QIDS, + VIRTCHNL_QUEUE_TYPE_TX, + chunks); + if (num_ids != vport->num_txq) + return IECM_ERR_CFG; + num_ids = __iecm_vport_queue_ids_init(vport, qids, num_ids, + VIRTCHNL_QUEUE_TYPE_TX); + if (num_ids != vport->num_txq) + return IECM_ERR_CFG; + num_ids = iecm_vport_get_queue_ids(qids, IECM_MAX_QIDS, + VIRTCHNL_QUEUE_TYPE_RX, + chunks); + if (num_ids != vport->num_rxq) + return IECM_ERR_CFG; + num_ids = __iecm_vport_queue_ids_init(vport, qids, num_ids, + VIRTCHNL_QUEUE_TYPE_RX); + if (num_ids != vport->num_rxq) + return IECM_ERR_CFG; + + if (iecm_is_queue_model_split(vport->txq_model)) { + q_type = VIRTCHNL_QUEUE_TYPE_TX_COMPLETION; + num_ids = iecm_vport_get_queue_ids(qids, IECM_MAX_QIDS, q_type, + chunks); + if (num_ids != vport->num_complq) + return IECM_ERR_CFG; + num_ids = __iecm_vport_queue_ids_init(vport, qids, + num_ids, + q_type); + if (num_ids != vport->num_complq) + return IECM_ERR_CFG; + } + + if (iecm_is_queue_model_split(vport->rxq_model)) { + q_type = VIRTCHNL_QUEUE_TYPE_RX_BUFFER; + num_ids = iecm_vport_get_queue_ids(qids, IECM_MAX_QIDS, q_type, + chunks); + if (num_ids != vport->num_bufq) + return IECM_ERR_CFG; + num_ids = __iecm_vport_queue_ids_init(vport, qids, num_ids, + q_type); + if (num_ids != vport->num_bufq) + return IECM_ERR_CFG; + } + + return 0; } /** @@ -1000,7 +2100,23 @@ enum iecm_status iecm_vport_queue_ids_init(struct iecm_vport *vport) */ enum iecm_status iecm_vport_adjust_qs(struct iecm_vport *vport) { - /* stub */ + struct virtchnl_create_vport vport_msg; + enum iecm_status err; + + vport_msg.txq_model = vport->txq_model; + vport_msg.rxq_model = vport->rxq_model; + iecm_vport_calc_total_qs(&vport_msg, + vport->adapter->config_data.num_req_qs); + err = iecm_send_add_queues_msg(vport, vport_msg.num_tx_q, + vport_msg.num_tx_complq, + vport_msg.num_rx_q, + vport_msg.num_rx_bufq); + if (err) + goto failure; + iecm_vport_init_num_qs(vport, &vport_msg); + iecm_vport_calc_num_q_groups(vport); +failure: + return err; } /** @@ -1012,7 +2128,8 @@ enum iecm_status iecm_vport_adjust_qs(struct iecm_vport *vport) */ static bool iecm_is_capability_ena(struct iecm_adapter *adapter, u64 flag) { - /* stub */ + return ((struct virtchnl_get_capabilities *)adapter->caps)->cap_flags & + flag; } /** From patchwork Thu Jun 18 05:13:37 2020 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Jeff Kirsher X-Patchwork-Id: 217628 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org X-Spam-Level: X-Spam-Status: No, score=-9.7 required=3.0 tests=HEADER_FROM_DIFFERENT_DOMAINS, INCLUDES_PATCH, MAILING_LIST_MULTI, SIGNED_OFF_BY, SPF_HELO_NONE, SPF_PASS, URIBL_BLOCKED, USER_AGENT_GIT autolearn=ham autolearn_force=no version=3.4.0 Received: from mail.kernel.org (mail.kernel.org [198.145.29.99]) by smtp.lore.kernel.org (Postfix) with ESMTP id 45C5AC433DF for ; Thu, 18 Jun 2020 05:14:10 +0000 (UTC) Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by 
mail.kernel.org (Postfix) with ESMTP id 2607C21BE5 for ; Thu, 18 Jun 2020 05:14:10 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S1727048AbgFRFOF (ORCPT ); Thu, 18 Jun 2020 01:14:05 -0400 Received: from mga03.intel.com ([134.134.136.65]:25340 "EHLO mga03.intel.com" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S1726991AbgFRFOC (ORCPT ); Thu, 18 Jun 2020 01:14:02 -0400 IronPort-SDR: Oj3WhqSQtaZbAlOkkW3qe9yXtn8YswKMC8quT8i96+dhcH9s3UpR4sAvM8BuALTQq2/G+4hE5G ycg6aNLX6gFg== X-IronPort-AV: E=McAfee;i="6000,8403,9655"; a="142378054" X-IronPort-AV: E=Sophos;i="5.73,525,1583222400"; d="scan'208";a="142378054" X-Amp-Result: SKIPPED(no attachment in message) X-Amp-File-Uploaded: False Received: from fmsmga008.fm.intel.com ([10.253.24.58]) by orsmga103.jf.intel.com with ESMTP/TLS/ECDHE-RSA-AES256-GCM-SHA384; 17 Jun 2020 22:13:51 -0700 IronPort-SDR: vbuyyBOjtkg9NXJ2gILq8QL3eoIZWSGnGYJUUYhp0gWfEmROamQGlM/HjibYlEod7LQtNEYRgM xoO9dxhoi6QQ== X-ExtLoop1: 1 X-IronPort-AV: E=Sophos;i="5.73,525,1583222400"; d="scan'208";a="263495600" Received: from jtkirshe-desk1.jf.intel.com ([134.134.177.86]) by fmsmga008.fm.intel.com with ESMTP; 17 Jun 2020 22:13:51 -0700 From: Jeff Kirsher To: davem@davemloft.net Cc: Alice Michael , netdev@vger.kernel.org, nhorman@redhat.com, sassmann@redhat.com, Alan Brady , Phani Burra , Joshua Hay , Madhu Chittim , Pavan Kumar Linga , Donald Skidmore , Jesse Brandeburg , Sridhar Samudrala , Jeff Kirsher Subject: [net-next 08/15] iecm: Implement vector allocation Date: Wed, 17 Jun 2020 22:13:37 -0700 Message-Id: <20200618051344.516587-9-jeffrey.t.kirsher@intel.com> X-Mailer: git-send-email 2.26.2 In-Reply-To: <20200618051344.516587-1-jeffrey.t.kirsher@intel.com> References: <20200618051344.516587-1-jeffrey.t.kirsher@intel.com> MIME-Version: 1.0 Sender: netdev-owner@vger.kernel.org Precedence: bulk List-ID: X-Mailing-List: netdev@vger.kernel.org From: Alice Michael This allocates PCI vectors and maps to interrupt routines. 
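For reviewers, a minimal standalone sketch of the vector-chunk flattening this patch performs in iecm_vport_get_vec_ids(): the control plane may grant vectors as several non-contiguous {start, count} chunks, and the driver walks them into a flat array of vector ids capped at the caller's buffer size. The struct and helper names below are illustrative placeholders, not the real virtchnl definitions.

	#include <stdio.h>

	/* Illustrative stand-in for a virtchnl vector chunk. */
	struct vec_chunk {
		int start_vector_id;
		int num_vectors;
	};

	/* Flatten {start, count} chunks into a flat id array, stopping once
	 * the caller-provided buffer is full.
	 */
	static int get_vec_ids(unsigned short *vecids, int num_vecids,
			       const struct vec_chunk *chunks, int num_chunks)
	{
		int filled = 0;
		int i, j;

		for (j = 0; j < num_chunks; j++) {
			int start = chunks[j].start_vector_id;
			int num = chunks[j].num_vectors;

			for (i = 0; i < num && filled < num_vecids; i++)
				vecids[filled++] = start + i;
		}
		return filled;
	}

	int main(void)
	{
		/* Two non-contiguous chunks granted by the control plane. */
		struct vec_chunk chunks[] = { { 0, 2 }, { 16, 3 } };
		unsigned short vecids[8];
		int n, i;

		n = get_vec_ids(vecids, 8, chunks, 2);
		for (i = 0; i < n; i++)
			printf("vector %d -> id %u\n", i, (unsigned int)vecids[i]);
		return 0;
	}

The driver's version additionally fails the init when the number of ids recovered does not match the number of MSI-X entries it was granted.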
Signed-off-by: Alice Michael Signed-off-by: Alan Brady Signed-off-by: Phani Burra Signed-off-by: Joshua Hay Signed-off-by: Madhu Chittim Signed-off-by: Pavan Kumar Linga Reviewed-by: Donald Skidmore Reviewed-by: Jesse Brandeburg Reviewed-by: Sridhar Samudrala Signed-off-by: Jeff Kirsher --- drivers/net/ethernet/intel/iecm/iecm_lib.c | 63 +- drivers/net/ethernet/intel/iecm/iecm_txrx.c | 606 +++++++++++++++++- .../net/ethernet/intel/iecm/iecm_virtchnl.c | 24 +- 3 files changed, 669 insertions(+), 24 deletions(-) diff --git a/drivers/net/ethernet/intel/iecm/iecm_lib.c b/drivers/net/ethernet/intel/iecm/iecm_lib.c index 3f6878704b3e..a4fd04fd0500 100644 --- a/drivers/net/ethernet/intel/iecm/iecm_lib.c +++ b/drivers/net/ethernet/intel/iecm/iecm_lib.c @@ -15,7 +15,11 @@ extern int debug; */ static void iecm_mb_intr_rel_irq(struct iecm_adapter *adapter) { - /* stub */ + int irq_num; + + irq_num = adapter->msix_entries[0].vector; + synchronize_irq(irq_num); + free_irq(irq_num, adapter); } /** @@ -44,7 +48,12 @@ static void iecm_intr_rel(struct iecm_adapter *adapter) */ irqreturn_t iecm_mb_intr_clean(int __always_unused irq, void *data) { - /* stub */ + struct iecm_adapter *adapter = (struct iecm_adapter *)data; + + set_bit(__IECM_MB_INTR_TRIGGER, adapter->flags); + queue_delayed_work(adapter->serv_wq, &adapter->serv_task, + msecs_to_jiffies(0)); + return IRQ_HANDLED; } /** @@ -53,7 +62,12 @@ irqreturn_t iecm_mb_intr_clean(int __always_unused irq, void *data) */ void iecm_mb_irq_enable(struct iecm_adapter *adapter) { - /* stub */ + struct iecm_hw *hw = &adapter->hw; + struct iecm_intr_reg *intr = &adapter->mb_vector.intr_reg; + u32 val; + + val = intr->dyn_ctl_intena_m | intr->dyn_ctl_itridx_m; + writel_relaxed(val, (u8 *)(hw->hw_addr + intr->dyn_ctl)); } /** @@ -62,7 +76,22 @@ void iecm_mb_irq_enable(struct iecm_adapter *adapter) */ int iecm_mb_intr_req_irq(struct iecm_adapter *adapter) { - /* stub */ + struct iecm_q_vector *mb_vector = &adapter->mb_vector; + int irq_num, mb_vidx = 0, err; + + irq_num = adapter->msix_entries[mb_vidx].vector; + snprintf(mb_vector->name, sizeof(mb_vector->name) - 1, + "%s-%s-%d", dev_driver_string(&adapter->pdev->dev), + "Mailbox", mb_vidx); + err = request_irq(irq_num, adapter->irq_mb_handler, 0, + mb_vector->name, adapter); + if (err) { + dev_err(&adapter->pdev->dev, + "Request_irq for mailbox failed, error: %d\n", err); + return err; + } + set_bit(__IECM_MB_INTR_MODE, adapter->flags); + return 0; } /** @@ -74,7 +103,16 @@ int iecm_mb_intr_req_irq(struct iecm_adapter *adapter) */ void iecm_get_mb_vec_id(struct iecm_adapter *adapter) { - /* stub */ + struct virtchnl_vector_chunks *vchunks; + struct virtchnl_vector_chunk *chunk; + + if (adapter->req_vec_chunks) { + vchunks = &adapter->req_vec_chunks->vchunks; + chunk = &vchunks->num_vchunk[0]; + adapter->mb_vector.v_idx = chunk->start_vector_id; + } else { + adapter->mb_vector.v_idx = 0; + } } /** @@ -83,7 +121,13 @@ void iecm_get_mb_vec_id(struct iecm_adapter *adapter) */ int iecm_mb_intr_init(struct iecm_adapter *adapter) { - /* stub */ + int err = 0; + + iecm_get_mb_vec_id(adapter); + adapter->dev_ops.reg_ops.mb_intr_reg_init(adapter); + adapter->irq_mb_handler = iecm_mb_intr_clean; + err = iecm_mb_intr_req_irq(adapter); + return err; } /** @@ -95,7 +139,12 @@ int iecm_mb_intr_init(struct iecm_adapter *adapter) */ void iecm_intr_distribute(struct iecm_adapter *adapter) { - /* stub */ + struct iecm_vport *vport; + + vport = adapter->vports[0]; + if (adapter->num_msix_entries != adapter->num_req_msix) + 
vport->num_q_vectors = adapter->num_msix_entries - + IECM_MAX_NONQ_VEC - IECM_MIN_RDMA_VEC; } /** diff --git a/drivers/net/ethernet/intel/iecm/iecm_txrx.c b/drivers/net/ethernet/intel/iecm/iecm_txrx.c index 0d684adc15e5..da3065a87c2c 100644 --- a/drivers/net/ethernet/intel/iecm/iecm_txrx.c +++ b/drivers/net/ethernet/intel/iecm/iecm_txrx.c @@ -1002,7 +1002,16 @@ iecm_vport_intr_clean_queues(int __always_unused irq, void *data) */ static void iecm_vport_intr_napi_dis_all(struct iecm_vport *vport) { - /* stub */ + int q_idx; + + if (!vport->netdev) + return; + + for (q_idx = 0; q_idx < vport->num_q_vectors; q_idx++) { + struct iecm_q_vector *q_vector = &vport->q_vectors[q_idx]; + + napi_disable(&q_vector->napi); + } } /** @@ -1013,7 +1022,44 @@ static void iecm_vport_intr_napi_dis_all(struct iecm_vport *vport) */ static void iecm_vport_intr_rel(struct iecm_vport *vport) { - /* stub */ + int i, j, v_idx; + + if (!vport->netdev) + return; + + for (v_idx = 0; v_idx < vport->num_q_vectors; v_idx++) { + struct iecm_q_vector *q_vector = &vport->q_vectors[v_idx]; + + if (q_vector) + netif_napi_del(&q_vector->napi); + } + + /* Clean up the mapping of queues to vectors */ + for (i = 0; i < vport->num_rxq_grp; i++) { + struct iecm_rxq_group *rx_qgrp = &vport->rxq_grps[i]; + + if (iecm_is_queue_model_split(vport->rxq_model)) { + for (j = 0; j < rx_qgrp->splitq.num_rxq_sets; j++) + rx_qgrp->splitq.rxq_sets[j].rxq.q_vector = + NULL; + } else { + for (j = 0; j < rx_qgrp->singleq.num_rxq; j++) + rx_qgrp->singleq.rxqs[j].q_vector = NULL; + } + } + + if (iecm_is_queue_model_split(vport->txq_model)) { + for (i = 0; i < vport->num_txq_grp; i++) + vport->txq_grps[i].complq->q_vector = NULL; + } else { + for (i = 0; i < vport->num_txq_grp; i++) { + for (j = 0; j < vport->txq_grps[i].num_txq; j++) + vport->txq_grps[i].txqs[j].q_vector = NULL; + } + } + + kfree(vport->q_vectors); + vport->q_vectors = NULL; } /** @@ -1022,7 +1068,25 @@ static void iecm_vport_intr_rel(struct iecm_vport *vport) */ static void iecm_vport_intr_rel_irq(struct iecm_vport *vport) { - /* stub */ + struct iecm_adapter *adapter = vport->adapter; + int vector; + + for (vector = 0; vector < vport->num_q_vectors; vector++) { + struct iecm_q_vector *q_vector = &vport->q_vectors[vector]; + int irq_num, vidx; + + /* free only the IRQs that were actually requested */ + if (!q_vector) + continue; + + vidx = vector + vport->q_vector_base; + irq_num = adapter->msix_entries[vidx].vector; + + /* clear the affinity_mask in the IRQ descriptor */ + irq_set_affinity_hint(irq_num, NULL); + synchronize_irq(irq_num); + free_irq(irq_num, q_vector); + } } /** @@ -1031,7 +1095,13 @@ static void iecm_vport_intr_rel_irq(struct iecm_vport *vport) */ void iecm_vport_intr_dis_irq_all(struct iecm_vport *vport) { - /* stub */ + struct iecm_q_vector *q_vector = vport->q_vectors; + struct iecm_hw *hw = &vport->adapter->hw; + int q_idx; + + for (q_idx = 0; q_idx < vport->num_q_vectors; q_idx++) + writel_relaxed(0, (u8 *)(hw->hw_addr + + q_vector[q_idx].intr_reg.dyn_ctl)); } /** @@ -1043,12 +1113,42 @@ void iecm_vport_intr_dis_irq_all(struct iecm_vport *vport) static u32 iecm_vport_intr_buildreg_itr(struct iecm_q_vector *q_vector, const int type, u16 itr) { - /* stub */ + u32 itr_val; + + itr &= IECM_ITR_MASK; + /* Don't clear PBA because that can cause lost interrupts that + * came in while we were cleaning/polling + */ + itr_val = q_vector->intr_reg.dyn_ctl_intena_m | + (type << q_vector->intr_reg.dyn_ctl_itridx_s) | + (itr << (q_vector->intr_reg.dyn_ctl_intrvl_s - 1)); + 
+ return itr_val; } static inline unsigned int iecm_itr_divisor(struct iecm_q_vector *q_vector) { - /* stub */ + unsigned int divisor; + + switch (q_vector->vport->adapter->link_speed) { + case VIRTCHNL_LINK_SPEED_40GB: + divisor = IECM_ITR_ADAPTIVE_MIN_INC * 1024; + break; + case VIRTCHNL_LINK_SPEED_25GB: + case VIRTCHNL_LINK_SPEED_20GB: + divisor = IECM_ITR_ADAPTIVE_MIN_INC * 512; + break; + default: + case VIRTCHNL_LINK_SPEED_10GB: + divisor = IECM_ITR_ADAPTIVE_MIN_INC * 256; + break; + case VIRTCHNL_LINK_SPEED_1GB: + case VIRTCHNL_LINK_SPEED_100MB: + divisor = IECM_ITR_ADAPTIVE_MIN_INC * 32; + break; + } + + return divisor; } /** @@ -1069,7 +1169,206 @@ static void iecm_vport_intr_set_new_itr(struct iecm_q_vector *q_vector, struct iecm_itr *itr, enum virtchnl_queue_type q_type) { - /* stub */ + unsigned int avg_wire_size, packets = 0, bytes = 0, new_itr; + unsigned long next_update = jiffies; + + /* If we don't have any queues just leave ourselves set for maximum + * possible latency so we take ourselves out of the equation. + */ + if (!IECM_ITR_IS_DYNAMIC(itr->target_itr)) + return; + + /* For Rx we want to push the delay up and default to low latency. + * for Tx we want to pull the delay down and default to high latency. + */ + new_itr = q_type == VIRTCHNL_QUEUE_TYPE_RX ? + IECM_ITR_ADAPTIVE_MIN_USECS | IECM_ITR_ADAPTIVE_LATENCY : + IECM_ITR_ADAPTIVE_MAX_USECS | IECM_ITR_ADAPTIVE_LATENCY; + + /* If we didn't update within up to 1 - 2 jiffies we can assume + * that either packets are coming in so slow there hasn't been + * any work, or that there is so much work that NAPI is dealing + * with interrupt moderation and we don't need to do anything. + */ + if (time_after(next_update, itr->next_update)) + goto clear_counts; + + /* If itr_countdown is set it means we programmed an ITR within + * the last 4 interrupt cycles. This has a side effect of us + * potentially firing an early interrupt. In order to work around + * this we need to throw out any data received for a few + * interrupts following the update. + */ + if (q_vector->itr_countdown) { + new_itr = itr->target_itr; + goto clear_counts; + } + + if (q_type == VIRTCHNL_QUEUE_TYPE_TX) { + packets = itr->stats.tx.packets; + bytes = itr->stats.tx.bytes; + } + + if (q_type == VIRTCHNL_QUEUE_TYPE_RX) { + packets = itr->stats.rx.packets; + bytes = itr->stats.rx.bytes; + + /* If there are 1 to 4 RX packets and bytes are less than + * 9000 assume insufficient data to use bulk rate limiting + * approach unless Tx is already in bulk rate limiting. We + * are likely latency driven. + */ + if (packets && packets < 4 && bytes < 9000 && + (q_vector->tx[0]->itr.target_itr & + IECM_ITR_ADAPTIVE_LATENCY)) { + new_itr = IECM_ITR_ADAPTIVE_LATENCY; + goto adjust_by_size; + } + } else if (packets < 4) { + /* If we have Tx and Rx ITR maxed and Tx ITR is running in + * bulk mode and we are receiving 4 or fewer packets just + * reset the ITR_ADAPTIVE_LATENCY bit for latency mode so + * that the Rx can relax. + */ + if (itr->target_itr == IECM_ITR_ADAPTIVE_MAX_USECS && + ((q_vector->rx[0]->itr.target_itr & IECM_ITR_MASK) == + IECM_ITR_ADAPTIVE_MAX_USECS)) + goto clear_counts; + } else if (packets > 32) { + /* If we have processed over 32 packets in a single interrupt + * for Tx assume we need to switch over to "bulk" mode. + */ + itr->target_itr &= ~IECM_ITR_ADAPTIVE_LATENCY; + } + + /* We have no packets to actually measure against. 
This means + * either one of the other queues on this vector is active or + * we are a Tx queue doing TSO with too high of an interrupt rate. + * + * Between 4 and 56 we can assume that our current interrupt delay + * is only slightly too low. As such we should increase it by a small + * fixed amount. + */ + if (packets < 56) { + new_itr = itr->target_itr + IECM_ITR_ADAPTIVE_MIN_INC; + if ((new_itr & IECM_ITR_MASK) > IECM_ITR_ADAPTIVE_MAX_USECS) { + new_itr &= IECM_ITR_ADAPTIVE_LATENCY; + new_itr += IECM_ITR_ADAPTIVE_MAX_USECS; + } + goto clear_counts; + } + + if (packets <= 256) { + new_itr = min(q_vector->tx[0]->itr.current_itr, + q_vector->rx[0]->itr.current_itr); + new_itr &= IECM_ITR_MASK; + + /* Between 56 and 112 is our "goldilocks" zone where we are + * working out "just right". Just report that our current + * ITR is good for us. + */ + if (packets <= 112) + goto clear_counts; + + /* If packet count is 128 or greater we are likely looking + * at a slight overrun of the delay we want. Try halving + * our delay to see if that will cut the number of packets + * in half per interrupt. + */ + new_itr /= 2; + new_itr &= IECM_ITR_MASK; + if (new_itr < IECM_ITR_ADAPTIVE_MIN_USECS) + new_itr = IECM_ITR_ADAPTIVE_MIN_USECS; + + goto clear_counts; + } + + /* The paths below assume we are dealing with a bulk ITR since + * number of packets is greater than 256. We are just going to have + * to compute a value and try to bring the count under control, + * though for smaller packet sizes there isn't much we can do as + * NAPI polling will likely be kicking in sooner rather than later. + */ + new_itr = IECM_ITR_ADAPTIVE_BULK; + +adjust_by_size: + /* If packet counts are 256 or greater we can assume we have a gross + * overestimation of what the rate should be. Instead of trying to fine + * tune it just use the formula below to try and dial in an exact value + * give the current packet size of the frame. + */ + avg_wire_size = bytes / packets; + + /* The following is a crude approximation of: + * wmem_default / (size + overhead) = desired_pkts_per_int + * rate / bits_per_byte / (size + Ethernet overhead) = pkt_rate + * (desired_pkt_rate / pkt_rate) * usecs_per_sec = ITR value + * + * Assuming wmem_default is 212992 and overhead is 640 bytes per + * packet, (256 skb, 64 headroom, 320 shared info), we can reduce the + * formula down to + * + * (170 * (size + 24)) / (size + 640) = ITR + * + * We first do some math on the packet size and then finally bit shift + * by 8 after rounding up. We also have to account for PCIe link speed + * difference as ITR scales based on this. + */ + if (avg_wire_size <= 60) { + /* Start at 250k ints/sec */ + avg_wire_size = 4096; + } else if (avg_wire_size <= 380) { + /* 250K ints/sec to 60K ints/sec */ + avg_wire_size *= 40; + avg_wire_size += 1696; + } else if (avg_wire_size <= 1084) { + /* 60K ints/sec to 36K ints/sec */ + avg_wire_size *= 15; + avg_wire_size += 11452; + } else if (avg_wire_size <= 1980) { + /* 36K ints/sec to 30K ints/sec */ + avg_wire_size *= 5; + avg_wire_size += 22420; + } else { + /* plateau at a limit of 30K ints/sec */ + avg_wire_size = 32256; + } + + /* If we are in low latency mode halve our delay which doubles the + * rate to somewhere between 100K to 16K ints/sec + */ + if (new_itr & IECM_ITR_ADAPTIVE_LATENCY) + avg_wire_size /= 2; + + /* Resultant value is 256 times larger than it needs to be. This + * gives us room to adjust the value as needed to either increase + * or decrease the value based on link speeds of 10G, 2.5G, 1G, etc. 
+ * + * Use addition as we have already recorded the new latency flag + * for the ITR value. + */ + new_itr += DIV_ROUND_UP(avg_wire_size, iecm_itr_divisor(q_vector)) * + IECM_ITR_ADAPTIVE_MIN_INC; + + if ((new_itr & IECM_ITR_MASK) > IECM_ITR_ADAPTIVE_MAX_USECS) { + new_itr &= IECM_ITR_ADAPTIVE_LATENCY; + new_itr += IECM_ITR_ADAPTIVE_MAX_USECS; + } + +clear_counts: + /* write back value */ + itr->target_itr = new_itr; + + /* next update should occur within next jiffy */ + itr->next_update = next_update + 1; + + if (q_type == VIRTCHNL_QUEUE_TYPE_RX) { + itr->stats.rx.bytes = 0; + itr->stats.rx.packets = 0; + } else if (q_type == VIRTCHNL_QUEUE_TYPE_TX) { + itr->stats.tx.bytes = 0; + itr->stats.tx.packets = 0; + } } /** @@ -1078,7 +1377,59 @@ static void iecm_vport_intr_set_new_itr(struct iecm_q_vector *q_vector, */ void iecm_vport_intr_update_itr_ena_irq(struct iecm_q_vector *q_vector) { - /* stub */ + struct iecm_hw *hw = &q_vector->vport->adapter->hw; + struct iecm_itr *tx_itr = &q_vector->tx[0]->itr; + struct iecm_itr *rx_itr = &q_vector->rx[0]->itr; + u32 intval; + + /* These will do nothing if dynamic updates are not enabled */ + iecm_vport_intr_set_new_itr(q_vector, tx_itr, q_vector->tx[0]->q_type); + iecm_vport_intr_set_new_itr(q_vector, rx_itr, q_vector->rx[0]->q_type); + + /* This block of logic allows us to get away with only updating + * one ITR value with each interrupt. The idea is to perform a + * pseudo-lazy update with the following criteria. + * + * 1. Rx is given higher priority than Tx if both are in same state + * 2. If we must reduce an ITR that is given highest priority. + * 3. We then give priority to increasing ITR based on amount. + */ + if (rx_itr->target_itr < rx_itr->current_itr) { + /* Rx ITR needs to be reduced, this is highest priority */ + intval = iecm_vport_intr_buildreg_itr(q_vector, + rx_itr->itr_idx, + rx_itr->target_itr); + rx_itr->current_itr = rx_itr->target_itr; + q_vector->itr_countdown = ITR_COUNTDOWN_START; + } else if ((tx_itr->target_itr < tx_itr->current_itr) || + ((rx_itr->target_itr - rx_itr->current_itr) < + (tx_itr->target_itr - tx_itr->current_itr))) { + /* Tx ITR needs to be reduced, this is second priority + * Tx ITR needs to be increased more than Rx, fourth priority + */ + intval = iecm_vport_intr_buildreg_itr(q_vector, + tx_itr->itr_idx, + tx_itr->target_itr); + tx_itr->current_itr = tx_itr->target_itr; + q_vector->itr_countdown = ITR_COUNTDOWN_START; + } else if (rx_itr->current_itr != rx_itr->target_itr) { + /* Rx ITR needs to be increased, third priority */ + intval = iecm_vport_intr_buildreg_itr(q_vector, + rx_itr->itr_idx, + rx_itr->target_itr); + rx_itr->current_itr = rx_itr->target_itr; + q_vector->itr_countdown = ITR_COUNTDOWN_START; + } else { + /* No ITR update, lowest priority */ + intval = iecm_vport_intr_buildreg_itr(q_vector, + VIRTCHNL_ITR_IDX_NO_ITR, + 0); + if (q_vector->itr_countdown) + q_vector->itr_countdown--; + } + + writel_relaxed(intval, (u8 *)(hw->hw_addr + + q_vector->intr_reg.dyn_ctl)); } /** @@ -1089,7 +1440,40 @@ void iecm_vport_intr_update_itr_ena_irq(struct iecm_q_vector *q_vector) static int iecm_vport_intr_req_irq(struct iecm_vport *vport, char *basename) { - /* stub */ + struct iecm_adapter *adapter = vport->adapter; + int vector, err, irq_num, vidx; + + for (vector = 0; vector < vport->num_q_vectors; vector++) { + struct iecm_q_vector *q_vector = &vport->q_vectors[vector]; + + vidx = vector + vport->q_vector_base; + irq_num = adapter->msix_entries[vidx].vector; + + snprintf(q_vector->name, 
sizeof(q_vector->name) - 1, + "%s-%s-%d", basename, "TxRx", vidx); + + err = request_irq(irq_num, vport->irq_q_handler, 0, + q_vector->name, q_vector); + if (err) { + netdev_err(vport->netdev, + "Request_irq failed, error: %d\n", err); + goto free_q_irqs; + } + /* assign the mask for this IRQ */ + irq_set_affinity_hint(irq_num, &q_vector->affinity_mask); + } + + return 0; + +free_q_irqs: + while (vector) { + vector--; + vidx = vector + vport->q_vector_base; + irq_num = adapter->msix_entries[vidx].vector, + free_irq(irq_num, + &vport->q_vectors[vector]); + } + return err; } /** @@ -1098,7 +1482,14 @@ iecm_vport_intr_req_irq(struct iecm_vport *vport, char *basename) */ void iecm_vport_intr_ena_irq_all(struct iecm_vport *vport) { - /* stub */ + int q_idx; + + for (q_idx = 0; q_idx < vport->num_q_vectors; q_idx++) { + struct iecm_q_vector *q_vector = &vport->q_vectors[q_idx]; + + if (q_vector->num_txq || q_vector->num_rxq) + iecm_vport_intr_update_itr_ena_irq(q_vector); + } } /** @@ -1107,7 +1498,10 @@ void iecm_vport_intr_ena_irq_all(struct iecm_vport *vport) */ void iecm_vport_intr_deinit(struct iecm_vport *vport) { - /* stub */ + iecm_vport_intr_napi_dis_all(vport); + iecm_vport_intr_dis_irq_all(vport); + iecm_vport_intr_rel_irq(vport); + iecm_vport_intr_rel(vport); } /** @@ -1117,7 +1511,16 @@ void iecm_vport_intr_deinit(struct iecm_vport *vport) static void iecm_vport_intr_napi_ena_all(struct iecm_vport *vport) { - /* stub */ + int q_idx; + + if (!vport->netdev) + return; + + for (q_idx = 0; q_idx < vport->num_q_vectors; q_idx++) { + struct iecm_q_vector *q_vector = &vport->q_vectors[q_idx]; + + napi_enable(&q_vector->napi); + } } /** @@ -1166,7 +1569,65 @@ int iecm_vport_splitq_napi_poll(struct napi_struct *napi, int budget) */ void iecm_vport_intr_map_vector_to_qs(struct iecm_vport *vport) { - /* stub */ + int i, j, k = 0, num_rxq, num_txq; + struct iecm_rxq_group *rx_qgrp; + struct iecm_txq_group *tx_qgrp; + struct iecm_queue *q; + int q_index; + + for (i = 0; i < vport->num_rxq_grp; i++) { + rx_qgrp = &vport->rxq_grps[i]; + if (iecm_is_queue_model_split(vport->rxq_model)) + num_rxq = rx_qgrp->splitq.num_rxq_sets; + else + num_rxq = rx_qgrp->singleq.num_rxq; + + for (j = 0; j < num_rxq; j++) { + if (k >= vport->num_q_vectors) + k = k % vport->num_q_vectors; + + if (iecm_is_queue_model_split(vport->rxq_model)) + q = &rx_qgrp->splitq.rxq_sets[j].rxq; + else + q = &rx_qgrp->singleq.rxqs[j]; + q->q_vector = &vport->q_vectors[k]; + q_index = q->q_vector->num_rxq; + q->q_vector->rx[q_index] = q; + q->q_vector->num_rxq++; + + k++; + } + } + k = 0; + for (i = 0; i < vport->num_txq_grp; i++) { + tx_qgrp = &vport->txq_grps[i]; + num_txq = tx_qgrp->num_txq; + + if (iecm_is_queue_model_split(vport->txq_model)) { + if (k >= vport->num_q_vectors) + k = k % vport->num_q_vectors; + + q = tx_qgrp->complq; + q->q_vector = &vport->q_vectors[k]; + q_index = q->q_vector->num_txq; + q->q_vector->tx[q_index] = q; + q->q_vector->num_txq++; + k++; + } else { + for (j = 0; j < num_txq; j++) { + if (k >= vport->num_q_vectors) + k = k % vport->num_q_vectors; + + q = &tx_qgrp->txqs[j]; + q->q_vector = &vport->q_vectors[k]; + q_index = q->q_vector->num_txq; + q->q_vector->tx[q_index] = q; + q->q_vector->num_txq++; + + k++; + } + } + } } /** @@ -1177,7 +1638,38 @@ void iecm_vport_intr_map_vector_to_qs(struct iecm_vport *vport) */ static int iecm_vport_intr_init_vec_idx(struct iecm_vport *vport) { - /* stub */ + struct iecm_adapter *adapter = vport->adapter; + struct iecm_q_vector *q_vector; + int i; + + if 
(adapter->req_vec_chunks) { + struct virtchnl_vector_chunks *vchunks; + struct virtchnl_alloc_vectors *ac; + /* We may never deal with more that 256 same type of vectors */ +#define IECM_MAX_VECIDS 256 + u16 vecids[IECM_MAX_VECIDS]; + int num_ids; + + ac = adapter->req_vec_chunks; + vchunks = &ac->vchunks; + + num_ids = iecm_vport_get_vec_ids(vecids, IECM_MAX_VECIDS, + vchunks); + if (num_ids != adapter->num_msix_entries) + return -EFAULT; + + for (i = 0; i < vport->num_q_vectors; i++) { + q_vector = &vport->q_vectors[i]; + q_vector->v_idx = vecids[i + vport->q_vector_base]; + } + } else { + for (i = 0; i < vport->num_q_vectors; i++) { + q_vector = &vport->q_vectors[i]; + q_vector->v_idx = i + vport->q_vector_base; + } + } + + return 0; } /** @@ -1189,7 +1681,65 @@ static int iecm_vport_intr_init_vec_idx(struct iecm_vport *vport) */ int iecm_vport_intr_alloc(struct iecm_vport *vport) { - /* stub */ + int txqs_per_vector, rxqs_per_vector; + struct iecm_q_vector *q_vector; + int v_idx, err = 0; + + vport->q_vectors = kcalloc(vport->num_q_vectors, + sizeof(struct iecm_q_vector), GFP_KERNEL); + + if (!vport->q_vectors) + return -ENOMEM; + + txqs_per_vector = DIV_ROUND_UP(vport->num_txq, vport->num_q_vectors); + rxqs_per_vector = DIV_ROUND_UP(vport->num_rxq, vport->num_q_vectors); + + for (v_idx = 0; v_idx < vport->num_q_vectors; v_idx++) { + q_vector = &vport->q_vectors[v_idx]; + q_vector->vport = vport; + q_vector->itr_countdown = ITR_COUNTDOWN_START; + + q_vector->tx = kcalloc(txqs_per_vector, + sizeof(struct iecm_queue *), + GFP_KERNEL); + if (!q_vector->tx) { + err = -ENOMEM; + goto free_vport_q_vec; + } + + q_vector->rx = kcalloc(rxqs_per_vector, + sizeof(struct iecm_queue *), + GFP_KERNEL); + if (!q_vector->rx) { + err = -ENOMEM; + goto free_vport_q_vec_tx; + } + + /* only set affinity_mask if the CPU is online */ + if (cpu_online(v_idx)) + cpumask_set_cpu(v_idx, &q_vector->affinity_mask); + + /* Register the NAPI handler */ + if (vport->netdev) { + if (iecm_is_queue_model_split(vport->txq_model)) + netif_napi_add(vport->netdev, &q_vector->napi, + iecm_vport_splitq_napi_poll, + NAPI_POLL_WEIGHT); + else + netif_napi_add(vport->netdev, &q_vector->napi, + iecm_vport_singleq_napi_poll, + NAPI_POLL_WEIGHT); + } + } + + err = iecm_vport_intr_init_vec_idx(vport); + goto handle_err; +free_vport_q_vec_tx: + kfree(q_vector->tx); +free_vport_q_vec: + kfree(vport->q_vectors); +handle_err: + return err; } /** @@ -1200,7 +1750,31 @@ int iecm_vport_intr_alloc(struct iecm_vport *vport) */ int iecm_vport_intr_init(struct iecm_vport *vport) { - /* stub */ + char int_name[IECM_INT_NAME_STR_LEN]; + int err = 0; + + if (iecm_vport_intr_alloc(vport)) + return -ENOMEM; + + iecm_vport_intr_map_vector_to_qs(vport); + iecm_vport_intr_napi_ena_all(vport); + + vport->adapter->dev_ops.reg_ops.intr_reg_init(vport); + + snprintf(int_name, sizeof(int_name) - 1, "%s-%s", + dev_driver_string(&vport->adapter->pdev->dev), + vport->netdev->name); + + err = iecm_vport_intr_req_irq(vport, int_name); + if (err) + goto unroll_vectors_alloc; + + iecm_vport_intr_ena_irq_all(vport); + goto handle_err; +unroll_vectors_alloc: + iecm_vport_intr_rel(vport); +handle_err: + return err; } EXPORT_SYMBOL(iecm_vport_calc_num_q_vec); diff --git a/drivers/net/ethernet/intel/iecm/iecm_virtchnl.c b/drivers/net/ethernet/intel/iecm/iecm_virtchnl.c index 57862fbfdb9b..b1775cc38924 100644 --- a/drivers/net/ethernet/intel/iecm/iecm_virtchnl.c +++ b/drivers/net/ethernet/intel/iecm/iecm_virtchnl.c @@ -1874,7 +1874,29 @@ int iecm_vport_get_vec_ids(u16 
*vecids, int num_vecids, struct virtchnl_vector_chunks *chunks) { - /* stub */ + int num_chunks = chunks->num_vector_chunks; + struct virtchnl_vector_chunk *chunk; + int num_vecid_filled = 0; + int start_vecid; + int num_vec; + int i, j; + + for (j = 0; j < num_chunks; j++) { + chunk = &chunks->num_vchunk[j]; + num_vec = chunk->num_vectors; + start_vecid = chunk->start_vector_id; + for (i = 0; i < num_vec; i++) { + if ((num_vecid_filled + i) < num_vecids) { + vecids[num_vecid_filled + i] = start_vecid; + start_vecid++; + } else { + break; + } + } + num_vecid_filled = num_vecid_filled + i; + } + + return num_vecid_filled; } /** From patchwork Thu Jun 18 05:13:38 2020 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Jeff Kirsher X-Patchwork-Id: 217625 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org X-Spam-Level: X-Spam-Status: No, score=-6.9 required=3.0 tests=HEADER_FROM_DIFFERENT_DOMAINS, INCLUDES_PATCH, MAILING_LIST_MULTI, SIGNED_OFF_BY, SPF_HELO_NONE, SPF_PASS, UNWANTED_LANGUAGE_BODY,URIBL_BLOCKED,USER_AGENT_GIT autolearn=ham autolearn_force=no version=3.4.0 Received: from mail.kernel.org (mail.kernel.org [198.145.29.99]) by smtp.lore.kernel.org (Postfix) with ESMTP id 26DE5C433E2 for ; Thu, 18 Jun 2020 05:14:24 +0000 (UTC) Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by mail.kernel.org (Postfix) with ESMTP id F1ACA21852 for ; Thu, 18 Jun 2020 05:14:23 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S1727111AbgFRFOS (ORCPT ); Thu, 18 Jun 2020 01:14:18 -0400 Received: from mga03.intel.com ([134.134.136.65]:25346 "EHLO mga03.intel.com" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S1726995AbgFRFOD (ORCPT ); Thu, 18 Jun 2020 01:14:03 -0400 IronPort-SDR: tiShxCGwj4xA+6yzjf03h3OFvTrPq5hlMg1wStdMkAy/XNMPneIWadt8bycAoAQnJ3uAqDS6lv LQWVlvGLqNXg== X-IronPort-AV: E=McAfee;i="6000,8403,9655"; a="142378055" X-IronPort-AV: E=Sophos;i="5.73,525,1583222400"; d="scan'208";a="142378055" X-Amp-Result: SKIPPED(no attachment in message) X-Amp-File-Uploaded: False Received: from fmsmga008.fm.intel.com ([10.253.24.58]) by orsmga103.jf.intel.com with ESMTP/TLS/ECDHE-RSA-AES256-GCM-SHA384; 17 Jun 2020 22:13:52 -0700 IronPort-SDR: PcPIKMrkdFGEYSc8z8NwM7GjPAF1mvLNL7WEi06tR7b9cUROhAizDUYNVi00e1RnJ7b4UBmlsc tetH14cVaL7g== X-ExtLoop1: 1 X-IronPort-AV: E=Sophos;i="5.73,525,1583222400"; d="scan'208";a="263495603" Received: from jtkirshe-desk1.jf.intel.com ([134.134.177.86]) by fmsmga008.fm.intel.com with ESMTP; 17 Jun 2020 22:13:51 -0700 From: Jeff Kirsher To: davem@davemloft.net Cc: Alice Michael , netdev@vger.kernel.org, nhorman@redhat.com, sassmann@redhat.com, Alan Brady , Phani Burra , Joshua Hay , Madhu Chittim , Pavan Kumar Linga , Donald Skidmore , Jesse Brandeburg , Sridhar Samudrala , Jeff Kirsher Subject: [net-next 09/15] iecm: Init and allocate vport Date: Wed, 17 Jun 2020 22:13:38 -0700 Message-Id: <20200618051344.516587-10-jeffrey.t.kirsher@intel.com> X-Mailer: git-send-email 2.26.2 In-Reply-To: <20200618051344.516587-1-jeffrey.t.kirsher@intel.com> References: <20200618051344.516587-1-jeffrey.t.kirsher@intel.com> MIME-Version: 1.0 Sender: netdev-owner@vger.kernel.org Precedence: bulk List-ID: X-Mailing-List: netdev@vger.kernel.org From: Alice Michael Initialize vport and allocate queue resources. 
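As a worked example of the descriptor-ring sizing done in iecm_tx_desc_alloc() below, the ring size is computed from the descriptor count and then rounded up to a 4 KiB multiple before the DMA-coherent allocation. The descriptor size and queue depth used here are assumed values for illustration only.

	#include <stdio.h>

	#define RING_ALIGN	4096u
	#define ALIGN_UP(x, a)	(((x) + (a) - 1) & ~((a) - 1))

	int main(void)
	{
		unsigned int desc_count = 250;	/* hypothetical queue depth */
		unsigned int desc_size = 16;	/* assumed bytes per Tx descriptor */
		unsigned int size = desc_count * desc_size;

		printf("raw ring size:     %u bytes\n", size);		/* 4000 */
		printf("aligned ring size: %u bytes\n",
		       ALIGN_UP(size, RING_ALIGN));			/* 4096 */
		return 0;
	}

The driver performs the same rounding before handing the size to dmam_alloc_coherent() for the ring memory.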
Signed-off-by: Alice Michael Signed-off-by: Alan Brady Signed-off-by: Phani Burra Signed-off-by: Joshua Hay Signed-off-by: Madhu Chittim Signed-off-by: Pavan Kumar Linga Reviewed-by: Donald Skidmore Reviewed-by: Jesse Brandeburg Reviewed-by: Sridhar Samudrala Signed-off-by: Jeff Kirsher --- drivers/net/ethernet/intel/iecm/iecm_lib.c | 87 +- drivers/net/ethernet/intel/iecm/iecm_txrx.c | 797 +++++++++++++++++- .../net/ethernet/intel/iecm/iecm_virtchnl.c | 37 +- 3 files changed, 890 insertions(+), 31 deletions(-) diff --git a/drivers/net/ethernet/intel/iecm/iecm_lib.c b/drivers/net/ethernet/intel/iecm/iecm_lib.c index a4fd04fd0500..d855d6238740 100644 --- a/drivers/net/ethernet/intel/iecm/iecm_lib.c +++ b/drivers/net/ethernet/intel/iecm/iecm_lib.c @@ -443,7 +443,15 @@ static void iecm_vport_rel_all(struct iecm_adapter *adapter) */ void iecm_vport_set_hsplit(struct iecm_vport *vport, struct bpf_prog *prog) { - /* stub */ + if (prog) { + vport->rx_hsplit_en = IECM_RX_NO_HDR_SPLIT; + return; + } + if (iecm_is_cap_ena(vport->adapter, VIRTCHNL_CAP_HEADER_SPLIT) && + iecm_is_queue_model_split(vport->rxq_model)) + vport->rx_hsplit_en = IECM_RX_HDR_SPLIT; + else + vport->rx_hsplit_en = IECM_RX_NO_HDR_SPLIT; } /** @@ -531,7 +539,12 @@ static void iecm_service_task(struct work_struct *work) */ static void iecm_up_complete(struct iecm_vport *vport) { - /* stub */ + netif_set_real_num_rx_queues(vport->netdev, vport->num_txq); + netif_set_real_num_tx_queues(vport->netdev, vport->num_rxq); + netif_carrier_on(vport->netdev); + netif_tx_start_all_queues(vport->netdev); + + vport->adapter->state = __IECM_UP; } /** @@ -540,7 +553,71 @@ static void iecm_up_complete(struct iecm_vport *vport) */ static int iecm_vport_open(struct iecm_vport *vport) { - /* stub */ + struct iecm_adapter *adapter = vport->adapter; + int err; + + if (vport->adapter->state != __IECM_DOWN) + return -EBUSY; + + /* we do not allow interface up just yet */ + netif_carrier_off(vport->netdev); + + if (adapter->dev_ops.vc_ops.enable_vport) { + err = adapter->dev_ops.vc_ops.enable_vport(vport); + if (err) + return -EAGAIN; + } + + if (iecm_vport_queues_alloc(vport)) { + err = -ENOMEM; + goto unroll_queues_alloc; + } + + err = iecm_vport_intr_init(vport); + if (err) + goto unroll_intr_init; + + err = vport->adapter->dev_ops.vc_ops.config_queues(vport); + if (err) + goto unroll_config_queues; + err = vport->adapter->dev_ops.vc_ops.irq_map_unmap(vport, true); + if (err) { + dev_err(&vport->adapter->pdev->dev, + "Call to irq_map_unmap returned %d\n", err); + goto unroll_config_queues; + } + err = vport->adapter->dev_ops.vc_ops.enable_queues(vport); + if (err) + goto unroll_enable_queues; + + err = vport->adapter->dev_ops.vc_ops.get_ptype(vport); + if (err) + goto unroll_get_ptype; + + if (adapter->rss_data.rss_lut) + err = iecm_config_rss(vport); + else + err = iecm_init_rss(vport); + if (err) + goto unroll_init_rss; + iecm_up_complete(vport); + + netif_info(vport->adapter, hw, vport->netdev, "%s\n", __func__); + + return 0; +unroll_init_rss: +unroll_get_ptype: + vport->adapter->dev_ops.vc_ops.disable_queues(vport); +unroll_enable_queues: + vport->adapter->dev_ops.vc_ops.irq_map_unmap(vport, false); +unroll_config_queues: + iecm_vport_intr_deinit(vport); +unroll_intr_init: + iecm_vport_queues_rel(vport); +unroll_queues_alloc: + adapter->dev_ops.vc_ops.disable_vport(vport); + + return err; } /** @@ -861,7 +938,9 @@ EXPORT_SYMBOL(iecm_shutdown); */ static int iecm_open(struct net_device *netdev) { - /* stub */ + struct iecm_netdev_priv *np = 
netdev_priv(netdev); + + return iecm_vport_open(np->vport); } /** diff --git a/drivers/net/ethernet/intel/iecm/iecm_txrx.c b/drivers/net/ethernet/intel/iecm/iecm_txrx.c index da3065a87c2c..16fea9ad6545 100644 --- a/drivers/net/ethernet/intel/iecm/iecm_txrx.c +++ b/drivers/net/ethernet/intel/iecm/iecm_txrx.c @@ -82,7 +82,37 @@ void iecm_tx_desc_rel_all(struct iecm_vport *vport) */ static enum iecm_status iecm_tx_buf_alloc_all(struct iecm_queue *tx_q) { - /* stub */ + int buf_size; + int i = 0; + + /* Allocate book keeping buffers only. Buffers to be supplied to HW + * are allocated by kernel network stack and received as part of skb + */ + buf_size = sizeof(struct iecm_tx_buf) * tx_q->desc_count; + tx_q->tx_buf = kzalloc(buf_size, GFP_KERNEL); + if (!tx_q->tx_buf) + return IECM_ERR_NO_MEMORY; + + /* Initialize Tx buf stack for out-of-order completions if + * flow scheduling offload is enabled + */ + tx_q->buf_stack.bufs = + kcalloc(tx_q->desc_count, sizeof(struct iecm_tx_buf *), + GFP_KERNEL); + if (!tx_q->buf_stack.bufs) + return IECM_ERR_NO_MEMORY; + + for (i = 0; i < tx_q->desc_count; i++) { + tx_q->buf_stack.bufs[i] = kzalloc(sizeof(struct iecm_tx_buf), + GFP_KERNEL); + if (!tx_q->buf_stack.bufs[i]) + return IECM_ERR_NO_MEMORY; + } + + tx_q->buf_stack.size = tx_q->desc_count; + tx_q->buf_stack.top = tx_q->desc_count; + + return 0; } /** @@ -92,7 +122,40 @@ static enum iecm_status iecm_tx_buf_alloc_all(struct iecm_queue *tx_q) */ static enum iecm_status iecm_tx_desc_alloc(struct iecm_queue *tx_q, bool bufq) { - /* stub */ + struct device *dev = tx_q->dev; + enum iecm_status err = 0; + + if (bufq) { + err = iecm_tx_buf_alloc_all(tx_q); + if (err) + goto err_alloc; + tx_q->size = tx_q->desc_count * + sizeof(struct iecm_base_tx_desc); + } else { + tx_q->size = tx_q->desc_count * + sizeof(struct iecm_splitq_tx_compl_desc); + } + + /* Allocate descriptors also round up to nearest 4K */ + tx_q->size = ALIGN(tx_q->size, 4096); + tx_q->desc_ring = dmam_alloc_coherent(dev, tx_q->size, &tx_q->dma, + GFP_KERNEL); + if (!tx_q->desc_ring) { + dev_info(dev, "Unable to allocate memory for the Tx descriptor ring, size=%d\n", + tx_q->size); + err = IECM_ERR_NO_MEMORY; + goto err_alloc; + } + + tx_q->next_to_alloc = 0; + tx_q->next_to_use = 0; + tx_q->next_to_clean = 0; + set_bit(__IECM_Q_GEN_CHK, tx_q->flags); + +err_alloc: + if (err) + iecm_tx_desc_rel(tx_q, bufq); + return err; } /** @@ -101,7 +164,41 @@ static enum iecm_status iecm_tx_desc_alloc(struct iecm_queue *tx_q, bool bufq) */ static enum iecm_status iecm_tx_desc_alloc_all(struct iecm_vport *vport) { - /* stub */ + struct pci_dev *pdev = vport->adapter->pdev; + enum iecm_status err = 0; + int i, j; + + /* Setup buffer queues. 
In single queue model buffer queues and + * completion queues will be same + */ + for (i = 0; i < vport->num_txq_grp; i++) { + for (j = 0; j < vport->txq_grps[i].num_txq; j++) { + err = iecm_tx_desc_alloc(&vport->txq_grps[i].txqs[j], + true); + if (err) { + dev_err(&pdev->dev, + "Allocation for Tx Queue %u failed\n", + i); + goto err_out; + } + } + + if (iecm_is_queue_model_split(vport->txq_model)) { + /* Setup completion queues */ + err = iecm_tx_desc_alloc(vport->txq_grps[i].complq, + false); + if (err) { + dev_err(&pdev->dev, + "Allocation for Tx Completion Queue %u failed\n", + i); + goto err_out; + } + } + } +err_out: + if (err) + iecm_tx_desc_rel_all(vport); + return err; } /** @@ -156,7 +253,17 @@ void iecm_rx_desc_rel_all(struct iecm_vport *vport) */ void iecm_rx_buf_hw_update(struct iecm_queue *rxq, u32 val) { - /* stub */ + /* update next to alloc since we have filled the ring */ + rxq->next_to_alloc = val; + + rxq->next_to_use = val; + /* Force memory writes to complete before letting h/w + * know there are new descriptors to fetch. (Only + * applicable for weak-ordered memory model archs, + * such as IA-64). + */ + wmb(); + writel_relaxed(val, rxq->tail); } /** @@ -169,7 +276,34 @@ void iecm_rx_buf_hw_update(struct iecm_queue *rxq, u32 val) */ bool iecm_rx_buf_hw_alloc(struct iecm_queue *rxq, struct iecm_rx_buf *buf) { - /* stub */ + struct page *page = buf->page; + dma_addr_t dma; + + /* since we are recycling buffers we should seldom need to alloc */ + if (likely(page)) + return true; + + /* alloc new page for storage */ + page = alloc_page(GFP_ATOMIC | __GFP_NOWARN); + if (unlikely(!page)) + return false; + + /* map page for use */ + dma = dma_map_page(rxq->dev, page, 0, PAGE_SIZE, DMA_FROM_DEVICE); + + /* if mapping failed free memory back to system since + * there isn't much point in holding memory we can't use + */ + if (dma_mapping_error(rxq->dev, dma)) { + __free_pages(page, 0); + return false; + } + + buf->dma = dma; + buf->page = page; + buf->page_offset = iecm_rx_offset(rxq); + + return true; } /** @@ -183,7 +317,34 @@ bool iecm_rx_buf_hw_alloc(struct iecm_queue *rxq, struct iecm_rx_buf *buf) bool iecm_rx_hdr_buf_hw_alloc(struct iecm_queue *rxq, struct iecm_rx_buf *hdr_buf) { - /* stub */ + struct page *page = hdr_buf->page; + dma_addr_t dma; + + /* since we are recycling buffers we should seldom need to alloc */ + if (likely(page)) + return true; + + /* alloc new page for storage */ + page = alloc_page(GFP_ATOMIC | __GFP_NOWARN); + if (unlikely(!page)) + return false; + + /* map page for use */ + dma = dma_map_page(rxq->dev, page, 0, PAGE_SIZE, DMA_FROM_DEVICE); + + /* if mapping failed free memory back to system since + * there isn't much point in holding memory we can't use + */ + if (dma_mapping_error(rxq->dev, dma)) { + __free_pages(page, 0); + return false; + } + + hdr_buf->dma = dma; + hdr_buf->page = page; + hdr_buf->page_offset = 0; + + return true; } /** @@ -197,7 +358,59 @@ static bool iecm_rx_buf_hw_alloc_all(struct iecm_queue *rxq, u16 cleaned_count) { - /* stub */ + struct iecm_splitq_rx_buf_desc *splitq_rx_desc = NULL; + struct iecm_rx_buf *hdr_buf = NULL; + u16 nta = rxq->next_to_alloc; + struct iecm_rx_buf *buf; + + /* do nothing if no valid netdev defined */ + if (!rxq->vport->netdev || !cleaned_count) + return false; + + splitq_rx_desc = IECM_SPLITQ_RX_BUF_DESC(rxq, nta); + + buf = &rxq->rx_buf.buf[nta]; + if (rxq->rx_hsplit_en) + hdr_buf = &rxq->rx_buf.hdr_buf[nta]; + + do { + if (rxq->rx_hsplit_en) { + if (!iecm_rx_hdr_buf_hw_alloc(rxq, hdr_buf)) 
+ break; + + splitq_rx_desc->hdr_addr = + cpu_to_le64(hdr_buf->dma + + hdr_buf->page_offset); + hdr_buf++; + } + + if (!iecm_rx_buf_hw_alloc(rxq, buf)) + break; + + /* Refresh the desc even if buffer_addrs didn't change + * because each write-back erases this info. + */ + splitq_rx_desc->pkt_addr = + cpu_to_le64(buf->dma + buf->page_offset); + splitq_rx_desc->qword0.buf_id = cpu_to_le64(nta); + + splitq_rx_desc++; + buf++; + nta++; + if (unlikely(nta == rxq->desc_count)) { + splitq_rx_desc = IECM_SPLITQ_RX_BUF_DESC(rxq, 0); + buf = rxq->rx_buf.buf; + hdr_buf = rxq->rx_buf.hdr_buf; + nta = 0; + } + + cleaned_count--; + } while (cleaned_count); + + if (rxq->next_to_alloc != nta) + iecm_rx_buf_hw_update(rxq, nta); + + return !!cleaned_count; } /** @@ -206,7 +419,44 @@ iecm_rx_buf_hw_alloc_all(struct iecm_queue *rxq, */ static enum iecm_status iecm_rx_buf_alloc_all(struct iecm_queue *rxq) { - /* stub */ + enum iecm_status err = 0; + + /* Allocate book keeping buffers */ + rxq->rx_buf.buf = kcalloc(rxq->desc_count, sizeof(struct iecm_rx_buf), + GFP_KERNEL); + if (!rxq->rx_buf.buf) { + err = IECM_ERR_NO_MEMORY; + goto rx_buf_alloc_all_out; + } + + if (rxq->rx_hsplit_en) { + rxq->rx_buf.hdr_buf = + kcalloc(rxq->desc_count, sizeof(struct iecm_rx_buf), + GFP_KERNEL); + if (!rxq->rx_buf.hdr_buf) { + err = IECM_ERR_NO_MEMORY; + goto rx_buf_alloc_all_out; + } + } else { + rxq->rx_buf.hdr_buf = NULL; + } + + /* Allocate buffers to be given to HW. Allocate one less than + * total descriptor count as RX splits 4k buffers to 2K and recycles + */ + if (iecm_is_queue_model_split(rxq->vport->rxq_model)) { + if (iecm_rx_buf_hw_alloc_all(rxq, + rxq->desc_count - 1)) + err = IECM_ERR_NO_MEMORY; + } else if (iecm_rx_singleq_buf_hw_alloc_all(rxq, + rxq->desc_count - 1)) { + err = IECM_ERR_NO_MEMORY; + } + +rx_buf_alloc_all_out: + if (err) + iecm_rx_buf_rel_all(rxq); + return err; } /** @@ -218,7 +468,50 @@ static enum iecm_status iecm_rx_buf_alloc_all(struct iecm_queue *rxq) static enum iecm_status iecm_rx_desc_alloc(struct iecm_queue *rxq, bool bufq, enum virtchnl_queue_model q_model) { - /* stub */ + struct device *dev = rxq->dev; + enum iecm_status err = 0; + + /* As both single and split descriptors are 32 byte, memory size + * will be same for all three singleq_base Rx, buf., splitq_base + * Rx. 
So pick anyone of them for size + */ + if (bufq) { + rxq->size = rxq->desc_count * + sizeof(struct iecm_splitq_rx_buf_desc); + } else { + rxq->size = rxq->desc_count * + sizeof(union iecm_rx_desc); + } + + /* Allocate descriptors and also round up to nearest 4K */ + rxq->size = ALIGN(rxq->size, 4096); + rxq->desc_ring = dmam_alloc_coherent(dev, rxq->size, + &rxq->dma, GFP_KERNEL); + if (!rxq->desc_ring) { + dev_info(dev, "Unable to allocate memory for the Rx descriptor ring, size=%d\n", + rxq->size); + err = IECM_ERR_NO_MEMORY; + return err; + } + + rxq->next_to_alloc = 0; + rxq->next_to_clean = 0; + rxq->next_to_use = 0; + set_bit(__IECM_Q_GEN_CHK, rxq->flags); + + /* Allocate buffers for a Rx queue if the q_model is single OR if it + * is a buffer queue in split queue model + */ + if (bufq || !iecm_is_queue_model_split(q_model)) { + err = iecm_rx_buf_alloc_all(rxq); + if (err) + goto err_alloc; + } + +err_alloc: + if (err) + iecm_rx_desc_rel(rxq, bufq, q_model); + return err; } /** @@ -227,7 +520,48 @@ static enum iecm_status iecm_rx_desc_alloc(struct iecm_queue *rxq, bool bufq, */ static enum iecm_status iecm_rx_desc_alloc_all(struct iecm_vport *vport) { - /* stub */ + struct device *dev = &vport->adapter->pdev->dev; + enum iecm_status err = 0; + struct iecm_queue *q; + int i, j, num_rxq; + + for (i = 0; i < vport->num_rxq_grp; i++) { + if (iecm_is_queue_model_split(vport->rxq_model)) + num_rxq = vport->rxq_grps[i].splitq.num_rxq_sets; + else + num_rxq = vport->rxq_grps[i].singleq.num_rxq; + + for (j = 0; j < num_rxq; j++) { + if (iecm_is_queue_model_split(vport->rxq_model)) + q = &vport->rxq_grps[i].splitq.rxq_sets[j].rxq; + else + q = &vport->rxq_grps[i].singleq.rxqs[j]; + err = iecm_rx_desc_alloc(q, false, vport->rxq_model); + if (err) { + dev_err(dev, "Memory allocation for Rx Queue %u failed\n", + i); + goto err_out; + } + } + + if (iecm_is_queue_model_split(vport->rxq_model)) { + for (j = 0; j < IECM_BUFQS_PER_RXQ_SET; j++) { + q = + &vport->rxq_grps[i].splitq.bufq_sets[j].bufq; + err = iecm_rx_desc_alloc(q, true, + vport->rxq_model); + if (err) { + dev_err(dev, "Memory allocation for Rx Buffer Queue %u failed\n", + i); + goto err_out; + } + } + } + } +err_out: + if (err) + iecm_rx_desc_rel_all(vport); + return err; } /** @@ -279,7 +613,26 @@ void iecm_vport_queues_rel(struct iecm_vport *vport) static enum iecm_status iecm_vport_init_fast_path_txqs(struct iecm_vport *vport) { - /* stub */ + enum iecm_status err = 0; + int i, j, k = 0; + + vport->txqs = kcalloc(vport->num_txq, sizeof(struct iecm_queue *), + GFP_KERNEL); + + if (!vport->txqs) { + err = IECM_ERR_NO_MEMORY; + goto err_alloc; + } + for (i = 0; i < vport->num_txq_grp; i++) { + struct iecm_txq_group *tx_grp = &vport->txq_grps[i]; + + for (j = 0; j < tx_grp->num_txq; j++, k++) { + vport->txqs[k] = &tx_grp->txqs[j]; + vport->txqs[k]->idx = k; + } + } +err_alloc: + return err; } /** @@ -290,7 +643,12 @@ iecm_vport_init_fast_path_txqs(struct iecm_vport *vport) void iecm_vport_init_num_qs(struct iecm_vport *vport, struct virtchnl_create_vport *vport_msg) { - /* stub */ + vport->num_txq = vport_msg->num_tx_q; + vport->num_rxq = vport_msg->num_rx_q; + if (iecm_is_queue_model_split(vport->txq_model)) + vport->num_complq = vport_msg->num_tx_complq; + if (iecm_is_queue_model_split(vport->rxq_model)) + vport->num_bufq = vport_msg->num_rx_bufq; } /** @@ -299,7 +657,32 @@ void iecm_vport_init_num_qs(struct iecm_vport *vport, */ void iecm_vport_calc_num_q_desc(struct iecm_vport *vport) { - /* stub */ + int num_req_txq_desc = 
vport->adapter->config_data.num_req_txq_desc; + int num_req_rxq_desc = vport->adapter->config_data.num_req_rxq_desc; + + vport->complq_desc_count = 0; + vport->bufq_desc_count = 0; + if (num_req_txq_desc) { + vport->txq_desc_count = num_req_txq_desc; + if (iecm_is_queue_model_split(vport->txq_model)) + vport->complq_desc_count = num_req_txq_desc; + } else { + vport->txq_desc_count = + IECM_DFLT_TX_Q_DESC_COUNT; + if (iecm_is_queue_model_split(vport->txq_model)) { + vport->complq_desc_count = + IECM_DFLT_TX_COMPLQ_DESC_COUNT; + } + } + if (num_req_rxq_desc) { + vport->rxq_desc_count = num_req_rxq_desc; + if (iecm_is_queue_model_split(vport->rxq_model)) + vport->bufq_desc_count = num_req_rxq_desc; + } else { + vport->rxq_desc_count = IECM_DFLT_RX_Q_DESC_COUNT; + if (iecm_is_queue_model_split(vport->rxq_model)) + vport->bufq_desc_count = IECM_DFLT_RX_BUFQ_DESC_COUNT; + } } EXPORT_SYMBOL(iecm_vport_calc_num_q_desc); @@ -311,7 +694,51 @@ EXPORT_SYMBOL(iecm_vport_calc_num_q_desc); void iecm_vport_calc_total_qs(struct virtchnl_create_vport *vport_msg, int num_req_qs) { - /* stub */ + int dflt_splitq_txq_grps, dflt_singleq_txqs; + int dflt_splitq_rxq_grps, dflt_singleq_rxqs; + int num_txq_grps, num_rxq_grps; + int num_cpus; + + /* Restrict num of queues to cpus online as a default configuration to + * give best performance. User can always override to a max number + * of queues via ethtool. + */ + num_cpus = num_online_cpus(); + dflt_splitq_txq_grps = min_t(int, IECM_DFLT_SPLITQ_TX_Q_GROUPS, + num_cpus); + dflt_singleq_txqs = min_t(int, IECM_DFLT_SINGLEQ_TXQ_PER_GROUP, + num_cpus); + dflt_splitq_rxq_grps = min_t(int, IECM_DFLT_SPLITQ_RX_Q_GROUPS, + num_cpus); + dflt_singleq_rxqs = min_t(int, IECM_DFLT_SINGLEQ_RXQ_PER_GROUP, + num_cpus); + + if (iecm_is_queue_model_split(vport_msg->txq_model)) { + num_txq_grps = num_req_qs ? num_req_qs : dflt_splitq_txq_grps; + vport_msg->num_tx_complq = num_txq_grps * + IECM_COMPLQ_PER_GROUP; + vport_msg->num_tx_q = num_txq_grps * + IECM_DFLT_SPLITQ_TXQ_PER_GROUP; + } else { + num_txq_grps = IECM_DFLT_SINGLEQ_TX_Q_GROUPS; + vport_msg->num_tx_q = num_txq_grps * + (num_req_qs ? num_req_qs : + dflt_singleq_txqs); + vport_msg->num_tx_complq = 0; + } + if (iecm_is_queue_model_split(vport_msg->rxq_model)) { + num_rxq_grps = num_req_qs ? num_req_qs : dflt_splitq_rxq_grps; + vport_msg->num_rx_bufq = num_rxq_grps * + IECM_BUFQS_PER_RXQ_SET; + vport_msg->num_rx_q = num_rxq_grps * + IECM_DFLT_SPLITQ_RXQ_PER_GROUP; + } else { + num_rxq_grps = IECM_DFLT_SINGLEQ_RX_Q_GROUPS; + vport_msg->num_rx_bufq = 0; + vport_msg->num_rx_q = num_rxq_grps * + (num_req_qs ? 
num_req_qs : + dflt_singleq_rxqs); + } } /** @@ -320,7 +747,15 @@ void iecm_vport_calc_total_qs(struct virtchnl_create_vport *vport_msg, */ void iecm_vport_calc_num_q_groups(struct iecm_vport *vport) { - /* stub */ + if (iecm_is_queue_model_split(vport->txq_model)) + vport->num_txq_grp = vport->num_txq; + else + vport->num_txq_grp = IECM_DFLT_SINGLEQ_TX_Q_GROUPS; + + if (iecm_is_queue_model_split(vport->rxq_model)) + vport->num_rxq_grp = vport->num_rxq; + else + vport->num_rxq_grp = IECM_DFLT_SINGLEQ_RX_Q_GROUPS; } EXPORT_SYMBOL(iecm_vport_calc_num_q_groups); @@ -333,7 +768,15 @@ EXPORT_SYMBOL(iecm_vport_calc_num_q_groups); static void iecm_vport_calc_numq_per_grp(struct iecm_vport *vport, int *num_txq, int *num_rxq) { - /* stub */ + if (iecm_is_queue_model_split(vport->txq_model)) + *num_txq = IECM_DFLT_SPLITQ_TXQ_PER_GROUP; + else + *num_txq = vport->num_txq; + + if (iecm_is_queue_model_split(vport->rxq_model)) + *num_rxq = IECM_DFLT_SPLITQ_RXQ_PER_GROUP; + else + *num_rxq = vport->num_rxq; } /** @@ -344,7 +787,10 @@ static void iecm_vport_calc_numq_per_grp(struct iecm_vport *vport, */ void iecm_vport_calc_num_q_vec(struct iecm_vport *vport) { - /* stub */ + if (iecm_is_queue_model_split(vport->txq_model)) + vport->num_q_vectors = vport->num_txq_grp; + else + vport->num_q_vectors = vport->num_txq; } /** @@ -355,7 +801,68 @@ void iecm_vport_calc_num_q_vec(struct iecm_vport *vport) static enum iecm_status iecm_txq_group_alloc(struct iecm_vport *vport, int num_txq) { - /* stub */ + struct iecm_itr tx_itr = { 0 }; + enum iecm_status err = 0; + int i; + + vport->txq_grps = kcalloc(vport->num_txq_grp, + sizeof(*vport->txq_grps), GFP_KERNEL); + if (!vport->txq_grps) + return IECM_ERR_NO_MEMORY; + + tx_itr.target_itr = IECM_ITR_TX_DEF; + tx_itr.itr_idx = VIRTCHNL_ITR_IDX_1; + tx_itr.next_update = jiffies + 1; + + for (i = 0; i < vport->num_txq_grp; i++) { + struct iecm_txq_group *tx_qgrp = &vport->txq_grps[i]; + int j; + + tx_qgrp->vport = vport; + tx_qgrp->num_txq = num_txq; + tx_qgrp->txqs = kcalloc(num_txq, sizeof(*tx_qgrp->txqs), + GFP_KERNEL); + if (!tx_qgrp->txqs) { + err = IECM_ERR_NO_MEMORY; + goto err_alloc; + } + + for (j = 0; j < tx_qgrp->num_txq; j++) { + struct iecm_queue *q = &tx_qgrp->txqs[j]; + + q->dev = &vport->adapter->pdev->dev; + q->desc_count = vport->txq_desc_count; + q->vport = vport; + q->txq_grp = tx_qgrp; + hash_init(q->sched_buf_hash); + + if (!iecm_is_queue_model_split(vport->txq_model)) + q->itr = tx_itr; + } + + if (!iecm_is_queue_model_split(vport->txq_model)) + continue; + + tx_qgrp->complq = kcalloc(IECM_COMPLQ_PER_GROUP, + sizeof(*tx_qgrp->complq), + GFP_KERNEL); + if (!tx_qgrp->complq) { + err = IECM_ERR_NO_MEMORY; + goto err_alloc; + } + + tx_qgrp->complq->dev = &vport->adapter->pdev->dev; + tx_qgrp->complq->desc_count = vport->complq_desc_count; + tx_qgrp->complq->vport = vport; + tx_qgrp->complq->txq_grp = tx_qgrp; + + tx_qgrp->complq->itr = tx_itr; + } + +err_alloc: + if (err) + iecm_txq_group_rel(vport); + return err; } /** @@ -366,7 +873,118 @@ static enum iecm_status iecm_txq_group_alloc(struct iecm_vport *vport, static enum iecm_status iecm_rxq_group_alloc(struct iecm_vport *vport, int num_rxq) { - /* stub */ + enum iecm_status err = 0; + struct iecm_itr rx_itr = {0}; + struct iecm_queue *q; + int i; + + vport->rxq_grps = kcalloc(vport->num_rxq_grp, + sizeof(struct iecm_rxq_group), GFP_KERNEL); + if (!vport->rxq_grps) { + err = IECM_ERR_NO_MEMORY; + goto err_alloc; + } + + rx_itr.target_itr = IECM_ITR_RX_DEF; + rx_itr.itr_idx = VIRTCHNL_ITR_IDX_0; + 
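/* descriptive note: default Rx ITR seeds, mirroring the Tx ITR defaults set in iecm_txq_group_alloc() above; per-queue copies are assigned in the loops below */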
rx_itr.next_update = jiffies + 1; + + for (i = 0; i < vport->num_rxq_grp; i++) { + struct iecm_rxq_group *rx_qgrp = &vport->rxq_grps[i]; + int j; + + rx_qgrp->vport = vport; + if (iecm_is_queue_model_split(vport->rxq_model)) { + rx_qgrp->splitq.num_rxq_sets = num_rxq; + rx_qgrp->splitq.rxq_sets = + kcalloc(num_rxq, + sizeof(struct iecm_rxq_set), + GFP_KERNEL); + if (!rx_qgrp->splitq.rxq_sets) { + err = IECM_ERR_NO_MEMORY; + goto err_alloc; + } + + rx_qgrp->splitq.bufq_sets = + kcalloc(IECM_BUFQS_PER_RXQ_SET, + sizeof(struct iecm_bufq_set), + GFP_KERNEL); + if (!rx_qgrp->splitq.bufq_sets) { + err = IECM_ERR_NO_MEMORY; + goto err_alloc; + } + + for (j = 0; j < IECM_BUFQS_PER_RXQ_SET; j++) { + int swq_size = sizeof(struct iecm_sw_queue); + + q = &rx_qgrp->splitq.bufq_sets[j].bufq; + q->dev = &vport->adapter->pdev->dev; + q->desc_count = vport->bufq_desc_count; + q->vport = vport; + q->rxq_grp = rx_qgrp; + q->idx = j; + q->rx_buf_size = IECM_RX_BUF_2048; + q->rsc_low_watermark = IECM_LOW_WATERMARK; + q->rx_buf_stride = IECM_RX_BUF_STRIDE; + q->itr = rx_itr; + + if (vport->rx_hsplit_en) { + q->rx_hsplit_en = vport->rx_hsplit_en; + q->rx_hbuf_size = IECM_HDR_BUF_SIZE; + } + + rx_qgrp->splitq.bufq_sets[j].num_refillqs = + num_rxq; + rx_qgrp->splitq.bufq_sets[j].refillqs = + kcalloc(num_rxq, swq_size, GFP_KERNEL); + if (!rx_qgrp->splitq.bufq_sets[j].refillqs) { + err = IECM_ERR_NO_MEMORY; + goto err_alloc; + } + } + } else { + rx_qgrp->singleq.num_rxq = num_rxq; + rx_qgrp->singleq.rxqs = kcalloc(num_rxq, + sizeof(struct iecm_queue), + GFP_KERNEL); + if (!rx_qgrp->singleq.rxqs) { + err = IECM_ERR_NO_MEMORY; + goto err_alloc; + } + } + + for (j = 0; j < num_rxq; j++) { + if (iecm_is_queue_model_split(vport->rxq_model)) { + q = &rx_qgrp->splitq.rxq_sets[j].rxq; + rx_qgrp->splitq.rxq_sets[j].refillq0 = + &rx_qgrp->splitq.bufq_sets[0].refillqs[j]; + rx_qgrp->splitq.rxq_sets[j].refillq1 = + &rx_qgrp->splitq.bufq_sets[1].refillqs[j]; + + if (vport->rx_hsplit_en) { + q->rx_hsplit_en = vport->rx_hsplit_en; + q->rx_hbuf_size = IECM_HDR_BUF_SIZE; + } + + } else { + q = &rx_qgrp->singleq.rxqs[j]; + } + q->dev = &vport->adapter->pdev->dev; + q->desc_count = vport->rxq_desc_count; + q->vport = vport; + q->rxq_grp = rx_qgrp; + q->idx = (i * num_rxq) + j; + q->rx_buf_size = IECM_RX_BUF_2048; + q->rsc_low_watermark = IECM_LOW_WATERMARK; + q->rx_max_pkt_size = vport->netdev->mtu + + IECM_PACKET_HDR_PAD; + q->itr = rx_itr; + } + } +err_alloc: + if (err) + iecm_rxq_group_rel(vport); + return err; } /** @@ -376,7 +994,20 @@ static enum iecm_status iecm_rxq_group_alloc(struct iecm_vport *vport, static enum iecm_status iecm_vport_queue_grp_alloc_all(struct iecm_vport *vport) { - /* stub */ + int num_txq, num_rxq; + enum iecm_status err; + + iecm_vport_calc_numq_per_grp(vport, &num_txq, &num_rxq); + + err = iecm_txq_group_alloc(vport, num_txq); + if (err) + goto err_out; + + err = iecm_rxq_group_alloc(vport, num_rxq); +err_out: + if (err) + iecm_vport_queue_grp_rel_all(vport); + return err; } /** @@ -387,7 +1018,35 @@ iecm_vport_queue_grp_alloc_all(struct iecm_vport *vport) */ enum iecm_status iecm_vport_queues_alloc(struct iecm_vport *vport) { - /* stub */ + struct iecm_adapter *adapter = vport->adapter; + enum iecm_status err; + + err = iecm_vport_queue_grp_alloc_all(vport); + if (err) + goto err_out; + + err = adapter->dev_ops.vc_ops.vport_queue_ids_init(vport); + if (err) + goto err_out; + + adapter->dev_ops.reg_ops.vportq_reg_init(vport); + + err = iecm_tx_desc_alloc_all(vport); + if (err) + goto err_out; + + err = 
iecm_rx_desc_alloc_all(vport); + if (err) + goto err_out; + + err = iecm_vport_init_fast_path_txqs(vport); + if (err) + goto err_out; + + return 0; +err_out: + iecm_vport_queues_rel(vport); + return err; } /** @@ -1786,7 +2445,16 @@ EXPORT_SYMBOL(iecm_vport_calc_num_q_vec); */ int iecm_config_rss(struct iecm_vport *vport) { - /* stub */ + int err = iecm_send_get_set_rss_key_msg(vport, false); + + if (!err) + err = vport->adapter->dev_ops.vc_ops.get_set_rss_lut(vport, + false); + if (!err) + err = vport->adapter->dev_ops.vc_ops.get_set_rss_hash(vport, + false); + + return err; } /** @@ -1797,7 +2465,20 @@ int iecm_config_rss(struct iecm_vport *vport) */ void iecm_get_rx_qid_list(struct iecm_vport *vport, u16 *qid_list) { - /* stub */ + int i, j, k = 0; + + for (i = 0; i < vport->num_rxq_grp; i++) { + struct iecm_rxq_group *rx_qgrp = &vport->rxq_grps[i]; + + if (iecm_is_queue_model_split(vport->rxq_model)) { + for (j = 0; j < rx_qgrp->splitq.num_rxq_sets; j++) + qid_list[k++] = + rx_qgrp->splitq.rxq_sets[j].rxq.q_id; + } else { + for (j = 0; j < rx_qgrp->singleq.num_rxq; j++) + qid_list[k++] = rx_qgrp->singleq.rxqs[j].q_id; + } + } } /** @@ -1809,7 +2490,13 @@ void iecm_get_rx_qid_list(struct iecm_vport *vport, u16 *qid_list) */ void iecm_fill_dflt_rss_lut(struct iecm_vport *vport, u16 *qid_list) { - /* stub */ + int num_lut_segs, lut_seg, i, k = 0; + + num_lut_segs = vport->adapter->rss_data.rss_lut_size / vport->num_rxq; + for (lut_seg = 0; lut_seg < num_lut_segs; lut_seg++) { + for (i = 0; i < vport->num_rxq; i++) + vport->adapter->rss_data.rss_lut[k++] = qid_list[i]; + } } /** @@ -1820,7 +2507,67 @@ void iecm_fill_dflt_rss_lut(struct iecm_vport *vport, u16 *qid_list) */ int iecm_init_rss(struct iecm_vport *vport) { - /* stub */ + struct iecm_adapter *adapter = vport->adapter; + u16 *qid_list; + int err; + + adapter->rss_data.rss_key = kzalloc(adapter->rss_data.rss_key_size, + GFP_KERNEL); + if (!adapter->rss_data.rss_key) + return IECM_ERR_NO_MEMORY; + adapter->rss_data.rss_lut = kzalloc(adapter->rss_data.rss_lut_size, + GFP_KERNEL); + if (!adapter->rss_data.rss_lut) { + kfree(adapter->rss_data.rss_key); + adapter->rss_data.rss_key = NULL; + return IECM_ERR_NO_MEMORY; + } + + /* Initialize default rss key */ + netdev_rss_key_fill((void *)adapter->rss_data.rss_key, + adapter->rss_data.rss_key_size); + + /* Initialize default rss lut */ + if (adapter->rss_data.rss_lut_size % vport->num_rxq) { + u16 dflt_qid; + int i; + + /* Set all entries to a default RX queue if the algorithm below + * won't fill all entries + */ + if (iecm_is_queue_model_split(vport->rxq_model)) + dflt_qid = + vport->rxq_grps[0].splitq.rxq_sets[0].rxq.q_id; + else + dflt_qid = + vport->rxq_grps[0].singleq.rxqs[0].q_id; + + for (i = 0; i < adapter->rss_data.rss_lut_size; i++) + adapter->rss_data.rss_lut[i] = dflt_qid; + } + + qid_list = kcalloc(vport->num_rxq, sizeof(u16), GFP_KERNEL); + if (!qid_list) { + kfree(adapter->rss_data.rss_lut); + adapter->rss_data.rss_lut = NULL; + kfree(adapter->rss_data.rss_key); + adapter->rss_data.rss_key = NULL; + return IECM_ERR_NO_MEMORY; + } + + iecm_get_rx_qid_list(vport, qid_list); + + /* Fill the default RSS lut values*/ + iecm_fill_dflt_rss_lut(vport, qid_list); + + kfree(qid_list); + + /* Initialize default rss HASH */ + adapter->rss_data.rss_hash = IECM_DEFAULT_RSS_HASH_EXPANDED; + + err = iecm_config_rss(vport); + + return err; } /** diff --git a/drivers/net/ethernet/intel/iecm/iecm_virtchnl.c b/drivers/net/ethernet/intel/iecm/iecm_virtchnl.c index b1775cc38924..d56f8126521a 
100644 --- a/drivers/net/ethernet/intel/iecm/iecm_virtchnl.c +++ b/drivers/net/ethernet/intel/iecm/iecm_virtchnl.c @@ -664,7 +664,20 @@ iecm_send_destroy_vport_msg(struct iecm_vport *vport) enum iecm_status iecm_send_enable_vport_msg(struct iecm_vport *vport) { - /* stub */ + struct iecm_adapter *adapter = vport->adapter; + struct virtchnl_vport v_id; + enum iecm_status err; + + v_id.vport_id = vport->vport_id; + + err = iecm_send_mb_msg(adapter, VIRTCHNL_OP_ENABLE_VPORT, + sizeof(v_id), (u8 *)&v_id); + + if (!err) + err = iecm_wait_for_event(adapter, IECM_VC_ENA_VPORT, + IECM_VC_ENA_VPORT_ERR); + + return err; } /** @@ -1858,7 +1871,27 @@ EXPORT_SYMBOL(iecm_vc_core_init); */ static void iecm_vport_init(struct iecm_vport *vport, int vport_id) { - /* stub */ + struct virtchnl_create_vport *vport_msg; + + vport_msg = (struct virtchnl_create_vport *) + vport->adapter->vport_params_recvd[0]; + vport->txq_model = vport_msg->txq_model; + vport->rxq_model = vport_msg->rxq_model; + vport->vport_type = (u16)vport_msg->vport_type; + vport->vport_id = vport_msg->vport_id; + vport->adapter->rss_data.rss_key_size = min_t(u16, NETDEV_RSS_KEY_LEN, + vport_msg->rss_key_size); + vport->adapter->rss_data.rss_lut_size = vport_msg->rss_lut_size; + ether_addr_copy(vport->default_mac_addr, vport_msg->default_mac_addr); + vport->max_mtu = IECM_MAX_MTU; + + iecm_vport_set_hsplit(vport, NULL); + + init_waitqueue_head(&vport->sw_marker_wq); + iecm_vport_init_num_qs(vport, vport_msg); + iecm_vport_calc_num_q_desc(vport); + iecm_vport_calc_num_q_groups(vport); + iecm_vport_calc_num_q_vec(vport); } /** From patchwork Thu Jun 18 05:13:40 2020 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Jeff Kirsher X-Patchwork-Id: 217627 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org X-Spam-Level: X-Spam-Status: No, score=-9.8 required=3.0 tests=HEADER_FROM_DIFFERENT_DOMAINS, INCLUDES_PATCH, MAILING_LIST_MULTI, SIGNED_OFF_BY, SPF_HELO_NONE, SPF_PASS, USER_AGENT_GIT autolearn=ham autolearn_force=no version=3.4.0 Received: from mail.kernel.org (mail.kernel.org [198.145.29.99]) by smtp.lore.kernel.org (Postfix) with ESMTP id 88135C433DF for ; Thu, 18 Jun 2020 05:14:15 +0000 (UTC) Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by mail.kernel.org (Postfix) with ESMTP id 50B5F21852 for ; Thu, 18 Jun 2020 05:14:15 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S1727094AbgFRFOO (ORCPT ); Thu, 18 Jun 2020 01:14:14 -0400 Received: from mga03.intel.com ([134.134.136.65]:25340 "EHLO mga03.intel.com" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S1727016AbgFRFOE (ORCPT ); Thu, 18 Jun 2020 01:14:04 -0400 IronPort-SDR: ubD4iiVbJOEDaQAdDU56okGtqlNWoJqVgUhEc/nqgGE5nXXbckAhkhopCO2aPoynTjN3MsZ0BH RAJ6m4sww/wA== X-IronPort-AV: E=McAfee;i="6000,8403,9655"; a="142378059" X-IronPort-AV: E=Sophos;i="5.73,525,1583222400"; d="scan'208";a="142378059" X-Amp-Result: SKIPPED(no attachment in message) X-Amp-File-Uploaded: False Received: from fmsmga008.fm.intel.com ([10.253.24.58]) by orsmga103.jf.intel.com with ESMTP/TLS/ECDHE-RSA-AES256-GCM-SHA384; 17 Jun 2020 22:13:52 -0700 IronPort-SDR: QPUCIy59UnZEGXnxtYTjfVaMqevf4fwfmc878/PS4Ob3GEmVkiABsQ6NVmyHAUgyMeK+AcS3kF QnFvWMNMhucA== X-ExtLoop1: 1 X-IronPort-AV: E=Sophos;i="5.73,525,1583222400"; d="scan'208";a="263495612" Received: from jtkirshe-desk1.jf.intel.com ([134.134.177.86]) by fmsmga008.fm.intel.com with 
ESMTP; 17 Jun 2020 22:13:52 -0700 From: Jeff Kirsher To: davem@davemloft.net Cc: Alice Michael , netdev@vger.kernel.org, nhorman@redhat.com, sassmann@redhat.com, Alan Brady , Phani Burra , Joshua Hay , Madhu Chittim , Pavan Kumar Linga , Donald Skidmore , Jesse Brandeburg , Sridhar Samudrala , Jeff Kirsher Subject: [net-next 11/15] iecm: Add splitq TX/RX Date: Wed, 17 Jun 2020 22:13:40 -0700 Message-Id: <20200618051344.516587-12-jeffrey.t.kirsher@intel.com> X-Mailer: git-send-email 2.26.2 In-Reply-To: <20200618051344.516587-1-jeffrey.t.kirsher@intel.com> References: <20200618051344.516587-1-jeffrey.t.kirsher@intel.com> MIME-Version: 1.0 Sender: netdev-owner@vger.kernel.org Precedence: bulk List-ID: X-Mailing-List: netdev@vger.kernel.org From: Alice Michael Implement main TX/RX flows for split queue model. Signed-off-by: Alice Michael Signed-off-by: Alan Brady Signed-off-by: Phani Burra Signed-off-by: Joshua Hay Signed-off-by: Madhu Chittim Signed-off-by: Pavan Kumar Linga Reviewed-by: Donald Skidmore Reviewed-by: Jesse Brandeburg Reviewed-by: Sridhar Samudrala Signed-off-by: Jeff Kirsher --- drivers/net/ethernet/intel/iecm/iecm_txrx.c | 1283 ++++++++++++++++++- 1 file changed, 1235 insertions(+), 48 deletions(-) diff --git a/drivers/net/ethernet/intel/iecm/iecm_txrx.c b/drivers/net/ethernet/intel/iecm/iecm_txrx.c index 92dc25c10a6c..071f78858282 100644 --- a/drivers/net/ethernet/intel/iecm/iecm_txrx.c +++ b/drivers/net/ethernet/intel/iecm/iecm_txrx.c @@ -11,7 +11,12 @@ static enum iecm_status iecm_buf_lifo_push(struct iecm_buf_lifo *stack, struct iecm_tx_buf *buf) { - /* stub */ + if (stack->top == stack->size) + return IECM_ERR_MAX_LIMIT; + + stack->bufs[stack->top++] = buf; + + return 0; } /** @@ -20,7 +25,10 @@ static enum iecm_status iecm_buf_lifo_push(struct iecm_buf_lifo *stack, **/ static struct iecm_tx_buf *iecm_buf_lifo_pop(struct iecm_buf_lifo *stack) { - /* stub */ + if (!stack->top) + return NULL; + + return stack->bufs[--stack->top]; } /** @@ -31,7 +39,16 @@ static struct iecm_tx_buf *iecm_buf_lifo_pop(struct iecm_buf_lifo *stack) void iecm_get_stats64(struct net_device *netdev, struct rtnl_link_stats64 *stats) { - /* stub */ + struct iecm_vport *vport = iecm_netdev_to_vport(netdev); + + iecm_send_get_stats_msg(vport); + stats->rx_packets = vport->netstats.rx_packets; + stats->tx_packets = vport->netstats.tx_packets; + stats->rx_bytes = vport->netstats.rx_bytes; + stats->tx_bytes = vport->netstats.tx_bytes; + stats->tx_errors = vport->netstats.tx_errors; + stats->rx_dropped = vport->netstats.rx_dropped; + stats->tx_dropped = vport->netstats.tx_dropped; } /** @@ -1246,7 +1263,16 @@ enum iecm_status iecm_vport_queues_alloc(struct iecm_vport *vport) static struct iecm_queue * iecm_tx_find_q(struct iecm_vport *vport, int q_id) { - /* stub */ + int i; + + for (i = 0; i < vport->num_txq; i++) { + struct iecm_queue *tx_q = vport->txqs[i]; + + if (tx_q->q_id == q_id) + return tx_q; + } + + return NULL; } /** @@ -1255,7 +1281,22 @@ iecm_tx_find_q(struct iecm_vport *vport, int q_id) */ static void iecm_tx_handle_sw_marker(struct iecm_queue *tx_q) { - /* stub */ + struct iecm_vport *vport = tx_q->vport; + bool drain_complete = true; + int i; + + clear_bit(__IECM_Q_SW_MARKER, tx_q->flags); + /* Hardware must write marker packets to all queues associated with + * completion queues. 
So check if all queues received marker packets + */ + for (i = 0; i < vport->num_txq; i++) { + if (test_bit(__IECM_Q_SW_MARKER, vport->txqs[i]->flags)) + drain_complete = false; + } + if (drain_complete) { + set_bit(__IECM_VPORT_SW_MARKER, vport->flags); + wake_up(&vport->sw_marker_wq); + } } /** @@ -1270,7 +1311,30 @@ static struct iecm_tx_queue_stats iecm_tx_splitq_clean_buf(struct iecm_queue *tx_q, struct iecm_tx_buf *tx_buf, int napi_budget) { - /* stub */ + struct iecm_tx_queue_stats cleaned = {0}; + struct netdev_queue *nq; + + /* update the statistics for this packet */ + cleaned.bytes = tx_buf->bytecount; + cleaned.packets = tx_buf->gso_segs; + + /* free the skb */ + napi_consume_skb(tx_buf->skb, napi_budget); + nq = netdev_get_tx_queue(tx_q->vport->netdev, tx_q->idx); + netdev_tx_completed_queue(nq, cleaned.packets, + cleaned.bytes); + + /* unmap skb header data */ + dma_unmap_single(tx_q->dev, + dma_unmap_addr(tx_buf, dma), + dma_unmap_len(tx_buf, len), + DMA_TO_DEVICE); + + /* clear tx_buf data */ + tx_buf->skb = NULL; + dma_unmap_len_set(tx_buf, len, 0); + + return cleaned; } /** @@ -1282,7 +1346,33 @@ iecm_tx_splitq_clean_buf(struct iecm_queue *tx_q, struct iecm_tx_buf *tx_buf, static int iecm_stash_flow_sch_buffers(struct iecm_queue *txq, struct iecm_tx_buf *tx_buf) { - /* stub */ + struct iecm_adapter *adapter = txq->vport->adapter; + struct iecm_tx_buf *shadow_buf; + + shadow_buf = iecm_buf_lifo_pop(&txq->buf_stack); + if (!shadow_buf) { + dev_err(&adapter->pdev->dev, + "No out-of-order TX buffers left!\n"); + return -ENOMEM; + } + + /* Store buffer params in shadow buffer */ + shadow_buf->skb = tx_buf->skb; + shadow_buf->bytecount = tx_buf->bytecount; + shadow_buf->gso_segs = tx_buf->gso_segs; + shadow_buf->dma = tx_buf->dma; + shadow_buf->len = tx_buf->len; + shadow_buf->compl_tag = tx_buf->compl_tag; + + /* Add buffer to buf_hash table to be freed + * later + */ + hash_add(txq->sched_buf_hash, &shadow_buf->hlist, + shadow_buf->compl_tag); + + memset(tx_buf, 0, sizeof(struct iecm_tx_buf)); + + return 0; } /** @@ -1305,7 +1395,91 @@ static struct iecm_tx_queue_stats iecm_tx_splitq_clean(struct iecm_queue *tx_q, u16 end, int napi_budget, bool descs_only) { - /* stub */ + union iecm_tx_flex_desc *next_pending_desc = NULL; + struct iecm_tx_queue_stats cleaned_stats = {0}; + union iecm_tx_flex_desc *tx_desc; + s16 ntc = tx_q->next_to_clean; + struct iecm_tx_buf *tx_buf; + + tx_desc = IECM_FLEX_TX_DESC(tx_q, ntc); + next_pending_desc = IECM_FLEX_TX_DESC(tx_q, end); + tx_buf = &tx_q->tx_buf[ntc]; + ntc -= tx_q->desc_count; + + while (tx_desc != next_pending_desc) { + union iecm_tx_flex_desc *eop_desc = + (union iecm_tx_flex_desc *)tx_buf->next_to_watch; + + /* clear next_to_watch to prevent false hangs */ + tx_buf->next_to_watch = NULL; + + if (descs_only) { + if (iecm_stash_flow_sch_buffers(tx_q, tx_buf)) + goto tx_splitq_clean_out; + + while (tx_desc != eop_desc) { + tx_buf++; + tx_desc++; + ntc++; + if (unlikely(!ntc)) { + ntc -= tx_q->desc_count; + tx_buf = tx_q->tx_buf; + tx_desc = IECM_FLEX_TX_DESC(tx_q, 0); + } + + if (dma_unmap_len(tx_buf, len)) { + if (iecm_stash_flow_sch_buffers(tx_q, + tx_buf)) + goto tx_splitq_clean_out; + } + } + } else { + struct iecm_tx_queue_stats buf_stats = {0}; + + buf_stats = iecm_tx_splitq_clean_buf(tx_q, tx_buf, + napi_budget); + + /* update the statistics for this packet */ + cleaned_stats.bytes += buf_stats.bytes; + cleaned_stats.packets += buf_stats.packets; + + /* unmap remaining buffers */ + while (tx_desc != eop_desc) { + tx_buf++; + 
tx_desc++; + ntc++; + if (unlikely(!ntc)) { + ntc -= tx_q->desc_count; + tx_buf = tx_q->tx_buf; + tx_desc = IECM_FLEX_TX_DESC(tx_q, 0); + } + + /* unmap any remaining paged data */ + if (dma_unmap_len(tx_buf, len)) { + dma_unmap_page(tx_q->dev, + dma_unmap_addr(tx_buf, dma), + dma_unmap_len(tx_buf, len), + DMA_TO_DEVICE); + dma_unmap_len_set(tx_buf, len, 0); + } + } + } + + tx_buf++; + tx_desc++; + ntc++; + if (unlikely(!ntc)) { + ntc -= tx_q->desc_count; + tx_buf = tx_q->tx_buf; + tx_desc = IECM_FLEX_TX_DESC(tx_q, 0); + } + } + +tx_splitq_clean_out: + ntc += tx_q->desc_count; + tx_q->next_to_clean = ntc; + + return cleaned_stats; } /** @@ -1315,7 +1489,18 @@ iecm_tx_splitq_clean(struct iecm_queue *tx_q, u16 end, int napi_budget, */ static inline void iecm_tx_hw_tstamp(struct sk_buff *skb, u8 *desc_ts) { - /* stub */ + struct skb_shared_hwtstamps hwtstamps; + u64 tstamp; + + /* Only report timestamp to stack if requested */ + if (!likely(skb_shinfo(skb)->tx_flags & SKBTX_HW_TSTAMP)) + return; + + tstamp = (desc_ts[0] | (desc_ts[1] << 8) | (desc_ts[2] & 0x3F) << 16); + hwtstamps.hwtstamp = + ns_to_ktime(tstamp << IECM_TW_TIME_STAMP_GRAN_512_DIV_S); + + skb_tstamp_tx(skb, &hwtstamps); } /** @@ -1330,7 +1515,39 @@ static struct iecm_tx_queue_stats iecm_tx_clean_flow_sch_bufs(struct iecm_queue *txq, u16 compl_tag, u8 *desc_ts, int budget) { - /* stub */ + struct iecm_tx_queue_stats cleaned_stats = {0}; + struct hlist_node *tmp_buf = NULL; + struct iecm_tx_buf *tx_buf = NULL; + + /* Buffer completion */ + hash_for_each_possible_safe(txq->sched_buf_hash, tx_buf, tmp_buf, + hlist, compl_tag) { + if (tx_buf->compl_tag != compl_tag) + continue; + + if (likely(tx_buf->skb)) { + /* fetch timestamp from completion + * descriptor to report to stack + */ + iecm_tx_hw_tstamp(tx_buf->skb, desc_ts); + + cleaned_stats = iecm_tx_splitq_clean_buf(txq, tx_buf, + budget); + } else if (dma_unmap_len(tx_buf, len)) { + dma_unmap_page(txq->dev, + dma_unmap_addr(tx_buf, dma), + dma_unmap_len(tx_buf, len), + DMA_TO_DEVICE); + dma_unmap_len_set(tx_buf, len, 0); + } + + /* Push shadow buf back onto stack */ + iecm_buf_lifo_push(&txq->buf_stack, tx_buf); + + hash_del(&tx_buf->hlist); + } + + return cleaned_stats; } /** @@ -1343,7 +1560,109 @@ iecm_tx_clean_flow_sch_bufs(struct iecm_queue *txq, u16 compl_tag, static bool iecm_tx_clean_complq(struct iecm_queue *complq, int budget) { - /* stub */ + struct iecm_splitq_tx_compl_desc *tx_desc; + struct iecm_vport *vport = complq->vport; + s16 ntc = complq->next_to_clean; + unsigned int complq_budget; + + complq_budget = vport->compln_clean_budget; + tx_desc = IECM_SPLITQ_TX_COMPLQ_DESC(complq, ntc); + ntc -= complq->desc_count; + + do { + struct iecm_tx_queue_stats cleaned_stats = {0}; + bool descs_only = false; + struct iecm_queue *tx_q; + u16 compl_tag, hw_head; + int tx_qid; + u8 ctype; /* completion type */ + u16 gen; + + /* if the descriptor isn't done, no work yet to do */ + gen = (le16_to_cpu(tx_desc->qid_comptype_gen) & + IECM_TXD_COMPLQ_GEN_M) >> IECM_TXD_COMPLQ_GEN_S; + if (test_bit(__IECM_Q_GEN_CHK, complq->flags) != gen) + break; + + /* Find necessary info of TX queue to clean buffers */ + tx_qid = (le16_to_cpu(tx_desc->qid_comptype_gen) & + IECM_TXD_COMPLQ_QID_M) >> IECM_TXD_COMPLQ_QID_S; + tx_q = iecm_tx_find_q(vport, tx_qid); + if (!tx_q) { + dev_err(&complq->vport->adapter->pdev->dev, + "TxQ #%d not found\n", tx_qid); + goto fetch_next_desc; + } + + /* Determine completion type */ + ctype = (le16_to_cpu(tx_desc->qid_comptype_gen) & + 
IECM_TXD_COMPLQ_COMPL_TYPE_M) >> + IECM_TXD_COMPLQ_COMPL_TYPE_S; + switch (ctype) { + case IECM_TXD_COMPLT_RE: + hw_head = le16_to_cpu(tx_desc->q_head_compl_tag.q_head); + + cleaned_stats = iecm_tx_splitq_clean(tx_q, hw_head, + budget, + descs_only); + break; + case IECM_TXD_COMPLT_RS: + if (test_bit(__IECM_Q_FLOW_SCH_EN, tx_q->flags)) { + compl_tag = + le16_to_cpu(tx_desc->q_head_compl_tag.compl_tag); + + cleaned_stats = + iecm_tx_clean_flow_sch_bufs(tx_q, + compl_tag, + tx_desc->ts, + budget); + } else { + hw_head = + le16_to_cpu(tx_desc->q_head_compl_tag.q_head); + + cleaned_stats = iecm_tx_splitq_clean(tx_q, + hw_head, + budget, + false); + } + + break; + case IECM_TXD_COMPLT_SW_MARKER: + iecm_tx_handle_sw_marker(tx_q); + break; + default: + dev_err(&tx_q->vport->adapter->pdev->dev, + "Unknown TX completion type: %d\n", + ctype); + goto fetch_next_desc; + } + + tx_q->itr.stats.tx.packets += cleaned_stats.packets; + tx_q->itr.stats.tx.bytes += cleaned_stats.bytes; + u64_stats_update_begin(&tx_q->stats_sync); + tx_q->q_stats.tx.packets += cleaned_stats.packets; + tx_q->q_stats.tx.bytes += cleaned_stats.bytes; + u64_stats_update_end(&tx_q->stats_sync); + +fetch_next_desc: + tx_desc++; + ntc++; + if (unlikely(!ntc)) { + ntc -= complq->desc_count; + tx_desc = IECM_SPLITQ_TX_COMPLQ_DESC(complq, 0); + change_bit(__IECM_Q_GEN_CHK, complq->flags); + } + + prefetch(tx_desc); + + /* update budget accounting */ + complq_budget--; + } while (likely(complq_budget)); + + ntc += complq->desc_count; + complq->next_to_clean = ntc; + + return !!complq_budget; } /** @@ -1359,7 +1678,12 @@ iecm_tx_splitq_build_ctb(union iecm_tx_flex_desc *desc, struct iecm_tx_splitq_params *parms, u16 td_cmd, u16 size) { - /* stub */ + desc->q.qw1.cmd_dtype = + cpu_to_le16(parms->dtype & IECM_FLEX_TXD_QW1_DTYPE_M); + desc->q.qw1.cmd_dtype |= + cpu_to_le16((td_cmd << IECM_FLEX_TXD_QW1_CMD_S) & + IECM_FLEX_TXD_QW1_CMD_M); + desc->q.qw1.buf_size = cpu_to_le16((u16)size); } /** @@ -1375,7 +1699,13 @@ iecm_tx_splitq_build_flow_desc(union iecm_tx_flex_desc *desc, struct iecm_tx_splitq_params *parms, u16 td_cmd, u16 size) { - /* stub */ + desc->flow.qw1.cmd_dtype = cpu_to_le16((u16)parms->dtype | td_cmd); + desc->flow.qw1.rxr_bufsize = cpu_to_le16((u16)size); + desc->flow.qw1.compl_tag = cpu_to_le16(parms->compl_tag); + + desc->flow.qw1.ts[0] = parms->offload.desc_ts & 0xff; + desc->flow.qw1.ts[1] = (parms->offload.desc_ts >> 8) & 0xff; + desc->flow.qw1.ts[2] = (parms->offload.desc_ts >> 16) & 0xff; } /** @@ -1388,7 +1718,19 @@ iecm_tx_splitq_build_flow_desc(union iecm_tx_flex_desc *desc, static int __iecm_tx_maybe_stop(struct iecm_queue *tx_q, unsigned int size) { - /* stub */ + netif_stop_subqueue(tx_q->vport->netdev, tx_q->idx); + + /* Memory barrier before checking head and tail */ + smp_mb(); + + /* Check again in a case another CPU has just made room available. */ + if (likely(IECM_DESC_UNUSED(tx_q) < size)) + return -EBUSY; + + /* A reprieve! 
- use start_subqueue because it doesn't call schedule */ + netif_start_subqueue(tx_q->vport->netdev, tx_q->idx); + + return 0; } /** @@ -1400,7 +1742,10 @@ __iecm_tx_maybe_stop(struct iecm_queue *tx_q, unsigned int size) */ int iecm_tx_maybe_stop(struct iecm_queue *tx_q, unsigned int size) { - /* stub */ + if (likely(IECM_DESC_UNUSED(tx_q) >= size)) + return 0; + + return __iecm_tx_maybe_stop(tx_q, size); } /** @@ -1412,7 +1757,23 @@ int iecm_tx_maybe_stop(struct iecm_queue *tx_q, unsigned int size) void iecm_tx_buf_hw_update(struct iecm_queue *tx_q, u32 val, struct sk_buff *skb) { - /* stub */ + struct netdev_queue *nq; + + nq = netdev_get_tx_queue(tx_q->vport->netdev, tx_q->idx); + tx_q->next_to_use = val; + + iecm_tx_maybe_stop(tx_q, IECM_TX_DESC_NEEDED); + + /* Force memory writes to complete before letting h/w + * know there are new descriptors to fetch. (Only + * applicable for weak-ordered memory model archs, + * such as IA-64). + */ + wmb(); + + /* notify HW of packet */ + if (netif_xmit_stopped(nq) || !netdev_xmit_more()) + writel_relaxed(val, tx_q->tail); } /** @@ -1445,7 +1806,7 @@ void iecm_tx_buf_hw_update(struct iecm_queue *tx_q, u32 val, */ static unsigned int __iecm_tx_desc_count_required(unsigned int size) { - /* stub */ + return ((size * 85) >> 20) + IECM_TX_DESCS_FOR_SKB_DATA_PTR; } /** @@ -1456,13 +1817,26 @@ static unsigned int __iecm_tx_desc_count_required(unsigned int size) */ unsigned int iecm_tx_desc_count_required(struct sk_buff *skb) { - /* stub */ + const skb_frag_t *frag = &skb_shinfo(skb)->frags[0]; + unsigned int nr_frags = skb_shinfo(skb)->nr_frags; + unsigned int count = 0, size = skb_headlen(skb); + + for (;;) { + count += __iecm_tx_desc_count_required(size); + + if (!nr_frags--) + break; + + size = skb_frag_size(frag++); + } + + return count; } /** * iecm_tx_splitq_map - Build the Tx flex descriptor * @tx_q: queue to send buffer on - * @off: pointer to offload params struct + * @parms: pointer to splitq params struct * @first: first buffer info buffer to use * * This function loops over the skb data pointed to by *first @@ -1471,10 +1845,130 @@ unsigned int iecm_tx_desc_count_required(struct sk_buff *skb) */ static void iecm_tx_splitq_map(struct iecm_queue *tx_q, - struct iecm_tx_offload_params *off, + struct iecm_tx_splitq_params *parms, struct iecm_tx_buf *first) { - /* stub */ + union iecm_tx_flex_desc *tx_desc; + unsigned int data_len, size; + struct iecm_tx_buf *tx_buf; + u16 i = tx_q->next_to_use; + struct netdev_queue *nq; + struct sk_buff *skb; + skb_frag_t *frag; + u16 td_cmd = 0; + dma_addr_t dma; + + skb = first->skb; + + td_cmd = parms->offload.td_cmd; + parms->compl_tag = tx_q->tx_buf_key; + + data_len = skb->data_len; + size = skb_headlen(skb); + + tx_desc = IECM_FLEX_TX_DESC(tx_q, i); + + dma = dma_map_single(tx_q->dev, skb->data, size, DMA_TO_DEVICE); + + tx_buf = first; + + for (frag = &skb_shinfo(skb)->frags[0];; frag++) { + unsigned int max_data = IECM_TX_MAX_DESC_DATA_ALIGNED; + + if (dma_mapping_error(tx_q->dev, dma)) + goto dma_error; + + /* record length, and DMA address */ + dma_unmap_len_set(tx_buf, len, size); + dma_unmap_addr_set(tx_buf, dma, dma); + + /* align size to end of page */ + max_data += -dma & (IECM_TX_MAX_READ_REQ_SIZE - 1); + + /* buf_addr is in same location for both desc types */ + tx_desc->q.buf_addr = cpu_to_le64(dma); + + /* account for data chunks larger than the hardware + * can handle + */ + while (unlikely(size > IECM_TX_MAX_DESC_DATA)) { + parms->splitq_build_ctb(tx_desc, parms, td_cmd, size); + + 
tx_desc++; + i++; + + if (i == tx_q->desc_count) { + tx_desc = IECM_FLEX_TX_DESC(tx_q, 0); + i = 0; + } + + dma += max_data; + size -= max_data; + + max_data = IECM_TX_MAX_DESC_DATA_ALIGNED; + /* buf_addr is in same location for both desc types */ + tx_desc->q.buf_addr = cpu_to_le64(dma); + } + + if (likely(!data_len)) + break; + parms->splitq_build_ctb(tx_desc, parms, td_cmd, size); + tx_desc++; + i++; + + if (i == tx_q->desc_count) { + tx_desc = IECM_FLEX_TX_DESC(tx_q, 0); + i = 0; + } + + size = skb_frag_size(frag); + data_len -= size; + + dma = skb_frag_dma_map(tx_q->dev, frag, 0, size, + DMA_TO_DEVICE); + + tx_buf->compl_tag = parms->compl_tag; + tx_buf = &tx_q->tx_buf[i]; + } + + /* record bytecount for BQL */ + nq = netdev_get_tx_queue(tx_q->vport->netdev, tx_q->idx); + netdev_tx_sent_queue(nq, first->bytecount); + + /* record SW timestamp if HW timestamp is not available */ + skb_tx_timestamp(first->skb); + + /* write last descriptor with RS and EOP bits */ + td_cmd |= parms->eop_cmd; + parms->splitq_build_ctb(tx_desc, parms, td_cmd, size); + i++; + if (i == tx_q->desc_count) + i = 0; + + /* set next_to_watch value indicating a packet is present */ + first->next_to_watch = tx_desc; + tx_buf->compl_tag = parms->compl_tag++; + + iecm_tx_buf_hw_update(tx_q, i, skb); + + /* Update TXQ Completion Tag key for next buffer */ + tx_q->tx_buf_key = parms->compl_tag; + + return; + +dma_error: + /* clear DMA mappings for failed tx_buf map */ + for (;;) { + tx_buf = &tx_q->tx_buf[i]; + iecm_tx_buf_rel(tx_q, tx_buf); + if (tx_buf == first) + break; + if (i == 0) + i = tx_q->desc_count; + i--; + } + + tx_q->next_to_use = i; } /** @@ -1490,7 +1984,79 @@ iecm_tx_splitq_map(struct iecm_queue *tx_q, static int iecm_tso(struct iecm_tx_buf *first, struct iecm_tx_offload_params *off) { - /* stub */ + struct sk_buff *skb = first->skb; + union { + struct iphdr *v4; + struct ipv6hdr *v6; + unsigned char *hdr; + } ip; + union { + struct tcphdr *tcp; + struct udphdr *udp; + unsigned char *hdr; + } l4; + u32 paylen, l4_start; + int err; + + if (skb->ip_summed != CHECKSUM_PARTIAL) + return 0; + + if (!skb_is_gso(skb)) + return 0; + + err = skb_cow_head(skb, 0); + if (err < 0) + return err; + + ip.hdr = skb_network_header(skb); + l4.hdr = skb_transport_header(skb); + + /* initialize outer IP header fields */ + if (ip.v4->version == 4) { + ip.v4->tot_len = 0; + ip.v4->check = 0; + } else { + ip.v6->payload_len = 0; + } + + /* determine offset of transport header */ + l4_start = l4.hdr - skb->data; + + /* remove payload length from checksum */ + paylen = skb->len - l4_start; + + switch (skb_shinfo(skb)->gso_type) { + case SKB_GSO_TCPV4: + case SKB_GSO_TCPV6: + csum_replace_by_diff(&l4.tcp->check, + (__force __wsum)htonl(paylen)); + + /* compute length of segmentation header */ + off->tso_hdr_len = (l4.tcp->doff * 4) + l4_start; + break; + case SKB_GSO_UDP_L4: + csum_replace_by_diff(&l4.udp->check, + (__force __wsum)htonl(paylen)); + /* compute length of segmentation header */ + off->tso_hdr_len = sizeof(struct udphdr) + l4_start; + l4.udp->len = + htons(skb_shinfo(skb)->gso_size + + sizeof(struct udphdr)); + break; + default: + return -EINVAL; + } + + off->tso_len = skb->len - off->tso_hdr_len; + off->mss = skb_shinfo(skb)->gso_size; + + /* update gso_segs and bytecount */ + first->gso_segs = skb_shinfo(skb)->gso_segs; + first->bytecount += (first->gso_segs - 1) * off->tso_hdr_len; + + first->tx_flags |= IECM_TX_FLAGS_TSO; + + return 0; } /** @@ -1503,7 +2069,84 @@ static int iecm_tso(struct iecm_tx_buf *first, 
static netdev_tx_t iecm_tx_splitq_frame(struct sk_buff *skb, struct iecm_queue *tx_q) { - /* stub */ + struct iecm_tx_splitq_params tx_parms = {0}; + struct iecm_tx_buf *first; + unsigned int count; + + count = iecm_tx_desc_count_required(skb); + + /* need: 1 descriptor per page * PAGE_SIZE/IECM_MAX_DATA_PER_TXD, + * + 1 desc for skb_head_len/IECM_MAX_DATA_PER_TXD, + * + 4 desc gap to avoid the cache line where head is, + * + 1 desc for context descriptor, + * otherwise try next time + */ + if (iecm_tx_maybe_stop(tx_q, count + IECM_TX_DESCS_PER_CACHE_LINE + + IECM_TX_DESCS_FOR_CTX)) { + return NETDEV_TX_BUSY; + } + + /* record the location of the first descriptor for this packet */ + first = &tx_q->tx_buf[tx_q->next_to_use]; + first->skb = skb; + first->bytecount = max_t(unsigned int, skb->len, ETH_ZLEN); + first->gso_segs = 1; + first->tx_flags = 0; + + if (iecm_tso(first, &tx_parms.offload) < 0) { + /* If tso returns an error, drop the packet */ + dev_kfree_skb_any(skb); + return NETDEV_TX_OK; + } + + if (first->tx_flags & IECM_TX_FLAGS_TSO) { + /* If TSO is needed, set up context desc */ + union iecm_flex_tx_ctx_desc *ctx_desc; + int i = tx_q->next_to_use; + + /* grab the next descriptor */ + ctx_desc = IECM_FLEX_TX_CTX_DESC(tx_q, i); + i++; + tx_q->next_to_use = (i < tx_q->desc_count) ? i : 0; + + ctx_desc->tso.qw1.cmd_dtype |= + cpu_to_le16(IECM_TX_DESC_DTYPE_FLEX_TSO_CTX | + IECM_TX_FLEX_CTX_DESC_CMD_TSO); + ctx_desc->tso.qw0.flex_tlen = + cpu_to_le32(tx_parms.offload.tso_len & + IECM_TXD_FLEX_CTX_TLEN_M); + ctx_desc->tso.qw0.mss_rt = + cpu_to_le16(tx_parms.offload.mss & + IECM_TXD_FLEX_CTX_MSS_RT_M); + ctx_desc->tso.qw0.hdr_len = tx_parms.offload.tso_hdr_len; + } + + if (test_bit(__IECM_Q_FLOW_SCH_EN, tx_q->flags)) { + s64 ts_ns = first->skb->skb_mstamp_ns; + + tx_parms.offload.desc_ts = + ts_ns >> IECM_TW_TIME_STAMP_GRAN_512_DIV_S; + + tx_parms.dtype = IECM_TX_DESC_DTYPE_FLEX_FLOW_SCHE; + tx_parms.splitq_build_ctb = iecm_tx_splitq_build_flow_desc; + tx_parms.eop_cmd = + IECM_TXD_FLEX_FLOW_CMD_EOP | IECM_TXD_FLEX_FLOW_CMD_RE; + + if (skb->ip_summed == CHECKSUM_PARTIAL) + tx_parms.offload.td_cmd |= IECM_TXD_FLEX_FLOW_CMD_CS_EN; + + } else { + tx_parms.dtype = IECM_TX_DESC_DTYPE_FLEX_DATA; + tx_parms.splitq_build_ctb = iecm_tx_splitq_build_ctb; + tx_parms.eop_cmd = IECM_TX_DESC_CMD_EOP | IECM_TX_DESC_CMD_RS; + + if (skb->ip_summed == CHECKSUM_PARTIAL) + tx_parms.offload.td_cmd |= IECM_TX_FLEX_DESC_CMD_CS_EN; + } + + iecm_tx_splitq_map(tx_q, &tx_parms, first); + + return NETDEV_TX_OK; } /** @@ -1516,7 +2159,18 @@ iecm_tx_splitq_frame(struct sk_buff *skb, struct iecm_queue *tx_q) netdev_tx_t iecm_tx_splitq_start(struct sk_buff *skb, struct net_device *netdev) { - /* stub */ + struct iecm_vport *vport = iecm_netdev_to_vport(netdev); + struct iecm_queue *tx_q; + + tx_q = vport->txqs[skb->queue_mapping]; + + /* hardware can't handle really short frames, hardware padding works + * beyond this point + */ + if (skb_put_padto(skb, IECM_TX_MIN_LEN)) + return NETDEV_TX_OK; + + return iecm_tx_splitq_frame(skb, tx_q); } /** @@ -1531,7 +2185,18 @@ netdev_tx_t iecm_tx_splitq_start(struct sk_buff *skb, static enum pkt_hash_types iecm_ptype_to_htype(struct iecm_vport *vport, u16 ptype) { - /* stub */ + struct iecm_rx_ptype_decoded decoded = vport->rx_ptype_lkup[ptype]; + + if (!decoded.known) + return PKT_HASH_TYPE_NONE; + if (decoded.payload_layer == IECM_RX_PTYPE_PAYLOAD_LAYER_PAY4) + return PKT_HASH_TYPE_L4; + if (decoded.payload_layer == IECM_RX_PTYPE_PAYLOAD_LAYER_PAY3) + return PKT_HASH_TYPE_L3; 
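/* descriptive note: ptypes whose outer layer decodes as plain L2 (not IP) get an L2 hash type; anything unrecognized falls through to PKT_HASH_TYPE_NONE */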
+ if (decoded.outer_ip == IECM_RX_PTYPE_OUTER_L2) + return PKT_HASH_TYPE_L2; + + return PKT_HASH_TYPE_NONE; } /** @@ -1545,7 +2210,17 @@ static void iecm_rx_hash(struct iecm_queue *rxq, struct sk_buff *skb, struct iecm_flex_rx_desc *rx_desc, u16 ptype) { - /* stub */ + u32 hash; + + if (!iecm_is_feature_ena(rxq->vport, NETIF_F_RXHASH)) + return; + + hash = rx_desc->status_err1 | + (rx_desc->fflags1 << 8) | + (rx_desc->ts_low << 16) | + (rx_desc->ff2_mirrid_hash2.hash2 << 24); + + skb_set_hash(skb, hash, iecm_ptype_to_htype(rxq->vport, ptype)); } /** @@ -1561,7 +2236,63 @@ static void iecm_rx_csum(struct iecm_queue *rxq, struct sk_buff *skb, struct iecm_flex_rx_desc *rx_desc, u16 ptype) { - /* stub */ + struct iecm_rx_ptype_decoded decoded; + u8 rx_status_0_qw1, rx_status_0_qw0; + bool ipv4, ipv6; + + /* Start with CHECKSUM_NONE and by default csum_level = 0 */ + skb->ip_summed = CHECKSUM_NONE; + + /* check if Rx checksum is enabled */ + if (!iecm_is_feature_ena(rxq->vport, NETIF_F_RXCSUM)) + return; + + rx_status_0_qw1 = rx_desc->status_err0_qw1; + /* check if HW has decoded the packet and checksum */ + if (!(rx_status_0_qw1 & BIT(IECM_RX_FLEX_DESC_STATUS0_L3L4P_S))) + return; + + decoded = rxq->vport->rx_ptype_lkup[ptype]; + if (!(decoded.known && decoded.outer_ip)) + return; + + ipv4 = (decoded.outer_ip == IECM_RX_PTYPE_OUTER_IP) && + (decoded.outer_ip_ver == IECM_RX_PTYPE_OUTER_IPV4); + ipv6 = (decoded.outer_ip == IECM_RX_PTYPE_OUTER_IP) && + (decoded.outer_ip_ver == IECM_RX_PTYPE_OUTER_IPV6); + + if (ipv4 && (rx_status_0_qw1 & + (BIT(IECM_RX_FLEX_DESC_STATUS0_XSUM_IPE_S) | + BIT(IECM_RX_FLEX_DESC_STATUS0_XSUM_EIPE_S)))) + goto checksum_fail; + + rx_status_0_qw0 = rx_desc->status_err0_qw0; + if (ipv6 && (rx_status_0_qw0 & + (BIT(IECM_RX_FLEX_DESC_STATUS0_IPV6EXADD_S)))) + return; + + /* check for L4 errors and handle packets that were not able to be + * checksummed + */ + if (rx_status_0_qw1 & BIT(IECM_RX_FLEX_DESC_STATUS0_XSUM_L4E_S)) + goto checksum_fail; + + /* Only report checksum unnecessary for ICMP, TCP, UDP, or SCTP */ + switch (decoded.inner_prot) { + case IECM_RX_PTYPE_INNER_PROT_ICMP: + case IECM_RX_PTYPE_INNER_PROT_TCP: + case IECM_RX_PTYPE_INNER_PROT_UDP: + case IECM_RX_PTYPE_INNER_PROT_SCTP: + skb->ip_summed = CHECKSUM_UNNECESSARY; + rxq->q_stats.rx.basic_csum++; + default: + break; + } + return; + +checksum_fail: + rxq->q_stats.rx.csum_err++; + dev_dbg(rxq->dev, "RX Checksum not available\n"); } /** @@ -1577,7 +2308,74 @@ iecm_rx_csum(struct iecm_queue *rxq, struct sk_buff *skb, static bool iecm_rx_rsc(struct iecm_queue *rxq, struct sk_buff *skb, struct iecm_flex_rx_desc *rx_desc, u16 ptype) { - /* stub */ + struct iecm_rx_ptype_decoded decoded; + u16 rsc_segments, rsc_payload_len; + struct ipv6hdr *ipv6h; + struct tcphdr *tcph; + struct iphdr *ipv4h; + bool ipv4, ipv6; + u16 hdr_len; + + rsc_payload_len = le32_to_cpu(rx_desc->fmd1_misc.rscseglen); + if (!rsc_payload_len) + goto rsc_err; + + decoded = rxq->vport->rx_ptype_lkup[ptype]; + if (!(decoded.known && decoded.outer_ip)) + goto rsc_err; + + ipv4 = (decoded.outer_ip == IECM_RX_PTYPE_OUTER_IP) && + (decoded.outer_ip_ver == IECM_RX_PTYPE_OUTER_IPV4); + ipv6 = (decoded.outer_ip == IECM_RX_PTYPE_OUTER_IP) && + (decoded.outer_ip_ver == IECM_RX_PTYPE_OUTER_IPV6); + + if (!(ipv4 ^ ipv6)) + goto rsc_err; + + if (ipv4) + hdr_len = ETH_HLEN + sizeof(struct tcphdr) + + sizeof(struct iphdr); + else + hdr_len = ETH_HLEN + sizeof(struct tcphdr) + + sizeof(struct ipv6hdr); + + rsc_segments = DIV_ROUND_UP(skb->len - hdr_len, 
rsc_payload_len); + + NAPI_GRO_CB(skb)->count = rsc_segments; + skb_shinfo(skb)->gso_size = rsc_payload_len; + + skb_reset_network_header(skb); + + if (ipv4) { + ipv4h = ip_hdr(skb); + skb_shinfo(skb)->gso_type = SKB_GSO_TCPV4; + + /* Reset and set transport header offset in skb */ + skb_set_transport_header(skb, sizeof(struct iphdr)); + tcph = tcp_hdr(skb); + + /* Compute the TCP pseudo header checksum*/ + tcph->check = + ~tcp_v4_check(skb->len - skb_transport_offset(skb), + ipv4h->saddr, ipv4h->daddr, 0); + } else { + ipv6h = ipv6_hdr(skb); + skb_shinfo(skb)->gso_type = SKB_GSO_TCPV6; + skb_set_transport_header(skb, sizeof(struct ipv6hdr)); + tcph = tcp_hdr(skb); + tcph->check = + ~tcp_v6_check(skb->len - skb_transport_offset(skb), + &ipv6h->saddr, &ipv6h->daddr, 0); + } + + tcp_gro_complete(skb); + + /* Map Rx qid to the skb*/ + skb_record_rx_queue(skb, rxq->q_id); + + return true; +rsc_err: + return false; } /** @@ -1589,7 +2387,19 @@ static bool iecm_rx_rsc(struct iecm_queue *rxq, struct sk_buff *skb, static void iecm_rx_hwtstamp(struct iecm_flex_rx_desc *rx_desc, struct sk_buff __maybe_unused *skb) { - /* stub */ + u8 ts_lo = rx_desc->ts_low; + u32 ts_hi = 0; + u64 ts_ns = 0; + + ts_hi = le32_to_cpu(rx_desc->flex_ts.ts_high); + + ts_ns |= ts_lo | ((u64)ts_hi << 8); + + if (ts_ns) { + memset(skb_hwtstamps(skb), 0, + sizeof(struct skb_shared_hwtstamps)); + skb_hwtstamps(skb)->hwtstamp = ns_to_ktime(ts_ns); + } } /** @@ -1606,7 +2416,26 @@ static bool iecm_rx_process_skb_fields(struct iecm_queue *rxq, struct sk_buff *skb, struct iecm_flex_rx_desc *rx_desc) { - /* stub */ + bool err = false; + u16 rx_ptype; + bool rsc; + + rx_ptype = le16_to_cpu(rx_desc->ptype_err_fflags0) & + IECM_RXD_FLEX_PTYPE_M; + + /* modifies the skb - consumes the enet header */ + skb->protocol = eth_type_trans(skb, rxq->vport->netdev); + iecm_rx_csum(rxq, skb, rx_desc, rx_ptype); + /* process RSS/hash */ + iecm_rx_hash(rxq, skb, rx_desc, rx_ptype); + + rsc = le16_to_cpu(rx_desc->hdrlen_flags) & IECM_RXD_FLEX_RSC_M; + if (rsc) + err = iecm_rx_rsc(rxq, skb, rx_desc, rx_ptype); + + iecm_rx_hwtstamp(rx_desc, skb); + + return err; } /** @@ -1619,7 +2448,7 @@ iecm_rx_process_skb_fields(struct iecm_queue *rxq, struct sk_buff *skb, */ void iecm_rx_skb(struct iecm_queue *rxq, struct sk_buff *skb) { - /* stub */ + napi_gro_receive(&rxq->q_vector->napi, skb); } /** @@ -1628,7 +2457,7 @@ void iecm_rx_skb(struct iecm_queue *rxq, struct sk_buff *skb) */ static bool iecm_rx_page_is_reserved(struct page *page) { - /* stub */ + return (page_to_nid(page) != numa_mem_id()) || page_is_pfmemalloc(page); } /** @@ -1644,7 +2473,13 @@ static bool iecm_rx_page_is_reserved(struct page *page) static void iecm_rx_buf_adjust_pg_offset(struct iecm_rx_buf *rx_buf, unsigned int size) { - /* stub */ +#if (PAGE_SIZE < 8192) + /* flip page offset to other buffer */ + rx_buf->page_offset ^= size; +#else + /* move offset up to the next cache line */ + rx_buf->page_offset += size; +#endif } /** @@ -1658,7 +2493,34 @@ iecm_rx_buf_adjust_pg_offset(struct iecm_rx_buf *rx_buf, unsigned int size) */ static bool iecm_rx_can_reuse_page(struct iecm_rx_buf *rx_buf) { - /* stub */ +#if (PAGE_SIZE >= 8192) +#endif + unsigned int pagecnt_bias = rx_buf->pagecnt_bias; + struct page *page = rx_buf->page; + + /* avoid re-using remote pages */ + if (unlikely(iecm_rx_page_is_reserved(page))) + return false; + +#if (PAGE_SIZE < 8192) + /* if we are only owner of page we can reuse it */ + if (unlikely((page_count(page) - pagecnt_bias) > 1)) + return false; +#else + if 
(rx_buf->page_offset > last_offset) + return false; +#endif /* PAGE_SIZE < 8192) */ + + /* If we have drained the page fragment pool we need to update + * the pagecnt_bias and page count so that we fully restock the + * number of references the driver holds. + */ + if (unlikely(pagecnt_bias == 1)) { + page_ref_add(page, USHRT_MAX - 1); + rx_buf->pagecnt_bias = USHRT_MAX; + } + + return true; } /** @@ -1674,7 +2536,17 @@ static bool iecm_rx_can_reuse_page(struct iecm_rx_buf *rx_buf) void iecm_rx_add_frag(struct iecm_rx_buf *rx_buf, struct sk_buff *skb, unsigned int size) { - /* stub */ +#if (PAGE_SIZE >= 8192) + unsigned int truesize = SKB_DATA_ALIGN(size); +#else + unsigned int truesize = IECM_RX_BUF_2048; +#endif + + skb_add_rx_frag(skb, skb_shinfo(skb)->nr_frags, rx_buf->page, + rx_buf->page_offset, size, truesize); + + /* page is being used so we must update the page offset */ + iecm_rx_buf_adjust_pg_offset(rx_buf, truesize); } /** @@ -1689,7 +2561,22 @@ void iecm_rx_reuse_page(struct iecm_queue *rx_bufq, bool hsplit, struct iecm_rx_buf *old_buf) { - /* stub */ + u16 ntu = rx_bufq->next_to_use; + struct iecm_rx_buf *new_buf; + + if (hsplit) + new_buf = &rx_bufq->rx_buf.hdr_buf[ntu]; + else + new_buf = &rx_bufq->rx_buf.buf[ntu]; + + /* Transfer page from old buffer to new buffer. + * Move each member individually to avoid possible store + * forwarding stalls and unnecessary copy of skb. + */ + new_buf->dma = old_buf->dma; + new_buf->page = old_buf->page; + new_buf->page_offset = old_buf->page_offset; + new_buf->pagecnt_bias = old_buf->pagecnt_bias; } /** @@ -1704,7 +2591,15 @@ static void iecm_rx_get_buf_page(struct device *dev, struct iecm_rx_buf *rx_buf, const unsigned int size) { - /* stub */ + prefetch(rx_buf->page); + + /* we are reusing so sync this buffer for CPU use */ + dma_sync_single_range_for_cpu(dev, rx_buf->dma, + rx_buf->page_offset, size, + DMA_FROM_DEVICE); + + /* We have pulled a buffer for use, so decrement pagecnt_bias */ + rx_buf->pagecnt_bias--; } /** @@ -1721,7 +2616,52 @@ struct sk_buff * iecm_rx_construct_skb(struct iecm_queue *rxq, struct iecm_rx_buf *rx_buf, unsigned int size) { - /* stub */ + void *va = page_address(rx_buf->page) + rx_buf->page_offset; + unsigned int headlen; + struct sk_buff *skb; + + /* prefetch first cache line of first page */ + prefetch(va); +#if L1_CACHE_BYTES < 128 + prefetch((u8 *)va + L1_CACHE_BYTES); +#endif /* L1_CACHE_BYTES */ + /* allocate a skb to store the frags */ + skb = __napi_alloc_skb(&rxq->q_vector->napi, IECM_RX_HDR_SIZE, + GFP_ATOMIC | __GFP_NOWARN); + if (unlikely(!skb)) + return NULL; + + skb_record_rx_queue(skb, rxq->idx); + + /* Determine available headroom for copy */ + headlen = size; + if (headlen > IECM_RX_HDR_SIZE) + headlen = eth_get_headlen(skb->dev, va, IECM_RX_HDR_SIZE); + + /* align pull length to size of long to optimize memcpy performance */ + memcpy(__skb_put(skb, headlen), va, ALIGN(headlen, sizeof(long))); + + /* if we exhaust the linear part then add what is left as a frag */ + size -= headlen; + if (size) { +#if (PAGE_SIZE >= 8192) + unsigned int truesize = SKB_DATA_ALIGN(size); +#else + unsigned int truesize = IECM_RX_BUF_2048; +#endif + skb_add_rx_frag(skb, 0, rx_buf->page, + rx_buf->page_offset + headlen, size, truesize); + /* buffer is used by skb, update page_offset */ + iecm_rx_buf_adjust_pg_offset(rx_buf, truesize); + } else { + /* buffer is unused, reset bias back to rx_buf; data was copied + * onto skb's linear part so there's no need for adjusting + * page offset and we can reuse this 
buffer as-is + */ + rx_buf->pagecnt_bias++; + } + + return skb; } /** @@ -1738,7 +2678,11 @@ iecm_rx_construct_skb(struct iecm_queue *rxq, struct iecm_rx_buf *rx_buf, */ bool iecm_rx_cleanup_headers(struct sk_buff *skb) { - /* stub */ + /* if eth_skb_pad returns an error the skb was freed */ + if (eth_skb_pad(skb)) + return true; + + return false; } /** @@ -1751,7 +2695,7 @@ bool iecm_rx_cleanup_headers(struct sk_buff *skb) static bool iecm_rx_splitq_test_staterr(u8 stat_err_field, const u8 stat_err_bits) { - /* stub */ + return !!(stat_err_field & stat_err_bits); } /** @@ -1764,7 +2708,13 @@ iecm_rx_splitq_test_staterr(u8 stat_err_field, const u8 stat_err_bits) static bool iecm_rx_splitq_is_non_eop(struct iecm_flex_rx_desc *rx_desc) { - /* stub */ + /* if we are the last buffer then there is nothing else to do */ +#define IECM_RXD_EOF BIT(IECM_RX_FLEX_DESC_STATUS0_EOF_S) + if (likely(iecm_rx_splitq_test_staterr(rx_desc->status_err0_qw1, + IECM_RXD_EOF))) + return false; + + return true; } /** @@ -1781,7 +2731,24 @@ iecm_rx_splitq_is_non_eop(struct iecm_flex_rx_desc *rx_desc) bool iecm_rx_recycle_buf(struct iecm_queue *rx_bufq, bool hsplit, struct iecm_rx_buf *rx_buf) { - /* stub */ + bool recycled = false; + + if (iecm_rx_can_reuse_page(rx_buf)) { + /* hand second half of page back to the queue */ + iecm_rx_reuse_page(rx_bufq, hsplit, rx_buf); + recycled = true; + } else { + /* we are not reusing the buffer so unmap it */ + dma_unmap_page_attrs(rx_bufq->dev, rx_buf->dma, PAGE_SIZE, + DMA_FROM_DEVICE, IECM_RX_DMA_ATTR); + __page_frag_cache_drain(rx_buf->page, rx_buf->pagecnt_bias); + } + + /* clear contents of buffer_info */ + rx_buf->page = NULL; + rx_buf->skb = NULL; + + return recycled; } /** @@ -1797,7 +2764,19 @@ static void iecm_rx_splitq_put_bufs(struct iecm_queue *rx_bufq, struct iecm_rx_buf *hdr_buf, struct iecm_rx_buf *rx_buf) { - /* stub */ + u16 ntu = rx_bufq->next_to_use; + bool recycled = false; + + if (likely(hdr_buf)) + recycled = iecm_rx_recycle_buf(rx_bufq, true, hdr_buf); + if (likely(rx_buf)) + recycled = iecm_rx_recycle_buf(rx_bufq, false, rx_buf); + + /* update, and store next to alloc if the buffer was recycled */ + if (recycled) { + ntu++; + rx_bufq->next_to_use = (ntu < rx_bufq->desc_count) ? 
ntu : 0; + } } /** @@ -1806,7 +2785,14 @@ static void iecm_rx_splitq_put_bufs(struct iecm_queue *rx_bufq, */ static void iecm_rx_bump_ntc(struct iecm_queue *q) { - /* stub */ + u16 ntc = q->next_to_clean + 1; + /* fetch, update, and store next to clean */ + if (ntc < q->desc_count) { + q->next_to_clean = ntc; + } else { + q->next_to_clean = 0; + change_bit(__IECM_Q_GEN_CHK, q->flags); + } } /** @@ -1823,7 +2809,158 @@ static void iecm_rx_bump_ntc(struct iecm_queue *q) */ static int iecm_rx_splitq_clean(struct iecm_queue *rxq, int budget) { - /* stub */ + unsigned int total_rx_bytes = 0, total_rx_pkts = 0; + u16 cleaned_count[IECM_BUFQS_PER_RXQ_SET] = {0}; + struct iecm_queue *rx_bufq = NULL; + struct sk_buff *skb = rxq->skb; + bool failure = false; + int i; + + /* Process Rx packets bounded by budget */ + while (likely(total_rx_pkts < (unsigned int)budget)) { + struct iecm_flex_rx_desc *splitq_flex_rx_desc; + union iecm_rx_desc *rx_desc; + struct iecm_rx_buf *hdr_buf = NULL; + struct iecm_rx_buf *rx_buf = NULL; + unsigned int pkt_len = 0; + unsigned int hdr_len = 0; + u16 gen_id, buf_id; + u8 stat_err0_qw0; + u8 stat_err_bits; + /* Header buffer overflow only valid for header split */ + bool hbo = false; + int bufq_id; + + /* get the Rx desc from Rx queue based on 'next_to_clean' */ + rx_desc = IECM_RX_DESC(rxq, rxq->next_to_clean); + splitq_flex_rx_desc = (struct iecm_flex_rx_desc *)rx_desc; + + /* This memory barrier is needed to keep us from reading + * any other fields out of the rx_desc + */ + dma_rmb(); + + /* if the descriptor isn't done, no work yet to do */ + gen_id = le16_to_cpu(splitq_flex_rx_desc->pktlen_gen_bufq_id); + gen_id = (gen_id & IECM_RXD_FLEX_GEN_M) >> IECM_RXD_FLEX_GEN_S; + if (test_bit(__IECM_Q_GEN_CHK, rxq->flags) != gen_id) + break; + + pkt_len = le16_to_cpu(splitq_flex_rx_desc->pktlen_gen_bufq_id) & + IECM_RXD_FLEX_LEN_PBUF_M; + + hbo = le16_to_cpu(splitq_flex_rx_desc->status_err0_qw1) & + BIT(IECM_RX_FLEX_DESC_STATUS0_HBO_S); + + if (unlikely(hbo)) { + rxq->q_stats.rx.hsplit_hbo++; + goto bypass_hsplit; + } + + hdr_len = + le16_to_cpu(splitq_flex_rx_desc->hdrlen_flags) & + IECM_RXD_FLEX_LEN_HDR_M; + +bypass_hsplit: + bufq_id = le16_to_cpu(splitq_flex_rx_desc->pktlen_gen_bufq_id); + bufq_id = (bufq_id & IECM_RXD_FLEX_BUFQ_ID_M) >> + IECM_RXD_FLEX_BUFQ_ID_S; + /* retrieve buffer from the rxq */ + rx_bufq = &rxq->rxq_grp->splitq.bufq_sets[bufq_id].bufq; + + buf_id = le16_to_cpu(splitq_flex_rx_desc->fmd0_bufid.buf_id); + + if (pkt_len) { + rx_buf = &rx_bufq->rx_buf.buf[buf_id]; + iecm_rx_get_buf_page(rx_bufq->dev, rx_buf, pkt_len); + } + + if (hdr_len) { + hdr_buf = &rx_bufq->rx_buf.hdr_buf[buf_id]; + iecm_rx_get_buf_page(rx_bufq->dev, hdr_buf, + hdr_len); + + skb = iecm_rx_construct_skb(rxq, hdr_buf, hdr_len); + } + + if (skb && pkt_len) + iecm_rx_add_frag(rx_buf, skb, pkt_len); + else if (pkt_len) + skb = iecm_rx_construct_skb(rxq, rx_buf, pkt_len); + + /* exit if we failed to retrieve a buffer */ + if (!skb) { + /* If we fetched a buffer, but didn't use it + * undo pagecnt_bias decrement + */ + if (rx_buf) + rx_buf->pagecnt_bias++; + break; + } + + iecm_rx_splitq_put_bufs(rx_bufq, hdr_buf, rx_buf); + iecm_rx_bump_ntc(rxq); + cleaned_count[bufq_id]++; + + /* skip if it is non EOP desc */ + if (iecm_rx_splitq_is_non_eop(splitq_flex_rx_desc)) + continue; + + stat_err_bits = BIT(IECM_RX_FLEX_DESC_STATUS0_RXE_S); + stat_err0_qw0 = splitq_flex_rx_desc->status_err0_qw0; + if (unlikely(iecm_rx_splitq_test_staterr(stat_err0_qw0, + stat_err_bits))) { + 
dev_kfree_skb_any(skb); + skb = NULL; + continue; + } + + /* correct empty headers and pad skb if needed (to make valid + * Ethernet frame + */ + if (iecm_rx_cleanup_headers(skb)) { + skb = NULL; + continue; + } + + /* probably a little skewed due to removing CRC */ + total_rx_bytes += skb->len; + + /* protocol */ + if (unlikely(iecm_rx_process_skb_fields(rxq, skb, + splitq_flex_rx_desc))) { + dev_kfree_skb_any(skb); + skb = NULL; + continue; + } + + /* send completed skb up the stack */ + iecm_rx_skb(rxq, skb); + skb = NULL; + + /* update budget accounting */ + total_rx_pkts++; + } + for (i = 0; i < IECM_BUFQS_PER_RXQ_SET; i++) { + if (cleaned_count[i]) { + rx_bufq = &rxq->rxq_grp->splitq.bufq_sets[i].bufq; + failure = iecm_rx_buf_hw_alloc_all(rx_bufq, + cleaned_count[i]) || + failure; + } + } + + rxq->skb = skb; + u64_stats_update_begin(&rxq->stats_sync); + rxq->q_stats.rx.packets += total_rx_pkts; + rxq->q_stats.rx.bytes += total_rx_bytes; + u64_stats_update_end(&rxq->stats_sync); + + rxq->itr.stats.rx.packets += total_rx_pkts; + rxq->itr.stats.rx.bytes += total_rx_bytes; + + /* guarantee a trip back through this routine if there was a failure */ + return failure ? budget : (int)total_rx_pkts; } /** @@ -2379,7 +3516,15 @@ iecm_vport_intr_napi_ena_all(struct iecm_vport *vport) static inline bool iecm_tx_splitq_clean_all(struct iecm_q_vector *q_vec, int budget) { - /* stub */ + bool clean_complete = true; + int i, budget_per_q; + + budget_per_q = max(budget / q_vec->num_txq, 1); + for (i = 0; i < q_vec->num_txq; i++) { + if (!iecm_tx_clean_complq(q_vec->tx[i], budget_per_q)) + clean_complete = false; + } + return clean_complete; } /** @@ -2394,7 +3539,22 @@ static inline bool iecm_rx_splitq_clean_all(struct iecm_q_vector *q_vec, int budget, int *cleaned) { - /* stub */ + bool clean_complete = true; + int pkts_cleaned_per_q; + int i, budget_per_q; + + budget_per_q = max(budget / q_vec->num_rxq, 1); + for (i = 0; i < q_vec->num_rxq; i++) { + pkts_cleaned_per_q = iecm_rx_splitq_clean(q_vec->rx[i], + budget_per_q); + /* if we clean as many as budgeted, we must not + * be done + */ + if (pkts_cleaned_per_q >= budget_per_q) + clean_complete = false; + *cleaned += pkts_cleaned_per_q; + } + return clean_complete; } /** @@ -2404,7 +3564,34 @@ iecm_rx_splitq_clean_all(struct iecm_q_vector *q_vec, int budget, */ int iecm_vport_splitq_napi_poll(struct napi_struct *napi, int budget) { - /* stub */ + struct iecm_q_vector *q_vector = + container_of(napi, struct iecm_q_vector, napi); + bool clean_complete; + int work_done = 0; + + clean_complete = iecm_tx_splitq_clean_all(q_vector, budget); + + /* Handle case where we are called by netpoll with a budget of 0 */ + if (budget <= 0) + return budget; + + /* We attempt to distribute budget to each Rx queue fairly, but don't + * allow the budget to go below 1 because that would exit polling early. 
+ */ + clean_complete |= iecm_rx_splitq_clean_all(q_vector, budget, + &work_done); + + /* If work not completed, return budget and polling will return */ + if (!clean_complete) + return budget; + + /* Exit the polling mode, but don't re-enable interrupts if stack might + * poll us due to busy-polling + */ + if (likely(napi_complete_done(napi, work_done))) + iecm_vport_intr_update_itr_ena_irq(q_vector); + + return min_t(int, work_done, budget - 1); } /** From patchwork Thu Jun 18 05:13:44 2020 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Jeff Kirsher X-Patchwork-Id: 217626 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org X-Spam-Level: X-Spam-Status: No, score=-9.8 required=3.0 tests=HEADER_FROM_DIFFERENT_DOMAINS, INCLUDES_PATCH, MAILING_LIST_MULTI, SIGNED_OFF_BY, SPF_HELO_NONE, SPF_PASS, USER_AGENT_GIT autolearn=ham autolearn_force=no version=3.4.0 Received: from mail.kernel.org (mail.kernel.org [198.145.29.99]) by smtp.lore.kernel.org (Postfix) with ESMTP id 9426CC433DF for ; Thu, 18 Jun 2020 05:14:20 +0000 (UTC) Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by mail.kernel.org (Postfix) with ESMTP id 7306121852 for ; Thu, 18 Jun 2020 05:14:20 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S1727121AbgFRFOT (ORCPT ); Thu, 18 Jun 2020 01:14:19 -0400 Received: from mga14.intel.com ([192.55.52.115]:64356 "EHLO mga14.intel.com" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S1726971AbgFRFOO (ORCPT ); Thu, 18 Jun 2020 01:14:14 -0400 IronPort-SDR: uvCwNMp6kf7+OFojqMAwQ0h6k2862IfJfhteEw0OLNPGWV/9zqAuu42JWXtx86AOx//nfXq4Q9 9jT3kRKaQlpA== X-IronPort-AV: E=McAfee;i="6000,8403,9655"; a="141516121" X-IronPort-AV: E=Sophos;i="5.73,525,1583222400"; d="scan'208";a="141516121" X-Amp-Result: SKIPPED(no attachment in message) X-Amp-File-Uploaded: False Received: from fmsmga008.fm.intel.com ([10.253.24.58]) by fmsmga103.fm.intel.com with ESMTP/TLS/ECDHE-RSA-AES256-GCM-SHA384; 17 Jun 2020 22:13:53 -0700 IronPort-SDR: zEw+msD7Za2wP2gHwRW2Mjw9TrLC5KADiWuK+N6rCJJLyC+ihecwjeNAxASS+PfmFBOYXSh2d6 Lff+bSl/NOXQ== X-ExtLoop1: 1 X-IronPort-AV: E=Sophos;i="5.73,525,1583222400"; d="scan'208";a="263495634" Received: from jtkirshe-desk1.jf.intel.com ([134.134.177.86]) by fmsmga008.fm.intel.com with ESMTP; 17 Jun 2020 22:13:53 -0700 From: Jeff Kirsher To: davem@davemloft.net Cc: Alan Brady , netdev@vger.kernel.org, nhorman@redhat.com, sassmann@redhat.com, Alice Michael , Phani Burra , Joshua Hay , Madhu Chittim , Pavan Kumar Linga , Donald Skidmore , Jesse Brandeburg , Sridhar Samudrala , kbuild test robot , Jeff Kirsher Subject: [net-next 15/15] idpf: Introduce idpf driver Date: Wed, 17 Jun 2020 22:13:44 -0700 Message-Id: <20200618051344.516587-16-jeffrey.t.kirsher@intel.com> X-Mailer: git-send-email 2.26.2 In-Reply-To: <20200618051344.516587-1-jeffrey.t.kirsher@intel.com> References: <20200618051344.516587-1-jeffrey.t.kirsher@intel.com> MIME-Version: 1.0 Sender: netdev-owner@vger.kernel.org Precedence: bulk List-ID: X-Mailing-List: netdev@vger.kernel.org From: Alan Brady Utilizes the Intel Ethernet Common Module and provides a device specific implementation for data plane devices. 
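The device-specific layer is deliberately thin: it fills in a table of register-access callbacks and then hands the rest of probe off to the common module. Below is a condensed sketch of that flow, based on the probe path added in this patch; error handling, pcim_enable_device(), and the remaining reg_ops hooks are trimmed for brevity, and the comment about when iecm calls back into reg_ops_init describes the expected behaviour rather than something shown in this hunk.

	static void idpf_reg_ops_init(struct iecm_adapter *adapter)
	{
		/* install the device-specific register accessors used by iecm */
		adapter->dev_ops.reg_ops.ctlq_reg_init = idpf_ctlq_reg_init;
		adapter->dev_ops.reg_ops.intr_reg_init = idpf_intr_reg_init;
		adapter->dev_ops.reg_ops.trigger_reset = idpf_trigger_reset;
		/* ... remaining reg_ops hooks ... */
	}

	int idpf_probe(struct pci_dev *pdev, const struct pci_device_id *ent)
	{
		struct iecm_adapter *adapter;

		adapter = kzalloc(sizeof(*adapter), GFP_KERNEL);
		if (!adapter)
			return -ENOMEM;

		/* iecm_probe() is expected to invoke reg_ops_init to wire up the hooks */
		adapter->dev_ops.reg_ops_init = idpf_reg_ops_init;
		return iecm_probe(pdev, ent, adapter);
	}

Every hardware touch point (control queue, vport queue tail, interrupt, and reset registers) goes through those hooks, which is what keeps iecm itself device-agnostic.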
Signed-off-by: Alan Brady Signed-off-by: Alice Michael Signed-off-by: Phani Burra Signed-off-by: Joshua Hay Signed-off-by: Madhu Chittim Signed-off-by: Pavan Kumar Linga Reviewed-by: Donald Skidmore Reviewed-by: Jesse Brandeburg Reviewed-by: Sridhar Samudrala Reported-by: kbuild test robot Signed-off-by: Jeff Kirsher --- .../networking/device_drivers/intel/idpf.rst | 47 ++++++ MAINTAINERS | 1 + drivers/net/ethernet/intel/Kconfig | 8 + drivers/net/ethernet/intel/Makefile | 1 + drivers/net/ethernet/intel/idpf/Makefile | 12 ++ drivers/net/ethernet/intel/idpf/idpf_dev.h | 17 ++ drivers/net/ethernet/intel/idpf/idpf_devids.h | 10 ++ drivers/net/ethernet/intel/idpf/idpf_main.c | 136 ++++++++++++++++ drivers/net/ethernet/intel/idpf/idpf_reg.c | 152 ++++++++++++++++++ 9 files changed, 384 insertions(+) create mode 100644 Documentation/networking/device_drivers/intel/idpf.rst create mode 100644 drivers/net/ethernet/intel/idpf/Makefile create mode 100644 drivers/net/ethernet/intel/idpf/idpf_dev.h create mode 100644 drivers/net/ethernet/intel/idpf/idpf_devids.h create mode 100644 drivers/net/ethernet/intel/idpf/idpf_main.c create mode 100644 drivers/net/ethernet/intel/idpf/idpf_reg.c diff --git a/Documentation/networking/device_drivers/intel/idpf.rst b/Documentation/networking/device_drivers/intel/idpf.rst new file mode 100644 index 000000000000..973fa9613428 --- /dev/null +++ b/Documentation/networking/device_drivers/intel/idpf.rst @@ -0,0 +1,47 @@ +.. SPDX-License-Identifier: GPL-2.0 + +================================================================== +Linux Base Driver for the Intel(R) Smart Network Adapter Family Series +================================================================== + +Intel idpf Linux driver. +Copyright(c) 2020 Intel Corporation. + +Contents +======== + +- Enabling the driver +- Support + +The driver in this release supports Intel's Smart Network Adapter Family Series +of products. For more information, visit Intel's support page at +https://support.intel.com. + +Enabling the driver +=================== +The driver is enabled via the standard kernel configuration system, +using the make command:: + + make oldconfig/menuconfig/etc. + +The driver is located in the menu structure at: + + -> Device Drivers + -> Network device support (NETDEVICES [=y]) + -> Ethernet driver support + -> Intel devices + -> Intel(R) Smart Network Adapter Family Series Support + +Support +======= +For general information, go to the Intel support website at: + +https://www.intel.com/support/ + +or the Intel Wired Networking project hosted by Sourceforge at: + +https://sourceforge.net/projects/e1000 + +If an issue is identified with the released source code on a supported kernel +with a supported adapter, email the specific information related to the issue +to e1000-devel@lists.sf.net. 
diff --git a/MAINTAINERS b/MAINTAINERS index 102ee1e4aef0..97ac0d417067 100644 --- a/MAINTAINERS +++ b/MAINTAINERS @@ -8654,6 +8654,7 @@ F: Documentation/networking/device_drivers/intel/fm10k.rst F: Documentation/networking/device_drivers/intel/i40e.rst F: Documentation/networking/device_drivers/intel/iavf.rst F: Documentation/networking/device_drivers/intel/ice.rst +F: Documentation/networking/device_drivers/intel/idpf.rst F: Documentation/networking/device_drivers/intel/iecm.rst F: Documentation/networking/device_drivers/intel/igb.rst F: Documentation/networking/device_drivers/intel/igbvf.rst diff --git a/drivers/net/ethernet/intel/Kconfig b/drivers/net/ethernet/intel/Kconfig index 6dd985cbdb6d..9e0b3c1bf7c6 100644 --- a/drivers/net/ethernet/intel/Kconfig +++ b/drivers/net/ethernet/intel/Kconfig @@ -349,4 +349,12 @@ config IECM help To compile this driver as a module, choose M here. The module will be called iecm. MSI-X interrupt support is required + +config IDPF + tristate "Intel(R) Data Plane Function Support" + default n + depends on PCI + help + To compile this driver as a module, choose M here. The module + will be called idpf. endif # NET_VENDOR_INTEL diff --git a/drivers/net/ethernet/intel/Makefile b/drivers/net/ethernet/intel/Makefile index c9eba9cc5087..3786c2269f3d 100644 --- a/drivers/net/ethernet/intel/Makefile +++ b/drivers/net/ethernet/intel/Makefile @@ -17,3 +17,4 @@ obj-$(CONFIG_IAVF) += iavf/ obj-$(CONFIG_FM10K) += fm10k/ obj-$(CONFIG_ICE) += ice/ obj-$(CONFIG_IECM) += iecm/ +obj-$(CONFIG_IDPF) += idpf/ diff --git a/drivers/net/ethernet/intel/idpf/Makefile b/drivers/net/ethernet/intel/idpf/Makefile new file mode 100644 index 000000000000..ac6cac6c6360 --- /dev/null +++ b/drivers/net/ethernet/intel/idpf/Makefile @@ -0,0 +1,12 @@ +# SPDX-License-Identifier: GPL-2.0-only +# Copyright (C) 2020 Intel Corporation + +# +# Makefile for the Intel(R) Data Plane Function Linux Driver +# + +obj-$(CONFIG_IDPF) += idpf.o + +idpf-y := \ + idpf_main.o \ + idpf_reg.o diff --git a/drivers/net/ethernet/intel/idpf/idpf_dev.h b/drivers/net/ethernet/intel/idpf/idpf_dev.h new file mode 100644 index 000000000000..1da33f5120a2 --- /dev/null +++ b/drivers/net/ethernet/intel/idpf/idpf_dev.h @@ -0,0 +1,17 @@ +/* SPDX-License-Identifier: GPL-2.0-only */ +/* Copyright (C) 2020 Intel Corporation */ + +#ifndef _IDPF_DEV_H_ +#define _IDPF_DEV_H_ + +#include + +void idpf_intr_reg_init(struct iecm_vport *vport); +void idpf_mb_intr_reg_init(struct iecm_adapter *adapter); +void idpf_reset_reg_init(struct iecm_reset_reg *reset_reg); +void idpf_trigger_reset(struct iecm_adapter *adapter, + enum iecm_flags trig_cause); +void idpf_vportq_reg_init(struct iecm_vport *vport); +void idpf_ctlq_reg_init(struct iecm_ctlq_create_info *cq); + +#endif /* _IDPF_DEV_H_ */ diff --git a/drivers/net/ethernet/intel/idpf/idpf_devids.h b/drivers/net/ethernet/intel/idpf/idpf_devids.h new file mode 100644 index 000000000000..ee373a04cb20 --- /dev/null +++ b/drivers/net/ethernet/intel/idpf/idpf_devids.h @@ -0,0 +1,10 @@ +/* SPDX-License-Identifier: GPL-2.0-only */ +/* Copyright (C) 2020 Intel Corporation */ + +#ifndef _IDPF_DEVIDS_H_ +#define _IDPF_DEVIDS_H_ + +/* Device IDs */ +#define IDPF_DEV_ID_PF 0x1452 + +#endif /* _IDPF_DEVIDS_H_ */ diff --git a/drivers/net/ethernet/intel/idpf/idpf_main.c b/drivers/net/ethernet/intel/idpf/idpf_main.c new file mode 100644 index 000000000000..56f20abd57ee --- /dev/null +++ b/drivers/net/ethernet/intel/idpf/idpf_main.c @@ -0,0 +1,136 @@ +// SPDX-License-Identifier: GPL-2.0-only +/* Copyright 
(C) 2020 Intel Corporation */ + +#define pr_fmt(fmt) KBUILD_MODNAME ": " fmt + +#include "idpf_dev.h" +#include "idpf_devids.h" + +#define DRV_SUMMARY "Intel(R) Data Plane Function Linux Driver" +static const char idpf_driver_string[] = DRV_SUMMARY; +static const char idpf_copyright[] = "Copyright (c) 2020, Intel Corporation."; + +MODULE_AUTHOR("Intel Corporation, "); +MODULE_DESCRIPTION(DRV_SUMMARY); +MODULE_LICENSE("GPL v2"); + +static int debug = -1; +module_param(debug, int, 0644); +#ifndef CONFIG_DYNAMIC_DEBUG +MODULE_PARM_DESC(debug, "netif level (0=none,...,16=all), hw debug_mask (0x8XXXXXXX)"); +#else +MODULE_PARM_DESC(debug, "netif level (0=none,...,16=all)"); +#endif /* !CONFIG_DYNAMIC_DEBUG */ + +/** + * idpf_reg_ops_init - Initialize register API function pointers + * @adapter: Driver specific private structure + */ +static void idpf_reg_ops_init(struct iecm_adapter *adapter) +{ + adapter->dev_ops.reg_ops.ctlq_reg_init = idpf_ctlq_reg_init; + adapter->dev_ops.reg_ops.vportq_reg_init = idpf_vportq_reg_init; + adapter->dev_ops.reg_ops.intr_reg_init = idpf_intr_reg_init; + adapter->dev_ops.reg_ops.mb_intr_reg_init = idpf_mb_intr_reg_init; + adapter->dev_ops.reg_ops.reset_reg_init = idpf_reset_reg_init; + adapter->dev_ops.reg_ops.trigger_reset = idpf_trigger_reset; +} + +/** + * idpf_probe - Device initialization routine + * @pdev: PCI device information struct + * @ent: entry in idpf_pci_tbl + * + * Returns 0 on success, negative on failure + */ +int idpf_probe(struct pci_dev *pdev, + const struct pci_device_id __always_unused *ent) +{ + struct iecm_adapter *adapter = NULL; + int err; + + err = pcim_enable_device(pdev); + if (err) + return err; + + adapter = kzalloc(sizeof(*adapter), GFP_KERNEL); + if (!adapter) + return -ENOMEM; + + adapter->dev_ops.reg_ops_init = idpf_reg_ops_init; + + err = iecm_probe(pdev, ent, adapter); + if (err) + kfree(adapter); + + return err; +} + +/** + * idpf_remove - Device removal routine + * @pdev: PCI device information struct + */ +static void idpf_remove(struct pci_dev *pdev) +{ + struct iecm_adapter *adapter = pci_get_drvdata(pdev); + + iecm_remove(pdev); + kfree(adapter); +} + +/* idpf_pci_tbl - PCI Dev idpf ID Table + * + * Wildcard entries (PCI_ANY_ID) should come last + * Last entry must be all 0s + * + * { Vendor ID, Deviecm ID, SubVendor ID, SubDevice ID, + * Class, Class Mask, private data (not used) } + */ +static const struct pci_device_id idpf_pci_tbl[] = { + { PCI_VDEVICE(INTEL, IDPF_DEV_ID_PF), 0 }, + /* required last entry */ + { 0, } +}; +MODULE_DEVICE_TABLE(pci, idpf_pci_tbl); + +static struct pci_driver idpf_driver = { + .name = KBUILD_MODNAME, + .id_table = idpf_pci_tbl, + .probe = idpf_probe, + .remove = idpf_remove, + .shutdown = iecm_shutdown, +}; + +/** + * idpf_module_init - Driver registration routine + * + * idpf_module_init is the first routine called when the driver is + * loaded. All it does is register with the PCI subsystem. + */ +static int __init idpf_module_init(void) +{ + int status; + + pr_info("%s", idpf_driver_string); + pr_info("%s\n", idpf_copyright); + + status = pci_register_driver(&idpf_driver); + if (status) + pr_err("failed to register pci driver, err %d\n", status); + + return status; +} +module_init(idpf_module_init); + +/** + * idpf_module_exit - Driver exit cleanup routine + * + * idpf_module_exit is called just before the driver is removed + * from memory. 
+ */ +static void __exit idpf_module_exit(void) +{ + pci_unregister_driver(&idpf_driver); + pr_info("module unloaded\n"); +} +module_exit(idpf_module_exit); diff --git a/drivers/net/ethernet/intel/idpf/idpf_reg.c b/drivers/net/ethernet/intel/idpf/idpf_reg.c new file mode 100644 index 000000000000..f5f364639cfc --- /dev/null +++ b/drivers/net/ethernet/intel/idpf/idpf_reg.c @@ -0,0 +1,152 @@ +// SPDX-License-Identifier: GPL-2.0-only +/* Copyright (C) 2020 Intel Corporation */ + +#include "idpf_dev.h" +#include + +/** + * idpf_ctlq_reg_init - initialize default mailbox registers + * @cq: pointer to the array of create control queues + */ +void idpf_ctlq_reg_init(struct iecm_ctlq_create_info *cq) +{ + int i; + +#define NUM_Q 2 + for (i = 0; i < NUM_Q; i++) { + struct iecm_ctlq_create_info *ccq = cq + i; + + switch (ccq->type) { + case IECM_CTLQ_TYPE_MAILBOX_TX: + /* set head and tail registers in our local struct */ + ccq->reg.head = PF_FW_ATQH; + ccq->reg.tail = PF_FW_ATQT; + ccq->reg.len = PF_FW_ATQLEN; + ccq->reg.bah = PF_FW_ATQBAH; + ccq->reg.bal = PF_FW_ATQBAL; + ccq->reg.len_mask = PF_FW_ATQLEN_ATQLEN_M; + ccq->reg.len_ena_mask = PF_FW_ATQLEN_ATQENABLE_M; + ccq->reg.head_mask = PF_FW_ATQH_ATQH_M; + break; + case IECM_CTLQ_TYPE_MAILBOX_RX: + /* set head and tail registers in our local struct */ + ccq->reg.head = PF_FW_ARQH; + ccq->reg.tail = PF_FW_ARQT; + ccq->reg.len = PF_FW_ARQLEN; + ccq->reg.bah = PF_FW_ARQBAH; + ccq->reg.bal = PF_FW_ARQBAL; + ccq->reg.len_mask = PF_FW_ARQLEN_ARQLEN_M; + ccq->reg.len_ena_mask = PF_FW_ARQLEN_ARQENABLE_M; + ccq->reg.head_mask = PF_FW_ARQH_ARQH_M; + break; + default: + break; + } + } +} + +/** + * idpf_vportq_reg_init - Initialize tail registers + * @vport: virtual port structure + */ +void idpf_vportq_reg_init(struct iecm_vport *vport) +{ + struct iecm_hw *hw = &vport->adapter->hw; + struct iecm_queue *q; + int i, j; + + for (i = 0; i < vport->num_txq_grp; i++) { + int num_txq = vport->txq_grps[i].num_txq; + + for (j = 0; j < num_txq; j++) { + q = &vport->txq_grps[i].txqs[j]; + q->tail = hw->hw_addr + PF_QTX_COMM_DBELL(q->q_id); + } + } + + for (i = 0; i < vport->num_rxq_grp; i++) { + struct iecm_rxq_group *rxq_grp = &vport->rxq_grps[i]; + int num_rxq; + + if (iecm_is_queue_model_split(vport->rxq_model)) { + for (j = 0; j < IECM_BUFQS_PER_RXQ_SET; j++) { + q = &rxq_grp->splitq.bufq_sets[j].bufq; + q->tail = hw->hw_addr + + PF_QRX_BUFFQ_TAIL(q->q_id); + } + + num_rxq = rxq_grp->splitq.num_rxq_sets; + } else { + num_rxq = rxq_grp->singleq.num_rxq; + } + + for (j = 0; j < num_rxq; j++) { + if (iecm_is_queue_model_split(vport->rxq_model)) + q = &rxq_grp->splitq.rxq_sets[j].rxq; + else + q = &rxq_grp->singleq.rxqs[j]; + q->tail = hw->hw_addr + PF_QRX_TAIL(q->q_id); + } + } +} + +/** + * idpf_mb_intr_reg_init - Initialize mailbox interrupt register + * @adapter: adapter structure + */ +void idpf_mb_intr_reg_init(struct iecm_adapter *adapter) +{ + struct iecm_intr_reg *intr = &adapter->mb_vector.intr_reg; + int vidx; + + vidx = adapter->mb_vector.v_idx; + intr->dyn_ctl = PF_GLINT_DYN_CTL(vidx); + intr->dyn_ctl_intena_m = PF_GLINT_DYN_CTL_INTENA_M; + intr->dyn_ctl_itridx_m = 0x3 << PF_GLINT_DYN_CTL_ITR_INDX_S; +} + +/** + * idpf_intr_reg_init - Initialize interrupt registers + * @vport: virtual port structure + */ +void idpf_intr_reg_init(struct iecm_vport *vport) +{ + int q_idx; + + for (q_idx = 0; q_idx < vport->num_q_vectors; q_idx++) { + struct iecm_q_vector *q_vector = &vport->q_vectors[q_idx]; + struct iecm_intr_reg *intr = &q_vector->intr_reg; + u32 vidx 
= q_vector->v_idx; + + intr->dyn_ctl = PF_GLINT_DYN_CTL(vidx); + intr->dyn_ctl_clrpba_m = PF_GLINT_DYN_CTL_CLEARPBA_M; + intr->dyn_ctl_intena_m = PF_GLINT_DYN_CTL_INTENA_M; + intr->dyn_ctl_itridx_s = PF_GLINT_DYN_CTL_ITR_INDX_S; + intr->dyn_ctl_intrvl_s = PF_GLINT_DYN_CTL_INTERVAL_S; + intr->itr = PF_GLINT_ITR(VIRTCHNL_ITR_IDX_0, vidx); + } +} + +/** + * idpf_reset_reg_init - Initialize reset registers + * @reset_reg: struct to be filled in with reset registers + */ +void idpf_reset_reg_init(struct iecm_reset_reg *reset_reg) +{ + reset_reg->rstat = PFGEN_RSTAT; + reset_reg->rstat_m = PFGEN_RSTAT_PFR_STATE_M; +} + +/** + * idpf_trigger_reset - trigger reset + * @adapter: Driver specific private structure + * @trig_cause: Reason to trigger a reset + */ +void idpf_trigger_reset(struct iecm_adapter *adapter, + enum iecm_flags trig_cause) +{ + u32 reset_reg; + + reset_reg = rd32(&adapter->hw, PFGEN_CTRL); + wr32(&adapter->hw, PFGEN_CTRL, (reset_reg | PFGEN_CTRL_PFSWR)); +}
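For illustration only, here is a hypothetical iecm-side consumer of the reset hooks above. The helper name and the polling note are assumptions made for this sketch; only the reg_ops prototypes and the iecm_reset_reg fields come from the code in this patch, and the real call sites live in the iecm patches earlier in the series.

	/* Hypothetical sketch, not part of this patch: how the common module
	 * might drive the device-specific reset hooks registered by idpf.
	 */
	static void iecm_reset_sketch(struct iecm_adapter *adapter,
				      enum iecm_flags trig_cause)
	{
		struct iecm_reset_reg reset_reg;

		/* device layer reports which register/mask expose reset status */
		adapter->dev_ops.reg_ops.reset_reg_init(&reset_reg);

		/* device layer does the read-modify-write that starts the reset */
		adapter->dev_ops.reg_ops.trigger_reset(adapter, trig_cause);

		/* completion would then be detected by polling reset_reg.rstat,
		 * masked with reset_reg.rstat_m
		 */
	}

The function-pointer indirection is the key design choice: the common module never touches device registers directly, so other data plane devices could reuse the same Tx/Rx and virtchnl code by supplying their own reg_ops table.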