From patchwork Fri Apr 9 16:43:43 2021
X-Patchwork-Submitter: Tony Nguyen
X-Patchwork-Id: 418658
From: Tony Nguyen
To: davem@davemloft.net, kuba@kernel.org
Cc: Andre Guedes, netdev@vger.kernel.org, sassmann@redhat.com,
 anthony.l.nguyen@intel.com, bjorn.topel@intel.com, magnus.karlsson@intel.com,
 maciej.fijalkowski@intel.com, sasha.neftin@intel.com, vitaly.lifshits@intel.com,
 Vedang Patel, Jithu Joseph, Dvora Fuxbrumer
Subject: [PATCH net-next 1/9] igc: Move igc_xdp_is_enabled()
Date: Fri, 9 Apr 2021 09:43:43 -0700
Message-Id: <20210409164351.188953-2-anthony.l.nguyen@intel.com>
In-Reply-To: <20210409164351.188953-1-anthony.l.nguyen@intel.com>
References: <20210409164351.188953-1-anthony.l.nguyen@intel.com>
X-Mailing-List: netdev@vger.kernel.org

From: Andre Guedes

Move the helper igc_xdp_is_enabled() to igc_xdp.h so it can be reused
in igc_xdp.c by upcoming patches that will introduce AF_XDP zero-copy
support to the driver.
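For context, the kind of reuse this enables looks like the sketch below:
once the helper lives in igc_xdp.h as a static inline, igc_xdp.c can call
it directly. The wrapper name igc_xdp_needs_reset() is hypothetical, not
part of this series; patch 8/9 later in this series open-codes the same
expression.

    /* Illustrative sketch only: igc_xdp.c code consulting the relocated
     * helper. The wrapper name is hypothetical. */
    static bool igc_xdp_needs_reset(struct igc_adapter *adapter)
    {
    	return netif_running(adapter->netdev) &&
    	       igc_xdp_is_enabled(adapter);
    }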
Signed-off-by: Andre Guedes
Signed-off-by: Vedang Patel
Signed-off-by: Jithu Joseph
Reviewed-by: Maciej Fijalkowski
Tested-by: Dvora Fuxbrumer
Signed-off-by: Tony Nguyen
---
 drivers/net/ethernet/intel/igc/igc_main.c | 5 -----
 drivers/net/ethernet/intel/igc/igc_xdp.h  | 5 +++++
 2 files changed, 5 insertions(+), 5 deletions(-)

diff --git a/drivers/net/ethernet/intel/igc/igc_main.c b/drivers/net/ethernet/intel/igc/igc_main.c
index 10765491e357..eef2e195dd37 100644
--- a/drivers/net/ethernet/intel/igc/igc_main.c
+++ b/drivers/net/ethernet/intel/igc/igc_main.c
@@ -515,11 +515,6 @@ static int igc_setup_all_rx_resources(struct igc_adapter *adapter)
 	return err;
 }
 
-static bool igc_xdp_is_enabled(struct igc_adapter *adapter)
-{
-	return !!adapter->xdp_prog;
-}
-
 /**
  * igc_configure_rx_ring - Configure a receive ring after Reset
  * @adapter: board private structure
diff --git a/drivers/net/ethernet/intel/igc/igc_xdp.h b/drivers/net/ethernet/intel/igc/igc_xdp.h
index cfecb515b718..412aa369e6ba 100644
--- a/drivers/net/ethernet/intel/igc/igc_xdp.h
+++ b/drivers/net/ethernet/intel/igc/igc_xdp.h
@@ -10,4 +10,9 @@ int igc_xdp_set_prog(struct igc_adapter *adapter, struct bpf_prog *prog,
 int igc_xdp_register_rxq_info(struct igc_ring *ring);
 void igc_xdp_unregister_rxq_info(struct igc_ring *ring);
 
+static inline bool igc_xdp_is_enabled(struct igc_adapter *adapter)
+{
+	return !!adapter->xdp_prog;
+}
+
 #endif /* _IGC_XDP_H_ */

From patchwork Fri Apr 9 16:43:46 2021
X-Patchwork-Submitter: Tony Nguyen
X-Patchwork-Id: 418657
From: Tony Nguyen
To: davem@davemloft.net, kuba@kernel.org
Cc: Andre Guedes, netdev@vger.kernel.org, sassmann@redhat.com,
 anthony.l.nguyen@intel.com, bjorn.topel@intel.com,
 magnus.karlsson@intel.com, maciej.fijalkowski@intel.com,
 sasha.neftin@intel.com, vitaly.lifshits@intel.com,
 Vedang Patel, Jithu Joseph, Dvora Fuxbrumer
Subject: [PATCH net-next 4/9] igc: Refactor XDP rxq info registration
Date: Fri, 9 Apr 2021 09:43:46 -0700
Message-Id: <20210409164351.188953-5-anthony.l.nguyen@intel.com>
In-Reply-To: <20210409164351.188953-1-anthony.l.nguyen@intel.com>
References: <20210409164351.188953-1-anthony.l.nguyen@intel.com>
X-Mailing-List: netdev@vger.kernel.org

From: Andre Guedes

Refactor the XDP rxq info registration code, preparing the driver for
AF_XDP zero-copy support, which is added by upcoming patches.

Currently, both the xdp_rxq and the memory model are registered during
RX resource setup time by the igc_xdp_register_rxq_info() helper. With
AF_XDP, we want to register the memory model later on, while
configuring the ring, because only then do we know which memory model
type to register (MEM_TYPE_PAGE_SHARED or MEM_TYPE_XSK_BUFF_POOL).

The helpers igc_xdp_register_rxq_info() and
igc_xdp_unregister_rxq_info() are no longer useful, so they are
removed.

Signed-off-by: Andre Guedes
Signed-off-by: Vedang Patel
Signed-off-by: Jithu Joseph
Reviewed-by: Maciej Fijalkowski
Tested-by: Dvora Fuxbrumer
Signed-off-by: Tony Nguyen
---
 drivers/net/ethernet/intel/igc/igc_main.c | 16 ++++++++++----
 drivers/net/ethernet/intel/igc/igc_xdp.c  | 27 -----------------------
 drivers/net/ethernet/intel/igc/igc_xdp.h  |  3 ---
 3 files changed, 12 insertions(+), 34 deletions(-)

diff --git a/drivers/net/ethernet/intel/igc/igc_main.c b/drivers/net/ethernet/intel/igc/igc_main.c
index f715b69805fa..cd192517946b 100644
--- a/drivers/net/ethernet/intel/igc/igc_main.c
+++ b/drivers/net/ethernet/intel/igc/igc_main.c
@@ -419,7 +419,7 @@ void igc_free_rx_resources(struct igc_ring *rx_ring)
 {
 	igc_clean_rx_ring(rx_ring);
 
-	igc_xdp_unregister_rxq_info(rx_ring);
+	xdp_rxq_info_unreg(&rx_ring->xdp_rxq);
 
 	vfree(rx_ring->rx_buffer_info);
 	rx_ring->rx_buffer_info = NULL;
@@ -458,11 +458,16 @@ int igc_setup_rx_resources(struct igc_ring *rx_ring)
 {
 	struct net_device *ndev = rx_ring->netdev;
 	struct device *dev = rx_ring->dev;
+	u8 index = rx_ring->queue_index;
 	int size, desc_len, res;
 
-	res = igc_xdp_register_rxq_info(rx_ring);
-	if (res < 0)
+	res = xdp_rxq_info_reg(&rx_ring->xdp_rxq, ndev, index,
+			       rx_ring->q_vector->napi.napi_id);
+	if (res < 0) {
+		netdev_err(ndev, "Failed to register xdp_rxq index %u\n",
+			   index);
 		return res;
+	}
 
 	size = sizeof(struct igc_rx_buffer) * rx_ring->count;
 	rx_ring->rx_buffer_info = vzalloc(size);
@@ -488,7 +493,7 @@ int igc_setup_rx_resources(struct igc_ring *rx_ring)
 	return 0;
 
 err:
-	igc_xdp_unregister_rxq_info(rx_ring);
+	xdp_rxq_info_unreg(&rx_ring->xdp_rxq);
 	vfree(rx_ring->rx_buffer_info);
 	rx_ring->rx_buffer_info = NULL;
 	netdev_err(ndev, "Unable to allocate memory for Rx descriptor ring\n");
@@ -536,6 +541,9 @@ static void igc_configure_rx_ring(struct igc_adapter *adapter,
 	u32 srrctl = 0, rxdctl = 0;
 	u64 rdba = ring->dma;
 
+	WARN_ON(xdp_rxq_info_reg_mem_model(&ring->xdp_rxq,
+					   MEM_TYPE_PAGE_SHARED, NULL));
+
 	if (igc_xdp_is_enabled(adapter))
 		set_ring_uses_large_buffer(ring);
 
diff --git a/drivers/net/ethernet/intel/igc/igc_xdp.c b/drivers/net/ethernet/intel/igc/igc_xdp.c
index 11133c4619bb..27c886a254f1 100644
--- a/drivers/net/ethernet/intel/igc/igc_xdp.c
+++ b/drivers/net/ethernet/intel/igc/igc_xdp.c
@@ -31,30 +31,3 @@ int igc_xdp_set_prog(struct igc_adapter *adapter, struct bpf_prog *prog,
 
 	return 0;
 }
-
-int igc_xdp_register_rxq_info(struct igc_ring *ring)
-{
-	struct net_device *dev = ring->netdev;
-	int err;
-
-	err = xdp_rxq_info_reg(&ring->xdp_rxq, dev, ring->queue_index, 0);
-	if (err) {
-		netdev_err(dev, "Failed to register xdp rxq info\n");
-		return err;
-	}
-
-	err = xdp_rxq_info_reg_mem_model(&ring->xdp_rxq, MEM_TYPE_PAGE_SHARED,
-					 NULL);
-	if (err) {
-		netdev_err(dev, "Failed to register xdp rxq mem model\n");
-		xdp_rxq_info_unreg(&ring->xdp_rxq);
-		return err;
-	}
-
-	return 0;
-}
-
-void igc_xdp_unregister_rxq_info(struct igc_ring *ring)
-{
-	xdp_rxq_info_unreg(&ring->xdp_rxq);
-}
diff --git a/drivers/net/ethernet/intel/igc/igc_xdp.h b/drivers/net/ethernet/intel/igc/igc_xdp.h
index 412aa369e6ba..cdaa2c39b03a 100644
--- a/drivers/net/ethernet/intel/igc/igc_xdp.h
+++ b/drivers/net/ethernet/intel/igc/igc_xdp.h
@@ -7,9 +7,6 @@
 int igc_xdp_set_prog(struct igc_adapter *adapter, struct bpf_prog *prog,
 		     struct netlink_ext_ack *extack);
 
-int igc_xdp_register_rxq_info(struct igc_ring *ring);
-void igc_xdp_unregister_rxq_info(struct igc_ring *ring);
-
 static inline bool igc_xdp_is_enabled(struct igc_adapter *adapter)
 {
 	return !!adapter->xdp_prog;

From patchwork Fri Apr 9 16:43:47 2021
X-Patchwork-Submitter: Tony Nguyen
X-Patchwork-Id: 418656
From: Tony Nguyen
To: davem@davemloft.net, kuba@kernel.org
Cc: Andre Guedes, netdev@vger.kernel.org, sassmann@redhat.com,
 anthony.l.nguyen@intel.com, bjorn.topel@intel.com, magnus.karlsson@intel.com,
 maciej.fijalkowski@intel.com, sasha.neftin@intel.com, vitaly.lifshits@intel.com,
 Vedang Patel, Jithu Joseph, Dvora Fuxbrumer
Subject: [PATCH net-next 5/9] igc: Introduce TX/RX stats helpers
Date: Fri, 9 Apr 2021 09:43:47 -0700
Message-Id: <20210409164351.188953-6-anthony.l.nguyen@intel.com>
In-Reply-To: <20210409164351.188953-1-anthony.l.nguyen@intel.com>
References: <20210409164351.188953-1-anthony.l.nguyen@intel.com>
X-Mailing-List: netdev@vger.kernel.org

From: Andre Guedes

In preparation for AF_XDP zero-copy support, encapsulate the code that
updates the driver RX stats in its own local helper so it can be reused
in the zero-copy path. Likewise, encapsulate the TX stats code.

Signed-off-by: Andre Guedes
Signed-off-by: Vedang Patel
Signed-off-by: Jithu Joseph
Reviewed-by: Maciej Fijalkowski
Tested-by: Dvora Fuxbrumer
Signed-off-by: Tony Nguyen
---
 drivers/net/ethernet/intel/igc/igc_main.c | 43 ++++++++++++++++-------
 1 file changed, 31 insertions(+), 12 deletions(-)

diff --git a/drivers/net/ethernet/intel/igc/igc_main.c b/drivers/net/ethernet/intel/igc/igc_main.c
index cd192517946b..cc157d83355b 100644
--- a/drivers/net/ethernet/intel/igc/igc_main.c
+++ b/drivers/net/ethernet/intel/igc/igc_main.c
@@ -2111,6 +2111,20 @@ static void igc_finalize_xdp(struct igc_adapter *adapter, int status)
 		xdp_do_flush();
 }
 
+static void igc_update_rx_stats(struct igc_q_vector *q_vector,
+				unsigned int packets, unsigned int bytes)
+{
+	struct igc_ring *ring = q_vector->rx.ring;
+
+	u64_stats_update_begin(&ring->rx_syncp);
+	ring->rx_stats.packets += packets;
+	ring->rx_stats.bytes += bytes;
+	u64_stats_update_end(&ring->rx_syncp);
+
+	q_vector->rx.total_packets += packets;
+	q_vector->rx.total_bytes += bytes;
+}
+
 static int igc_clean_rx_irq(struct igc_q_vector *q_vector, const int budget)
 {
 	unsigned int total_bytes = 0, total_packets = 0;
@@ -2234,12 +2248,7 @@ static int igc_clean_rx_irq(struct igc_q_vector *q_vector, const int budget)
 	/* place incomplete frames back on ring for completion */
 	rx_ring->skb = skb;
 
-	u64_stats_update_begin(&rx_ring->rx_syncp);
-	rx_ring->rx_stats.packets += total_packets;
-	rx_ring->rx_stats.bytes += total_bytes;
-	u64_stats_update_end(&rx_ring->rx_syncp);
-	q_vector->rx.total_packets += total_packets;
-	q_vector->rx.total_bytes += total_bytes;
+	igc_update_rx_stats(q_vector, total_packets, total_bytes);
 
 	if (cleaned_count)
 		igc_alloc_rx_buffers(rx_ring, cleaned_count);
@@ -2247,6 +2256,20 @@ static int igc_clean_rx_irq(struct igc_q_vector *q_vector, const int budget)
 	return total_packets;
 }
 
+static void igc_update_tx_stats(struct igc_q_vector *q_vector,
+				unsigned int packets, unsigned int bytes)
+{
+	struct igc_ring *ring = q_vector->tx.ring;
+
+	u64_stats_update_begin(&ring->tx_syncp);
+	ring->tx_stats.bytes += bytes;
+	ring->tx_stats.packets += packets;
+	u64_stats_update_end(&ring->tx_syncp);
+
+	q_vector->tx.total_bytes += bytes;
+	q_vector->tx.total_packets += packets;
+}
+
 /**
  * igc_clean_tx_irq - Reclaim resources after transmit completes
  * @q_vector: pointer to q_vector containing needed info
@@ -2349,12 +2372,8 @@ static bool igc_clean_tx_irq(struct igc_q_vector *q_vector, int napi_budget)
 	i += tx_ring->count;
 	tx_ring->next_to_clean = i;
 
-	u64_stats_update_begin(&tx_ring->tx_syncp);
-	tx_ring->tx_stats.bytes += total_bytes;
-	tx_ring->tx_stats.packets += total_packets;
-	u64_stats_update_end(&tx_ring->tx_syncp);
-	q_vector->tx.total_bytes += total_bytes;
-	q_vector->tx.total_packets += total_packets;
+
+	igc_update_tx_stats(q_vector, total_packets, total_bytes);
 
 	if (test_bit(IGC_RING_FLAG_TX_DETECT_HANG, &tx_ring->flags)) {
 		struct igc_hw *hw = &adapter->hw;

From patchwork Fri Apr 9 16:43:49 2021
X-Patchwork-Submitter: Tony Nguyen
X-Patchwork-Id: 418655
From: Tony Nguyen
To: davem@davemloft.net, kuba@kernel.org
Cc: Andre Guedes, netdev@vger.kernel.org, sassmann@redhat.com,
 anthony.l.nguyen@intel.com, bjorn.topel@intel.com, magnus.karlsson@intel.com,
 maciej.fijalkowski@intel.com, sasha.neftin@intel.com, vitaly.lifshits@intel.com,
 Vedang Patel, Jithu Joseph, Dvora Fuxbrumer
Subject: [PATCH net-next 7/9] igc: Replace IGC_TX_FLAGS_XDP flag by an enum
Date: Fri, 9 Apr 2021 09:43:49 -0700
Message-Id: <20210409164351.188953-8-anthony.l.nguyen@intel.com>
In-Reply-To: <20210409164351.188953-1-anthony.l.nguyen@intel.com>
References: <20210409164351.188953-1-anthony.l.nguyen@intel.com>
X-Mailing-List: netdev@vger.kernel.org

From: Andre Guedes

Up to this point, Tx buffers are associated with either an skb or an
xdpf, and the IGC_TX_FLAGS_XDP flag is enough to distinguish between
these two cases. However, upcoming patches will add AF_XDP zero-copy
support, introducing a third case, so this flag-based approach does
not fit well anymore.

In preparation for landing AF_XDP zero-copy support, replace the
IGC_TX_FLAGS_XDP flag by an enum, which will be extended once zero-copy
support is introduced to the driver.
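To make the motivation concrete, a sketch of where this is headed: the
zero-copy patches can grow the enum with a third member instead of
inventing another flag bit. The member name below is an assumption, not
taken from this patch:

    /* Hypothetical extension; IGC_TX_BUFFER_TYPE_XSK is an assumed name,
     * standing in for whatever the zero-copy TX patch adds. */
    enum igc_tx_buffer_type {
    	IGC_TX_BUFFER_TYPE_SKB,
    	IGC_TX_BUFFER_TYPE_XDP,
    	IGC_TX_BUFFER_TYPE_XSK,
    };

Cleanup paths then dispatch on tx_buffer->type with a switch statement,
and an unhandled value lands in the default case with a one-time warning
rather than being silently treated as an skb.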
Signed-off-by: Andre Guedes
Signed-off-by: Vedang Patel
Signed-off-by: Jithu Joseph
Reviewed-by: Maciej Fijalkowski
Tested-by: Dvora Fuxbrumer
Signed-off-by: Tony Nguyen
---
 drivers/net/ethernet/intel/igc/igc.h      |  8 ++++++--
 drivers/net/ethernet/intel/igc/igc_main.c | 25 ++++++++++++++++++-----
 2 files changed, 26 insertions(+), 7 deletions(-)

diff --git a/drivers/net/ethernet/intel/igc/igc.h b/drivers/net/ethernet/intel/igc/igc.h
index 91493a73355d..86eb1686ec43 100644
--- a/drivers/net/ethernet/intel/igc/igc.h
+++ b/drivers/net/ethernet/intel/igc/igc.h
@@ -377,8 +377,6 @@ enum igc_tx_flags {
 	/* olinfo flags */
 	IGC_TX_FLAGS_IPV4	= 0x10,
 	IGC_TX_FLAGS_CSUM	= 0x20,
-
-	IGC_TX_FLAGS_XDP	= 0x100,
 };
 
 enum igc_boards {
@@ -395,12 +393,18 @@ enum igc_boards {
 #define TXD_USE_COUNT(S)	DIV_ROUND_UP((S), IGC_MAX_DATA_PER_TXD)
 #define DESC_NEEDED	(MAX_SKB_FRAGS + 4)
 
+enum igc_tx_buffer_type {
+	IGC_TX_BUFFER_TYPE_SKB,
+	IGC_TX_BUFFER_TYPE_XDP,
+};
+
 /* wrapper around a pointer to a socket buffer,
  * so a DMA handle can be stored along with the buffer
  */
 struct igc_tx_buffer {
 	union igc_adv_tx_desc *next_to_watch;
 	unsigned long time_stamp;
+	enum igc_tx_buffer_type type;
 	union {
 		struct sk_buff *skb;
 		struct xdp_frame *xdpf;
diff --git a/drivers/net/ethernet/intel/igc/igc_main.c b/drivers/net/ethernet/intel/igc/igc_main.c
index 5bf6d8463700..b34b45afc732 100644
--- a/drivers/net/ethernet/intel/igc/igc_main.c
+++ b/drivers/net/ethernet/intel/igc/igc_main.c
@@ -191,10 +191,17 @@ static void igc_clean_tx_ring(struct igc_ring *tx_ring)
 	while (i != tx_ring->next_to_use) {
 		union igc_adv_tx_desc *eop_desc, *tx_desc;
 
-		if (tx_buffer->tx_flags & IGC_TX_FLAGS_XDP)
+		switch (tx_buffer->type) {
+		case IGC_TX_BUFFER_TYPE_XDP:
 			xdp_return_frame(tx_buffer->xdpf);
-		else
+			break;
+		case IGC_TX_BUFFER_TYPE_SKB:
 			dev_kfree_skb_any(tx_buffer->skb);
+			break;
+		default:
+			netdev_warn_once(tx_ring->netdev, "Unknown Tx buffer type\n");
+			break;
+		}
 
 		igc_unmap_tx_buffer(tx_ring->dev, tx_buffer);
 
@@ -1360,6 +1367,7 @@ static netdev_tx_t igc_xmit_frame_ring(struct sk_buff *skb,
 	/* record the location of the first descriptor for this packet */
 	first = &tx_ring->tx_buffer_info[tx_ring->next_to_use];
+	first->type = IGC_TX_BUFFER_TYPE_SKB;
 	first->skb = skb;
 	first->bytecount = skb->len;
 	first->gso_segs = 1;
@@ -1943,8 +1951,8 @@ static int igc_xdp_init_tx_buffer(struct igc_tx_buffer *buffer,
 		return -ENOMEM;
 	}
 
+	buffer->type = IGC_TX_BUFFER_TYPE_XDP;
 	buffer->xdpf = xdpf;
-	buffer->tx_flags = IGC_TX_FLAGS_XDP;
 	buffer->protocol = 0;
 	buffer->bytecount = xdpf->len;
 	buffer->gso_segs = 1;
@@ -2308,10 +2316,17 @@ static bool igc_clean_tx_irq(struct igc_q_vector *q_vector, int napi_budget)
 		total_bytes += tx_buffer->bytecount;
 		total_packets += tx_buffer->gso_segs;
 
-		if (tx_buffer->tx_flags & IGC_TX_FLAGS_XDP)
+		switch (tx_buffer->type) {
+		case IGC_TX_BUFFER_TYPE_XDP:
 			xdp_return_frame(tx_buffer->xdpf);
-		else
+			break;
+		case IGC_TX_BUFFER_TYPE_SKB:
 			napi_consume_skb(tx_buffer->skb, napi_budget);
+			break;
+		default:
+			netdev_warn_once(tx_ring->netdev, "Unknown Tx buffer type\n");
+			break;
+		}
 
 		igc_unmap_tx_buffer(tx_ring->dev, tx_buffer);

From patchwork Fri Apr 9 16:43:50 2021
X-Patchwork-Submitter: Tony Nguyen
X-Patchwork-Id: 418654
From: Tony Nguyen
To: davem@davemloft.net, kuba@kernel.org
Cc: Andre Guedes, netdev@vger.kernel.org, sassmann@redhat.com,
 anthony.l.nguyen@intel.com, bjorn.topel@intel.com, magnus.karlsson@intel.com,
 maciej.fijalkowski@intel.com, sasha.neftin@intel.com, vitaly.lifshits@intel.com,
 Vedang Patel, Jithu Joseph, Dvora Fuxbrumer
Subject: [PATCH net-next 8/9] igc: Enable RX via AF_XDP zero-copy
Date: Fri, 9 Apr 2021 09:43:50 -0700
Message-Id: <20210409164351.188953-9-anthony.l.nguyen@intel.com>
In-Reply-To: <20210409164351.188953-1-anthony.l.nguyen@intel.com>
References: <20210409164351.188953-1-anthony.l.nguyen@intel.com>
X-Mailing-List: netdev@vger.kernel.org

From: Andre Guedes

Add support for receiving packets via the AF_XDP zero-copy mechanism.

Add a new flag to 'enum igc_ring_flags_t' to indicate the ring has
AF_XDP zero-copy enabled, so proper ring setup is carried out during
ring configuration in igc_configure_rx_ring().

RX buffers can now be allocated via the shared pages mechanism (the
default behavior of the driver) or via the xsk pool (when AF_XDP
zero-copy is enabled), so a union is added to 'struct igc_rx_buffer'
to cover both cases.

When AF_XDP zero-copy is enabled, RX buffers are allocated from the
xsk pool using the new helper igc_alloc_rx_buffers_zc(), which is the
counterpart of igc_alloc_rx_buffers().

Like other Intel drivers that support AF_XDP zero-copy, igc has a
dedicated path for cleaning up RX irqs when zero-copy is enabled. This
avoids adding too many checks within igc_clean_rx_irq(), resulting in
more readable and efficient code, since that function is called from
the driver's hot path.
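For readers who want to exercise this path, a minimal userspace sketch
follows, using libbpf's xsk.h helpers (the xsk.h API of that era is
assumed; the interface name "eth0" and queue id 0 are placeholders). It
binds an RX-only AF_XDP socket in zero-copy mode, which reaches the
XDP_SETUP_XSK_POOL hook added below:

    /* Hedged sketch, not part of this series: create a UMEM and bind an
     * RX-only AF_XDP socket to queue 0 of "eth0" in zero-copy mode.
     * Error handling is reduced to early returns for brevity. */
    #include <stdlib.h>
    #include <unistd.h>
    #include <linux/if_xdp.h>
    #include <bpf/xsk.h>

    #define NUM_FRAMES 4096

    int main(void)
    {
    	struct xsk_ring_prod fq;
    	struct xsk_ring_cons cq, rx;
    	struct xsk_umem *umem;
    	struct xsk_socket *xsk;
    	struct xsk_socket_config cfg = {
    		.rx_size = XSK_RING_CONS__DEFAULT_NUM_DESCS,
    		.tx_size = XSK_RING_PROD__DEFAULT_NUM_DESCS,
    		/* fail rather than silently fall back to copy mode */
    		.bind_flags = XDP_ZEROCOPY,
    	};
    	size_t len = NUM_FRAMES * XSK_UMEM__DEFAULT_FRAME_SIZE;
    	void *bufs;

    	if (posix_memalign(&bufs, getpagesize(), len))
    		return 1;
    	if (xsk_umem__create(&umem, bufs, len, &fq, &cq, NULL))
    		return 1;
    	/* NULL tx ring: RX-only socket. The driver DMA-maps the pool
    	 * via xsk_pool_dma_map() in igc_xdp_enable_pool() below. */
    	if (xsk_socket__create(&xsk, "eth0", 0, umem, &rx, NULL, &cfg))
    		return 1;

    	/* ... populate the fill ring and drain the rx ring ... */

    	xsk_socket__delete(xsk);
    	xsk_umem__delete(umem);
    	free(bufs);
    	return 0;
    }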
Signed-off-by: Andre Guedes
Signed-off-by: Vedang Patel
Signed-off-by: Jithu Joseph
Reviewed-by: Maciej Fijalkowski
Tested-by: Dvora Fuxbrumer
Signed-off-by: Tony Nguyen
---
 drivers/net/ethernet/intel/igc/igc.h      |  22 +-
 drivers/net/ethernet/intel/igc/igc_base.h |   1 +
 drivers/net/ethernet/intel/igc/igc_main.c | 338 +++++++++++++++++++++-
 drivers/net/ethernet/intel/igc/igc_xdp.c  |  98 +++++++
 drivers/net/ethernet/intel/igc/igc_xdp.h  |   2 +
 5 files changed, 442 insertions(+), 19 deletions(-)

diff --git a/drivers/net/ethernet/intel/igc/igc.h b/drivers/net/ethernet/intel/igc/igc.h
index 86eb1686ec43..7d452d422b1c 100644
--- a/drivers/net/ethernet/intel/igc/igc.h
+++ b/drivers/net/ethernet/intel/igc/igc.h
@@ -113,6 +113,7 @@ struct igc_ring {
 	};
 
 	struct xdp_rxq_info xdp_rxq;
+	struct xsk_buff_pool *xsk_pool;
 } ____cacheline_internodealigned_in_smp;
 
 /* Board specific private data structure */
@@ -242,6 +243,9 @@ bool igc_has_link(struct igc_adapter *adapter);
 void igc_reset(struct igc_adapter *adapter);
 int igc_set_spd_dplx(struct igc_adapter *adapter, u32 spd, u8 dplx);
 void igc_update_stats(struct igc_adapter *adapter);
+void igc_disable_rx_ring(struct igc_ring *ring);
+void igc_enable_rx_ring(struct igc_ring *ring);
+int igc_xsk_wakeup(struct net_device *dev, u32 queue_id, u32 flags);
 
 /* igc_dump declarations */
 void igc_rings_dump(struct igc_adapter *adapter);
@@ -419,14 +423,19 @@ struct igc_tx_buffer {
 };
 
 struct igc_rx_buffer {
-	dma_addr_t dma;
-	struct page *page;
+	union {
+		struct {
+			dma_addr_t dma;
+			struct page *page;
 #if (BITS_PER_LONG > 32) || (PAGE_SIZE >= 65536)
-	__u32 page_offset;
+			__u32 page_offset;
 #else
-	__u16 page_offset;
+			__u16 page_offset;
 #endif
-	__u16 pagecnt_bias;
+			__u16 pagecnt_bias;
+		};
+		struct xdp_buff *xdp;
+	};
 };
 
 struct igc_q_vector {
@@ -512,7 +521,8 @@ enum igc_ring_flags_t {
 	IGC_RING_FLAG_RX_SCTP_CSUM,
 	IGC_RING_FLAG_RX_LB_VLAN_BSWAP,
 	IGC_RING_FLAG_TX_CTX_IDX,
-	IGC_RING_FLAG_TX_DETECT_HANG
+	IGC_RING_FLAG_TX_DETECT_HANG,
+	IGC_RING_FLAG_AF_XDP_ZC,
 };
 
 #define ring_uses_large_buffer(ring) \
diff --git a/drivers/net/ethernet/intel/igc/igc_base.h b/drivers/net/ethernet/intel/igc/igc_base.h
index ea627ce52525..2ca028c1919f 100644
--- a/drivers/net/ethernet/intel/igc/igc_base.h
+++ b/drivers/net/ethernet/intel/igc/igc_base.h
@@ -81,6 +81,7 @@ union igc_adv_rx_desc {
 
 /* Additional Receive Descriptor Control definitions */
 #define IGC_RXDCTL_QUEUE_ENABLE	0x02000000 /* Ena specific Rx Queue */
+#define IGC_RXDCTL_SWFLUSH	0x04000000 /* Receive Software Flush */
 
 /* SRRCTL bit definitions */
 #define IGC_SRRCTL_BSIZEPKT_SHIFT	10 /* Shift _right_ */
diff --git a/drivers/net/ethernet/intel/igc/igc_main.c b/drivers/net/ethernet/intel/igc/igc_main.c
index b34b45afc732..118c2852317f 100644
--- a/drivers/net/ethernet/intel/igc/igc_main.c
+++ b/drivers/net/ethernet/intel/igc/igc_main.c
@@ -11,7 +11,7 @@
 #include
 #include
 #include
-
+#include
 #include
 
 #include "igc.h"
@@ -389,13 +389,31 @@ static void igc_clean_rx_ring_page_shared(struct igc_ring *rx_ring)
 	}
 }
 
+static void igc_clean_rx_ring_xsk_pool(struct igc_ring *ring)
+{
+	struct igc_rx_buffer *bi;
+	u16 i;
+
+	for (i = 0; i < ring->count; i++) {
+		bi = &ring->rx_buffer_info[i];
+		if (!bi->xdp)
+			continue;
+
+		xsk_buff_free(bi->xdp);
+		bi->xdp = NULL;
+	}
+}
+
 /**
  * igc_clean_rx_ring - Free Rx Buffers per Queue
  * @ring: ring to free buffers from
  */
 static void igc_clean_rx_ring(struct igc_ring *ring)
 {
-	igc_clean_rx_ring_page_shared(ring);
+	if (ring->xsk_pool)
+		igc_clean_rx_ring_xsk_pool(ring);
+	else
+		igc_clean_rx_ring_page_shared(ring);
 
 	clear_ring_uses_large_buffer(ring);
 
@@ -533,6 +551,16 @@ static int igc_setup_all_rx_resources(struct igc_adapter *adapter)
 	return err;
 }
 
+static struct xsk_buff_pool *igc_get_xsk_pool(struct igc_adapter *adapter,
+					      struct igc_ring *ring)
+{
+	if (!igc_xdp_is_enabled(adapter) ||
+	    !test_bit(IGC_RING_FLAG_AF_XDP_ZC, &ring->flags))
+		return NULL;
+
+	return xsk_get_pool_from_qid(ring->netdev, ring->queue_index);
+}
+
 /**
  * igc_configure_rx_ring - Configure a receive ring after Reset
  * @adapter: board private structure
@@ -548,9 +576,20 @@ static void igc_configure_rx_ring(struct igc_adapter *adapter,
 	int reg_idx = ring->reg_idx;
 	u32 srrctl = 0, rxdctl = 0;
 	u64 rdba = ring->dma;
-
-	WARN_ON(xdp_rxq_info_reg_mem_model(&ring->xdp_rxq,
-					   MEM_TYPE_PAGE_SHARED, NULL));
+	u32 buf_size;
+
+	xdp_rxq_info_unreg_mem_model(&ring->xdp_rxq);
+	ring->xsk_pool = igc_get_xsk_pool(adapter, ring);
+	if (ring->xsk_pool) {
+		WARN_ON(xdp_rxq_info_reg_mem_model(&ring->xdp_rxq,
+						   MEM_TYPE_XSK_BUFF_POOL,
+						   NULL));
+		xsk_pool_set_rxq_info(ring->xsk_pool, &ring->xdp_rxq);
+	} else {
+		WARN_ON(xdp_rxq_info_reg_mem_model(&ring->xdp_rxq,
+						   MEM_TYPE_PAGE_SHARED,
+						   NULL));
+	}
 
 	if (igc_xdp_is_enabled(adapter))
 		set_ring_uses_large_buffer(ring);
@@ -574,12 +613,15 @@ static void igc_configure_rx_ring(struct igc_adapter *adapter,
 	ring->next_to_clean = 0;
 	ring->next_to_use = 0;
 
-	/* set descriptor configuration */
-	srrctl = IGC_RX_HDR_LEN << IGC_SRRCTL_BSIZEHDRSIZE_SHIFT;
-	if (ring_uses_large_buffer(ring))
-		srrctl |= IGC_RXBUFFER_3072 >> IGC_SRRCTL_BSIZEPKT_SHIFT;
+	if (ring->xsk_pool)
+		buf_size = xsk_pool_get_rx_frame_size(ring->xsk_pool);
+	else if (ring_uses_large_buffer(ring))
+		buf_size = IGC_RXBUFFER_3072;
 	else
-		srrctl |= IGC_RXBUFFER_2048 >> IGC_SRRCTL_BSIZEPKT_SHIFT;
+		buf_size = IGC_RXBUFFER_2048;
+
+	srrctl = IGC_RX_HDR_LEN << IGC_SRRCTL_BSIZEHDRSIZE_SHIFT;
+	srrctl |= buf_size >> IGC_SRRCTL_BSIZEPKT_SHIFT;
 	srrctl |= IGC_SRRCTL_DESCTYPE_ADV_ONEBUF;
 
 	wr32(IGC_SRRCTL(reg_idx), srrctl);
@@ -1939,6 +1981,63 @@ static void igc_alloc_rx_buffers(struct igc_ring *rx_ring, u16 cleaned_count)
 	}
 }
 
+static bool igc_alloc_rx_buffers_zc(struct igc_ring *ring, u16 count)
+{
+	union igc_adv_rx_desc *desc;
+	u16 i = ring->next_to_use;
+	struct igc_rx_buffer *bi;
+	dma_addr_t dma;
+	bool ok = true;
+
+	if (!count)
+		return ok;
+
+	desc = IGC_RX_DESC(ring, i);
+	bi = &ring->rx_buffer_info[i];
+	i -= ring->count;
+
+	do {
+		bi->xdp = xsk_buff_alloc(ring->xsk_pool);
+		if (!bi->xdp) {
+			ok = false;
+			break;
+		}
+
+		dma = xsk_buff_xdp_get_dma(bi->xdp);
+		desc->read.pkt_addr = cpu_to_le64(dma);
+
+		desc++;
+		bi++;
+		i++;
+		if (unlikely(!i)) {
+			desc = IGC_RX_DESC(ring, 0);
+			bi = ring->rx_buffer_info;
+			i -= ring->count;
+		}
+
+		/* Clear the length for the next_to_use descriptor. */
+		desc->wb.upper.length = 0;
+
+		count--;
+	} while (count);
+
+	i += ring->count;
+
+	if (ring->next_to_use != i) {
+		ring->next_to_use = i;
+
+		/* Force memory writes to complete before letting h/w
+		 * know there are new descriptors to fetch. (Only
+		 * applicable for weak-ordered memory model archs,
+		 * such as IA-64).
+		 */
+		wmb();
+		writel(i, ring->tail);
+	}
+
+	return ok;
+}
+
 static int igc_xdp_init_tx_buffer(struct igc_tx_buffer *buffer,
 				  struct xdp_frame *xdpf,
 				  struct igc_ring *ring)
@@ -2257,6 +2356,142 @@ static int igc_clean_rx_irq(struct igc_q_vector *q_vector, const int budget)
 	return total_packets;
 }
 
+static struct sk_buff *igc_construct_skb_zc(struct igc_ring *ring,
+					    struct xdp_buff *xdp)
+{
+	unsigned int metasize = xdp->data - xdp->data_meta;
+	unsigned int datasize = xdp->data_end - xdp->data;
+	struct sk_buff *skb;
+
+	skb = __napi_alloc_skb(&ring->q_vector->napi,
+			       xdp->data_end - xdp->data_hard_start,
+			       GFP_ATOMIC | __GFP_NOWARN);
+	if (unlikely(!skb))
+		return NULL;
+
+	skb_reserve(skb, xdp->data - xdp->data_hard_start);
+	memcpy(__skb_put(skb, datasize), xdp->data, datasize);
+	if (metasize)
+		skb_metadata_set(skb, metasize);
+
+	return skb;
+}
+
+static void igc_dispatch_skb_zc(struct igc_q_vector *q_vector,
+				union igc_adv_rx_desc *desc,
+				struct xdp_buff *xdp,
+				ktime_t timestamp)
+{
+	struct igc_ring *ring = q_vector->rx.ring;
+	struct sk_buff *skb;
+
+	skb = igc_construct_skb_zc(ring, xdp);
+	if (!skb) {
+		ring->rx_stats.alloc_failed++;
+		return;
+	}
+
+	if (timestamp)
+		skb_hwtstamps(skb)->hwtstamp = timestamp;
+
+	if (igc_cleanup_headers(ring, desc, skb))
+		return;
+
+	igc_process_skb_fields(ring, desc, skb);
+	napi_gro_receive(&q_vector->napi, skb);
+}
+
+static int igc_clean_rx_irq_zc(struct igc_q_vector *q_vector, const int budget)
+{
+	struct igc_adapter *adapter = q_vector->adapter;
+	struct igc_ring *ring = q_vector->rx.ring;
+	u16 cleaned_count = igc_desc_unused(ring);
+	int total_bytes = 0, total_packets = 0;
+	u16 ntc = ring->next_to_clean;
+	struct bpf_prog *prog;
+	bool failure = false;
+	int xdp_status = 0;
+
+	rcu_read_lock();
+
+	prog = READ_ONCE(adapter->xdp_prog);
+
+	while (likely(total_packets < budget)) {
+		union igc_adv_rx_desc *desc;
+		struct igc_rx_buffer *bi;
+		ktime_t timestamp = 0;
+		unsigned int size;
+		int res;
+
+		desc = IGC_RX_DESC(ring, ntc);
+		size = le16_to_cpu(desc->wb.upper.length);
+		if (!size)
+			break;
+
+		/* This memory barrier is needed to keep us from reading
+		 * any other fields out of the rx_desc until we know the
+		 * descriptor has been written back
+		 */
+		dma_rmb();
+
+		bi = &ring->rx_buffer_info[ntc];
+
+		if (igc_test_staterr(desc, IGC_RXDADV_STAT_TSIP)) {
+			timestamp = igc_ptp_rx_pktstamp(q_vector->adapter,
+							bi->xdp->data);
+
+			bi->xdp->data += IGC_TS_HDR_LEN;
+			size -= IGC_TS_HDR_LEN;
+		}
+
+		bi->xdp->data_end = bi->xdp->data + size;
+		xsk_buff_dma_sync_for_cpu(bi->xdp, ring->xsk_pool);
+
+		res = __igc_xdp_run_prog(adapter, prog, bi->xdp);
+		switch (res) {
+		case IGC_XDP_PASS:
+			igc_dispatch_skb_zc(q_vector, desc, bi->xdp, timestamp);
+			fallthrough;
+		case IGC_XDP_CONSUMED:
+			xsk_buff_free(bi->xdp);
+			break;
+		case IGC_XDP_TX:
+		case IGC_XDP_REDIRECT:
+			xdp_status |= res;
+			break;
+		}
+
+		bi->xdp = NULL;
+		total_bytes += size;
+		total_packets++;
+		cleaned_count++;
+		ntc++;
+		if (ntc == ring->count)
+			ntc = 0;
+	}
+
+	ring->next_to_clean = ntc;
+	rcu_read_unlock();
+
+	if (cleaned_count >= IGC_RX_BUFFER_WRITE)
+		failure = !igc_alloc_rx_buffers_zc(ring, cleaned_count);
+
+	if (xdp_status)
+		igc_finalize_xdp(adapter, xdp_status);
+
+	igc_update_rx_stats(q_vector, total_packets, total_bytes);
+
+	if (xsk_uses_need_wakeup(ring->xsk_pool)) {
+		if (failure || ring->next_to_clean == ring->next_to_use)
+			xsk_set_rx_need_wakeup(ring->xsk_pool);
+		else
+			xsk_clear_rx_need_wakeup(ring->xsk_pool);
+		return total_packets;
+	}
+
+	return failure ? budget : total_packets;
+}
+
 static void igc_update_tx_stats(struct igc_q_vector *q_vector,
 				unsigned int packets, unsigned int bytes)
 {
@@ -2949,7 +3184,10 @@ static void igc_configure(struct igc_adapter *adapter)
 	for (i = 0; i < adapter->num_rx_queues; i++) {
 		struct igc_ring *ring = adapter->rx_ring[i];
 
-		igc_alloc_rx_buffers(ring, igc_desc_unused(ring));
+		if (ring->xsk_pool)
+			igc_alloc_rx_buffers_zc(ring, igc_desc_unused(ring));
+		else
+			igc_alloc_rx_buffers(ring, igc_desc_unused(ring));
 	}
 }
 
@@ -3564,14 +3802,17 @@ static int igc_poll(struct napi_struct *napi, int budget)
 	struct igc_q_vector *q_vector = container_of(napi,
 						     struct igc_q_vector,
 						     napi);
+	struct igc_ring *rx_ring = q_vector->rx.ring;
 	bool clean_complete = true;
 	int work_done = 0;
 
 	if (q_vector->tx.ring)
 		clean_complete = igc_clean_tx_irq(q_vector, budget);
 
-	if (q_vector->rx.ring) {
-		int cleaned = igc_clean_rx_irq(q_vector, budget);
+	if (rx_ring) {
+		int cleaned = rx_ring->xsk_pool ?
+			      igc_clean_rx_irq_zc(q_vector, budget) :
+			      igc_clean_rx_irq(q_vector, budget);
 
 		work_done += cleaned;
 		if (cleaned >= budget)
@@ -5150,6 +5391,9 @@ static int igc_bpf(struct net_device *dev, struct netdev_bpf *bpf)
 	switch (bpf->command) {
 	case XDP_SETUP_PROG:
 		return igc_xdp_set_prog(adapter, bpf->prog, bpf->extack);
+	case XDP_SETUP_XSK_POOL:
+		return igc_xdp_setup_pool(adapter, bpf->xsk.pool,
+					  bpf->xsk.queue_id);
 	default:
 		return -EOPNOTSUPP;
 	}
@@ -5195,6 +5439,43 @@ static int igc_xdp_xmit(struct net_device *dev, int num_frames,
 	return num_frames - drops;
 }
 
+static void igc_trigger_rxtxq_interrupt(struct igc_adapter *adapter,
+					struct igc_q_vector *q_vector)
+{
+	struct igc_hw *hw = &adapter->hw;
+	u32 eics = 0;
+
+	eics |= q_vector->eims_value;
+	wr32(IGC_EICS, eics);
+}
+
+int igc_xsk_wakeup(struct net_device *dev, u32 queue_id, u32 flags)
+{
+	struct igc_adapter *adapter = netdev_priv(dev);
+	struct igc_q_vector *q_vector;
+	struct igc_ring *ring;
+
+	if (test_bit(__IGC_DOWN, &adapter->state))
+		return -ENETDOWN;
+
+	if (!igc_xdp_is_enabled(adapter))
+		return -ENXIO;
+
+	if (queue_id >= adapter->num_rx_queues)
+		return -EINVAL;
+
+	ring = adapter->rx_ring[queue_id];
+
+	if (!ring->xsk_pool)
+		return -ENXIO;
+
+	q_vector = adapter->q_vector[queue_id];
+	if (!napi_if_scheduled_mark_missed(&q_vector->napi))
+		igc_trigger_rxtxq_interrupt(adapter, q_vector);
+
+	return 0;
+}
+
 static const struct net_device_ops igc_netdev_ops = {
 	.ndo_open		= igc_open,
 	.ndo_stop		= igc_close,
@@ -5210,6 +5491,7 @@ static const struct net_device_ops igc_netdev_ops = {
 	.ndo_setup_tc		= igc_setup_tc,
 	.ndo_bpf		= igc_bpf,
 	.ndo_xdp_xmit		= igc_xdp_xmit,
+	.ndo_xsk_wakeup		= igc_xsk_wakeup,
 };
 
 /* PCIe configuration access */
@@ -5962,6 +6244,36 @@ struct net_device *igc_get_hw_dev(struct igc_hw *hw)
 	return adapter->netdev;
 }
 
+static void igc_disable_rx_ring_hw(struct igc_ring *ring)
+{
+	struct igc_hw *hw = &ring->q_vector->adapter->hw;
+	u8 idx = ring->reg_idx;
+	u32 rxdctl;
+
+	rxdctl = rd32(IGC_RXDCTL(idx));
+	rxdctl &= ~IGC_RXDCTL_QUEUE_ENABLE;
+	rxdctl |= IGC_RXDCTL_SWFLUSH;
+	wr32(IGC_RXDCTL(idx), rxdctl);
+}
+
+void igc_disable_rx_ring(struct igc_ring *ring)
+{
+	igc_disable_rx_ring_hw(ring);
+	igc_clean_rx_ring(ring);
+}
+
+void igc_enable_rx_ring(struct igc_ring *ring)
+{
+	struct igc_adapter *adapter = ring->q_vector->adapter;
+
+	igc_configure_rx_ring(adapter, ring);
+
+	if (ring->xsk_pool)
+		igc_alloc_rx_buffers_zc(ring, igc_desc_unused(ring));
+	else
+		igc_alloc_rx_buffers(ring, igc_desc_unused(ring));
+}
+
 /**
  * igc_init_module - Driver Registration Routine
 *
diff --git a/drivers/net/ethernet/intel/igc/igc_xdp.c b/drivers/net/ethernet/intel/igc/igc_xdp.c
index 27c886a254f1..7fe3177a53dd 100644
--- a/drivers/net/ethernet/intel/igc/igc_xdp.c
+++ b/drivers/net/ethernet/intel/igc/igc_xdp.c
@@ -1,6 +1,8 @@
 // SPDX-License-Identifier: GPL-2.0
 /* Copyright (c) 2020, Intel Corporation. */
 
+#include
+
 #include "igc.h"
 #include "igc_xdp.h"
 
@@ -31,3 +33,99 @@ int igc_xdp_set_prog(struct igc_adapter *adapter, struct bpf_prog *prog,
 
 	return 0;
 }
+
+static int igc_xdp_enable_pool(struct igc_adapter *adapter,
+			       struct xsk_buff_pool *pool, u16 queue_id)
+{
+	struct net_device *ndev = adapter->netdev;
+	struct device *dev = &adapter->pdev->dev;
+	struct igc_ring *rx_ring;
+	struct napi_struct *napi;
+	bool needs_reset;
+	u32 frame_size;
+	int err;
+
+	if (queue_id >= adapter->num_rx_queues)
+		return -EINVAL;
+
+	frame_size = xsk_pool_get_rx_frame_size(pool);
+	if (frame_size < ETH_FRAME_LEN + VLAN_HLEN * 2) {
+		/* When XDP is enabled, the driver doesn't support frames that
+		 * span over multiple buffers. To avoid that, we check if xsk
+		 * frame size is big enough to fit the max ethernet frame size
+		 * + vlan double tagging.
+		 */
+		return -EOPNOTSUPP;
+	}
+
+	err = xsk_pool_dma_map(pool, dev, IGC_RX_DMA_ATTR);
+	if (err) {
+		netdev_err(ndev, "Failed to map xsk pool\n");
+		return err;
+	}
+
+	needs_reset = netif_running(adapter->netdev) && igc_xdp_is_enabled(adapter);
+
+	rx_ring = adapter->rx_ring[queue_id];
+	napi = &rx_ring->q_vector->napi;
+
+	if (needs_reset) {
+		igc_disable_rx_ring(rx_ring);
+		napi_disable(napi);
+	}
+
+	set_bit(IGC_RING_FLAG_AF_XDP_ZC, &rx_ring->flags);
+
+	if (needs_reset) {
+		napi_enable(napi);
+		igc_enable_rx_ring(rx_ring);
+
+		err = igc_xsk_wakeup(ndev, queue_id, XDP_WAKEUP_RX);
+		if (err)
+			return err;
+	}
+
+	return 0;
+}
+
+static int igc_xdp_disable_pool(struct igc_adapter *adapter, u16 queue_id)
+{
+	struct xsk_buff_pool *pool;
+	struct igc_ring *rx_ring;
+	struct napi_struct *napi;
+	bool needs_reset;
+
+	if (queue_id >= adapter->num_rx_queues)
+		return -EINVAL;
+
+	pool = xsk_get_pool_from_qid(adapter->netdev, queue_id);
+	if (!pool)
+		return -EINVAL;
+
+	needs_reset = netif_running(adapter->netdev) && igc_xdp_is_enabled(adapter);
+
+	rx_ring = adapter->rx_ring[queue_id];
+	napi = &rx_ring->q_vector->napi;
+
+	if (needs_reset) {
+		igc_disable_rx_ring(rx_ring);
+		napi_disable(napi);
+	}
+
+	xsk_pool_dma_unmap(pool, IGC_RX_DMA_ATTR);
+	clear_bit(IGC_RING_FLAG_AF_XDP_ZC, &rx_ring->flags);
+
+	if (needs_reset) {
+		napi_enable(napi);
+		igc_enable_rx_ring(rx_ring);
+	}
+
+	return 0;
+}
+
+int igc_xdp_setup_pool(struct igc_adapter *adapter, struct xsk_buff_pool *pool,
+		       u16 queue_id)
+{
+	return pool ? igc_xdp_enable_pool(adapter, pool, queue_id) :
+		      igc_xdp_disable_pool(adapter, queue_id);
+}
diff --git a/drivers/net/ethernet/intel/igc/igc_xdp.h b/drivers/net/ethernet/intel/igc/igc_xdp.h
index cdaa2c39b03a..a74e5487d199 100644
--- a/drivers/net/ethernet/intel/igc/igc_xdp.h
+++ b/drivers/net/ethernet/intel/igc/igc_xdp.h
@@ -6,6 +6,8 @@
 int igc_xdp_set_prog(struct igc_adapter *adapter, struct bpf_prog *prog,
 		     struct netlink_ext_ack *extack);
+int igc_xdp_setup_pool(struct igc_adapter *adapter, struct xsk_buff_pool *pool,
+		       u16 queue_id);
 
 static inline bool igc_xdp_is_enabled(struct igc_adapter *adapter)
 {
 	return !!adapter->xdp_prog;
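A closing note on the need_wakeup handshake visible in
igc_clean_rx_irq_zc() and igc_xsk_wakeup() above: when the driver sets
the flag, userspace is expected to kick the kernel once, which schedules
NAPI via the EICS write in igc_trigger_rxtxq_interrupt(). A hedged
userspace-side sketch, companion to the setup sketch after patch 8/9's
commit message (libbpf xsk.h helpers again; names are illustrative):

    /* Drain up to a batch of RX descriptors; kick the kernel only when
     * the driver requested it via the need_wakeup flag. */
    #include <sys/socket.h>
    #include <bpf/xsk.h>

    static unsigned int rx_drain(struct xsk_socket *xsk,
    			     struct xsk_ring_cons *rx,
    			     struct xsk_ring_prod *fill)
    {
    	__u32 idx = 0;
    	unsigned int i, rcvd = xsk_ring_cons__peek(rx, 64, &idx);

    	if (!rcvd) {
    		/* The driver set need_wakeup: a dummy recvfrom() ends up
    		 * in igc_xsk_wakeup(), which fires the queue's interrupt. */
    		if (xsk_ring_prod__needs_wakeup(fill))
    			recvfrom(xsk_socket__fd(xsk), NULL, 0, MSG_DONTWAIT,
    				 NULL, NULL);
    		return 0;
    	}

    	for (i = 0; i < rcvd; i++) {
    		const struct xdp_desc *desc = xsk_ring_cons__rx_desc(rx, idx++);
    		/* desc->addr: offset into the UMEM; desc->len: frame length */
    		(void)desc;
    	}

    	xsk_ring_cons__release(rx, rcvd);
    	return rcvd;
    }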