From patchwork Fri Apr 16 21:22:16 2021
X-Patchwork-Submitter: Vladimir Oltean
X-Patchwork-Id: 423733
From: Vladimir Oltean
To: Jakub Kicinski, "David S. Miller", netdev@vger.kernel.org, Po Liu
Cc: Claudiu Manoil, Alex Marginean, Yangbo Lu, Toke Høiland-Jørgensen, Vladimir Oltean
Subject: [PATCH net-next 01/10] net: enetc: remove redundant clearing of skb/xdp_frame pointer in TX conf path
Date: Sat, 17 Apr 2021 00:22:16 +0300
Message-Id: <20210416212225.3576792-2-olteanv@gmail.com>
In-Reply-To: <20210416212225.3576792-1-olteanv@gmail.com>
References: <20210416212225.3576792-1-olteanv@gmail.com>
X-Mailing-List: netdev@vger.kernel.org

Later in enetc_clean_tx_ring we have:

		/* Scrub the swbd here so we don't have to do that
		 * when we reuse it during xmit
		 */
		memset(tx_swbd, 0, sizeof(*tx_swbd));

So these assignments are unnecessary.

Signed-off-by: Vladimir Oltean
---
 drivers/net/ethernet/freescale/enetc/enetc.c | 2 --
 1 file changed, 2 deletions(-)

diff --git a/drivers/net/ethernet/freescale/enetc/enetc.c b/drivers/net/ethernet/freescale/enetc/enetc.c
index 9a726085841d..c7f3c6e691a1 100644
--- a/drivers/net/ethernet/freescale/enetc/enetc.c
+++ b/drivers/net/ethernet/freescale/enetc/enetc.c
@@ -544,7 +544,6 @@ static bool enetc_clean_tx_ring(struct enetc_bdr *tx_ring, int napi_budget)
 		if (xdp_frame) {
 			xdp_return_frame(xdp_frame);
-			tx_swbd->xdp_frame = NULL;
 		} else if (skb) {
 			if (unlikely(tx_swbd->skb->cb[0] &
 				     ENETC_F_TX_ONESTEP_SYNC_TSTAMP)) {
@@ -558,7 +557,6 @@ static bool enetc_clean_tx_ring(struct enetc_bdr *tx_ring, int napi_budget)
 				do_twostep_tstamp = false;
 			}
 			napi_consume_skb(skb, napi_budget);
-			tx_swbd->skb = NULL;
 		}
 
 		tx_byte_cnt += tx_swbd->len;
From patchwork Fri Apr 16 21:22:17 2021
X-Patchwork-Submitter: Vladimir Oltean
X-Patchwork-Id: 423141
From: Vladimir Oltean
Subject: [PATCH net-next 02/10] net: enetc: rename the buffer reuse helpers
Date: Sat, 17 Apr 2021 00:22:17 +0300
Message-Id: <20210416212225.3576792-3-olteanv@gmail.com>
In-Reply-To: <20210416212225.3576792-1-olteanv@gmail.com>

enetc_put_xdp_buff has nothing to do with XDP; frankly, it is just a helper to populate the recycle end of the shadow RX BD ring (next_to_alloc) with a given buffer.
On the other hand, enetc_put_rx_buff plays more tricks than its name would suggest.

So let's rename enetc_put_rx_buff into enetc_flip_rx_buff to reflect the half-page buffer reuse tricks that it employs, and enetc_put_xdp_buff into enetc_put_rx_buff, which suggests a more garden-variety operation.

Signed-off-by: Vladimir Oltean
---
 drivers/net/ethernet/freescale/enetc/enetc.c | 54 +++++++++-----------
 1 file changed, 24 insertions(+), 30 deletions(-)

diff --git a/drivers/net/ethernet/freescale/enetc/enetc.c b/drivers/net/ethernet/freescale/enetc/enetc.c
index c7f3c6e691a1..c4ff090f29ec 100644
--- a/drivers/net/ethernet/freescale/enetc/enetc.c
+++ b/drivers/net/ethernet/freescale/enetc/enetc.c
@@ -751,27 +751,35 @@ static struct enetc_rx_swbd *enetc_get_rx_buff(struct enetc_bdr *rx_ring,
 	return rx_swbd;
 }
 
+/* Reuse the current page without performing half-page buffer flipping */
 static void enetc_put_rx_buff(struct enetc_bdr *rx_ring,
 			      struct enetc_rx_swbd *rx_swbd)
 {
-	if (likely(enetc_page_reusable(rx_swbd->page))) {
-		size_t buffer_size = ENETC_RXB_TRUESIZE - rx_ring->buffer_offset;
+	size_t buffer_size = ENETC_RXB_TRUESIZE - rx_ring->buffer_offset;
+
+	enetc_reuse_page(rx_ring, rx_swbd);
+
+	dma_sync_single_range_for_device(rx_ring->dev, rx_swbd->dma,
+					 rx_swbd->page_offset,
+					 buffer_size, rx_swbd->dir);
+
+	rx_swbd->page = NULL;
+}
+
+/* Reuse the current page by performing half-page buffer flipping */
+static void enetc_flip_rx_buff(struct enetc_bdr *rx_ring,
+			       struct enetc_rx_swbd *rx_swbd)
+{
+	if (likely(enetc_page_reusable(rx_swbd->page))) {
 		rx_swbd->page_offset ^= ENETC_RXB_TRUESIZE;
 		page_ref_inc(rx_swbd->page);
 
-		enetc_reuse_page(rx_ring, rx_swbd);
-
-		/* sync for use by the device */
-		dma_sync_single_range_for_device(rx_ring->dev, rx_swbd->dma,
-						 rx_swbd->page_offset,
-						 buffer_size, rx_swbd->dir);
+		enetc_put_rx_buff(rx_ring, rx_swbd);
 	} else {
 		dma_unmap_page(rx_ring->dev, rx_swbd->dma, PAGE_SIZE,
 			       rx_swbd->dir);
+		rx_swbd->page = NULL;
 	}
-
-	rx_swbd->page = NULL;
 }
 
@@ -791,7 +799,7 @@ static struct sk_buff *enetc_map_rx_buff_to_skb(struct enetc_bdr *rx_ring,
 	skb_reserve(skb, rx_ring->buffer_offset);
 	__skb_put(skb, size);
 
-	enetc_put_rx_buff(rx_ring, rx_swbd);
+	enetc_flip_rx_buff(rx_ring, rx_swbd);
 
 	return skb;
 }
@@ -804,7 +812,7 @@ static void enetc_add_rx_buff_to_skb(struct enetc_bdr *rx_ring, int i,
 	skb_add_rx_frag(skb, skb_shinfo(skb)->nr_frags, rx_swbd->page,
 			rx_swbd->page_offset, size, ENETC_RXB_TRUESIZE);
 
-	enetc_put_rx_buff(rx_ring, rx_swbd);
+	enetc_flip_rx_buff(rx_ring, rx_swbd);
 }
 
 static bool enetc_check_bd_errors_and_consume(struct enetc_bdr *rx_ring,
@@ -1142,20 +1150,6 @@ static void enetc_build_xdp_buff(struct enetc_bdr *rx_ring, u32 bd_status,
 	}
 }
 
-/* Reuse the current page without performing half-page buffer flipping */
-static void enetc_put_xdp_buff(struct enetc_bdr *rx_ring,
-			       struct enetc_rx_swbd *rx_swbd)
-{
-	enetc_reuse_page(rx_ring, rx_swbd);
-
-	dma_sync_single_range_for_device(rx_ring->dev, rx_swbd->dma,
-					 rx_swbd->page_offset,
-					 ENETC_RXB_DMA_SIZE_XDP,
-					 rx_swbd->dir);
-
-	rx_swbd->page = NULL;
-}
-
 /* Convert RX buffer descriptors to TX buffer descriptors. These will be
  * recycled back into the RX ring in enetc_clean_tx_ring. We need to scrub the
  * RX software BDs because the ownership of the buffer no longer belongs to the
@@ -1194,8 +1188,8 @@ static void enetc_xdp_drop(struct enetc_bdr *rx_ring, int rx_ring_first,
 			   int rx_ring_last)
 {
 	while (rx_ring_first != rx_ring_last) {
-		enetc_put_xdp_buff(rx_ring,
-				   &rx_ring->rx_swbd[rx_ring_first]);
+		enetc_put_rx_buff(rx_ring,
+				  &rx_ring->rx_swbd[rx_ring_first]);
 		enetc_bdr_idx_inc(rx_ring, &rx_ring_first);
 	}
 	rx_ring->stats.xdp_drops++;
@@ -1316,8 +1310,8 @@ static int enetc_clean_rx_ring_xdp(struct enetc_bdr *rx_ring,
 			tmp_orig_i = orig_i;
 
 			while (orig_i != i) {
-				enetc_put_rx_buff(rx_ring,
-						  &rx_ring->rx_swbd[orig_i]);
+				enetc_flip_rx_buff(rx_ring,
+						   &rx_ring->rx_swbd[orig_i]);
 				enetc_bdr_idx_inc(rx_ring, &orig_i);
 			}
From patchwork Fri Apr 16 21:22:18 2021
X-Patchwork-Submitter: Vladimir Oltean
X-Patchwork-Id: 423732
From: Vladimir Oltean
Subject: [PATCH net-next 03/10] net: enetc: recycle buffers for frames with RX errors
Date: Sat, 17 Apr 2021 00:22:18 +0300
Message-Id: <20210416212225.3576792-4-olteanv@gmail.com>
In-Reply-To: <20210416212225.3576792-1-olteanv@gmail.com>

When receiving a frame with errors, currently we do nothing with it (we don't construct an skb or an xdp_buff), we just exit the NAPI poll loop. Let's put the buffer back into the RX ring (similar to XDP_DROP).
Signed-off-by: Vladimir Oltean
---
 drivers/net/ethernet/freescale/enetc/enetc.c | 2 ++
 1 file changed, 2 insertions(+)

diff --git a/drivers/net/ethernet/freescale/enetc/enetc.c b/drivers/net/ethernet/freescale/enetc/enetc.c
index c4ff090f29ec..c6f984473337 100644
--- a/drivers/net/ethernet/freescale/enetc/enetc.c
+++ b/drivers/net/ethernet/freescale/enetc/enetc.c
@@ -822,12 +822,14 @@ static bool enetc_check_bd_errors_and_consume(struct enetc_bdr *rx_ring,
 	if (likely(!(bd_status & ENETC_RXBD_LSTATUS(ENETC_RXBD_ERR_MASK))))
 		return false;
 
+	enetc_put_rx_buff(rx_ring, &rx_ring->rx_swbd[*i]);
 	enetc_rxbd_next(rx_ring, rxbd, i);
 
 	while (!(bd_status & ENETC_RXBD_LSTATUS_F)) {
 		dma_rmb();
 		bd_status = le32_to_cpu((*rxbd)->r.lstatus);
 
+		enetc_put_rx_buff(rx_ring, &rx_ring->rx_swbd[*i]);
 		enetc_rxbd_next(rx_ring, rxbd, i);
 	}

From patchwork Fri Apr 16 21:22:19 2021
X-Patchwork-Submitter: Vladimir Oltean
X-Patchwork-Id: 423140
From: Vladimir Oltean
Subject: [PATCH net-next 04/10] net: enetc: stop XDP NAPI processing when build_skb() fails
Date: Sat, 17 Apr 2021 00:22:19 +0300
Message-Id: <20210416212225.3576792-5-olteanv@gmail.com>
In-Reply-To: <20210416212225.3576792-1-olteanv@gmail.com>

When the code path below fails:

enetc_clean_rx_ring_xdp // XDP_PASS
-> enetc_build_skb
   -> enetc_map_rx_buff_to_skb
      -> build_skb

enetc_clean_rx_ring_xdp will 'break', but that 'break' instruction isn't strong enough to actually break the NAPI poll loop, just the switch/case statement for XDP actions. So we increment rx_frm_cnt and go to the next frames, minding our own business.

Instead let's do what the skb NAPI poll function does, and break the loop now, waiting for the memory pressure to go away. Otherwise the next calls to build_skb() are likely to fail too.
Signed-off-by: Vladimir Oltean
---
 drivers/net/ethernet/freescale/enetc/enetc.c | 4 ++--
 1 file changed, 2 insertions(+), 2 deletions(-)

diff --git a/drivers/net/ethernet/freescale/enetc/enetc.c b/drivers/net/ethernet/freescale/enetc/enetc.c
index c6f984473337..469170076efa 100644
--- a/drivers/net/ethernet/freescale/enetc/enetc.c
+++ b/drivers/net/ethernet/freescale/enetc/enetc.c
@@ -1275,8 +1275,7 @@ static int enetc_clean_rx_ring_xdp(struct enetc_bdr *rx_ring,
 					     &i, &cleaned_cnt,
 					     ENETC_RXB_DMA_SIZE_XDP);
 			if (unlikely(!skb))
-				/* Exit the switch/case, not the loop */
-				break;
+				goto out;
 
 			napi_gro_receive(napi, skb);
 			break;
@@ -1338,6 +1337,7 @@ static int enetc_clean_rx_ring_xdp(struct enetc_bdr *rx_ring,
 		rx_frm_cnt++;
 	}
 
+out:
 	rx_ring->next_to_clean = i;
 
 	rx_ring->stats.packets += rx_frm_cnt;
From patchwork Fri Apr 16 21:22:20 2021
X-Patchwork-Submitter: Vladimir Oltean
X-Patchwork-Id: 423731
From: Vladimir Oltean
Subject: [PATCH net-next 05/10] net: enetc: remove unneeded xdp_do_flush_map()
Date: Sat, 17 Apr 2021 00:22:20 +0300
Message-Id: <20210416212225.3576792-6-olteanv@gmail.com>
In-Reply-To: <20210416212225.3576792-1-olteanv@gmail.com>

xdp_do_redirect already contains:

-> dev_map_enqueue
   -> __xdp_enqueue
      -> bq_enqueue
         -> bq_xmit_all // if we have more than 16 frames

So the logic from enetc will never be hit, because ENETC_DEFAULT_TX_WORK is 128.
Signed-off-by: Vladimir Oltean
---
 drivers/net/ethernet/freescale/enetc/enetc.c | 5 -----
 1 file changed, 5 deletions(-)

diff --git a/drivers/net/ethernet/freescale/enetc/enetc.c b/drivers/net/ethernet/freescale/enetc/enetc.c
index 469170076efa..c7b940979314 100644
--- a/drivers/net/ethernet/freescale/enetc/enetc.c
+++ b/drivers/net/ethernet/freescale/enetc/enetc.c
@@ -1324,11 +1324,6 @@ static int enetc_clean_rx_ring_xdp(struct enetc_bdr *rx_ring,
 				rx_ring->stats.xdp_redirect++;
 			}
 
-			if (unlikely(xdp_redirect_frm_cnt > ENETC_DEFAULT_TX_WORK)) {
-				xdp_do_flush_map();
-				xdp_redirect_frm_cnt = 0;
-			}
-
 			break;
 		default:
 			bpf_warn_invalid_xdp_action(xdp_act);
From patchwork Fri Apr 16 21:22:21 2021
X-Patchwork-Submitter: Vladimir Oltean
X-Patchwork-Id: 423139
From: Vladimir Oltean
To: Jakub Kicinski, "David S. Miller", netdev@vger.kernel.org, Po Liu
Cc: Claudiu Manoil, Alex Marginean, Yangbo Lu, Toke Høiland-Jørgensen, Vladimir Oltean
Subject: [PATCH net-next 06/10] net: enetc: increase TX ring size
Date: Sat, 17 Apr 2021 00:22:21 +0300
Message-Id: <20210416212225.3576792-7-olteanv@gmail.com>
In-Reply-To: <20210416212225.3576792-1-olteanv@gmail.com>
References: <20210416212225.3576792-1-olteanv@gmail.com>
List-ID: netdev@vger.kernel.org

Now that commit d6a2829e82cf ("net: enetc: increase RX ring default size") has increased the RX ring size, it is quite easy to congest the TX rings when the traffic is predominantly XDP_TX, as the RX ring is quite a bit larger than the TX one.

Since we bit the bullet and did the expensive thing already (larger RX rings consume more memory pages), it seems quite foolish to keep the TX rings small. So make them equally sized with RX.
Signed-off-by: Vladimir Oltean
---
 drivers/net/ethernet/freescale/enetc/enetc.h | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/drivers/net/ethernet/freescale/enetc/enetc.h b/drivers/net/ethernet/freescale/enetc/enetc.h
index d52717bc73c7..6f818e33e03b 100644
--- a/drivers/net/ethernet/freescale/enetc/enetc.h
+++ b/drivers/net/ethernet/freescale/enetc/enetc.h
@@ -79,7 +79,7 @@ struct enetc_xdp_data {
 };
 
 #define ENETC_RX_RING_DEFAULT_SIZE	2048
-#define ENETC_TX_RING_DEFAULT_SIZE	256
+#define ENETC_TX_RING_DEFAULT_SIZE	2048
 #define ENETC_DEFAULT_TX_WORK	(ENETC_TX_RING_DEFAULT_SIZE / 2)
 
 struct enetc_bdr {

From patchwork Fri Apr 16 21:22:22 2021
X-Patchwork-Submitter: Vladimir Oltean
X-Patchwork-Id: 423730
From: Vladimir Oltean
To: Jakub Kicinski, "David S. Miller", netdev@vger.kernel.org, Po Liu
Cc: Claudiu Manoil, Alex Marginean, Yangbo Lu, Toke Høiland-Jørgensen, Vladimir Oltean
Subject: [PATCH net-next 07/10] net: enetc: use dedicated TX rings for XDP
Date: Sat, 17 Apr 2021 00:22:22 +0300
Message-Id: <20210416212225.3576792-8-olteanv@gmail.com>
In-Reply-To: <20210416212225.3576792-1-olteanv@gmail.com>
References: <20210416212225.3576792-1-olteanv@gmail.com>
List-ID: netdev@vger.kernel.org

It is possible for one CPU to perform TX hashing (see netdev_pick_tx) between the 8 ENETC TX rings, and the TX hashing to select TX queue 1. At the same time, it is possible for the other CPU to already use TX ring 1 for XDP (either XDP_TX or XDP_REDIRECT). Since there is no mutual exclusion between XDP and the network stack, we run into an issue because the ENETC TX procedure is not reentrant.

The obvious approach would be to just make XDP take the lock of the network stack's TX queue corresponding to the ring it's about to enqueue in.

For XDP_REDIRECT, this is quite straightforward: a lock at the beginning and end of enetc_xdp_xmit() should do the trick.

But for XDP_TX, it's a bit more complicated. For one, we do TX batching all by ourselves for frames with the XDP_TX verdict. This is something we would like to keep the way it is, for performance reasons. But batching means that the network stack's lock should be kept from the first enqueued XDP_TX frame and until we ring the doorbell. That is mostly fine, except for cases when in the same NAPI loop we have mixed XDP_TX and XDP_REDIRECT frames.
So if enetc_xdp_xmit() gets called while we are holding the lock from the RX NAPI, then bam, deadlock. The naive answer could be 'just flush the XDP_TX frames first, then release the network stack's TX queue lock, then call xdp_do_flush_map()'. But even xdp_do_redirect() is capable of flushing the batched XDP_REDIRECT frames, so unless we unlock/relock the TX queue around xdp_do_redirect(), there simply isn't any clean way to protect XDP_TX from concurrent network stack .ndo_start_xmit() on another CPU.

So we need to take a different approach, and that is to reserve two rings for the sole use of XDP. We leave TX rings 0..ndev->real_num_tx_queues-1 to be handled by the network stack, and we pick the XDP rings from the end of the priv->tx_ring array. We make an effort to keep the mapping done by enetc_alloc_msix(), which decides which CPU handles the TX completions of which TX ring in its NAPI poll. So the XDP TX ring of CPU 0 is TX ring 6, and the XDP TX ring of CPU 1 is TX ring 7.
Signed-off-by: Vladimir Oltean
---
 drivers/net/ethernet/freescale/enetc/enetc.c | 46 +++++++++++++++++---
 drivers/net/ethernet/freescale/enetc/enetc.h |  1 +
 2 files changed, 40 insertions(+), 7 deletions(-)

diff --git a/drivers/net/ethernet/freescale/enetc/enetc.c b/drivers/net/ethernet/freescale/enetc/enetc.c
index c7b940979314..56190d861bb9 100644
--- a/drivers/net/ethernet/freescale/enetc/enetc.c
+++ b/drivers/net/ethernet/freescale/enetc/enetc.c
@@ -9,6 +9,26 @@
 #include
 #include
 
+static int enetc_num_stack_tx_queues(struct enetc_ndev_priv *priv)
+{
+	int num_tx_rings = priv->num_tx_rings;
+	int i;
+
+	for (i = 0; i < priv->num_rx_rings; i++)
+		if (priv->rx_ring[i]->xdp.prog)
+			return num_tx_rings - num_possible_cpus();
+
+	return num_tx_rings;
+}
+
+static struct enetc_bdr *enetc_rx_ring_from_xdp_tx_ring(struct enetc_ndev_priv *priv,
+							struct enetc_bdr *tx_ring)
+{
+	int index = &priv->tx_ring[tx_ring->index] - priv->xdp_tx_ring;
+
+	return priv->rx_ring[index];
+}
+
 static struct sk_buff *enetc_tx_swbd_get_skb(struct enetc_tx_swbd *tx_swbd)
 {
 	if (tx_swbd->is_xdp_tx || tx_swbd->is_xdp_redirect)
@@ -468,7 +488,6 @@ static void enetc_recycle_xdp_tx_buff(struct enetc_bdr *tx_ring,
 				      struct enetc_tx_swbd *tx_swbd)
 {
 	struct enetc_ndev_priv *priv = netdev_priv(tx_ring->ndev);
-	struct enetc_bdr *rx_ring = priv->rx_ring[tx_ring->index];
 	struct enetc_rx_swbd rx_swbd = {
 		.dma = tx_swbd->dma,
 		.page = tx_swbd->page,
@@ -476,6 +495,9 @@ static void enetc_recycle_xdp_tx_buff(struct enetc_bdr *tx_ring,
 		.dir = tx_swbd->dir,
 		.len = tx_swbd->len,
 	};
+	struct enetc_bdr *rx_ring;
+
+	rx_ring = enetc_rx_ring_from_xdp_tx_ring(priv, tx_ring);
 
 	if (likely(enetc_swbd_unused(rx_ring))) {
 		enetc_reuse_page(rx_ring, &rx_swbd);
@@ -1059,7 +1081,7 @@ int enetc_xdp_xmit(struct net_device *ndev, int num_frames,
 	int xdp_tx_bd_cnt, i, k;
 	int xdp_tx_frm_cnt = 0;
 
-	tx_ring = priv->tx_ring[smp_processor_id()];
+	tx_ring = priv->xdp_tx_ring[smp_processor_id()];
 
 	prefetchw(ENETC_TXBD(*tx_ring, tx_ring->next_to_use));
@@ -1221,8 +1243,8 @@ static int enetc_clean_rx_ring_xdp(struct enetc_bdr *rx_ring,
 	int xdp_tx_bd_cnt, xdp_tx_frm_cnt = 0, xdp_redirect_frm_cnt = 0;
 	struct enetc_tx_swbd xdp_tx_arr[ENETC_MAX_SKB_FRAGS] = {0};
 	struct enetc_ndev_priv *priv = netdev_priv(rx_ring->ndev);
-	struct enetc_bdr *tx_ring = priv->tx_ring[rx_ring->index];
 	int rx_frm_cnt = 0, rx_byte_cnt = 0;
+	struct enetc_bdr *tx_ring;
 	int cleaned_cnt, i;
 	u32 xdp_act;
@@ -1280,6 +1302,7 @@ static int enetc_clean_rx_ring_xdp(struct enetc_bdr *rx_ring,
 			napi_gro_receive(napi, skb);
 			break;
 		case XDP_TX:
+			tx_ring = priv->xdp_tx_ring[rx_ring->index];
 			xdp_tx_bd_cnt = enetc_rx_swbd_to_xdp_tx_swbd(xdp_tx_arr,
 								     rx_ring,
 								     orig_i, i);
@@ -2022,6 +2045,7 @@ void enetc_start(struct net_device *ndev)
 int enetc_open(struct net_device *ndev)
 {
 	struct enetc_ndev_priv *priv = netdev_priv(ndev);
+	int num_stack_tx_queues;
 	int err;
 
 	err = enetc_setup_irqs(priv);
@@ -2040,7 +2064,9 @@ int enetc_open(struct net_device *ndev)
 	if (err)
 		goto err_alloc_rx;
 
-	err = netif_set_real_num_tx_queues(ndev, priv->num_tx_rings);
+	num_stack_tx_queues = enetc_num_stack_tx_queues(priv);
+
+	err = netif_set_real_num_tx_queues(ndev, num_stack_tx_queues);
 	if (err)
 		goto err_set_queues;
@@ -2113,15 +2139,17 @@ static int enetc_setup_tc_mqprio(struct net_device *ndev, void *type_data)
 	struct enetc_ndev_priv *priv = netdev_priv(ndev);
 	struct tc_mqprio_qopt *mqprio = type_data;
 	struct enetc_bdr *tx_ring;
+	int num_stack_tx_queues;
 	u8 num_tc;
 	int i;
 
+	num_stack_tx_queues = enetc_num_stack_tx_queues(priv);
 	mqprio->hw = TC_MQPRIO_HW_OFFLOAD_TCS;
 	num_tc = mqprio->num_tc;
 
 	if (!num_tc) {
 		netdev_reset_tc(ndev);
-		netif_set_real_num_tx_queues(ndev, priv->num_tx_rings);
+		netif_set_real_num_tx_queues(ndev, num_stack_tx_queues);
 
 		/* Reset all ring priorities to 0 */
 		for (i = 0; i < priv->num_tx_rings; i++) {
@@ -2133,7 +2161,7 @@ static int enetc_setup_tc_mqprio(struct net_device *ndev, void *type_data)
 	}
 
 	/* Check if we have enough BD rings available to accommodate all TCs */
-	if (num_tc > priv->num_tx_rings) {
+	if (num_tc > num_stack_tx_queues) {
 		netdev_err(ndev, "Max %d traffic classes supported\n",
 			   priv->num_tx_rings);
 		return -EINVAL;
@@ -2421,8 +2449,9 @@ int enetc_ioctl(struct net_device *ndev, struct ifreq *rq, int cmd)
 int enetc_alloc_msix(struct enetc_ndev_priv *priv)
 {
 	struct pci_dev *pdev = priv->si->pdev;
-	int v_tx_rings;
+	int first_xdp_tx_ring;
 	int i, n, err, nvec;
+	int v_tx_rings;
 
 	nvec = ENETC_BDR_INT_BASE_IDX + priv->bdr_int_num;
 	/* allocate MSIX for both messaging and Rx/Tx interrupts */
@@ -2497,6 +2526,9 @@ int enetc_alloc_msix(struct enetc_ndev_priv *priv)
 		}
 	}
 
+	first_xdp_tx_ring = priv->num_tx_rings - num_possible_cpus();
+	priv->xdp_tx_ring = &priv->tx_ring[first_xdp_tx_ring];
+
 	return 0;
 
 fail:
diff --git a/drivers/net/ethernet/freescale/enetc/enetc.h b/drivers/net/ethernet/freescale/enetc/enetc.h
index 6f818e33e03b..3de71669e317 100644
--- a/drivers/net/ethernet/freescale/enetc/enetc.h
+++ b/drivers/net/ethernet/freescale/enetc/enetc.h
@@ -317,6 +317,7 @@ struct enetc_ndev_priv {
 
 	u32 speed; /* store speed for compare update pspeed */
 
+	struct enetc_bdr **xdp_tx_ring;
 	struct enetc_bdr *tx_ring[16];
 	struct enetc_bdr *rx_ring[16];

From patchwork Fri Apr 16 21:22:23 2021
X-Patchwork-Submitter: Vladimir Oltean
X-Patchwork-Id: 423138
From: Vladimir Oltean
To: Jakub Kicinski, "David S. Miller", netdev@vger.kernel.org, Po Liu
Cc: Claudiu Manoil, Alex Marginean, Yangbo Lu, Toke Høiland-Jørgensen, Vladimir Oltean
Subject: [PATCH net-next 08/10] net: enetc: handle the invalid XDP action the same way as XDP_DROP
Date: Sat, 17 Apr 2021 00:22:23 +0300
Message-Id: <20210416212225.3576792-9-olteanv@gmail.com>
In-Reply-To: <20210416212225.3576792-1-olteanv@gmail.com>
References: <20210416212225.3576792-1-olteanv@gmail.com>
List-ID: netdev@vger.kernel.org

When the XDP program returns an invalid action, we should free the RX buffer.
Signed-off-by: Vladimir Oltean
---
 drivers/net/ethernet/freescale/enetc/enetc.c | 7 +++----
 1 file changed, 3 insertions(+), 4 deletions(-)

diff --git a/drivers/net/ethernet/freescale/enetc/enetc.c b/drivers/net/ethernet/freescale/enetc/enetc.c
index 56190d861bb9..0b84d4a74889 100644
--- a/drivers/net/ethernet/freescale/enetc/enetc.c
+++ b/drivers/net/ethernet/freescale/enetc/enetc.c
@@ -1282,6 +1282,9 @@ static int enetc_clean_rx_ring_xdp(struct enetc_bdr *rx_ring,
 		xdp_act = bpf_prog_run_xdp(prog, &xdp_buff);
 
 		switch (xdp_act) {
+		default:
+			bpf_warn_invalid_xdp_action(xdp_act);
+			fallthrough;
 		case XDP_ABORTED:
 			trace_xdp_exception(rx_ring->ndev, prog, xdp_act);
 			fallthrough;
@@ -1346,10 +1349,6 @@ static int enetc_clean_rx_ring_xdp(struct enetc_bdr *rx_ring,
 				xdp_redirect_frm_cnt++;
 				rx_ring->stats.xdp_redirect++;
 			}
-
-			break;
-		default:
-			bpf_warn_invalid_xdp_action(xdp_act);
 		}
 
 		rx_frm_cnt++;

From patchwork Fri Apr 16 21:22:24 2021
X-Patchwork-Submitter: Vladimir Oltean
X-Patchwork-Id: 423729
From: Vladimir Oltean
To: Jakub Kicinski, "David S. Miller", netdev@vger.kernel.org, Po Liu
Cc: Claudiu Manoil, Alex Marginean, Yangbo Lu, Toke Høiland-Jørgensen, Vladimir Oltean
Subject: [PATCH net-next 09/10] net: enetc: fix buffer leaks with XDP_TX enqueue rejections
Date: Sat, 17 Apr 2021 00:22:24 +0300
Message-Id: <20210416212225.3576792-10-olteanv@gmail.com>
In-Reply-To: <20210416212225.3576792-1-olteanv@gmail.com>
References: <20210416212225.3576792-1-olteanv@gmail.com>
List-ID: netdev@vger.kernel.org

If the TX ring is congested, enetc_xdp_tx() returns false for the current XDP frame (represented as an array of software BDs).

This array of software TX BDs is constructed in enetc_rx_swbd_to_xdp_tx_swbd from software BDs freshly cleaned from the RX ring. The issue is that we scrub the RX software BDs too soon, more precisely before we know that we can enqueue the TX BDs successfully into the TX ring.

If we can't enqueue them (and enetc_xdp_tx returns false), we call enetc_xdp_drop which attempts to recycle the buffers held by the RX software BDs. But because we scrubbed those RX BDs already, two things happen:

(a) we leak their memory
(b) we populate the RX software BD ring with an all-zero rx_swbd structure,
    which makes the buffer refill path allocate more memory.

	enetc_refill_rx_ring
	-> if (unlikely(!rx_swbd->page))
	   -> enetc_new_page

That is a recipe for fast OOM.
Fixes: 7ed2bc80074e ("net: enetc: add support for XDP_TX")
Signed-off-by: Vladimir Oltean
---
 drivers/net/ethernet/freescale/enetc/enetc.c | 16 ++++++++++++----
 1 file changed, 12 insertions(+), 4 deletions(-)

diff --git a/drivers/net/ethernet/freescale/enetc/enetc.c b/drivers/net/ethernet/freescale/enetc/enetc.c
index 0b84d4a74889..f0ba612d5ce3 100644
--- a/drivers/net/ethernet/freescale/enetc/enetc.c
+++ b/drivers/net/ethernet/freescale/enetc/enetc.c
@@ -1175,9 +1175,7 @@ static void enetc_build_xdp_buff(struct enetc_bdr *rx_ring, u32 bd_status,
 }
 
 /* Convert RX buffer descriptors to TX buffer descriptors. These will be
- * recycled back into the RX ring in enetc_clean_tx_ring. We need to scrub the
- * RX software BDs because the ownership of the buffer no longer belongs to the
- * RX ring, so enetc_refill_rx_ring may not reuse rx_swbd->page.
+ * recycled back into the RX ring in enetc_clean_tx_ring.
  */
 static int enetc_rx_swbd_to_xdp_tx_swbd(struct enetc_tx_swbd *xdp_tx_arr,
 					struct enetc_bdr *rx_ring,
@@ -1199,7 +1197,6 @@ static int enetc_rx_swbd_to_xdp_tx_swbd(struct enetc_tx_swbd *xdp_tx_arr,
 		tx_swbd->is_dma_page = true;
 		tx_swbd->is_xdp_tx = true;
 		tx_swbd->is_eof = false;
-		memset(rx_swbd, 0, sizeof(*rx_swbd));
 	}
 
 	/* We rely on caller providing an rx_ring_last > rx_ring_first */
@@ -1317,6 +1314,17 @@ static int enetc_clean_rx_ring_xdp(struct enetc_bdr *rx_ring,
 				tx_ring->stats.xdp_tx += xdp_tx_bd_cnt;
 				rx_ring->xdp.xdp_tx_in_flight += xdp_tx_bd_cnt;
 				xdp_tx_frm_cnt++;
+				/* The XDP_TX enqueue was successful, so we
+				 * need to scrub the RX software BDs because
+				 * the ownership of the buffers no longer
+				 * belongs to the RX ring, and we must prevent
+				 * enetc_refill_rx_ring() from reusing
+				 * rx_swbd->page.
+				 */
+				while (orig_i != i) {
+					rx_ring->rx_swbd[orig_i].page = NULL;
+					enetc_bdr_idx_inc(rx_ring, &orig_i);
+				}
 			}
 			break;
 		case XDP_REDIRECT:

From patchwork Fri Apr 16 21:22:25 2021
X-Patchwork-Submitter: Vladimir Oltean
X-Patchwork-Id: 423137
Miller" , netdev@vger.kernel.org, Po Liu Cc: Claudiu Manoil , Alex Marginean , Yangbo Lu , =?utf-8?q?Toke_H=C3=B8iland-J?= =?utf-8?b?w7hyZ2Vuc2Vu?= , Vladimir Oltean Subject: [PATCH net-next 10/10] net: enetc: apply the MDIO workaround for XDP_REDIRECT too Date: Sat, 17 Apr 2021 00:22:25 +0300 Message-Id: <20210416212225.3576792-11-olteanv@gmail.com> X-Mailer: git-send-email 2.25.1 In-Reply-To: <20210416212225.3576792-1-olteanv@gmail.com> References: <20210416212225.3576792-1-olteanv@gmail.com> MIME-Version: 1.0 Precedence: bulk List-ID: X-Mailing-List: netdev@vger.kernel.org From: Vladimir Oltean Described in fd5736bf9f23 ("enetc: Workaround for MDIO register access issue") is a workaround for a hardware bug that requires a register access of the MDIO controller to never happen concurrently with a register access of a port PF. To avoid that, a mutual exclusion scheme with rwlocks was implemented - the port PF accessors are the 'read' side, and the MDIO accessors are the 'write' side. When we do XDP_REDIRECT between two ENETC interfaces, all is fine because the MDIO lock is already taken from the NAPI poll loop. But when the ingress interface is not ENETC, just the egress is, the MDIO lock is not taken, so we might access the port PF registers concurrently with MDIO, which will make the link flap due to wrong values returned from the PHY. To avoid this, let's just slap an enetc_lock_mdio/enetc_unlock_mdio at the beginning and ending of enetc_xdp_xmit. The fact that the MDIO lock is designed as a rwlock is important here, because the read side is reentrant (that is one of the main reasons why we chose it). Usually, the way we benefit of its reentrancy is by running the data path concurrently on both CPUs, but in this case, we benefit from the reentrancy by taking the lock even when the lock is already taken (and that's the situation where ENETC is both the ingress and the egress interface for XDP_REDIRECT, which was fine before and still is fine now). 
Signed-off-by: Vladimir Oltean
---
 drivers/net/ethernet/freescale/enetc/enetc.c | 4 ++++
 1 file changed, 4 insertions(+)

diff --git a/drivers/net/ethernet/freescale/enetc/enetc.c b/drivers/net/ethernet/freescale/enetc/enetc.c
index f0ba612d5ce3..4f23829e7317 100644
--- a/drivers/net/ethernet/freescale/enetc/enetc.c
+++ b/drivers/net/ethernet/freescale/enetc/enetc.c
@@ -1081,6 +1081,8 @@ int enetc_xdp_xmit(struct net_device *ndev, int num_frames,
 	int xdp_tx_bd_cnt, i, k;
 	int xdp_tx_frm_cnt = 0;
 
+	enetc_lock_mdio();
+
 	tx_ring = priv->xdp_tx_ring[smp_processor_id()];
 
 	prefetchw(ENETC_TXBD(*tx_ring, tx_ring->next_to_use));
@@ -1109,6 +1111,8 @@ int enetc_xdp_xmit(struct net_device *ndev, int num_frames,
 
 	tx_ring->stats.xdp_tx += xdp_tx_frm_cnt;
 
+	enetc_unlock_mdio();
+
 	return xdp_tx_frm_cnt;
 }