From patchwork Fri May 29 17:43:43 2020
X-Patchwork-Submitter: Ioana Ciornei
X-Patchwork-Id: 218193
From: Ioana Ciornei
To: netdev@vger.kernel.org
Cc: kuba@kernel.org, davem@davemloft.net, Ioana Radulescu, Ioana Ciornei
Subject: [PATCH net-next v3 5/7] dpaa2-eth: Update FQ taildrop threshold and
 buffer pool count
Date: Fri, 29 May 2020 20:43:43 +0300
Message-Id: <20200529174345.27537-6-ioana.ciornei@nxp.com>
X-Mailer: git-send-email 2.17.1
In-Reply-To: <20200529174345.27537-1-ioana.ciornei@nxp.com>
References: <20200529174345.27537-1-ioana.ciornei@nxp.com>
Reply-to: ioana.ciornei@nxp.com
X-Mailing-List: netdev@vger.kernel.org

From: Ioana Radulescu

Now that we have congestion group taildrop configured at all times, we
can afford to increase the frame queue taildrop threshold; this will
ensure a better response when receiving bursts of large-sized frames.

Also decouple the buffer pool count from the Rx FQ taildrop threshold,
as the above change would increase it too much. Instead, keep the old
count as a hardcoded value.

With the new limits, we try to ensure that:
* we allow enough leeway for large frame bursts (by buffering enough of
  them in queues to avoid heavy dropping in case of bursty traffic, but
  only while the overall ingress bandwidth remains manageable)
* we spread pending frames evenly between ingress FQs, regardless of
  frame size
* we avoid dropping frames due to the buffer pool being empty; this is
  not a bad behaviour per se, but the overall system response is more
  linear and predictable when frames are dropped at frame queue/group
  level.
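For a rough sense of scale (illustrative arithmetic only, assuming ~9 KB
jumbo frames): the old 64 KB FQ taildrop threshold lets only about 7 such
frames accumulate on a queue before taildrop kicks in, while the new 1 MB
threshold allows on the order of a hundred, which is where the extra
headroom for bursts comes from.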
Signed-off-by: Ioana Radulescu
Signed-off-by: Ioana Ciornei
---
Changes in v3:
- none

 .../net/ethernet/freescale/dpaa2/dpaa2-eth.h  | 23 +++++++++++------------
 1 file changed, 11 insertions(+), 12 deletions(-)

diff --git a/drivers/net/ethernet/freescale/dpaa2/dpaa2-eth.h b/drivers/net/ethernet/freescale/dpaa2/dpaa2-eth.h
index 184d5d83e497..02c0eea69a23 100644
--- a/drivers/net/ethernet/freescale/dpaa2/dpaa2-eth.h
+++ b/drivers/net/ethernet/freescale/dpaa2/dpaa2-eth.h
@@ -36,24 +36,24 @@
 /* Convert L3 MTU to L2 MFL */
 #define DPAA2_ETH_L2_MAX_FRM(mtu)	((mtu) + VLAN_ETH_HLEN)
 
-/* Set the taildrop threshold (in bytes) to allow the enqueue of several jumbo
- * frames in the Rx queues (length of the current frame is not
- * taken into account when making the taildrop decision)
+/* Set the taildrop threshold (in bytes) to allow the enqueue of a large
+ * enough number of jumbo frames in the Rx queues (length of the current
+ * frame is not taken into account when making the taildrop decision)
  */
-#define DPAA2_ETH_FQ_TAILDROP_THRESH	(64 * 1024)
+#define DPAA2_ETH_FQ_TAILDROP_THRESH	(1024 * 1024)
 
 /* Maximum number of Tx confirmation frames to be processed
  * in a single NAPI call
  */
 #define DPAA2_ETH_TXCONF_PER_NAPI	256
 
-/* Buffer quota per queue. Must be large enough such that for minimum sized
- * frames taildrop kicks in before the bpool gets depleted, so we compute
- * how many 64B frames fit inside the taildrop threshold and add a margin
- * to accommodate the buffer refill delay.
+/* Buffer quota per channel. We want to keep in check number of ingress frames
+ * in flight: for small sized frames, congestion group taildrop may kick in
+ * first; for large sizes, Rx FQ taildrop threshold will ensure only a
+ * reasonable number of frames will be pending at any given time.
+ * Ingress frame drop due to buffer pool depletion should be a corner case only
  */
-#define DPAA2_ETH_MAX_FRAMES_PER_QUEUE	(DPAA2_ETH_FQ_TAILDROP_THRESH / 64)
-#define DPAA2_ETH_NUM_BUFS		(DPAA2_ETH_MAX_FRAMES_PER_QUEUE + 256)
+#define DPAA2_ETH_NUM_BUFS		1280
 #define DPAA2_ETH_REFILL_THRESH \
 	(DPAA2_ETH_NUM_BUFS - DPAA2_ETH_BUFS_PER_CMD)
 
@@ -63,8 +63,7 @@
  * taildrop kicks in
  */
 #define DPAA2_ETH_CG_TAILDROP_THRESH(priv) \
-	(DPAA2_ETH_MAX_FRAMES_PER_QUEUE * dpaa2_eth_queue_count(priv) / \
-	 dpaa2_eth_tc_count(priv))
+	(1024 * dpaa2_eth_queue_count(priv) / dpaa2_eth_tc_count(priv))
 
 /* Maximum number of buffers that can be acquired/released through a single
  * QBMan command
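As a quick cross-check of how the new constants relate to the old derived
ones, the sketch below redoes the arithmetic outside the driver. It is
purely illustrative; the queue and traffic class counts are hypothetical
examples, not values obtained from dpaa2_eth_queue_count() or
dpaa2_eth_tc_count().

/* Illustrative cross-check of the constants touched by this patch.
 * Not part of the driver; queue/tc counts below are hypothetical.
 */
#include <stdio.h>

int main(void)
{
	/* Old scheme: buffer count derived from the 64 KB FQ taildrop threshold */
	int old_fq_taildrop = 64 * 1024;
	int old_max_frames_per_queue = old_fq_taildrop / 64;	/* 1024 */
	int old_num_bufs = old_max_frames_per_queue + 256;	/* 1280 */

	/* New scheme: FQ taildrop raised to 1 MB, while the buffer count is
	 * hardcoded to the 1280 the old formula evaluated to, decoupling
	 * the two values.
	 */
	int new_fq_taildrop = 1024 * 1024;
	int new_num_bufs = 1280;

	/* The congestion group taildrop keeps the old 1024 frames-per-queue
	 * figure as a literal; e.g. with 8 queues and 1 traffic class it
	 * still evaluates to 1024 * 8 / 1 = 8192 frames.
	 */
	int queues = 8, tcs = 1;		/* hypothetical counts */
	int cg_taildrop_frames = 1024 * queues / tcs;

	printf("old: FQ taildrop %d B, %d bufs\n",
	       old_fq_taildrop, old_num_bufs);
	printf("new: FQ taildrop %d B, %d bufs, CG taildrop %d frames\n",
	       new_fq_taildrop, new_num_bufs, cg_taildrop_frames);
	return 0;
}

The numbers show that the per-channel buffer budget and the congestion
group limit stay exactly where they were, while the per-FQ byte threshold
grows 16x (64 KB to 1 MB).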