From patchwork Tue Apr 21 15:21:51 2020
X-Patchwork-Submitter: Ioana Ciornei
X-Patchwork-Id: 220834
From: Ioana Ciornei
To: davem@davemloft.net, netdev@vger.kernel.org
Cc: brouer@redhat.com, Ioana Ciornei
Subject: [PATCH net-next 1/4] dpaa2-eth: return num_enqueued frames from enqueue callback
Date: Tue, 21 Apr 2020 18:21:51 +0300
Message-Id: <20200421152154.10965-2-ioana.ciornei@nxp.com>
In-Reply-To: <20200421152154.10965-1-ioana.ciornei@nxp.com>
References: <20200421152154.10965-1-ioana.ciornei@nxp.com>
Reply-to: ioana.ciornei@nxp.com
X-Mailing-List: netdev@vger.kernel.org

The dpaa2-eth enqueue callback now reports the number of successfully
enqueued frames through a new out-parameter, while keeping its 0 / -EBUSY
return value. This is a preliminary change needed to add support for bulk
ring mode enqueue.
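[Editorial note: a minimal user-space sketch of the convention this patch
introduces, with made-up names (fake_ring, enqueue_one, ENQUEUE_RETRIES)
standing in for the driver's types. The single-frame helper keeps the
0 / -EBUSY return value and, on success, additionally writes 1 through the
optional frames_enqueued pointer, so callers that do not care can simply
pass NULL, as the converted call sites in the diff below do.]

#include <errno.h>
#include <stddef.h>
#include <stdio.h>

#define ENQUEUE_RETRIES 10

struct frame { int id; };

struct fake_ring {
        int free_slots;
};

/* Mirrors the shape of the reworked enqueue helpers: on success the
 * caller-provided counter (if any) is set to 1; on -EBUSY it is left
 * untouched.
 */
static int enqueue_one(struct fake_ring *ring, const struct frame *fd,
                       int *frames_enqueued)
{
        (void)fd; /* a real implementation would consume the descriptor */

        if (ring->free_slots == 0)
                return -EBUSY;

        ring->free_slots--;
        if (frames_enqueued)
                *frames_enqueued = 1;
        return 0;
}

int main(void)
{
        struct fake_ring ring = { .free_slots = 1 };
        struct frame fd = { .id = 42 };
        int enqueued = 0;
        int err = -EBUSY;
        int i;

        /* Same retry pattern the driver uses around priv->enqueue(). */
        for (i = 0; i < ENQUEUE_RETRIES; i++) {
                err = enqueue_one(&ring, &fd, &enqueued);
                if (err != -EBUSY)
                        break;
        }

        printf("err=%d frames_enqueued=%d\n", err, enqueued);
        return 0;
}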
Signed-off-by: Ioana Ciornei
---
 .../net/ethernet/freescale/dpaa2/dpaa2-eth.c | 34 +++++++++++++------
 .../net/ethernet/freescale/dpaa2/dpaa2-eth.h |  5 +--
 2 files changed, 26 insertions(+), 13 deletions(-)

diff --git a/drivers/net/ethernet/freescale/dpaa2/dpaa2-eth.c b/drivers/net/ethernet/freescale/dpaa2/dpaa2-eth.c
index b6c46639aa4c..7b41ece8f160 100644
--- a/drivers/net/ethernet/freescale/dpaa2/dpaa2-eth.c
+++ b/drivers/net/ethernet/freescale/dpaa2/dpaa2-eth.c
@@ -1,6 +1,6 @@
 // SPDX-License-Identifier: (GPL-2.0+ OR BSD-3-Clause)
 /* Copyright 2014-2016 Freescale Semiconductor Inc.
- * Copyright 2016-2019 NXP
+ * Copyright 2016-2020 NXP
  */
 #include
 #include
@@ -268,7 +268,7 @@ static int xdp_enqueue(struct dpaa2_eth_priv *priv, struct dpaa2_fd *fd,
        fq = &priv->fq[queue_id];

        for (i = 0; i < DPAA2_ETH_ENQUEUE_RETRIES; i++) {
-               err = priv->enqueue(priv, fq, fd, 0);
+               err = priv->enqueue(priv, fq, fd, 0, NULL);
                if (err != -EBUSY)
                        break;
        }
@@ -847,7 +847,7 @@ static netdev_tx_t dpaa2_eth_tx(struct sk_buff *skb, struct net_device *net_dev)
         * the Tx confirmation callback for this frame
         */
        for (i = 0; i < DPAA2_ETH_ENQUEUE_RETRIES; i++) {
-               err = priv->enqueue(priv, fq, &fd, prio);
+               err = priv->enqueue(priv, fq, &fd, prio, NULL);
                if (err != -EBUSY)
                        break;
        }
@@ -1937,7 +1937,7 @@ static int dpaa2_eth_xdp_xmit_frame(struct net_device *net_dev,
        fq = &priv->fq[smp_processor_id() % dpaa2_eth_queue_count(priv)];

        for (i = 0; i < DPAA2_ETH_ENQUEUE_RETRIES; i++) {
-               err = priv->enqueue(priv, fq, &fd, 0);
+               err = priv->enqueue(priv, fq, &fd, 0, NULL);
                if (err != -EBUSY)
                        break;
        }
@@ -2523,19 +2523,31 @@ static int set_buffer_layout(struct dpaa2_eth_priv *priv)

 static inline int dpaa2_eth_enqueue_qd(struct dpaa2_eth_priv *priv,
                                        struct dpaa2_eth_fq *fq,
-                                       struct dpaa2_fd *fd, u8 prio)
+                                       struct dpaa2_fd *fd, u8 prio,
+                                       int *frames_enqueued)
 {
-       return dpaa2_io_service_enqueue_qd(fq->channel->dpio,
-                                          priv->tx_qdid, prio,
-                                          fq->tx_qdbin, fd);
+       int err;
+
+       err = dpaa2_io_service_enqueue_qd(fq->channel->dpio,
+                                         priv->tx_qdid, prio,
+                                         fq->tx_qdbin, fd);
+       if (!err && frames_enqueued)
+               *frames_enqueued = 1;
+       return err;
 }

 static inline int dpaa2_eth_enqueue_fq(struct dpaa2_eth_priv *priv,
                                        struct dpaa2_eth_fq *fq,
-                                       struct dpaa2_fd *fd, u8 prio)
+                                       struct dpaa2_fd *fd, u8 prio,
+                                       int *frames_enqueued)
 {
-       return dpaa2_io_service_enqueue_fq(fq->channel->dpio,
-                                          fq->tx_fqid[prio], fd);
+       int err;
+
+       err = dpaa2_io_service_enqueue_fq(fq->channel->dpio,
+                                         fq->tx_fqid[prio], fd);
+       if (!err && frames_enqueued)
+               *frames_enqueued = 1;
+       return err;
 }

 static void set_enqueue_mode(struct dpaa2_eth_priv *priv)
diff --git a/drivers/net/ethernet/freescale/dpaa2/dpaa2-eth.h b/drivers/net/ethernet/freescale/dpaa2/dpaa2-eth.h
index 7635db3ef903..085ff750e4b5 100644
--- a/drivers/net/ethernet/freescale/dpaa2/dpaa2-eth.h
+++ b/drivers/net/ethernet/freescale/dpaa2/dpaa2-eth.h
@@ -1,6 +1,6 @@
 /* SPDX-License-Identifier: (GPL-2.0+ OR BSD-3-Clause) */
 /* Copyright 2014-2016 Freescale Semiconductor Inc.
- * Copyright 2016 NXP
+ * Copyright 2016-2020 NXP
  */

 #ifndef __DPAA2_ETH_H
@@ -371,7 +371,8 @@ struct dpaa2_eth_priv {
        struct dpaa2_eth_fq fq[DPAA2_ETH_MAX_QUEUES];
        int (*enqueue)(struct dpaa2_eth_priv *priv,
                       struct dpaa2_eth_fq *fq,
-                      struct dpaa2_fd *fd, u8 prio);
+                      struct dpaa2_fd *fd, u8 prio,
+                      int *frames_enqueued);

        u8 num_channels;
        struct dpaa2_eth_channel *channel[DPAA2_ETH_MAX_DPCONS];

From patchwork Tue Apr 21 15:21:52 2020
X-Patchwork-Submitter: Ioana Ciornei
X-Patchwork-Id: 220833
From: Ioana Ciornei
To: davem@davemloft.net, netdev@vger.kernel.org
Cc: brouer@redhat.com, Ioana Ciornei
Subject: [PATCH net-next 2/4] dpaa2-eth: use the bulk ring mode enqueue interface
Date: Tue, 21 Apr 2020 18:21:52 +0300
Message-Id: <20200421152154.10965-3-ioana.ciornei@nxp.com>
In-Reply-To: <20200421152154.10965-1-ioana.ciornei@nxp.com>
References: <20200421152154.10965-1-ioana.ciornei@nxp.com>
Reply-to: ioana.ciornei@nxp.com
X-Mailing-List: netdev@vger.kernel.org

Update the dpaa2-eth driver to use the bulk enqueue function introduced
with the switch to QBMAN ring mode. For now there is no functional change:
the driver merely transitions to the new interface while still enqueuing
a single frame at a time.
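[Editorial note: a rough user-space sketch of the return-value translation
that the new dpaa2_eth_enqueue_fq_multiple() helper in the diff below
performs. fake_bulk_enqueue() is a stand-in for the DPIO bulk enqueue
call, not its real signature: the primitive reports how many frames it
managed to place on the ring (0 when the ring is full), and the wrapper
maps that onto the driver's existing 0 / -EBUSY convention plus the
frames_enqueued counter.]

#include <errno.h>
#include <stdio.h>

/* Pretend hardware ring that can only absorb 'capacity' more frames. */
static int fake_bulk_enqueue(int *capacity, int num_frames)
{
        int done = num_frames < *capacity ? num_frames : *capacity;

        *capacity -= done;
        return done;    /* frames actually enqueued, 0 when full */
}

static int enqueue_multiple(int *capacity, int num_frames,
                            int *frames_enqueued)
{
        int done = fake_bulk_enqueue(capacity, num_frames);

        if (done == 0)
                return -EBUSY;

        if (frames_enqueued)
                *frames_enqueued = done;
        return 0;
}

int main(void)
{
        int capacity = 3, enqueued = 0;
        int err;

        /* Ask for 8 frames; only 3 fit, so the caller learns about the
         * partial enqueue through 'enqueued' while err stays 0.
         */
        err = enqueue_multiple(&capacity, 8, &enqueued);
        printf("err=%d enqueued=%d\n", err, enqueued);

        /* Ring now full: the wrapper reports -EBUSY, just like the
         * single-frame path did before this patch.
         */
        err = enqueue_multiple(&capacity, 1, &enqueued);
        printf("err=%d\n", err);
        return 0;
}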
Signed-off-by: Ioana Ciornei
---
 .../net/ethernet/freescale/dpaa2/dpaa2-eth.c | 35 +++++++++++--------
 .../net/ethernet/freescale/dpaa2/dpaa2-eth.h |  1 +
 2 files changed, 22 insertions(+), 14 deletions(-)

diff --git a/drivers/net/ethernet/freescale/dpaa2/dpaa2-eth.c b/drivers/net/ethernet/freescale/dpaa2/dpaa2-eth.c
index 7b41ece8f160..26c2868435d5 100644
--- a/drivers/net/ethernet/freescale/dpaa2/dpaa2-eth.c
+++ b/drivers/net/ethernet/freescale/dpaa2/dpaa2-eth.c
@@ -268,7 +268,7 @@ static int xdp_enqueue(struct dpaa2_eth_priv *priv, struct dpaa2_fd *fd,
        fq = &priv->fq[queue_id];

        for (i = 0; i < DPAA2_ETH_ENQUEUE_RETRIES; i++) {
-               err = priv->enqueue(priv, fq, fd, 0, NULL);
+               err = priv->enqueue(priv, fq, fd, 0, 1, NULL);
                if (err != -EBUSY)
                        break;
        }
@@ -847,7 +847,7 @@ static netdev_tx_t dpaa2_eth_tx(struct sk_buff *skb, struct net_device *net_dev)
         * the Tx confirmation callback for this frame
         */
        for (i = 0; i < DPAA2_ETH_ENQUEUE_RETRIES; i++) {
-               err = priv->enqueue(priv, fq, &fd, prio, NULL);
+               err = priv->enqueue(priv, fq, &fd, prio, 1, NULL);
                if (err != -EBUSY)
                        break;
        }
@@ -1937,7 +1937,7 @@ static int dpaa2_eth_xdp_xmit_frame(struct net_device *net_dev,
        fq = &priv->fq[smp_processor_id() % dpaa2_eth_queue_count(priv)];

        for (i = 0; i < DPAA2_ETH_ENQUEUE_RETRIES; i++) {
-               err = priv->enqueue(priv, fq, &fd, 0, NULL);
+               err = priv->enqueue(priv, fq, &fd, 0, 1, NULL);
                if (err != -EBUSY)
                        break;
        }
@@ -2524,6 +2524,7 @@ static int set_buffer_layout(struct dpaa2_eth_priv *priv)
 static inline int dpaa2_eth_enqueue_qd(struct dpaa2_eth_priv *priv,
                                        struct dpaa2_eth_fq *fq,
                                        struct dpaa2_fd *fd, u8 prio,
+                                       u32 num_frames __always_unused,
                                        int *frames_enqueued)
 {
        int err;
@@ -2536,18 +2537,24 @@ static inline int dpaa2_eth_enqueue_qd(struct dpaa2_eth_priv *priv,
        return err;
 }

-static inline int dpaa2_eth_enqueue_fq(struct dpaa2_eth_priv *priv,
-                                       struct dpaa2_eth_fq *fq,
-                                       struct dpaa2_fd *fd, u8 prio,
-                                       int *frames_enqueued)
+static inline int dpaa2_eth_enqueue_fq_multiple(struct dpaa2_eth_priv *priv,
+                                                struct dpaa2_eth_fq *fq,
+                                                struct dpaa2_fd *fd,
+                                                u8 prio, u32 num_frames,
+                                                int *frames_enqueued)
 {
        int err;

-       err = dpaa2_io_service_enqueue_fq(fq->channel->dpio,
-                                         fq->tx_fqid[prio], fd);
-       if (!err && frames_enqueued)
-               *frames_enqueued = 1;
-       return err;
+       err = dpaa2_io_service_enqueue_multiple_fq(fq->channel->dpio,
+                                                  fq->tx_fqid[prio],
+                                                  fd, num_frames);
+
+       if (err == 0)
+               return -EBUSY;
+
+       if (frames_enqueued)
+               *frames_enqueued = err;
+       return 0;
 }

 static void set_enqueue_mode(struct dpaa2_eth_priv *priv)
@@ -2556,7 +2563,7 @@ static void set_enqueue_mode(struct dpaa2_eth_priv *priv)
                                   DPNI_ENQUEUE_FQID_VER_MINOR) < 0)
                priv->enqueue = dpaa2_eth_enqueue_qd;
        else
-               priv->enqueue = dpaa2_eth_enqueue_fq;
+               priv->enqueue = dpaa2_eth_enqueue_fq_multiple;
 }

 static int set_pause(struct dpaa2_eth_priv *priv)
@@ -2617,7 +2624,7 @@ static void update_tx_fqids(struct dpaa2_eth_priv *priv)
                }
        }

-       priv->enqueue = dpaa2_eth_enqueue_fq;
+       priv->enqueue = dpaa2_eth_enqueue_fq_multiple;

        return;

diff --git a/drivers/net/ethernet/freescale/dpaa2/dpaa2-eth.h b/drivers/net/ethernet/freescale/dpaa2/dpaa2-eth.h
index 085ff750e4b5..2440ba6b21ef 100644
--- a/drivers/net/ethernet/freescale/dpaa2/dpaa2-eth.h
+++ b/drivers/net/ethernet/freescale/dpaa2/dpaa2-eth.h
@@ -372,6 +372,7 @@ struct dpaa2_eth_priv {
        int (*enqueue)(struct dpaa2_eth_priv *priv,
                       struct dpaa2_eth_fq *fq,
                       struct dpaa2_fd *fd, u8 prio,
+                      u32 num_frames,
                       int *frames_enqueued);

        u8 num_channels;
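[Editorial note: for context, a small illustration of the dispatch pattern
that set_enqueue_mode() in the hunk above relies on, with invented version
numbers and helper names. The enqueue implementation is picked once, based
on firmware capabilities, and stored in a function pointer so the hot path
stays identical whichever mode is in use; the driver also refreshes the
same pointer from update_tx_fqids() after an endpoint change.]

#include <stdio.h>

struct priv {
        int fw_major, fw_minor;
        int (*enqueue)(struct priv *priv, int num_frames);
};

static int enqueue_qd(struct priv *priv, int num_frames)
{
        (void)priv;
        (void)num_frames;       /* the QD path sends one frame at a time */
        printf("enqueue by queuing destination\n");
        return 0;
}

static int enqueue_fq_multiple(struct priv *priv, int num_frames)
{
        (void)priv;
        printf("bulk enqueue of %d frame(s) by FQID\n", num_frames);
        return 0;
}

/* Assumed capability check, standing in for dpaa2_eth_cmp_dpni_ver(). */
static int supports_fqid_enqueue(const struct priv *priv)
{
        return priv->fw_major > 7 ||
               (priv->fw_major == 7 && priv->fw_minor >= 9);
}

static void set_enqueue_mode(struct priv *priv)
{
        priv->enqueue = supports_fqid_enqueue(priv) ? enqueue_fq_multiple
                                                    : enqueue_qd;
}

int main(void)
{
        struct priv priv = { .fw_major = 7, .fw_minor = 9 };

        set_enqueue_mode(&priv);
        priv.enqueue(&priv, 1); /* same call site regardless of mode */
        return 0;
}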