From patchwork Thu Aug 29 10:27:23 2019
X-Patchwork-Submitter: Sachin Saxena
X-Patchwork-Id: 172567
From: Sachin Saxena <sachin.saxena@nxp.com>
To: dev@dpdk.org
Cc: thomas@monjalon.net, Hemant Agrawal
Date: Thu, 29 Aug 2019 15:57:23 +0530
Message-Id: <20190829102737.13267-17-sachin.saxena@nxp.com>
X-Mailer: git-send-email 2.17.1
In-Reply-To: <20190829102737.13267-1-sachin.saxena@nxp.com>
References: <20190827070730.11206-1-sachin.saxena@nxp.com>
 <20190829102737.13267-1-sachin.saxena@nxp.com>
Subject: [dpdk-dev] [PATCH v2 16/30] net/dpaa2: add taildrop support on frame count basis

From: Hemant Agrawal

The existing taildrop support was based on queue data size (bytes).
This patch replaces it with a frame-count based scheme that uses the
CGR (congestion group) methods of the DPAA2 device.

The number of CGRs is limited, so:
- per-queue CGR-based tail drop is used for as many queues as there
  are CGRs available;
- the remaining queues fall back to the legacy byte-based tail drop.

The number of CGRs can be controlled through the DPL file passed to
dpni_create.

Signed-off-by: Hemant Agrawal
---
 drivers/bus/fslmc/portal/dpaa2_hw_pvt.h |   5 +-
 drivers/net/dpaa2/dpaa2_ethdev.c        | 113 +++++++++++++++++++++---
 drivers/net/dpaa2/dpaa2_ethdev.h        |   6 +-
 3 files changed, 107 insertions(+), 17 deletions(-)

-- 
2.17.1
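Note: with this change, the nb_rx_desc argument of rte_eth_rx_queue_setup()
is no longer ignored by the dpaa2 PMD: whenever a private CGR can be
allocated for the queue, nb_rx_desc becomes the per-queue taildrop
threshold, counted in frames. A minimal usage sketch follows (PORT_ID,
NB_RXD and the mempool sizing are illustrative values, not part of this
patch):

#include <rte_ethdev.h>
#include <rte_lcore.h>
#include <rte_mbuf.h>

#define PORT_ID 0	/* hypothetical port id */
#define NB_RXD  512	/* per-queue taildrop threshold, in frames */

static int
setup_rx_queue0(void)
{
	struct rte_mempool *mb_pool;

	mb_pool = rte_pktmbuf_pool_create("rx_pool", 8192, 256, 0,
					  RTE_MBUF_DEFAULT_BUF_SIZE,
					  rte_socket_id());
	if (mb_pool == NULL)
		return -1;

	/* On dpaa2, NB_RXD now also sizes the CGR frame-count taildrop
	 * when a private CGR is available; otherwise the PMD falls back
	 * to the byte-based CONG_THRESHOLD_RX_BYTES_Q threshold.
	 */
	return rte_eth_rx_queue_setup(PORT_ID, 0, NB_RXD,
				      rte_socket_id(), NULL, mb_pool);
}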
diff --git a/drivers/bus/fslmc/portal/dpaa2_hw_pvt.h b/drivers/bus/fslmc/portal/dpaa2_hw_pvt.h
index 4bb6b26c7..7f7e2fd78 100644
--- a/drivers/bus/fslmc/portal/dpaa2_hw_pvt.h
+++ b/drivers/bus/fslmc/portal/dpaa2_hw_pvt.h
@@ -145,10 +145,10 @@ struct dpaa2_queue {
 		struct rte_eth_dev_data *eth_data;
 		struct rte_cryptodev_data *crypto_data;
 	};
-	int32_t eventfd;	/*!< Event Fd of this queue */
 	uint32_t fqid;		/*!< Unique ID of this queue */
-	uint8_t tc_index;	/*!< traffic class identifier */
 	uint16_t flow_id;	/*!< To be used by DPAA2 frmework */
+	uint8_t tc_index;	/*!< traffic class identifier */
+	uint8_t cgid;		/*!< Congestion Group id for this queue */
 	uint64_t rx_pkts;
 	uint64_t tx_pkts;
 	uint64_t err_pkts;
@@ -157,6 +157,7 @@ struct dpaa2_queue {
 		struct qbman_result *cscn;
 	};
 	struct rte_event ev;
+	int32_t eventfd;	/*!< Event Fd of this queue */
 	dpaa2_queue_cb_dqrr_t *cb;
 	dpaa2_queue_cb_eqresp_free_t *cb_eqresp_free;
 	struct dpaa2_bp_info *bp_array;
diff --git a/drivers/net/dpaa2/dpaa2_ethdev.c b/drivers/net/dpaa2/dpaa2_ethdev.c
index f25cdfb3d..877acff94 100644
--- a/drivers/net/dpaa2/dpaa2_ethdev.c
+++ b/drivers/net/dpaa2/dpaa2_ethdev.c
@@ -514,7 +514,7 @@ dpaa2_eth_dev_configure(struct rte_eth_dev *dev)
 static int
 dpaa2_dev_rx_queue_setup(struct rte_eth_dev *dev,
 			 uint16_t rx_queue_id,
-			 uint16_t nb_rx_desc __rte_unused,
+			 uint16_t nb_rx_desc,
 			 unsigned int socket_id __rte_unused,
 			 const struct rte_eth_rxconf *rx_conf __rte_unused,
 			 struct rte_mempool *mb_pool)
@@ -526,7 +526,7 @@ dpaa2_dev_rx_queue_setup(struct rte_eth_dev *dev,
 	uint8_t options = 0;
 	uint8_t flow_id;
 	uint32_t bpid;
-	int ret;
+	int i, ret;
 
 	PMD_INIT_FUNC_TRACE();
 
@@ -545,12 +545,28 @@ dpaa2_dev_rx_queue_setup(struct rte_eth_dev *dev,
 	dpaa2_q->bp_array = rte_dpaa2_bpid_info;
 
 	/*Get the flow id from given VQ id*/
-	flow_id = rx_queue_id % priv->nb_rx_queues;
+	flow_id = dpaa2_q->flow_id;
 	memset(&cfg, 0, sizeof(struct dpni_queue));
 
 	options = options | DPNI_QUEUE_OPT_USER_CTX;
 	cfg.user_context = (size_t)(dpaa2_q);
 
+	/* check if a private cgr is available. */
+	for (i = 0; i < priv->max_cgs; i++) {
+		if (!priv->cgid_in_use[i]) {
+			priv->cgid_in_use[i] = 1;
+			break;
+		}
+	}
+
+	if (i < priv->max_cgs) {
+		options |= DPNI_QUEUE_OPT_SET_CGID;
+		cfg.cgid = i;
+		dpaa2_q->cgid = cfg.cgid;
+	} else {
+		dpaa2_q->cgid = 0xff;
+	}
+
 	/*if ls2088 or rev2 device, enable the stashing */
 
 	if ((dpaa2_svr_family & 0xffff0000) != SVR_LS2080A) {
@@ -579,15 +595,56 @@ dpaa2_dev_rx_queue_setup(struct rte_eth_dev *dev,
 		struct dpni_taildrop taildrop;
 
 		taildrop.enable = 1;
-		/*enabling per rx queue congestion control */
-		taildrop.threshold = CONG_THRESHOLD_RX_Q;
-		taildrop.units = DPNI_CONGESTION_UNIT_BYTES;
-		taildrop.oal = CONG_RX_OAL;
-		DPAA2_PMD_DEBUG("Enabling Early Drop on queue = %d",
-				rx_queue_id);
-		ret = dpni_set_taildrop(dpni, CMD_PRI_LOW, priv->token,
+
+		/* Private CGR will use tail drop length as nb_rx_desc.
+		 * For the remaining cases, use the standard byte-based
+		 * tail drop. There is no HW restriction, but the number
+		 * of CGRs is limited, hence this restriction is placed.
+		 */
+		if (dpaa2_q->cgid != 0xff) {
+			/*enabling per rx queue congestion control */
+			taildrop.threshold = nb_rx_desc;
+			taildrop.units = DPNI_CONGESTION_UNIT_FRAMES;
+			taildrop.oal = 0;
+			DPAA2_PMD_DEBUG("Enabling CG Tail Drop on queue = %d",
+					rx_queue_id);
+			ret = dpni_set_taildrop(dpni, CMD_PRI_LOW, priv->token,
+					DPNI_CP_CONGESTION_GROUP,
+					DPNI_QUEUE_RX,
+					dpaa2_q->tc_index,
+					flow_id, &taildrop);
+		} else {
+			/*enabling per rx queue congestion control */
+			taildrop.threshold = CONG_THRESHOLD_RX_BYTES_Q;
+			taildrop.units = DPNI_CONGESTION_UNIT_BYTES;
+			taildrop.oal = CONG_RX_OAL;
+			DPAA2_PMD_DEBUG("Enabling Byte based Drop on queue= %d",
+					rx_queue_id);
+			ret = dpni_set_taildrop(dpni, CMD_PRI_LOW, priv->token,
+					DPNI_CP_QUEUE, DPNI_QUEUE_RX,
+					dpaa2_q->tc_index, flow_id,
+					&taildrop);
+		}
+		if (ret) {
+			DPAA2_PMD_ERR("Error in setting taildrop. err=(%d)",
err=(%d)", + ret); + return -1; + } + } else { /* Disable tail Drop */ + struct dpni_taildrop taildrop = {0}; + DPAA2_PMD_INFO("Tail drop is disabled on queue"); + + taildrop.enable = 0; + if (dpaa2_q->cgid != 0xff) { + ret = dpni_set_taildrop(dpni, CMD_PRI_LOW, priv->token, + DPNI_CP_CONGESTION_GROUP, DPNI_QUEUE_RX, + dpaa2_q->tc_index, + flow_id, &taildrop); + } else { + ret = dpni_set_taildrop(dpni, CMD_PRI_LOW, priv->token, DPNI_CP_QUEUE, DPNI_QUEUE_RX, dpaa2_q->tc_index, flow_id, &taildrop); + } if (ret) { DPAA2_PMD_ERR("Error in setting taildrop. err=(%d)", ret); @@ -655,7 +712,7 @@ dpaa2_dev_tx_queue_setup(struct rte_eth_dev *dev, dpaa2_q->tc_index = tc_id; if (!(priv->flags & DPAA2_TX_CGR_OFF)) { - struct dpni_congestion_notification_cfg cong_notif_cfg; + struct dpni_congestion_notification_cfg cong_notif_cfg = {0}; cong_notif_cfg.units = DPNI_CONGESTION_UNIT_FRAMES; cong_notif_cfg.threshold_entry = CONG_ENTER_TX_THRESHOLD; @@ -693,7 +750,29 @@ dpaa2_dev_tx_queue_setup(struct rte_eth_dev *dev, static void dpaa2_dev_rx_queue_release(void *q __rte_unused) { + struct dpaa2_queue *dpaa2_q = (struct dpaa2_queue *)q; + struct dpaa2_dev_priv *priv = dpaa2_q->eth_data->dev_private; + struct fsl_mc_io *dpni = (struct fsl_mc_io *)priv->hw; + uint8_t options = 0; + int ret; + struct dpni_queue cfg; + + memset(&cfg, 0, sizeof(struct dpni_queue)); PMD_INIT_FUNC_TRACE(); + if (dpaa2_q->cgid != 0xff) { + options = DPNI_QUEUE_OPT_CLEAR_CGID; + cfg.cgid = dpaa2_q->cgid; + + ret = dpni_set_queue(dpni, CMD_PRI_LOW, priv->token, + DPNI_QUEUE_RX, + dpaa2_q->tc_index, dpaa2_q->flow_id, + options, &cfg); + if (ret) + DPAA2_PMD_ERR("Unable to clear CGR from q=%u err=%d", + dpaa2_q->fqid, ret); + priv->cgid_in_use[dpaa2_q->cgid] = 0; + dpaa2_q->cgid = 0xff; + } } static void @@ -2166,6 +2245,14 @@ dpaa2_dev_init(struct rte_eth_dev *eth_dev) } priv->num_rx_tc = attr.num_rx_tcs; + /* only if the custom CG is enabled */ + if (attr.options & DPNI_OPT_CUSTOM_CG) + priv->max_cgs = attr.num_cgs; + else + priv->max_cgs = 0; + + for (i = 0; i < priv->max_cgs; i++) + priv->cgid_in_use[i] = 0; for (i = 0; i < attr.num_rx_tcs; i++) priv->nb_rx_queues += attr.num_queues; @@ -2173,9 +2260,9 @@ dpaa2_dev_init(struct rte_eth_dev *eth_dev) /* Using number of TX queues as number of TX TCs */ priv->nb_tx_queues = attr.num_tx_tcs; - DPAA2_PMD_DEBUG("RX-TC= %d, nb_rx_queues= %d, nb_tx_queues=%d", + DPAA2_PMD_DEBUG("RX-TC= %d, rx_queues= %d, tx_queues=%d, max_cgs=%d", priv->num_rx_tc, priv->nb_rx_queues, - priv->nb_tx_queues); + priv->nb_tx_queues, priv->max_cgs); priv->hw = dpni_dev; priv->hw_id = hw_id; diff --git a/drivers/net/dpaa2/dpaa2_ethdev.h b/drivers/net/dpaa2/dpaa2_ethdev.h index a991ccc1d..2f14a3525 100644 --- a/drivers/net/dpaa2/dpaa2_ethdev.h +++ b/drivers/net/dpaa2/dpaa2_ethdev.h @@ -37,9 +37,9 @@ #define CONG_RETRY_COUNT 18000 /* RX queue tail drop threshold - * currently considering 32 KB packets + * currently considering 64 KB packets */ -#define CONG_THRESHOLD_RX_Q (64 * 1024) +#define CONG_THRESHOLD_RX_BYTES_Q (64 * 1024) #define CONG_RX_OAL 128 /* Size of the input SMMU mapped memory required by MC */ @@ -115,6 +115,8 @@ struct dpaa2_dev_priv { uint8_t flags; /*dpaa2 config flags */ uint8_t en_ordered; uint8_t en_loose_ordered; + uint8_t max_cgs; + uint8_t cgid_in_use[MAX_RX_QUEUES]; struct pattern_s { uint8_t item_count;