From patchwork Mon Mar 2 14:58:14 2020
X-Patchwork-Submitter: Hemant Agrawal
X-Patchwork-Id: 184073
From: Hemant Agrawal
To: ferruh.yigit@intel.com
Cc: dev@dpdk.org, stable@dpdk.org, Rohit Raj
Date: Mon, 2 Mar 2020 20:28:14 +0530
Message-Id: <20200302145829.27808-2-hemant.agrawal@nxp.com>
In-Reply-To: <20200302145829.27808-1-hemant.agrawal@nxp.com>
References: <20200302145829.27808-1-hemant.agrawal@nxp.com>
Subject: [dpdk-dev] [PATCH 01/16] net/dpaa2: fix 10g port negotiation issue
List-Id: DPDK patches and discussions

From: Rohit Raj

Fixed the 10G port negotiation issue with another 10G/non-10G port by
initializing the port link speed.

Fixes: c5acbb5ea20e ("net/dpaa2: support link status event")
Cc: stable@dpdk.org

Signed-off-by: Rohit Raj
---
 drivers/net/dpaa2/dpaa2_ethdev.c | 4 +---
 1 file changed, 1 insertion(+), 3 deletions(-)
--
2.17.1

diff --git a/drivers/net/dpaa2/dpaa2_ethdev.c b/drivers/net/dpaa2/dpaa2_ethdev.c
index 2cde55e7c..4fc550a88 100644
--- a/drivers/net/dpaa2/dpaa2_ethdev.c
+++ b/drivers/net/dpaa2/dpaa2_ethdev.c
@@ -553,9 +553,6 @@ dpaa2_eth_dev_configure(struct rte_eth_dev *dev)
 	if (rx_offloads & DEV_RX_OFFLOAD_VLAN_FILTER)
 		dpaa2_vlan_offload_set(dev, ETH_VLAN_FILTER_MASK);
 
-	/* update the current status */
-	dpaa2_dev_link_update(dev, 0);
-
 	return 0;
 }
 
@@ -1757,6 +1754,7 @@ dpaa2_dev_set_link_up(struct rte_eth_dev *dev)
 	/* changing tx burst function to start enqueues */
 	dev->tx_pkt_burst = dpaa2_dev_tx;
 	dev->data->dev_link.link_status = state.up;
+	dev->data->dev_link.link_speed = state.rate;
 
 	if (state.up)
 		DPAA2_PMD_INFO("Port %d Link is Up", dev->data->port_id);

From patchwork Mon Mar 2 14:58:15 2020
X-Patchwork-Submitter: Hemant Agrawal
X-Patchwork-Id: 184074
From: Hemant Agrawal
To: ferruh.yigit@intel.com
Cc: dev@dpdk.org, stable@dpdk.org, Apeksha Gupta
Date: Mon, 2 Mar 2020 20:28:15 +0530
Message-Id: <20200302145829.27808-3-hemant.agrawal@nxp.com>
In-Reply-To: <20200302145829.27808-1-hemant.agrawal@nxp.com>
References: <20200302145829.27808-1-hemant.agrawal@nxp.com>
Subject: [dpdk-dev] [PATCH 02/16] bus/fslmc: fix dereferencing null pointer

From: Apeksha Gupta

This patch fixes the null pointer dereferencing issue reported by the
NXP internal Coverity scan.

Fixes: 6fef517e17cf ("bus/fslmc: add qman HW fq query count API")
Cc: stable@dpdk.org

Signed-off-by: Apeksha Gupta
---
 drivers/bus/fslmc/qbman/qbman_debug.c | 9 +++++----
 1 file changed, 5 insertions(+), 4 deletions(-)
--
2.17.1

diff --git a/drivers/bus/fslmc/qbman/qbman_debug.c b/drivers/bus/fslmc/qbman/qbman_debug.c
index 0bb2ce880..34374ae4b 100644
--- a/drivers/bus/fslmc/qbman/qbman_debug.c
+++ b/drivers/bus/fslmc/qbman/qbman_debug.c
@@ -20,26 +20,27 @@
 struct qbman_fq_query_desc {
 	uint8_t verb;
 	uint8_t reserved[3];
 	uint32_t fqid;
-	uint8_t reserved2[57];
+	uint8_t reserved2[56];
 };
 
 int qbman_fq_query_state(struct qbman_swp *s, uint32_t fqid,
			 struct qbman_fq_query_np_rslt *r)
 {
 	struct qbman_fq_query_desc *p;
+	struct qbman_fq_query_np_rslt *var;
 
 	p = (struct qbman_fq_query_desc *)qbman_swp_mc_start(s);
 	if (!p)
 		return -EBUSY;
 
 	p->fqid = fqid;
-	*r = *(struct qbman_fq_query_np_rslt *)qbman_swp_mc_complete(s, p,
-						QBMAN_FQ_QUERY_NP);
-	if (!r) {
+	var = qbman_swp_mc_complete(s, p, QBMAN_FQ_QUERY_NP);
+	if (!var) {
 		pr_err("qbman: Query FQID %d NP fields failed, no response\n",
 		       fqid);
 		return -EIO;
 	}
+	*r = *var;
 
 	/* Decode the outcome */
 	QBMAN_BUG_ON((r->verb & QBMAN_RESPONSE_VERB_MASK) != QBMAN_FQ_QUERY_NP);

From patchwork Mon Mar 2 14:58:16 2020
X-Patchwork-Submitter: Hemant Agrawal
X-Patchwork-Id: 184075
From: Hemant Agrawal
To: ferruh.yigit@intel.com
Cc: dev@dpdk.org, Gagandeep Singh
Date: Mon, 2 Mar 2020 20:28:16 +0530
Message-Id: <20200302145829.27808-4-hemant.agrawal@nxp.com>
In-Reply-To: <20200302145829.27808-1-hemant.agrawal@nxp.com>
References: <20200302145829.27808-1-hemant.agrawal@nxp.com>
Subject: [dpdk-dev] [PATCH 03/16] bus/fslmc: combine thread specific variables

From: Gagandeep Singh

Combine the per-lcore DQRR held-buffer storage into the DPIO device
structure to reduce the thread local storage.

Signed-off-by: Gagandeep Singh
---
 drivers/bus/fslmc/fslmc_bus.c            |  2 --
 drivers/bus/fslmc/portal/dpaa2_hw_dpio.h |  7 +++++++
 drivers/bus/fslmc/portal/dpaa2_hw_pvt.h  |  8 ++++++++
 drivers/bus/fslmc/rte_fslmc.h            | 18 ------------------
 4 files changed, 15 insertions(+), 20 deletions(-)
--
2.17.1

diff --git a/drivers/bus/fslmc/fslmc_bus.c b/drivers/bus/fslmc/fslmc_bus.c
index b3e964aa9..1f657822f 100644
--- a/drivers/bus/fslmc/fslmc_bus.c
+++ b/drivers/bus/fslmc/fslmc_bus.c
@@ -37,8 +37,6 @@ rte_fslmc_get_device_count(enum rte_dpaa2_dev_type device_type)
 	return rte_fslmc_bus.device_count[device_type];
 }
 
-RTE_DEFINE_PER_LCORE(struct dpaa2_portal_dqrr, dpaa2_held_bufs);
-
 static void
 cleanup_fslmc_device_list(void)
 {
diff --git a/drivers/bus/fslmc/portal/dpaa2_hw_dpio.h b/drivers/bus/fslmc/portal/dpaa2_hw_dpio.h
index 2829c9380..9da4af782 100644
--- a/drivers/bus/fslmc/portal/dpaa2_hw_dpio.h
+++ b/drivers/bus/fslmc/portal/dpaa2_hw_dpio.h
@@ -28,6 +28,13 @@ RTE_DECLARE_PER_LCORE(struct dpaa2_io_portal_t, _dpaa2_io);
 #define DPAA2_PER_LCORE_ETHRX_DPIO RTE_PER_LCORE(_dpaa2_io).ethrx_dpio_dev
 #define DPAA2_PER_LCORE_ETHRX_PORTAL DPAA2_PER_LCORE_ETHRX_DPIO->sw_portal
 
+#define DPAA2_PER_LCORE_DQRR_SIZE \
+	RTE_PER_LCORE(_dpaa2_io).dpio_dev->dpaa2_held_bufs.dqrr_size
+#define DPAA2_PER_LCORE_DQRR_HELD \
+	RTE_PER_LCORE(_dpaa2_io).dpio_dev->dpaa2_held_bufs.dqrr_held
+#define DPAA2_PER_LCORE_DQRR_MBUF(i) \
+	RTE_PER_LCORE(_dpaa2_io).dpio_dev->dpaa2_held_bufs.mbuf[i]
+
 /* Variable to store DPAA2 DQRR size */
 extern uint8_t dpaa2_dqrr_size;
 /* Variable to store DPAA2 EQCR size */
diff --git a/drivers/bus/fslmc/portal/dpaa2_hw_pvt.h b/drivers/bus/fslmc/portal/dpaa2_hw_pvt.h
index ab2b213f8..bde1441f4 100644
--- a/drivers/bus/fslmc/portal/dpaa2_hw_pvt.h
+++ b/drivers/bus/fslmc/portal/dpaa2_hw_pvt.h
@@ -87,6 +87,13 @@ struct eqresp_metadata {
 	struct rte_mempool *mp;
 };
 
+#define DPAA2_PORTAL_DEQUEUE_DEPTH	32
+struct dpaa2_portal_dqrr {
+	struct rte_mbuf *mbuf[DPAA2_PORTAL_DEQUEUE_DEPTH];
+	uint64_t dqrr_held;
+	uint8_t dqrr_size;
+};
+
 struct dpaa2_dpio_dev {
 	TAILQ_ENTRY(dpaa2_dpio_dev) next;
 		/**< Pointer to Next device instance */
@@ -112,6 +119,7 @@ struct dpaa2_dpio_dev {
 	struct rte_intr_handle intr_handle; /* Interrupt related info */
 	int32_t	epoll_fd; /**< File descriptor created for interrupt polling */
 	int32_t hw_id; /**< An unique ID of this DPIO device instance */
+	struct dpaa2_portal_dqrr dpaa2_held_bufs;
 };
 
 struct dpaa2_dpbp_dev {
diff --git a/drivers/bus/fslmc/rte_fslmc.h b/drivers/bus/fslmc/rte_fslmc.h
index 96ba8dc25..a59f0077e 100644
--- a/drivers/bus/fslmc/rte_fslmc.h
+++ b/drivers/bus/fslmc/rte_fslmc.h
@@ -137,24 +137,6 @@ struct rte_fslmc_bus {
 		/**< Count of all devices scanned */
 };
 
-#define DPAA2_PORTAL_DEQUEUE_DEPTH	32
-
-/* Create storage for dqrr entries per lcore */
-struct dpaa2_portal_dqrr {
-	struct rte_mbuf *mbuf[DPAA2_PORTAL_DEQUEUE_DEPTH];
-	uint64_t dqrr_held;
-	uint8_t dqrr_size;
-};
-
-RTE_DECLARE_PER_LCORE(struct dpaa2_portal_dqrr, dpaa2_held_bufs);
-
-#define DPAA2_PER_LCORE_DQRR_SIZE \
-	RTE_PER_LCORE(dpaa2_held_bufs).dqrr_size
-#define DPAA2_PER_LCORE_DQRR_HELD \
-	RTE_PER_LCORE(dpaa2_held_bufs).dqrr_held
-#define DPAA2_PER_LCORE_DQRR_MBUF(i) \
-	RTE_PER_LCORE(dpaa2_held_bufs).mbuf[i]
-
 /**
  * Register a DPAA2 driver.
  *

From patchwork Mon Mar 2 14:58:17 2020
X-Patchwork-Submitter: Hemant Agrawal
X-Patchwork-Id: 184076
From: Hemant Agrawal
To: ferruh.yigit@intel.com
Cc: dev@dpdk.org, Nipun Gupta
Date: Mon, 2 Mar 2020 20:28:17 +0530
Message-Id: <20200302145829.27808-5-hemant.agrawal@nxp.com>
In-Reply-To: <20200302145829.27808-1-hemant.agrawal@nxp.com>
References: <20200302145829.27808-1-hemant.agrawal@nxp.com>
Subject: [dpdk-dev] [PATCH 04/16] bus/fslmc: rework portal allocation to a per thread basis
From: Nipun Gupta

This patch reworks the portal allocation, which was previously done on
a per-lcore basis, to a per-thread basis. Users can now also create
their own threads and use DPAA2 portals for packet I/O.

Signed-off-by: Nipun Gupta
---
 drivers/bus/fslmc/Makefile               |   1 +
 drivers/bus/fslmc/portal/dpaa2_hw_dpio.c | 210 ++++++++++++-----------
 drivers/bus/fslmc/portal/dpaa2_hw_dpio.h |   3 -
 3 files changed, 114 insertions(+), 100 deletions(-)
--
2.17.1

diff --git a/drivers/bus/fslmc/Makefile b/drivers/bus/fslmc/Makefile
index 6d2286088..b38305fb4 100644
--- a/drivers/bus/fslmc/Makefile
+++ b/drivers/bus/fslmc/Makefile
@@ -18,6 +18,7 @@
 CFLAGS += -I$(RTE_SDK)/drivers/bus/fslmc/mc
 CFLAGS += -I$(RTE_SDK)/drivers/bus/fslmc/qbman/include
 CFLAGS += -I$(RTE_SDK)/drivers/common/dpaax
 CFLAGS += -I$(RTE_SDK)/lib/librte_eal/common
+LDLIBS += -lpthread
 LDLIBS += -lrte_eal -lrte_mbuf -lrte_mempool -lrte_ring
 LDLIBS += -lrte_ethdev
 LDLIBS += -lrte_common_dpaax
diff --git a/drivers/bus/fslmc/portal/dpaa2_hw_dpio.c b/drivers/bus/fslmc/portal/dpaa2_hw_dpio.c
index 739ce434b..e765a382f 100644
--- a/drivers/bus/fslmc/portal/dpaa2_hw_dpio.c
+++ b/drivers/bus/fslmc/portal/dpaa2_hw_dpio.c
@@ -62,6 +62,9 @@
 uint8_t dpaa2_dqrr_size;
 /* Variable to store DPAA2 EQCR size */
 uint8_t dpaa2_eqcr_size;
 
+/* Variable to hold the portal_key, once created.*/
+static pthread_key_t dpaa2_portal_key;
+
 /*Stashing Macros default for LS208x*/
 static int dpaa2_core_cluster_base = 0x04;
 static int dpaa2_cluster_sz = 2;
@@ -87,6 +90,32 @@ static int dpaa2_cluster_sz = 2;
  * Cluster 4 (ID = x07) : CPU14, CPU15;
  */
 
+static int
+dpaa2_get_core_id(void)
+{
+	rte_cpuset_t cpuset;
+	int i, ret, cpu_id = -1;
+
+	ret = pthread_getaffinity_np(pthread_self(), sizeof(cpu_set_t),
+		&cpuset);
+	if (ret) {
+		DPAA2_BUS_ERR("pthread_getaffinity_np() failed");
+		return ret;
+	}
+
+	for (i = 0; i < RTE_MAX_LCORE; i++) {
+		if (CPU_ISSET(i, &cpuset)) {
+			if (cpu_id == -1)
+				cpu_id = i;
+			else
+				/* Multiple cpus are affined */
+				return -1;
+		}
+	}
+
+	return cpu_id;
+}
+
 static int
 dpaa2_core_cluster_sdest(int cpu_id)
 {
@@ -97,7 +126,7 @@ dpaa2_core_cluster_sdest(int cpu_id)
 
 #ifdef RTE_LIBRTE_PMD_DPAA2_EVENTDEV
 static void
-dpaa2_affine_dpio_intr_to_respective_core(int32_t dpio_id, int lcoreid)
+dpaa2_affine_dpio_intr_to_respective_core(int32_t dpio_id, int cpu_id)
 {
 #define STRING_LEN	28
 #define COMMAND_LEN	50
@@ -130,7 +159,7 @@ dpaa2_affine_dpio_intr_to_respective_core(int32_t dpio_id, int lcoreid)
 		return;
 	}
 
-	cpu_mask = cpu_mask << dpaa2_cpu[lcoreid];
+	cpu_mask = cpu_mask << dpaa2_cpu[cpu_id];
 	snprintf(command, COMMAND_LEN, "echo %X > /proc/irq/%s/smp_affinity",
 		 cpu_mask, token);
 	ret = system(command);
@@ -144,7 +173,7 @@ dpaa2_affine_dpio_intr_to_respective_core(int32_t dpio_id, int lcoreid)
 	fclose(file);
 }
 
-static int dpaa2_dpio_intr_init(struct dpaa2_dpio_dev *dpio_dev, int lcoreid)
+static int dpaa2_dpio_intr_init(struct dpaa2_dpio_dev *dpio_dev, int cpu_id)
 {
 	struct epoll_event epoll_ev;
 	int eventfd, dpio_epoll_fd, ret;
@@ -181,36 +210,42 @@ static int dpaa2_dpio_intr_init(struct dpaa2_dpio_dev *dpio_dev, int cpu_id)
 	}
 
 	dpio_dev->epoll_fd = dpio_epoll_fd;
 
-	dpaa2_affine_dpio_intr_to_respective_core(dpio_dev->hw_id, lcoreid);
+	dpaa2_affine_dpio_intr_to_respective_core(dpio_dev->hw_id, cpu_id);
 
 	return 0;
 }
+
+static void dpaa2_dpio_intr_deinit(struct dpaa2_dpio_dev *dpio_dev)
+{
+	int ret;
+
+	ret = rte_dpaa2_intr_disable(&dpio_dev->intr_handle, 0);
+	if (ret)
+		DPAA2_BUS_ERR("DPIO interrupt disable failed");
+
+	close(dpio_dev->epoll_fd);
+}
 #endif
 
 static int
-dpaa2_configure_stashing(struct dpaa2_dpio_dev *dpio_dev, int lcoreid)
+dpaa2_configure_stashing(struct dpaa2_dpio_dev *dpio_dev)
 {
 	int sdest, ret;
 	int cpu_id;
 
 	/* Set the Stashing Destination */
-	if (lcoreid < 0) {
-		lcoreid = rte_get_master_lcore();
-		if (lcoreid < 0) {
-			DPAA2_BUS_ERR("Getting CPU Index failed");
-			return -1;
-		}
+	cpu_id = dpaa2_get_core_id();
+	if (cpu_id < 0) {
+		DPAA2_BUS_ERR("Thread not affined to a single core");
+		return -1;
 	}
 
-	cpu_id = dpaa2_cpu[lcoreid];
-
 	/* Set the STASH Destination depending on Current CPU ID.
 	 * Valid values of SDEST are 4,5,6,7. Where,
 	 */
 	sdest = dpaa2_core_cluster_sdest(cpu_id);
-	DPAA2_BUS_DEBUG("Portal= %d  CPU= %u lcore id =%u SDEST= %d",
-			dpio_dev->index, cpu_id, lcoreid, sdest);
+	DPAA2_BUS_DEBUG("Portal= %d  CPU= %u SDEST= %d",
+			dpio_dev->index, cpu_id, sdest);
 
 	ret = dpio_set_stashing_destination(dpio_dev->dpio, CMD_PRI_LOW,
 					    dpio_dev->token, sdest);
@@ -220,7 +255,7 @@ dpaa2_configure_stashing(struct dpaa2_dpio_dev *dpio_dev)
 	}
 
 #ifdef RTE_LIBRTE_PMD_DPAA2_EVENTDEV
-	if (dpaa2_dpio_intr_init(dpio_dev, lcoreid)) {
+	if (dpaa2_dpio_intr_init(dpio_dev, cpu_id)) {
 		DPAA2_BUS_ERR("Interrupt registration failed for dpio");
 		return -1;
 	}
@@ -229,7 +264,7 @@ dpaa2_configure_stashing(struct dpaa2_dpio_dev *dpio_dev)
 	return 0;
 }
 
-static struct dpaa2_dpio_dev *dpaa2_get_qbman_swp(int lcoreid)
+static struct dpaa2_dpio_dev *dpaa2_get_qbman_swp(void)
 {
 	struct dpaa2_dpio_dev *dpio_dev = NULL;
 	int ret;
@@ -245,108 +280,74 @@ static struct dpaa2_dpio_dev *dpaa2_get_qbman_swp(void)
 	DPAA2_BUS_DEBUG("New Portal %p (%d) affined thread - %lu",
			dpio_dev, dpio_dev->index, syscall(SYS_gettid));
 
-	ret = dpaa2_configure_stashing(dpio_dev, lcoreid);
-	if (ret)
+	ret = dpaa2_configure_stashing(dpio_dev);
+	if (ret) {
 		DPAA2_BUS_ERR("dpaa2_configure_stashing failed");
+		return NULL;
+	}
 
 	return dpio_dev;
 }
 
+static void dpaa2_put_qbman_swp(struct dpaa2_dpio_dev *dpio_dev)
+{
+#ifdef RTE_LIBRTE_PMD_DPAA2_EVENTDEV
+	dpaa2_dpio_intr_deinit(dpio_dev);
+#endif
+	if (dpio_dev)
+		rte_atomic16_clear(&dpio_dev->ref_count);
+}
+
 int
 dpaa2_affine_qbman_swp(void)
 {
-	unsigned int lcore_id = rte_lcore_id();
+	struct dpaa2_dpio_dev *dpio_dev;
 	uint64_t tid = syscall(SYS_gettid);
 
-	if (lcore_id == LCORE_ID_ANY)
-		lcore_id = rte_get_master_lcore();
-	/* if the core id is not supported */
-	else if (lcore_id >= RTE_MAX_LCORE)
-		return -1;
-
-	if (dpaa2_io_portal[lcore_id].dpio_dev) {
-		DPAA2_BUS_DP_INFO("DPAA Portal=%p (%d) is being shared"
-			    " between thread %" PRIu64 " and current "
-			    "%" PRIu64 "\n",
-			    dpaa2_io_portal[lcore_id].dpio_dev,
-			    dpaa2_io_portal[lcore_id].dpio_dev->index,
-			    dpaa2_io_portal[lcore_id].net_tid,
-			    tid);
-		RTE_PER_LCORE(_dpaa2_io).dpio_dev
-			= dpaa2_io_portal[lcore_id].dpio_dev;
-		rte_atomic16_inc(&dpaa2_io_portal
-				 [lcore_id].dpio_dev->ref_count);
-		dpaa2_io_portal[lcore_id].net_tid = tid;
-
-		DPAA2_BUS_DP_DEBUG("Old Portal=%p (%d) affined thread - "
-			    "%" PRIu64 "\n",
-			    dpaa2_io_portal[lcore_id].dpio_dev,
-			    dpaa2_io_portal[lcore_id].dpio_dev->index,
-			    tid);
-		return 0;
-	}
-
-	/* Populate the dpaa2_io_portal structure */
-	dpaa2_io_portal[lcore_id].dpio_dev = dpaa2_get_qbman_swp(lcore_id);
-
-	if (dpaa2_io_portal[lcore_id].dpio_dev) {
-		RTE_PER_LCORE(_dpaa2_io).dpio_dev
-			= dpaa2_io_portal[lcore_id].dpio_dev;
-		dpaa2_io_portal[lcore_id].net_tid = tid;
+	if (!RTE_PER_LCORE(_dpaa2_io).dpio_dev) {
+		dpio_dev = dpaa2_get_qbman_swp();
+		if (!dpio_dev) {
+			DPAA2_BUS_ERR("No software portal resource left");
+			return -1;
+		}
+		RTE_PER_LCORE(_dpaa2_io).dpio_dev = dpio_dev;
 
-		return 0;
-	} else {
-		return -1;
+		DPAA2_BUS_INFO(
+			"DPAA Portal=%p (%d) is affined to thread %" PRIu64,
+			dpio_dev, dpio_dev->index, tid);
 	}
+	return 0;
 }
 
 int
 dpaa2_affine_qbman_ethrx_swp(void)
 {
-	unsigned int lcore_id = rte_lcore_id();
+	struct dpaa2_dpio_dev *dpio_dev;
 	uint64_t tid = syscall(SYS_gettid);
 
-	if (lcore_id == LCORE_ID_ANY)
-		lcore_id = rte_get_master_lcore();
-	/* if the core id is not supported */
-	else if (lcore_id >= RTE_MAX_LCORE)
-		return -1;
+	/* Populate the dpaa2_io_portal structure */
+	if (!RTE_PER_LCORE(_dpaa2_io).ethrx_dpio_dev) {
+		dpio_dev = dpaa2_get_qbman_swp();
+		if (!dpio_dev) {
+			DPAA2_BUS_ERR("No software portal resource left");
+			return -1;
+		}
+		RTE_PER_LCORE(_dpaa2_io).ethrx_dpio_dev = dpio_dev;
 
-	if (dpaa2_io_portal[lcore_id].ethrx_dpio_dev) {
-		DPAA2_BUS_DP_INFO(
-			"DPAA Portal=%p (%d) is being shared between thread"
-			" %" PRIu64 " and current %" PRIu64 "\n",
-			dpaa2_io_portal[lcore_id].ethrx_dpio_dev,
-			dpaa2_io_portal[lcore_id].ethrx_dpio_dev->index,
-			dpaa2_io_portal[lcore_id].sec_tid,
-			tid);
-		RTE_PER_LCORE(_dpaa2_io).ethrx_dpio_dev
-			= dpaa2_io_portal[lcore_id].ethrx_dpio_dev;
-		rte_atomic16_inc(&dpaa2_io_portal
-				 [lcore_id].ethrx_dpio_dev->ref_count);
-		dpaa2_io_portal[lcore_id].sec_tid = tid;
-
-		DPAA2_BUS_DP_DEBUG(
-			"Old Portal=%p (%d) affined thread"
-			" - %" PRIu64 "\n",
-			dpaa2_io_portal[lcore_id].ethrx_dpio_dev,
-			dpaa2_io_portal[lcore_id].ethrx_dpio_dev->index,
-			tid);
-		return 0;
+		DPAA2_BUS_INFO(
+			"DPAA Portal=%p (%d) is affined for eth rx to thread %"
+			PRIu64, dpio_dev, dpio_dev->index, tid);
 	}
+	return 0;
+}
 
-	/* Populate the dpaa2_io_portal structure */
-	dpaa2_io_portal[lcore_id].ethrx_dpio_dev =
-		dpaa2_get_qbman_swp(lcore_id);
-
-	if (dpaa2_io_portal[lcore_id].ethrx_dpio_dev) {
-		RTE_PER_LCORE(_dpaa2_io).ethrx_dpio_dev
-			= dpaa2_io_portal[lcore_id].ethrx_dpio_dev;
-		dpaa2_io_portal[lcore_id].sec_tid = tid;
-		return 0;
-	} else {
-		return -1;
-	}
+static void __attribute__((destructor(102))) dpaa2_portal_finish(void *arg)
+{
+	RTE_SET_USED(arg);
+
+	dpaa2_put_qbman_swp(RTE_PER_LCORE(_dpaa2_io).dpio_dev);
+	dpaa2_put_qbman_swp(RTE_PER_LCORE(_dpaa2_io).ethrx_dpio_dev);
 }
 
 /*
@@ -398,6 +399,7 @@ dpaa2_create_dpio_device(int vdev_fd,
 	struct vfio_region_info reg_info = { .argsz = sizeof(reg_info)};
 	struct qbman_swp_desc p_des;
 	struct dpio_attr attr;
+	int ret;
 	static int check_lcore_cpuset;
 
 	if (obj_info->num_regions < NUM_DPIO_REGIONS) {
@@ -547,12 +549,26 @@ dpaa2_create_dpio_device(int vdev_fd,
 
 	TAILQ_INSERT_TAIL(&dpio_dev_list, dpio_dev, next);
 
+	if (!dpaa2_portal_key) {
+		/* create the key, supplying a function that'll be invoked
+		 * when a portal affined thread will be deleted.
+		 */
+		ret = pthread_key_create(&dpaa2_portal_key,
+					 dpaa2_portal_finish);
+		if (ret) {
+			DPAA2_BUS_DEBUG("Unable to create pthread key (%d)",
+					ret);
+			goto err;
+		}
+	}
+
 	return 0;
 
 err:
 	if (dpio_dev->dpio) {
 		dpio_disable(dpio_dev->dpio, CMD_PRI_LOW, dpio_dev->token);
 		dpio_close(dpio_dev->dpio, CMD_PRI_LOW, dpio_dev->token);
+		rte_free(dpio_dev->eqresp);
 		rte_free(dpio_dev->dpio);
 	}
diff --git a/drivers/bus/fslmc/portal/dpaa2_hw_dpio.h b/drivers/bus/fslmc/portal/dpaa2_hw_dpio.h
index 9da4af782..8af0474a1 100644
--- a/drivers/bus/fslmc/portal/dpaa2_hw_dpio.h
+++ b/drivers/bus/fslmc/portal/dpaa2_hw_dpio.h
@@ -14,9 +14,6 @@
 struct dpaa2_io_portal_t {
 	struct dpaa2_dpio_dev *dpio_dev;
 	struct dpaa2_dpio_dev *ethrx_dpio_dev;
-	uint64_t net_tid;
-	uint64_t sec_tid;
-	void *eventdev;
 };
 
 /*! Global per thread DPIO portal */

From patchwork Mon Mar 2 14:58:18 2020
X-Patchwork-Submitter: Hemant Agrawal
X-Patchwork-Id: 184077
From: Hemant Agrawal
To: ferruh.yigit@intel.com
Cc: dev@dpdk.org, Nipun Gupta, Hemant Agrawal
Date: Mon, 2 Mar 2020 20:28:18 +0530
Message-Id: <20200302145829.27808-6-hemant.agrawal@nxp.com>
In-Reply-To: <20200302145829.27808-1-hemant.agrawal@nxp.com>
References: <20200302145829.27808-1-hemant.agrawal@nxp.com>
Subject: [dpdk-dev] [PATCH 05/16] bus/fslmc: support handle portal alloc failure

Add the error handling on failure.

Signed-off-by: Nipun Gupta
Signed-off-by: Hemant Agrawal
---
 drivers/bus/fslmc/portal/dpaa2_hw_dpio.c | 28 ++++++++++++++----------
 1 file changed, 16 insertions(+), 12 deletions(-)

-- 
2.17.1

diff --git a/drivers/bus/fslmc/portal/dpaa2_hw_dpio.c b/drivers/bus/fslmc/portal/dpaa2_hw_dpio.c
index e765a382f..1a1453ea3 100644
--- a/drivers/bus/fslmc/portal/dpaa2_hw_dpio.c
+++ b/drivers/bus/fslmc/portal/dpaa2_hw_dpio.c
@@ -264,6 +264,16 @@ dpaa2_configure_stashing(struct dpaa2_dpio_dev *dpio_dev)
 	return 0;
 }
 
+static void dpaa2_put_qbman_swp(struct dpaa2_dpio_dev *dpio_dev)
+{
+	if (dpio_dev) {
+#ifdef RTE_LIBRTE_PMD_DPAA2_EVENTDEV
+		dpaa2_dpio_intr_deinit(dpio_dev);
+#endif
+		rte_atomic16_clear(&dpio_dev->ref_count);
+	}
+}
+
 static struct dpaa2_dpio_dev *dpaa2_get_qbman_swp(void)
 {
 	struct dpaa2_dpio_dev *dpio_dev = NULL;
@@ -274,8 +284,10 @@ static struct dpaa2_dpio_dev *dpaa2_get_qbman_swp(void)
 		if (dpio_dev && rte_atomic16_test_and_set(&dpio_dev->ref_count))
 			break;
 	}
-	if (!dpio_dev)
+	if (!dpio_dev) {
+		DPAA2_BUS_ERR("No software portal resource left");
 		return NULL;
+	}
 
 	DPAA2_BUS_DEBUG("New Portal %p (%d) affined thread - %lu",
 			dpio_dev, dpio_dev->index,
			syscall(SYS_gettid));
 
@@ -283,21 +295,13 @@ static struct dpaa2_dpio_dev *dpaa2_get_qbman_swp(void)
 	ret = dpaa2_configure_stashing(dpio_dev);
 	if (ret) {
 		DPAA2_BUS_ERR("dpaa2_configure_stashing failed");
+		rte_atomic16_clear(&dpio_dev->ref_count);
 		return NULL;
 	}
 
 	return dpio_dev;
 }
 
-static void dpaa2_put_qbman_swp(struct dpaa2_dpio_dev *dpio_dev)
-{
-#ifdef RTE_LIBRTE_PMD_DPAA2_EVENTDEV
-	dpaa2_dpio_intr_deinit(dpio_dev);
-#endif
-	if (dpio_dev)
-		rte_atomic16_clear(&dpio_dev->ref_count);
-}
-
 int
 dpaa2_affine_qbman_swp(void)
 {
@@ -308,7 +312,7 @@ dpaa2_affine_qbman_swp(void)
 	if (!RTE_PER_LCORE(_dpaa2_io).dpio_dev) {
 		dpio_dev = dpaa2_get_qbman_swp();
 		if (!dpio_dev) {
-			DPAA2_BUS_ERR("No software portal resource left");
+			DPAA2_BUS_ERR("Error in software portal allocation");
 			return -1;
 		}
 		RTE_PER_LCORE(_dpaa2_io).dpio_dev = dpio_dev;
@@ -330,7 +334,7 @@ dpaa2_affine_qbman_ethrx_swp(void)
 	if (!RTE_PER_LCORE(_dpaa2_io).ethrx_dpio_dev) {
 		dpio_dev = dpaa2_get_qbman_swp();
 		if (!dpio_dev) {
-			DPAA2_BUS_ERR("No software portal resource left");
+			DPAA2_BUS_ERR("Error in software portal allocation");
 			return -1;
 		}
 		RTE_PER_LCORE(_dpaa2_io).ethrx_dpio_dev = dpio_dev;

From patchwork Mon Mar 2 14:58:19 2020
X-Patchwork-Submitter: Hemant Agrawal
X-Patchwork-Id: 184078
From: Hemant Agrawal
To: ferruh.yigit@intel.com
Cc: dev@dpdk.org, Nipun Gupta
Date: Mon, 2 Mar 2020 20:28:19 +0530
Message-Id: <20200302145829.27808-7-hemant.agrawal@nxp.com>
In-Reply-To: <20200302145829.27808-1-hemant.agrawal@nxp.com>
References: <20200302145829.27808-1-hemant.agrawal@nxp.com>
Subject: [dpdk-dev] [PATCH 06/16] bus/fslmc: limit pthread destructor called for dpaa2 only
From: Nipun Gupta

The destructor was being called for non-dpaa2 as well.

Signed-off-by: Nipun Gupta
---
 drivers/bus/fslmc/portal/dpaa2_hw_dpio.c | 11 ++++++++++-
 1 file changed, 10 insertions(+), 1 deletion(-)

-- 
2.17.1

diff --git a/drivers/bus/fslmc/portal/dpaa2_hw_dpio.c b/drivers/bus/fslmc/portal/dpaa2_hw_dpio.c
index 1a1453ea3..054d45306 100644
--- a/drivers/bus/fslmc/portal/dpaa2_hw_dpio.c
+++ b/drivers/bus/fslmc/portal/dpaa2_hw_dpio.c
@@ -299,6 +299,13 @@ static struct dpaa2_dpio_dev *dpaa2_get_qbman_swp(void)
 		return NULL;
 	}
 
+	ret = pthread_setspecific(dpaa2_portal_key, (void *)dpio_dev);
+	if (ret) {
+		DPAA2_BUS_ERR("pthread_setspecific failed with ret: %d", ret);
+		dpaa2_put_qbman_swp(dpio_dev);
+		return NULL;
+	}
+
 	return dpio_dev;
 }
 
@@ -346,12 +353,14 @@ dpaa2_affine_qbman_ethrx_swp(void)
 	return 0;
 }
 
-static void __attribute__((destructor(102))) dpaa2_portal_finish(void *arg)
+static void dpaa2_portal_finish(void *arg)
 {
 	RTE_SET_USED(arg);
 
 	dpaa2_put_qbman_swp(RTE_PER_LCORE(_dpaa2_io).dpio_dev);
 	dpaa2_put_qbman_swp(RTE_PER_LCORE(_dpaa2_io).ethrx_dpio_dev);
+
+	pthread_setspecific(dpaa2_portal_key, NULL);
 }
 
 /*

From patchwork Mon Mar 2 14:58:20 2020
X-Patchwork-Submitter: Hemant Agrawal
X-Patchwork-Id: 184079
From: Hemant Agrawal
To: ferruh.yigit@intel.com
Cc: dev@dpdk.org, Nipun Gupta
Date: Mon, 2 Mar 2020 20:28:20 +0530
Message-Id: <20200302145829.27808-8-hemant.agrawal@nxp.com>
In-Reply-To: <20200302145829.27808-1-hemant.agrawal@nxp.com>
References: <20200302145829.27808-1-hemant.agrawal@nxp.com>
Subject: [dpdk-dev] [PATCH 07/16] bus/fslmc: support portal migration

From: Nipun Gupta

The
patch adds support for portal migration by disabling stashing for the
portals which are used in non-affined threads, or in threads affined to
multiple cores.

Signed-off-by: Nipun Gupta
---
 drivers/bus/fslmc/portal/dpaa2_hw_dpio.c      |  83 +--
 .../fslmc/qbman/include/fsl_qbman_portal.h    |   8 +-
 drivers/bus/fslmc/qbman/qbman_portal.c        | 554 +++++++++++++++++-
 drivers/bus/fslmc/qbman/qbman_portal.h        |  19 +-
 drivers/bus/fslmc/qbman/qbman_sys.h           | 135 ++++-
 5 files changed, 717 insertions(+), 82 deletions(-)

-- 
2.17.1

diff --git a/drivers/bus/fslmc/portal/dpaa2_hw_dpio.c b/drivers/bus/fslmc/portal/dpaa2_hw_dpio.c
index 054d45306..2102d2981 100644
--- a/drivers/bus/fslmc/portal/dpaa2_hw_dpio.c
+++ b/drivers/bus/fslmc/portal/dpaa2_hw_dpio.c
@@ -53,10 +53,6 @@ static uint32_t io_space_count;
 /* Variable to store DPAA2 platform type */
 uint32_t dpaa2_svr_family;
 
-/* Physical core id for lcores running on dpaa2. */
-/* DPAA2 only support 1 lcore to 1 phy cpu mapping */
-static unsigned int dpaa2_cpu[RTE_MAX_LCORE];
-
 /* Variable to store DPAA2 DQRR size */
 uint8_t dpaa2_dqrr_size;
 /* Variable to store DPAA2 EQCR size */
@@ -159,7 +155,7 @@ dpaa2_affine_dpio_intr_to_respective_core(int32_t dpio_id, int cpu_id)
 		return;
 	}
 
-	cpu_mask = cpu_mask << dpaa2_cpu[cpu_id];
+	cpu_mask = cpu_mask << cpu_id;
 	snprintf(command, COMMAND_LEN, "echo %X > /proc/irq/%s/smp_affinity",
 		 cpu_mask, token);
 	ret = system(command);
@@ -228,17 +224,9 @@ static void dpaa2_dpio_intr_deinit(struct dpaa2_dpio_dev *dpio_dev)
 #endif
 
 static int
-dpaa2_configure_stashing(struct dpaa2_dpio_dev *dpio_dev)
+dpaa2_configure_stashing(struct dpaa2_dpio_dev *dpio_dev, int cpu_id)
 {
 	int sdest, ret;
-	int cpu_id;
-
-	/* Set the Stashing Destination */
-	cpu_id = dpaa2_get_core_id();
-	if (cpu_id < 0) {
-		DPAA2_BUS_ERR("Thread not affined to a single core");
-		return -1;
-	}
 
 	/* Set the STASH Destination depending on Current CPU ID.
 	 * Valid values of SDEST are 4,5,6,7.
Where, @@ -277,6 +265,7 @@ static void dpaa2_put_qbman_swp(struct dpaa2_dpio_dev *dpio_dev) static struct dpaa2_dpio_dev *dpaa2_get_qbman_swp(void) { struct dpaa2_dpio_dev *dpio_dev = NULL; + int cpu_id; int ret; /* Get DPIO dev handle from list using index */ @@ -292,11 +281,19 @@ static struct dpaa2_dpio_dev *dpaa2_get_qbman_swp(void) DPAA2_BUS_DEBUG("New Portal %p (%d) affined thread - %lu", dpio_dev, dpio_dev->index, syscall(SYS_gettid)); - ret = dpaa2_configure_stashing(dpio_dev); - if (ret) { - DPAA2_BUS_ERR("dpaa2_configure_stashing failed"); - rte_atomic16_clear(&dpio_dev->ref_count); - return NULL; + /* Set the Stashing Destination */ + cpu_id = dpaa2_get_core_id(); + if (cpu_id < 0) { + DPAA2_BUS_WARN("Thread not affined to a single core"); + if (dpaa2_svr_family != SVR_LX2160A) + qbman_swp_update(dpio_dev->sw_portal, 1); + } else { + ret = dpaa2_configure_stashing(dpio_dev, cpu_id); + if (ret) { + DPAA2_BUS_ERR("dpaa2_configure_stashing failed"); + rte_atomic16_clear(&dpio_dev->ref_count); + return NULL; + } } ret = pthread_setspecific(dpaa2_portal_key, (void *)dpio_dev); @@ -363,46 +360,6 @@ static void dpaa2_portal_finish(void *arg) pthread_setspecific(dpaa2_portal_key, NULL); } -/* - * This checks for not supported lcore mappings as well as get the physical - * cpuid for the lcore. - * one lcore can only map to 1 cpu i.e. 1@10-14 not supported. - * one cpu can be mapped to more than one lcores. 
- */ -static int -dpaa2_check_lcore_cpuset(void) -{ - unsigned int lcore_id, i; - int ret = 0; - - for (lcore_id = 0; lcore_id < RTE_MAX_LCORE; lcore_id++) - dpaa2_cpu[lcore_id] = 0xffffffff; - - for (lcore_id = 0; lcore_id < RTE_MAX_LCORE; lcore_id++) { - rte_cpuset_t cpuset = rte_lcore_cpuset(lcore_id); - - for (i = 0; i < CPU_SETSIZE; i++) { - if (!CPU_ISSET(i, &cpuset)) - continue; - if (i >= RTE_MAX_LCORE) { - DPAA2_BUS_ERR("ERR:lcore map to core %u (>= %u) not supported", - i, RTE_MAX_LCORE); - ret = -1; - continue; - } - RTE_LOG(DEBUG, EAL, "lcore id = %u cpu=%u\n", - lcore_id, i); - if (dpaa2_cpu[lcore_id] != 0xffffffff) { - DPAA2_BUS_ERR("ERR:lcore map to multi-cpu not supported"); - ret = -1; - continue; - } - dpaa2_cpu[lcore_id] = i; - } - } - return ret; -} - static int dpaa2_create_dpio_device(int vdev_fd, struct vfio_device_info *obj_info, @@ -413,7 +370,6 @@ dpaa2_create_dpio_device(int vdev_fd, struct qbman_swp_desc p_des; struct dpio_attr attr; int ret; - static int check_lcore_cpuset; if (obj_info->num_regions < NUM_DPIO_REGIONS) { DPAA2_BUS_ERR("Not sufficient number of DPIO regions"); @@ -433,13 +389,6 @@ dpaa2_create_dpio_device(int vdev_fd, /* Using single portal for all devices */ dpio_dev->mc_portal = rte_mcp_ptr_list[MC_PORTAL_INDEX]; - if (!check_lcore_cpuset) { - check_lcore_cpuset = 1; - - if (dpaa2_check_lcore_cpuset() < 0) - goto err; - } - dpio_dev->dpio = rte_zmalloc(NULL, sizeof(struct fsl_mc_io), RTE_CACHE_LINE_SIZE); if (!dpio_dev->dpio) { diff --git a/drivers/bus/fslmc/qbman/include/fsl_qbman_portal.h b/drivers/bus/fslmc/qbman/include/fsl_qbman_portal.h index 88f0a9968..0d6364d99 100644 --- a/drivers/bus/fslmc/qbman/include/fsl_qbman_portal.h +++ b/drivers/bus/fslmc/qbman/include/fsl_qbman_portal.h @@ -1,7 +1,7 @@ /* SPDX-License-Identifier: BSD-3-Clause * * Copyright (C) 2014 Freescale Semiconductor, Inc. 
- * Copyright 2015-2019 NXP + * Copyright 2015-2020 NXP * */ #ifndef _FSL_QBMAN_PORTAL_H @@ -43,6 +43,12 @@ extern uint32_t dpaa2_svr_family; */ struct qbman_swp *qbman_swp_init(const struct qbman_swp_desc *d); +/** + * qbman_swp_update() - Update portal cacheability attributes. + * @p: the given qbman swp portal + */ +int qbman_swp_update(struct qbman_swp *p, int stash_off); + /** * qbman_swp_finish() - Create and destroy a functional object representing * the given QBMan portal descriptor. diff --git a/drivers/bus/fslmc/qbman/qbman_portal.c b/drivers/bus/fslmc/qbman/qbman_portal.c index d4223bdc8..a06b88dd2 100644 --- a/drivers/bus/fslmc/qbman/qbman_portal.c +++ b/drivers/bus/fslmc/qbman/qbman_portal.c @@ -1,7 +1,7 @@ /* SPDX-License-Identifier: BSD-3-Clause * * Copyright (C) 2014-2016 Freescale Semiconductor, Inc. - * Copyright 2018-2019 NXP + * Copyright 2018-2020 NXP * */ @@ -82,6 +82,10 @@ qbman_swp_enqueue_ring_mode_cinh_direct(struct qbman_swp *s, const struct qbman_eq_desc *d, const struct qbman_fd *fd); static int +qbman_swp_enqueue_ring_mode_cinh_direct(struct qbman_swp *s, + const struct qbman_eq_desc *d, + const struct qbman_fd *fd); +static int qbman_swp_enqueue_ring_mode_mem_back(struct qbman_swp *s, const struct qbman_eq_desc *d, const struct qbman_fd *fd); @@ -99,6 +103,12 @@ qbman_swp_enqueue_multiple_cinh_direct(struct qbman_swp *s, uint32_t *flags, int num_frames); static int +qbman_swp_enqueue_multiple_cinh_direct(struct qbman_swp *s, + const struct qbman_eq_desc *d, + const struct qbman_fd *fd, + uint32_t *flags, + int num_frames); +static int qbman_swp_enqueue_multiple_mem_back(struct qbman_swp *s, const struct qbman_eq_desc *d, const struct qbman_fd *fd, @@ -118,6 +128,12 @@ qbman_swp_enqueue_multiple_fd_cinh_direct(struct qbman_swp *s, uint32_t *flags, int num_frames); static int +qbman_swp_enqueue_multiple_fd_cinh_direct(struct qbman_swp *s, + const struct qbman_eq_desc *d, + struct qbman_fd **fd, + uint32_t *flags, + int num_frames); 
+static int qbman_swp_enqueue_multiple_fd_mem_back(struct qbman_swp *s, const struct qbman_eq_desc *d, struct qbman_fd **fd, @@ -135,6 +151,11 @@ qbman_swp_enqueue_multiple_desc_cinh_direct(struct qbman_swp *s, const struct qbman_fd *fd, int num_frames); static int +qbman_swp_enqueue_multiple_desc_cinh_direct(struct qbman_swp *s, + const struct qbman_eq_desc *d, + const struct qbman_fd *fd, + int num_frames); +static int qbman_swp_enqueue_multiple_desc_mem_back(struct qbman_swp *s, const struct qbman_eq_desc *d, const struct qbman_fd *fd, @@ -143,9 +164,12 @@ qbman_swp_enqueue_multiple_desc_mem_back(struct qbman_swp *s, static int qbman_swp_pull_direct(struct qbman_swp *s, struct qbman_pull_desc *d); static int +qbman_swp_pull_cinh_direct(struct qbman_swp *s, struct qbman_pull_desc *d); +static int qbman_swp_pull_mem_back(struct qbman_swp *s, struct qbman_pull_desc *d); const struct qbman_result *qbman_swp_dqrr_next_direct(struct qbman_swp *s); +const struct qbman_result *qbman_swp_dqrr_next_cinh_direct(struct qbman_swp *s); const struct qbman_result *qbman_swp_dqrr_next_mem_back(struct qbman_swp *s); static int @@ -153,6 +177,10 @@ qbman_swp_release_direct(struct qbman_swp *s, const struct qbman_release_desc *d, const uint64_t *buffers, unsigned int num_buffers); static int +qbman_swp_release_cinh_direct(struct qbman_swp *s, + const struct qbman_release_desc *d, + const uint64_t *buffers, unsigned int num_buffers); +static int qbman_swp_release_mem_back(struct qbman_swp *s, const struct qbman_release_desc *d, const uint64_t *buffers, unsigned int num_buffers); @@ -327,6 +355,28 @@ struct qbman_swp *qbman_swp_init(const struct qbman_swp_desc *d) return p; } +int qbman_swp_update(struct qbman_swp *p, int stash_off) +{ + const struct qbman_swp_desc *d = &p->desc; + struct qbman_swp_sys *s = &p->sys; + int ret; + + /* Nothing needs to be done for QBMAN rev > 5000 with fast access */ + if ((qman_version & QMAN_REV_MASK) >= QMAN_REV_5000 + && (d->cena_access_mode == 
qman_cena_fastest_access)) + return 0; + + ret = qbman_swp_sys_update(s, d, p->dqrr.dqrr_size, stash_off); + if (ret) { + pr_err("qbman_swp_sys_init() failed %d\n", ret); + return ret; + } + + p->stash_off = stash_off; + + return 0; +} + void qbman_swp_finish(struct qbman_swp *p) { #ifdef QBMAN_CHECKING @@ -462,6 +512,27 @@ void qbman_swp_mc_submit(struct qbman_swp *p, void *cmd, uint8_t cmd_verb) #endif } +void qbman_swp_mc_submit_cinh(struct qbman_swp *p, void *cmd, uint8_t cmd_verb) +{ + uint8_t *v = cmd; +#ifdef QBMAN_CHECKING + QBMAN_BUG_ON(!(p->mc.check != swp_mc_can_submit)); +#endif + /* TBD: "|=" is going to hurt performance. Need to move as many fields + * out of word zero, and for those that remain, the "OR" needs to occur + * at the caller side. This debug check helps to catch cases where the + * caller wants to OR but has forgotten to do so. + */ + QBMAN_BUG_ON((*v & cmd_verb) != *v); + dma_wmb(); + *v = cmd_verb | p->mc.valid_bit; + qbman_cinh_write_complete(&p->sys, QBMAN_CENA_SWP_CR, cmd); + clean(cmd); +#ifdef QBMAN_CHECKING + p->mc.check = swp_mc_can_poll; +#endif +} + void *qbman_swp_mc_result(struct qbman_swp *p) { uint32_t *ret, verb; @@ -500,6 +571,27 @@ void *qbman_swp_mc_result(struct qbman_swp *p) return ret; } +void *qbman_swp_mc_result_cinh(struct qbman_swp *p) +{ + uint32_t *ret, verb; +#ifdef QBMAN_CHECKING + QBMAN_BUG_ON(p->mc.check != swp_mc_can_poll); +#endif + ret = qbman_cinh_read_shadow(&p->sys, + QBMAN_CENA_SWP_RR(p->mc.valid_bit)); + /* Remove the valid-bit - + * command completed iff the rest is non-zero + */ + verb = ret[0] & ~QB_VALID_BIT; + if (!verb) + return NULL; + p->mc.valid_bit ^= QB_VALID_BIT; +#ifdef QBMAN_CHECKING + p->mc.check = swp_mc_can_start; +#endif + return ret; +} + /***********/ /* Enqueue */ /***********/ @@ -640,6 +732,16 @@ static inline void qbman_write_eqcr_am_rt_register(struct qbman_swp *p, QMAN_RT_MODE); } +static void memcpy_byte_by_byte(void *to, const void *from, size_t n) +{ + const uint8_t *src 
= from; + volatile uint8_t *dest = to; + size_t i; + + for (i = 0; i < n; i++) + dest[i] = src[i]; +} + static int qbman_swp_enqueue_array_mode_direct(struct qbman_swp *s, const struct qbman_eq_desc *d, @@ -754,7 +856,7 @@ static int qbman_swp_enqueue_ring_mode_cinh_direct( return -EBUSY; } - p = qbman_cena_write_start_wo_shadow(&s->sys, + p = qbman_cinh_write_start_wo_shadow(&s->sys, QBMAN_CENA_SWP_EQCR(s->eqcr.pi & half_mask)); memcpy(&p[1], &cl[1], 28); memcpy(&p[8], fd, sizeof(*fd)); @@ -762,8 +864,44 @@ static int qbman_swp_enqueue_ring_mode_cinh_direct( /* Set the verb byte, have to substitute in the valid-bit */ p[0] = cl[0] | s->eqcr.pi_vb; - qbman_cena_write_complete_wo_shadow(&s->sys, + s->eqcr.pi++; + s->eqcr.pi &= full_mask; + s->eqcr.available--; + if (!(s->eqcr.pi & half_mask)) + s->eqcr.pi_vb ^= QB_VALID_BIT; + + return 0; +} + +static int qbman_swp_enqueue_ring_mode_cinh_direct( + struct qbman_swp *s, + const struct qbman_eq_desc *d, + const struct qbman_fd *fd) +{ + uint32_t *p; + const uint32_t *cl = qb_cl(d); + uint32_t eqcr_ci, full_mask, half_mask; + + half_mask = (s->eqcr.pi_ci_mask>>1); + full_mask = s->eqcr.pi_ci_mask; + if (!s->eqcr.available) { + eqcr_ci = s->eqcr.ci; + s->eqcr.ci = qbman_cinh_read(&s->sys, + QBMAN_CINH_SWP_EQCR_CI) & full_mask; + s->eqcr.available = qm_cyc_diff(s->eqcr.pi_ring_size, + eqcr_ci, s->eqcr.ci); + if (!s->eqcr.available) + return -EBUSY; + } + + p = qbman_cinh_write_start_wo_shadow(&s->sys, QBMAN_CENA_SWP_EQCR(s->eqcr.pi & half_mask)); + memcpy_byte_by_byte(&p[1], &cl[1], 28); + memcpy_byte_by_byte(&p[8], fd, sizeof(*fd)); + lwsync(); + + /* Set the verb byte, have to substitute in the valid-bit */ + p[0] = cl[0] | s->eqcr.pi_vb; s->eqcr.pi++; s->eqcr.pi &= full_mask; s->eqcr.available--; @@ -815,7 +953,10 @@ static int qbman_swp_enqueue_ring_mode(struct qbman_swp *s, const struct qbman_eq_desc *d, const struct qbman_fd *fd) { - return qbman_swp_enqueue_ring_mode_ptr(s, d, fd); + if (!s->stash_off) + return 
qbman_swp_enqueue_ring_mode_ptr(s, d, fd); + else + return qbman_swp_enqueue_ring_mode_cinh_direct(s, d, fd); } int qbman_swp_enqueue(struct qbman_swp *s, const struct qbman_eq_desc *d, @@ -966,6 +1107,67 @@ static int qbman_swp_enqueue_multiple_cinh_direct( return num_enqueued; } +static int qbman_swp_enqueue_multiple_cinh_direct( + struct qbman_swp *s, + const struct qbman_eq_desc *d, + const struct qbman_fd *fd, + uint32_t *flags, + int num_frames) +{ + uint32_t *p = NULL; + const uint32_t *cl = qb_cl(d); + uint32_t eqcr_ci, eqcr_pi, half_mask, full_mask; + int i, num_enqueued = 0; + + half_mask = (s->eqcr.pi_ci_mask>>1); + full_mask = s->eqcr.pi_ci_mask; + if (!s->eqcr.available) { + eqcr_ci = s->eqcr.ci; + s->eqcr.ci = qbman_cinh_read(&s->sys, + QBMAN_CINH_SWP_EQCR_CI) & full_mask; + s->eqcr.available = qm_cyc_diff(s->eqcr.pi_ring_size, + eqcr_ci, s->eqcr.ci); + if (!s->eqcr.available) + return 0; + } + + eqcr_pi = s->eqcr.pi; + num_enqueued = (s->eqcr.available < num_frames) ? + s->eqcr.available : num_frames; + s->eqcr.available -= num_enqueued; + /* Fill in the EQCR ring */ + for (i = 0; i < num_enqueued; i++) { + p = qbman_cinh_write_start_wo_shadow(&s->sys, + QBMAN_CENA_SWP_EQCR(eqcr_pi & half_mask)); + memcpy_byte_by_byte(&p[1], &cl[1], 28); + memcpy_byte_by_byte(&p[8], &fd[i], sizeof(*fd)); + eqcr_pi++; + } + + lwsync(); + + /* Set the verb byte, have to substitute in the valid-bit */ + eqcr_pi = s->eqcr.pi; + for (i = 0; i < num_enqueued; i++) { + p = qbman_cinh_write_start_wo_shadow(&s->sys, + QBMAN_CENA_SWP_EQCR(eqcr_pi & half_mask)); + p[0] = cl[0] | s->eqcr.pi_vb; + if (flags && (flags[i] & QBMAN_ENQUEUE_FLAG_DCA)) { + struct qbman_eq_desc *d = (struct qbman_eq_desc *)p; + + d->eq.dca = (1 << QB_ENQUEUE_CMD_DCA_EN_SHIFT) | + ((flags[i]) & QBMAN_EQCR_DCA_IDXMASK); + } + eqcr_pi++; + if (!(eqcr_pi & half_mask)) + s->eqcr.pi_vb ^= QB_VALID_BIT; + } + + s->eqcr.pi = eqcr_pi & full_mask; + + return num_enqueued; +} + static int 
qbman_swp_enqueue_multiple_mem_back(struct qbman_swp *s, const struct qbman_eq_desc *d, const struct qbman_fd *fd, @@ -1025,7 +1227,12 @@ inline int qbman_swp_enqueue_multiple(struct qbman_swp *s, uint32_t *flags, int num_frames) { - return qbman_swp_enqueue_multiple_ptr(s, d, fd, flags, num_frames); + if (!s->stash_off) + return qbman_swp_enqueue_multiple_ptr(s, d, fd, flags, + num_frames); + else + return qbman_swp_enqueue_multiple_cinh_direct(s, d, fd, flags, + num_frames); } static int qbman_swp_enqueue_multiple_fd_direct(struct qbman_swp *s, @@ -1167,6 +1374,67 @@ static int qbman_swp_enqueue_multiple_fd_cinh_direct( return num_enqueued; } +static int qbman_swp_enqueue_multiple_fd_cinh_direct( + struct qbman_swp *s, + const struct qbman_eq_desc *d, + struct qbman_fd **fd, + uint32_t *flags, + int num_frames) +{ + uint32_t *p = NULL; + const uint32_t *cl = qb_cl(d); + uint32_t eqcr_ci, eqcr_pi, half_mask, full_mask; + int i, num_enqueued = 0; + + half_mask = (s->eqcr.pi_ci_mask>>1); + full_mask = s->eqcr.pi_ci_mask; + if (!s->eqcr.available) { + eqcr_ci = s->eqcr.ci; + s->eqcr.ci = qbman_cinh_read(&s->sys, + QBMAN_CINH_SWP_EQCR_CI) & full_mask; + s->eqcr.available = qm_cyc_diff(s->eqcr.pi_ring_size, + eqcr_ci, s->eqcr.ci); + if (!s->eqcr.available) + return 0; + } + + eqcr_pi = s->eqcr.pi; + num_enqueued = (s->eqcr.available < num_frames) ? 
+ s->eqcr.available : num_frames; + s->eqcr.available -= num_enqueued; + /* Fill in the EQCR ring */ + for (i = 0; i < num_enqueued; i++) { + p = qbman_cinh_write_start_wo_shadow(&s->sys, + QBMAN_CENA_SWP_EQCR(eqcr_pi & half_mask)); + memcpy_byte_by_byte(&p[1], &cl[1], 28); + memcpy_byte_by_byte(&p[8], fd[i], sizeof(struct qbman_fd)); + eqcr_pi++; + } + + lwsync(); + + /* Set the verb byte, have to substitute in the valid-bit */ + eqcr_pi = s->eqcr.pi; + for (i = 0; i < num_enqueued; i++) { + p = qbman_cinh_write_start_wo_shadow(&s->sys, + QBMAN_CENA_SWP_EQCR(eqcr_pi & half_mask)); + p[0] = cl[0] | s->eqcr.pi_vb; + if (flags && (flags[i] & QBMAN_ENQUEUE_FLAG_DCA)) { + struct qbman_eq_desc *d = (struct qbman_eq_desc *)p; + + d->eq.dca = (1 << QB_ENQUEUE_CMD_DCA_EN_SHIFT) | + ((flags[i]) & QBMAN_EQCR_DCA_IDXMASK); + } + eqcr_pi++; + if (!(eqcr_pi & half_mask)) + s->eqcr.pi_vb ^= QB_VALID_BIT; + } + + s->eqcr.pi = eqcr_pi & full_mask; + + return num_enqueued; +} + static int qbman_swp_enqueue_multiple_fd_mem_back(struct qbman_swp *s, const struct qbman_eq_desc *d, struct qbman_fd **fd, @@ -1233,7 +1501,12 @@ inline int qbman_swp_enqueue_multiple_fd(struct qbman_swp *s, uint32_t *flags, int num_frames) { - return qbman_swp_enqueue_multiple_fd_ptr(s, d, fd, flags, num_frames); + if (!s->stash_off) + return qbman_swp_enqueue_multiple_fd_ptr(s, d, fd, flags, + num_frames); + else + return qbman_swp_enqueue_multiple_fd_cinh_direct(s, d, fd, + flags, num_frames); } static int qbman_swp_enqueue_multiple_desc_direct(struct qbman_swp *s, @@ -1365,6 +1638,62 @@ static int qbman_swp_enqueue_multiple_desc_cinh_direct( return num_enqueued; } +static int qbman_swp_enqueue_multiple_desc_cinh_direct( + struct qbman_swp *s, + const struct qbman_eq_desc *d, + const struct qbman_fd *fd, + int num_frames) +{ + uint32_t *p; + const uint32_t *cl; + uint32_t eqcr_ci, eqcr_pi, half_mask, full_mask; + int i, num_enqueued = 0; + + half_mask = (s->eqcr.pi_ci_mask>>1); + full_mask = 
s->eqcr.pi_ci_mask; + if (!s->eqcr.available) { + eqcr_ci = s->eqcr.ci; + s->eqcr.ci = qbman_cinh_read(&s->sys, + QBMAN_CINH_SWP_EQCR_CI) & full_mask; + s->eqcr.available = qm_cyc_diff(s->eqcr.pi_ring_size, + eqcr_ci, s->eqcr.ci); + if (!s->eqcr.available) + return 0; + } + + eqcr_pi = s->eqcr.pi; + num_enqueued = (s->eqcr.available < num_frames) ? + s->eqcr.available : num_frames; + s->eqcr.available -= num_enqueued; + /* Fill in the EQCR ring */ + for (i = 0; i < num_enqueued; i++) { + p = qbman_cinh_write_start_wo_shadow(&s->sys, + QBMAN_CENA_SWP_EQCR(eqcr_pi & half_mask)); + cl = qb_cl(&d[i]); + memcpy_byte_by_byte(&p[1], &cl[1], 28); + memcpy_byte_by_byte(&p[8], &fd[i], sizeof(*fd)); + eqcr_pi++; + } + + lwsync(); + + /* Set the verb byte, have to substitute in the valid-bit */ + eqcr_pi = s->eqcr.pi; + for (i = 0; i < num_enqueued; i++) { + p = qbman_cinh_write_start_wo_shadow(&s->sys, + QBMAN_CENA_SWP_EQCR(eqcr_pi & half_mask)); + cl = qb_cl(&d[i]); + p[0] = cl[0] | s->eqcr.pi_vb; + eqcr_pi++; + if (!(eqcr_pi & half_mask)) + s->eqcr.pi_vb ^= QB_VALID_BIT; + } + + s->eqcr.pi = eqcr_pi & full_mask; + + return num_enqueued; +} + static int qbman_swp_enqueue_multiple_desc_mem_back(struct qbman_swp *s, const struct qbman_eq_desc *d, const struct qbman_fd *fd, @@ -1426,7 +1755,13 @@ inline int qbman_swp_enqueue_multiple_desc(struct qbman_swp *s, const struct qbman_fd *fd, int num_frames) { - return qbman_swp_enqueue_multiple_desc_ptr(s, d, fd, num_frames); + if (!s->stash_off) + return qbman_swp_enqueue_multiple_desc_ptr(s, d, fd, + num_frames); + else + return qbman_swp_enqueue_multiple_desc_cinh_direct(s, d, fd, + num_frames); + } /*************************/ @@ -1574,6 +1909,30 @@ static int qbman_swp_pull_direct(struct qbman_swp *s, return 0; } +static int qbman_swp_pull_cinh_direct(struct qbman_swp *s, + struct qbman_pull_desc *d) +{ + uint32_t *p; + uint32_t *cl = qb_cl(d); + + if (!atomic_dec_and_test(&s->vdq.busy)) { + atomic_inc(&s->vdq.busy); + return 
-EBUSY; + } + + d->pull.tok = s->sys.idx + 1; + s->vdq.storage = (void *)(size_t)d->pull.rsp_addr_virt; + p = qbman_cinh_write_start_wo_shadow(&s->sys, QBMAN_CENA_SWP_VDQCR); + memcpy_byte_by_byte(&p[1], &cl[1], 12); + + /* Set the verb byte, have to substitute in the valid-bit */ + lwsync(); + p[0] = cl[0] | s->vdq.valid_bit; + s->vdq.valid_bit ^= QB_VALID_BIT; + + return 0; +} + static int qbman_swp_pull_mem_back(struct qbman_swp *s, struct qbman_pull_desc *d) { @@ -1601,7 +1960,10 @@ static int qbman_swp_pull_mem_back(struct qbman_swp *s, inline int qbman_swp_pull(struct qbman_swp *s, struct qbman_pull_desc *d) { - return qbman_swp_pull_ptr(s, d); + if (!s->stash_off) + return qbman_swp_pull_ptr(s, d); + else + return qbman_swp_pull_cinh_direct(s, d); } /****************/ @@ -1638,7 +2000,10 @@ void qbman_swp_prefetch_dqrr_next(struct qbman_swp *s) */ inline const struct qbman_result *qbman_swp_dqrr_next(struct qbman_swp *s) { - return qbman_swp_dqrr_next_ptr(s); + if (!s->stash_off) + return qbman_swp_dqrr_next_ptr(s); + else + return qbman_swp_dqrr_next_cinh_direct(s); } const struct qbman_result *qbman_swp_dqrr_next_direct(struct qbman_swp *s) @@ -1718,6 +2083,81 @@ const struct qbman_result *qbman_swp_dqrr_next_direct(struct qbman_swp *s) return p; } +const struct qbman_result *qbman_swp_dqrr_next_cinh_direct(struct qbman_swp *s) +{ + uint32_t verb; + uint32_t response_verb; + uint32_t flags; + const struct qbman_result *p; + + /* Before using valid-bit to detect if something is there, we have to + * handle the case of the DQRR reset bug... + */ + if (s->dqrr.reset_bug) { + /* We pick up new entries by cache-inhibited producer index, + * which means that a non-coherent mapping would require us to + * invalidate and read *only* once that PI has indicated that + * there's an entry here. The first trip around the DQRR ring + * will be much less efficient than all subsequent trips around + * it... 
+ */ + uint8_t pi = qbman_cinh_read(&s->sys, QBMAN_CINH_SWP_DQPI) & + QMAN_DQRR_PI_MASK; + + /* there are new entries if pi != next_idx */ + if (pi == s->dqrr.next_idx) + return NULL; + + /* if next_idx is/was the last ring index, and 'pi' is + * different, we can disable the workaround as all the ring + * entries have now been DMA'd to so valid-bit checking is + * repaired. Note: this logic needs to be based on next_idx + * (which increments one at a time), rather than on pi (which + * can burst and wrap-around between our snapshots of it). + */ + QBMAN_BUG_ON((s->dqrr.dqrr_size - 1) < 0); + if (s->dqrr.next_idx == (s->dqrr.dqrr_size - 1u)) { + pr_debug("DEBUG: next_idx=%d, pi=%d, clear reset bug\n", + s->dqrr.next_idx, pi); + s->dqrr.reset_bug = 0; + } + } + p = qbman_cinh_read_wo_shadow(&s->sys, + QBMAN_CENA_SWP_DQRR(s->dqrr.next_idx)); + + verb = p->dq.verb; + + /* If the valid-bit isn't of the expected polarity, nothing there. Note, + * in the DQRR reset bug workaround, we shouldn't need to skip these + * checks, because we've already determined that a new entry is available + * and we've invalidated the cacheline before reading it, so the + * valid-bit behaviour is repaired and should tell us what we already + * knew from reading PI. + */ + if ((verb & QB_VALID_BIT) != s->dqrr.valid_bit) + return NULL; + + /* There's something there. Move "next_idx" attention to the next ring + * entry (and prefetch it) before returning what we found.
+ */ + s->dqrr.next_idx++; + if (s->dqrr.next_idx == s->dqrr.dqrr_size) { + s->dqrr.next_idx = 0; + s->dqrr.valid_bit ^= QB_VALID_BIT; + } + /* If this is the final response to a volatile dequeue command + * indicate that the vdq is no longer busy + */ + flags = p->dq.stat; + response_verb = verb & QBMAN_RESPONSE_VERB_MASK; + if ((response_verb == QBMAN_RESULT_DQ) && + (flags & QBMAN_DQ_STAT_VOLATILE) && + (flags & QBMAN_DQ_STAT_EXPIRED)) + atomic_inc(&s->vdq.busy); + + return p; +} + const struct qbman_result *qbman_swp_dqrr_next_mem_back(struct qbman_swp *s) { uint32_t verb; @@ -2096,6 +2536,37 @@ static int qbman_swp_release_direct(struct qbman_swp *s, return 0; } +static int qbman_swp_release_cinh_direct(struct qbman_swp *s, + const struct qbman_release_desc *d, + const uint64_t *buffers, + unsigned int num_buffers) +{ + uint32_t *p; + const uint32_t *cl = qb_cl(d); + uint32_t rar = qbman_cinh_read(&s->sys, QBMAN_CINH_SWP_RAR); + + pr_debug("RAR=%08x\n", rar); + if (!RAR_SUCCESS(rar)) + return -EBUSY; + + QBMAN_BUG_ON(!num_buffers || (num_buffers > 7)); + + /* Start the release command */ + p = qbman_cinh_write_start_wo_shadow(&s->sys, + QBMAN_CENA_SWP_RCR(RAR_IDX(rar))); + + /* Copy the caller's buffer pointers to the command */ + memcpy_byte_by_byte(&p[2], buffers, num_buffers * sizeof(uint64_t)); + + /* Set the verb byte, have to substitute in the valid-bit and the + * number of buffers. 
+ */ + lwsync(); + p[0] = cl[0] | RAR_VB(rar) | num_buffers; + + return 0; +} + static int qbman_swp_release_mem_back(struct qbman_swp *s, const struct qbman_release_desc *d, const uint64_t *buffers, @@ -2134,7 +2605,11 @@ inline int qbman_swp_release(struct qbman_swp *s, const uint64_t *buffers, unsigned int num_buffers) { - return qbman_swp_release_ptr(s, d, buffers, num_buffers); + if (!s->stash_off) + return qbman_swp_release_ptr(s, d, buffers, num_buffers); + else + return qbman_swp_release_cinh_direct(s, d, buffers, + num_buffers); } /*******************/ @@ -2157,8 +2632,8 @@ struct qbman_acquire_rslt { uint64_t buf[7]; }; -int qbman_swp_acquire(struct qbman_swp *s, uint16_t bpid, uint64_t *buffers, - unsigned int num_buffers) +static int qbman_swp_acquire_direct(struct qbman_swp *s, uint16_t bpid, + uint64_t *buffers, unsigned int num_buffers) { struct qbman_acquire_desc *p; struct qbman_acquire_rslt *r; @@ -2202,6 +2677,61 @@ int qbman_swp_acquire(struct qbman_swp *s, uint16_t bpid, uint64_t *buffers, return (int)r->num; } +static int qbman_swp_acquire_cinh_direct(struct qbman_swp *s, uint16_t bpid, + uint64_t *buffers, unsigned int num_buffers) +{ + struct qbman_acquire_desc *p; + struct qbman_acquire_rslt *r; + + if (!num_buffers || (num_buffers > 7)) + return -EINVAL; + + /* Start the management command */ + p = qbman_swp_mc_start(s); + + if (!p) + return -EBUSY; + + /* Encode the caller-provided attributes */ + p->bpid = bpid; + p->num = num_buffers; + + /* Complete the management command */ + r = qbman_swp_mc_complete_cinh(s, p, QBMAN_MC_ACQUIRE); + if (!r) { + pr_err("qbman: acquire from BPID %d failed, no response\n", + bpid); + return -EIO; + } + + /* Decode the outcome */ + QBMAN_BUG_ON((r->verb & QBMAN_RESPONSE_VERB_MASK) != QBMAN_MC_ACQUIRE); + + /* Determine success or failure */ + if (r->rslt != QBMAN_MC_RSLT_OK) { + pr_err("Acquire buffers from BPID 0x%x failed, code=0x%02x\n", + bpid, r->rslt); + return -EIO; + } + + QBMAN_BUG_ON(r->num > 
num_buffers); + + /* Copy the acquired buffers to the caller's array */ + u64_from_le32_copy(buffers, &r->buf[0], r->num); + + return (int)r->num; +} + +int qbman_swp_acquire(struct qbman_swp *s, uint16_t bpid, uint64_t *buffers, + unsigned int num_buffers) +{ + if (!s->stash_off) + return qbman_swp_acquire_direct(s, bpid, buffers, num_buffers); + else + return qbman_swp_acquire_cinh_direct(s, bpid, buffers, + num_buffers); +} + /*****************/ /* FQ management */ /*****************/ diff --git a/drivers/bus/fslmc/qbman/qbman_portal.h b/drivers/bus/fslmc/qbman/qbman_portal.h index 3aaacae52..1cf791830 100644 --- a/drivers/bus/fslmc/qbman/qbman_portal.h +++ b/drivers/bus/fslmc/qbman/qbman_portal.h @@ -1,7 +1,7 @@ /* SPDX-License-Identifier: BSD-3-Clause * * Copyright (C) 2014-2016 Freescale Semiconductor, Inc. - * Copyright 2018-2019 NXP + * Copyright 2018-2020 NXP * */ @@ -102,6 +102,7 @@ struct qbman_swp { uint32_t ci; int available; } eqcr; + uint8_t stash_off; }; /* -------------------------- */ @@ -118,7 +119,9 @@ struct qbman_swp { */ void *qbman_swp_mc_start(struct qbman_swp *p); void qbman_swp_mc_submit(struct qbman_swp *p, void *cmd, uint8_t cmd_verb); +void qbman_swp_mc_submit_cinh(struct qbman_swp *p, void *cmd, uint8_t cmd_verb); void *qbman_swp_mc_result(struct qbman_swp *p); +void *qbman_swp_mc_result_cinh(struct qbman_swp *p); /* Wraps up submit + poll-for-result */ static inline void *qbman_swp_mc_complete(struct qbman_swp *swp, void *cmd, @@ -135,6 +138,20 @@ static inline void *qbman_swp_mc_complete(struct qbman_swp *swp, void *cmd, return cmd; } +static inline void *qbman_swp_mc_complete_cinh(struct qbman_swp *swp, void *cmd, + uint8_t cmd_verb) +{ + int loopvar = 1000; + + qbman_swp_mc_submit_cinh(swp, cmd, cmd_verb); + do { + cmd = qbman_swp_mc_result_cinh(swp); + } while (!cmd && loopvar--); + QBMAN_BUG_ON(!loopvar); + + return cmd; +} + /* ---------------------- */ /* Descriptors/cachelines */ /* ---------------------- */ diff --git 
a/drivers/bus/fslmc/qbman/qbman_sys.h b/drivers/bus/fslmc/qbman/qbman_sys.h index 55449edf3..61f817c47 100644 --- a/drivers/bus/fslmc/qbman/qbman_sys.h +++ b/drivers/bus/fslmc/qbman/qbman_sys.h @@ -1,7 +1,7 @@ /* SPDX-License-Identifier: BSD-3-Clause * * Copyright (C) 2014-2016 Freescale Semiconductor, Inc. - * Copyright 2019 NXP + * Copyright 2019-2020 NXP */ /* qbman_sys_decl.h and qbman_sys.h are the two platform-specific files in the * driver. They are only included via qbman_private.h, which is itself a @@ -190,6 +190,34 @@ static inline void qbman_cinh_write(struct qbman_swp_sys *s, uint32_t offset, #endif } +static inline void *qbman_cinh_write_start_wo_shadow(struct qbman_swp_sys *s, + uint32_t offset) +{ +#ifdef QBMAN_CINH_TRACE + pr_info("qbman_cinh_write_start(%p:%d:0x%03x)\n", + s->addr_cinh, s->idx, offset); +#endif + QBMAN_BUG_ON(offset & 63); + return (s->addr_cinh + offset); +} + +static inline void qbman_cinh_write_complete(struct qbman_swp_sys *s, + uint32_t offset, void *cmd) +{ + const uint32_t *shadow = cmd; + int loop; +#ifdef QBMAN_CINH_TRACE + pr_info("qbman_cinh_write_complete(%p:%d:0x%03x) %p\n", + s->addr_cinh, s->idx, offset, shadow); + hexdump(cmd, 64); +#endif + for (loop = 15; loop >= 1; loop--) + __raw_writel(shadow[loop], s->addr_cinh + + offset + loop * 4); + lwsync(); + __raw_writel(shadow[0], s->addr_cinh + offset); +} + static inline uint32_t qbman_cinh_read(struct qbman_swp_sys *s, uint32_t offset) { uint32_t reg = __raw_readl(s->addr_cinh + offset); @@ -200,6 +228,35 @@ static inline uint32_t qbman_cinh_read(struct qbman_swp_sys *s, uint32_t offset) return reg; } +static inline void *qbman_cinh_read_shadow(struct qbman_swp_sys *s, + uint32_t offset) +{ + uint32_t *shadow = (uint32_t *)(s->cena + offset); + unsigned int loop; +#ifdef QBMAN_CINH_TRACE + pr_info(" %s (%p:%d:0x%03x) %p\n", __func__, + s->addr_cinh, s->idx, offset, shadow); +#endif + + for (loop = 0; loop < 16; loop++) + shadow[loop] = __raw_readl(s->addr_cinh + 
offset + + loop * 4); +#ifdef QBMAN_CINH_TRACE + hexdump(shadow, 64); +#endif + return shadow; +} + +static inline void *qbman_cinh_read_wo_shadow(struct qbman_swp_sys *s, + uint32_t offset) +{ +#ifdef QBMAN_CINH_TRACE + pr_info("qbman_cinh_read(%p:%d:0x%03x)\n", + s->addr_cinh, s->idx, offset); +#endif + return s->addr_cinh + offset; +} + static inline void *qbman_cena_write_start(struct qbman_swp_sys *s, uint32_t offset) { @@ -476,6 +533,82 @@ static inline int qbman_swp_sys_init(struct qbman_swp_sys *s, return 0; } +static inline int qbman_swp_sys_update(struct qbman_swp_sys *s, + const struct qbman_swp_desc *d, + uint8_t dqrr_size, + int stash_off) +{ + uint32_t reg; + int i; + int cena_region_size = 4*1024; + uint8_t est = 1; +#ifdef RTE_ARCH_64 + uint8_t wn = CENA_WRITE_ENABLE; +#else + uint8_t wn = CINH_WRITE_ENABLE; +#endif + + if (stash_off) + wn = CINH_WRITE_ENABLE; + + QBMAN_BUG_ON(d->idx < 0); +#ifdef QBMAN_CHECKING + /* We should never be asked to initialise for a portal that isn't in + * the power-on state. (Ie. don't forget to reset portals when they are + * decommissioned!) + */ + reg = qbman_cinh_read(s, QBMAN_CINH_SWP_CFG); + QBMAN_BUG_ON(reg); +#endif + if ((d->qman_version & QMAN_REV_MASK) >= QMAN_REV_5000 + && (d->cena_access_mode == qman_cena_fastest_access)) + memset(s->addr_cena, 0, cena_region_size); + else { + /* Invalidate the portal memory. 
+ * This ensures no stale cache lines + */ + for (i = 0; i < cena_region_size; i += 64) + dccivac(s->addr_cena + i); + } + + if (dpaa2_svr_family == SVR_LS1080A) + est = 0; + + if (s->eqcr_mode == qman_eqcr_vb_array) { + reg = qbman_set_swp_cfg(dqrr_size, wn, + 0, 3, 2, 3, 1, 1, 1, 1, 1, 1); + } else { + if ((d->qman_version & QMAN_REV_MASK) >= QMAN_REV_5000 && + (d->cena_access_mode == qman_cena_fastest_access)) + reg = qbman_set_swp_cfg(dqrr_size, wn, + 1, 3, 2, 0, 1, 1, 1, 1, 1, 1); + else + reg = qbman_set_swp_cfg(dqrr_size, wn, + est, 3, 2, 2, 1, 1, 1, 1, 1, 1); + } + + if ((d->qman_version & QMAN_REV_MASK) >= QMAN_REV_5000 + && (d->cena_access_mode == qman_cena_fastest_access)) + reg |= 1 << SWP_CFG_CPBS_SHIFT | /* memory-backed mode */ + 1 << SWP_CFG_VPM_SHIFT | /* VDQCR read triggered mode */ + 1 << SWP_CFG_CPM_SHIFT; /* CR read triggered mode */ + + qbman_cinh_write(s, QBMAN_CINH_SWP_CFG, reg); + reg = qbman_cinh_read(s, QBMAN_CINH_SWP_CFG); + if (!reg) { + pr_err("The portal %d is not enabled!\n", s->idx); + return -1; + } + + if ((d->qman_version & QMAN_REV_MASK) >= QMAN_REV_5000 + && (d->cena_access_mode == qman_cena_fastest_access)) { + qbman_cinh_write(s, QBMAN_CINH_SWP_EQCR_PI, QMAN_RT_MODE); + qbman_cinh_write(s, QBMAN_CINH_SWP_RCR_PI, QMAN_RT_MODE); + } + + return 0; +} + static inline void qbman_swp_sys_finish(struct qbman_swp_sys *s) { free(s->cena); From patchwork Mon Mar 2 14:58:21 2020 X-Patchwork-Submitter: Hemant Agrawal X-Patchwork-Id: 184080 From: Hemant Agrawal To: ferruh.yigit@intel.com Cc: dev@dpdk.org, Nipun Gupta Date: Mon, 2 Mar 2020 20:28:21 +0530 Message-Id: <20200302145829.27808-9-hemant.agrawal@nxp.com> In-Reply-To: <20200302145829.27808-1-hemant.agrawal@nxp.com> Subject: [dpdk-dev] [PATCH 08/16] drivers: enhance portal allocation failure log From: Nipun
Gupta The change adds printing the thread id when portal allocation failure occurs Signed-off-by: Nipun Gupta --- drivers/crypto/dpaa2_sec/dpaa2_sec_dpseci.c | 8 ++++++-- drivers/event/dpaa2/dpaa2_eventdev.c | 8 ++++++-- drivers/mempool/dpaa2/dpaa2_hw_mempool.c | 12 +++++++++--- drivers/net/dpaa2/dpaa2_ethdev.c | 4 +++- drivers/net/dpaa2/dpaa2_rxtx.c | 16 ++++++++++++---- drivers/raw/dpaa2_cmdif/dpaa2_cmdif.c | 8 ++++++-- drivers/raw/dpaa2_qdma/dpaa2_qdma.c | 12 +++++++++--- 7 files changed, 51 insertions(+), 17 deletions(-) -- 2.17.1 diff --git a/drivers/crypto/dpaa2_sec/dpaa2_sec_dpseci.c b/drivers/crypto/dpaa2_sec/dpaa2_sec_dpseci.c index 6ed2701ab..cf08003cc 100644 --- a/drivers/crypto/dpaa2_sec/dpaa2_sec_dpseci.c +++ b/drivers/crypto/dpaa2_sec/dpaa2_sec_dpseci.c @@ -1459,7 +1459,9 @@ dpaa2_sec_enqueue_burst(void *qp, struct rte_crypto_op **ops, if (!DPAA2_PER_LCORE_DPIO) { ret = dpaa2_affine_qbman_swp(); if (ret) { - DPAA2_SEC_ERR("Failure in affining portal"); + DPAA2_SEC_ERR( + "Failed to allocate IO portal, tid: %d\n", + rte_gettid()); return 0; } } @@ -1641,7 +1643,9 @@ dpaa2_sec_dequeue_burst(void *qp, struct rte_crypto_op **ops, if (!DPAA2_PER_LCORE_DPIO) { ret = dpaa2_affine_qbman_swp(); if (ret) { - DPAA2_SEC_ERR("Failure in affining portal"); + DPAA2_SEC_ERR( + "Failed to allocate IO portal, tid: %d\n", + rte_gettid()); return 0; } } diff --git a/drivers/event/dpaa2/dpaa2_eventdev.c b/drivers/event/dpaa2/dpaa2_eventdev.c index 1833d659d..bb02ea9fb 100644 --- a/drivers/event/dpaa2/dpaa2_eventdev.c +++ b/drivers/event/dpaa2/dpaa2_eventdev.c @@ -74,7 +74,9 @@ dpaa2_eventdev_enqueue_burst(void *port, const struct rte_event ev[], /* Affine current thread context to a qman portal */ ret = dpaa2_affine_qbman_swp(); if (ret < 0) { - DPAA2_EVENTDEV_ERR("Failure in affining portal"); + DPAA2_EVENTDEV_ERR( + "Failed to allocate IO portal, tid: %d\n", + rte_gettid()); return 0; } } @@ -273,7 +275,9 @@ dpaa2_eventdev_dequeue_burst(void *port, struct rte_event 
ev[], /* Affine current thread context to a qman portal */ ret = dpaa2_affine_qbman_swp(); if (ret < 0) { - DPAA2_EVENTDEV_ERR("Failure in affining portal"); + DPAA2_EVENTDEV_ERR( + "Failed to allocate IO portal, tid: %d\n", + rte_gettid()); return 0; } } diff --git a/drivers/mempool/dpaa2/dpaa2_hw_mempool.c b/drivers/mempool/dpaa2/dpaa2_hw_mempool.c index 48887beb7..fa9b53e64 100644 --- a/drivers/mempool/dpaa2/dpaa2_hw_mempool.c +++ b/drivers/mempool/dpaa2/dpaa2_hw_mempool.c @@ -69,7 +69,9 @@ rte_hw_mbuf_create_pool(struct rte_mempool *mp) if (unlikely(!DPAA2_PER_LCORE_DPIO)) { ret = dpaa2_affine_qbman_swp(); if (ret) { - DPAA2_MEMPOOL_ERR("Failure in affining portal"); + DPAA2_MEMPOOL_ERR( + "Failed to allocate IO portal, tid: %d\n", + rte_gettid()); goto err1; } } @@ -198,7 +200,9 @@ rte_dpaa2_mbuf_release(struct rte_mempool *pool __rte_unused, if (unlikely(!DPAA2_PER_LCORE_DPIO)) { ret = dpaa2_affine_qbman_swp(); if (ret != 0) { - DPAA2_MEMPOOL_ERR("Failed to allocate IO portal"); + DPAA2_MEMPOOL_ERR( + "Failed to allocate IO portal, tid: %d\n", + rte_gettid()); return; } } @@ -317,7 +321,9 @@ rte_dpaa2_mbuf_alloc_bulk(struct rte_mempool *pool, if (unlikely(!DPAA2_PER_LCORE_DPIO)) { ret = dpaa2_affine_qbman_swp(); if (ret != 0) { - DPAA2_MEMPOOL_ERR("Failed to allocate IO portal"); + DPAA2_MEMPOOL_ERR( + "Failed to allocate IO portal, tid: %d\n", + rte_gettid()); return ret; } } diff --git a/drivers/net/dpaa2/dpaa2_ethdev.c b/drivers/net/dpaa2/dpaa2_ethdev.c index 4fc550a88..4a61c6f78 100644 --- a/drivers/net/dpaa2/dpaa2_ethdev.c +++ b/drivers/net/dpaa2/dpaa2_ethdev.c @@ -887,7 +887,9 @@ dpaa2_dev_rx_queue_count(struct rte_eth_dev *dev, uint16_t rx_queue_id) if (unlikely(!DPAA2_PER_LCORE_DPIO)) { ret = dpaa2_affine_qbman_swp(); if (ret) { - DPAA2_PMD_ERR("Failure in affining portal"); + DPAA2_PMD_ERR( + "Failed to allocate IO portal, tid: %d\n", + rte_gettid()); return -EINVAL; } } diff --git a/drivers/net/dpaa2/dpaa2_rxtx.c b/drivers/net/dpaa2/dpaa2_rxtx.c 
index 52d913d9e..d809e0f4b 100644 --- a/drivers/net/dpaa2/dpaa2_rxtx.c +++ b/drivers/net/dpaa2/dpaa2_rxtx.c @@ -760,7 +760,9 @@ dpaa2_dev_rx(void *queue, struct rte_mbuf **bufs, uint16_t nb_pkts) if (unlikely(!DPAA2_PER_LCORE_DPIO)) { ret = dpaa2_affine_qbman_swp(); if (ret) { - DPAA2_PMD_ERR("Failure in affining portal\n"); + DPAA2_PMD_ERR( + "Failed to allocate IO portal, tid: %d\n", + rte_gettid()); return 0; } } @@ -872,7 +874,9 @@ uint16_t dpaa2_dev_tx_conf(void *queue) if (unlikely(!DPAA2_PER_LCORE_DPIO)) { ret = dpaa2_affine_qbman_swp(); if (ret) { - DPAA2_PMD_ERR("Failure in affining portal\n"); + DPAA2_PMD_ERR( + "Failed to allocate IO portal, tid: %d\n", + rte_gettid()); return 0; } } @@ -1011,7 +1015,9 @@ dpaa2_dev_tx(void *queue, struct rte_mbuf **bufs, uint16_t nb_pkts) if (unlikely(!DPAA2_PER_LCORE_DPIO)) { ret = dpaa2_affine_qbman_swp(); if (ret) { - DPAA2_PMD_ERR("Failure in affining portal"); + DPAA2_PMD_ERR( + "Failed to allocate IO portal, tid: %d\n", + rte_gettid()); return 0; } } @@ -1272,7 +1278,9 @@ dpaa2_dev_tx_ordered(void *queue, struct rte_mbuf **bufs, uint16_t nb_pkts) if (unlikely(!DPAA2_PER_LCORE_DPIO)) { ret = dpaa2_affine_qbman_swp(); if (ret) { - DPAA2_PMD_ERR("Failure in affining portal"); + DPAA2_PMD_ERR( + "Failed to allocate IO portal, tid: %d\n", + rte_gettid()); return 0; } } diff --git a/drivers/raw/dpaa2_cmdif/dpaa2_cmdif.c b/drivers/raw/dpaa2_cmdif/dpaa2_cmdif.c index 997d1c873..7c21c6a52 100644 --- a/drivers/raw/dpaa2_cmdif/dpaa2_cmdif.c +++ b/drivers/raw/dpaa2_cmdif/dpaa2_cmdif.c @@ -70,7 +70,9 @@ dpaa2_cmdif_enqueue_bufs(struct rte_rawdev *dev, if (unlikely(!DPAA2_PER_LCORE_DPIO)) { ret = dpaa2_affine_qbman_swp(); if (ret) { - DPAA2_CMDIF_ERR("Failure in affining portal\n"); + DPAA2_CMDIF_ERR( + "Failed to allocate IO portal, tid: %d\n", + rte_gettid()); return 0; } } @@ -133,7 +135,9 @@ dpaa2_cmdif_dequeue_bufs(struct rte_rawdev *dev, if (unlikely(!DPAA2_PER_LCORE_DPIO)) { ret = dpaa2_affine_qbman_swp(); if (ret) { - 
DPAA2_CMDIF_ERR("Failure in affining portal\n"); + DPAA2_CMDIF_ERR( + "Failed to allocate IO portal, tid: %d\n", + rte_gettid()); return 0; } } diff --git a/drivers/raw/dpaa2_qdma/dpaa2_qdma.c b/drivers/raw/dpaa2_qdma/dpaa2_qdma.c index c90595400..d5202d652 100644 --- a/drivers/raw/dpaa2_qdma/dpaa2_qdma.c +++ b/drivers/raw/dpaa2_qdma/dpaa2_qdma.c @@ -666,7 +666,9 @@ dpdmai_dev_enqueue_multi(struct dpaa2_dpdmai_dev *dpdmai_dev, if (unlikely(!DPAA2_PER_LCORE_DPIO)) { ret = dpaa2_affine_qbman_swp(); if (ret) { - DPAA2_QDMA_ERR("Failure in affining portal"); + DPAA2_QDMA_ERR( + "Failed to allocate IO portal, tid: %d\n", + rte_gettid()); return 0; } } @@ -788,7 +790,9 @@ dpdmai_dev_dequeue_multijob_prefetch( if (unlikely(!DPAA2_PER_LCORE_DPIO)) { ret = dpaa2_affine_qbman_swp(); if (ret) { - DPAA2_QDMA_ERR("Failure in affining portal"); + DPAA2_QDMA_ERR( + "Failed to allocate IO portal, tid: %d\n", + rte_gettid()); return 0; } } @@ -929,7 +933,9 @@ dpdmai_dev_dequeue_multijob_no_prefetch( if (unlikely(!DPAA2_PER_LCORE_DPIO)) { ret = dpaa2_affine_qbman_swp(); if (ret) { - DPAA2_QDMA_ERR("Failure in affining portal"); + DPAA2_QDMA_ERR( + "Failed to allocate IO portal, tid: %d\n", + rte_gettid()); return 0; } } From patchwork Mon Mar 2 14:58:22 2020 X-Patchwork-Submitter: Hemant Agrawal X-Patchwork-Id: 184081 From: Hemant Agrawal To: ferruh.yigit@intel.com Cc: dev@dpdk.org, Nipun Gupta Date: Mon, 2 Mar 2020 20:28:22 +0530 Message-Id: <20200302145829.27808-10-hemant.agrawal@nxp.com> In-Reply-To: <20200302145829.27808-1-hemant.agrawal@nxp.com> Subject: [dpdk-dev] [PATCH 09/16] bus/fslmc: rename the cinh read functions used for ls1088
From: Nipun Gupta

This patch renames the qbman I/O functions, as they only read from the
cinh register but write to the cena registers. The rename clears the
way to add functions that work purely in cinh mode.

Signed-off-by: Nipun Gupta
---
 drivers/bus/fslmc/qbman/qbman_portal.c | 34 +++++++++++++-------------
 1 file changed, 17 insertions(+), 17 deletions(-)

-- 
2.17.1

diff --git a/drivers/bus/fslmc/qbman/qbman_portal.c b/drivers/bus/fslmc/qbman/qbman_portal.c
index a06b88dd2..207faada3 100644
--- a/drivers/bus/fslmc/qbman/qbman_portal.c
+++ b/drivers/bus/fslmc/qbman/qbman_portal.c
@@ -78,7 +78,7 @@ qbman_swp_enqueue_ring_mode_direct(struct qbman_swp *s,
 		const struct qbman_eq_desc *d,
 		const struct qbman_fd *fd);
 static int
-qbman_swp_enqueue_ring_mode_cinh_direct(struct qbman_swp *s,
+qbman_swp_enqueue_ring_mode_cinh_read_direct(struct qbman_swp *s,
 		const struct qbman_eq_desc *d,
 		const struct qbman_fd *fd);
 static int
@@ -97,7 +97,7 @@ qbman_swp_enqueue_multiple_direct(struct qbman_swp *s,
 		uint32_t *flags,
 		int num_frames);
 static int
-qbman_swp_enqueue_multiple_cinh_direct(struct qbman_swp *s,
+qbman_swp_enqueue_multiple_cinh_read_direct(struct qbman_swp *s,
 		const struct qbman_eq_desc *d,
 		const struct qbman_fd *fd,
 		uint32_t *flags,
@@ -122,7 +122,7 @@ qbman_swp_enqueue_multiple_fd_direct(struct qbman_swp *s,
 		uint32_t *flags,
 		int num_frames);
 static int
-qbman_swp_enqueue_multiple_fd_cinh_direct(struct qbman_swp *s,
+qbman_swp_enqueue_multiple_fd_cinh_read_direct(struct qbman_swp *s,
 		const struct qbman_eq_desc *d,
 		struct qbman_fd **fd,
 		uint32_t *flags,
@@ -146,7 +146,7 @@ qbman_swp_enqueue_multiple_desc_direct(struct qbman_swp *s,
 		const struct qbman_fd *fd,
 		int num_frames);
 static int
-qbman_swp_enqueue_multiple_desc_cinh_direct(struct qbman_swp *s,
+qbman_swp_enqueue_multiple_desc_cinh_read_direct(struct qbman_swp *s,
 		const struct qbman_eq_desc *d,
 		const struct qbman_fd *fd,
 		int num_frames);
@@ -309,15 +309,15 @@ struct qbman_swp *qbman_swp_init(const struct qbman_swp_desc *d)
 	    && (d->cena_access_mode == qman_cena_fastest_access)) {
 		p->eqcr.pi_ring_size = 32;
 		qbman_swp_enqueue_array_mode_ptr =
-				qbman_swp_enqueue_array_mode_mem_back;
+			qbman_swp_enqueue_array_mode_mem_back;
 		qbman_swp_enqueue_ring_mode_ptr =
-				qbman_swp_enqueue_ring_mode_mem_back;
+			qbman_swp_enqueue_ring_mode_mem_back;
 		qbman_swp_enqueue_multiple_ptr =
-				qbman_swp_enqueue_multiple_mem_back;
+			qbman_swp_enqueue_multiple_mem_back;
 		qbman_swp_enqueue_multiple_fd_ptr =
-				qbman_swp_enqueue_multiple_fd_mem_back;
+			qbman_swp_enqueue_multiple_fd_mem_back;
 		qbman_swp_enqueue_multiple_desc_ptr =
-				qbman_swp_enqueue_multiple_desc_mem_back;
+			qbman_swp_enqueue_multiple_desc_mem_back;
 		qbman_swp_pull_ptr = qbman_swp_pull_mem_back;
 		qbman_swp_dqrr_next_ptr = qbman_swp_dqrr_next_mem_back;
 		qbman_swp_release_ptr = qbman_swp_release_mem_back;
@@ -325,13 +325,13 @@ struct qbman_swp *qbman_swp_init(const struct qbman_swp_desc *d)
 	if (dpaa2_svr_family == SVR_LS1080A) {
 		qbman_swp_enqueue_ring_mode_ptr =
-				qbman_swp_enqueue_ring_mode_cinh_direct;
+				qbman_swp_enqueue_ring_mode_cinh_read_direct;
 		qbman_swp_enqueue_multiple_ptr =
-				qbman_swp_enqueue_multiple_cinh_direct;
+				qbman_swp_enqueue_multiple_cinh_read_direct;
 		qbman_swp_enqueue_multiple_fd_ptr =
-				qbman_swp_enqueue_multiple_fd_cinh_direct;
+				qbman_swp_enqueue_multiple_fd_cinh_read_direct;
 		qbman_swp_enqueue_multiple_desc_ptr =
-				qbman_swp_enqueue_multiple_desc_cinh_direct;
+				qbman_swp_enqueue_multiple_desc_cinh_read_direct;
 	}

 	for (mask_size = p->eqcr.pi_ring_size; mask_size > 0; mask_size >>= 1)
@@ -835,7 +835,7 @@ static int qbman_swp_enqueue_ring_mode_direct(struct qbman_swp *s,
 	return 0;
 }

-static int qbman_swp_enqueue_ring_mode_cinh_direct(
+static int qbman_swp_enqueue_ring_mode_cinh_read_direct(
 		struct qbman_swp *s,
 		const struct qbman_eq_desc *d,
 		const struct qbman_fd *fd)
@@ -1037,7 +1037,7 @@ static int qbman_swp_enqueue_multiple_direct(struct qbman_swp *s,
 	return num_enqueued;
 }

-static int qbman_swp_enqueue_multiple_cinh_direct(
+static int qbman_swp_enqueue_multiple_cinh_read_direct(
 		struct qbman_swp *s,
 		const struct qbman_eq_desc *d,
 		const struct qbman_fd *fd,
@@ -1304,7 +1304,7 @@ static int qbman_swp_enqueue_multiple_fd_direct(struct qbman_swp *s,
 	return num_enqueued;
 }

-static int qbman_swp_enqueue_multiple_fd_cinh_direct(
+static int qbman_swp_enqueue_multiple_fd_cinh_read_direct(
 		struct qbman_swp *s,
 		const struct qbman_eq_desc *d,
 		struct qbman_fd **fd,
@@ -1573,7 +1573,7 @@ static int qbman_swp_enqueue_multiple_desc_direct(struct qbman_swp *s,
 	return num_enqueued;
 }

-static int qbman_swp_enqueue_multiple_desc_cinh_direct(
+static int qbman_swp_enqueue_multiple_desc_cinh_read_direct(
 		struct qbman_swp *s,
 		const struct qbman_eq_desc *d,
 		const struct qbman_fd *fd,

From patchwork Mon Mar 2 14:58:23 2020
X-Patchwork-Id: 184082
From: Hemant Agrawal
To: ferruh.yigit@intel.com
Cc: dev@dpdk.org, Nipun Gupta
Date: Mon, 2 Mar 2020 20:28:23 +0530
Message-Id: <20200302145829.27808-11-hemant.agrawal@nxp.com>
In-Reply-To: <20200302145829.27808-1-hemant.agrawal@nxp.com>
References: <20200302145829.27808-1-hemant.agrawal@nxp.com>
Subject: [dpdk-dev] [PATCH 10/16] net/dpaa: return error on multiple mp config

From: Nipun Gupta

Multiple buffer pools are not supported on a single device, so return
an error when a second pool is configured on an interface.

Signed-off-by: Nipun Gupta
---
 drivers/mempool/dpaa/dpaa_mempool.c | 1 +
 drivers/net/dpaa/dpaa_ethdev.c      | 6 ++++++
 2 files changed, 7 insertions(+)

-- 
2.17.1

diff --git a/drivers/mempool/dpaa/dpaa_mempool.c b/drivers/mempool/dpaa/dpaa_mempool.c
index 3a2528331..00db6f9bc 100644
--- a/drivers/mempool/dpaa/dpaa_mempool.c
+++ b/drivers/mempool/dpaa/dpaa_mempool.c
@@ -132,6 +132,7 @@ dpaa_mbuf_free_pool(struct rte_mempool *mp)
 		DPAA_MEMPOOL_INFO("BMAN pool freed for bpid =%d",
 				  bp_info->bpid);
 		rte_free(mp->pool_data);
+		bp_info->bp = NULL;
 		mp->pool_data = NULL;
 	}
 }
diff --git a/drivers/net/dpaa/dpaa_ethdev.c b/drivers/net/dpaa/dpaa_ethdev.c
index fce9ce2fe..0384532d2 100644
--- a/drivers/net/dpaa/dpaa_ethdev.c
+++ b/drivers/net/dpaa/dpaa_ethdev.c
@@ -587,6 +587,12 @@ int dpaa_eth_rx_queue_setup(struct rte_eth_dev *dev, uint16_t queue_idx,
 	DPAA_PMD_INFO("Rx queue setup for queue index: %d fq_id (0x%x)",
 			queue_idx, rxq->fqid);

+	if (dpaa_intf->bp_info && dpaa_intf->bp_info->bp &&
+	    dpaa_intf->bp_info->mp != mp) {
+		DPAA_PMD_WARN("Multiple pools on same interface not supported");
+		return -EINVAL;
+	}
+
 	/* Max packet can fit in single buffer */
 	if (dev->data->dev_conf.rxmode.max_rx_pkt_len <= buffsz) {
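The guard this patch adds to dpaa_eth_rx_queue_setup() is small enough to state on its own. Below is a minimal, compilable sketch of the same logic; `struct bp_info`, `struct dpaa_if`, and the helper name `check_rx_pool` are simplified stand-ins invented for illustration, not the driver's real types:

```c
#include <assert.h>
#include <errno.h>
#include <stddef.h>

/* Hypothetical, simplified stand-ins for the driver structures. */
struct bp_info {
	void *bp;       /* backing BMAN pool; cleared to NULL on free */
	const void *mp; /* the mempool this pool info belongs to      */
};

struct dpaa_if {
	struct bp_info *bp_info; /* pool already attached, or NULL */
};

/*
 * Mirrors the check added to dpaa_eth_rx_queue_setup(): configuring a
 * second, different mempool on an interface that already has a live
 * pool attached is rejected with -EINVAL; the first pool, or re-using
 * the same pool, is accepted.
 */
static int check_rx_pool(const struct dpaa_if *intf, const void *mp)
{
	if (intf->bp_info && intf->bp_info->bp && intf->bp_info->mp != mp)
		return -EINVAL;
	return 0;
}
```

The extra `bp_info->bp` test matters because the first hunk makes dpaa_mbuf_free_pool() clear `bp` on free, so a stale `bp_info` left over from a released pool no longer blocks reconfiguration.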
From patchwork Mon Mar 2 14:58:24 2020
X-Patchwork-Id: 184083
From: Hemant Agrawal
To: ferruh.yigit@intel.com
Cc: dev@dpdk.org, Gagandeep Singh
Date: Mon, 2 Mar 2020 20:28:24 +0530
Message-Id: <20200302145829.27808-12-hemant.agrawal@nxp.com>
In-Reply-To: <20200302145829.27808-1-hemant.agrawal@nxp.com>
References: <20200302145829.27808-1-hemant.agrawal@nxp.com>
Subject: [dpdk-dev] [PATCH 11/16] net/dpaa: enable Tx queue taildrop

From: Gagandeep Singh
Enable congestion handling/tail drop for TX queues.

Signed-off-by: Gagandeep Singh
---
 drivers/bus/dpaa/base/qbman/qman.c        |  43 +++++++++
 drivers/bus/dpaa/include/fsl_qman.h       |  17 ++++
 drivers/bus/dpaa/rte_bus_dpaa_version.map |   7 ++
 drivers/net/dpaa/dpaa_ethdev.c            | 111 ++++++++++++++++++++--
 drivers/net/dpaa/dpaa_ethdev.h            |   2 +-
 drivers/net/dpaa/dpaa_rxtx.c              |  71 ++++++++++++++
 drivers/net/dpaa/dpaa_rxtx.h              |   3 +
 7 files changed, 247 insertions(+), 7 deletions(-)

-- 
2.17.1

diff --git a/drivers/bus/dpaa/base/qbman/qman.c b/drivers/bus/dpaa/base/qbman/qman.c
index b53eb9e5e..494aca1d0 100644
--- a/drivers/bus/dpaa/base/qbman/qman.c
+++ b/drivers/bus/dpaa/base/qbman/qman.c
@@ -40,6 +40,8 @@
 		spin_unlock(&__fq478->fqlock); \
 } while (0)

+static qman_cb_free_mbuf qman_free_mbuf_cb;
+
 static inline void fq_set(struct qman_fq *fq, u32 mask)
 {
 	dpaa_set_bits(mask, &fq->flags);
@@ -790,6 +792,47 @@ static inline void fq_state_change(struct qman_portal *p, struct qman_fq *fq,
 	FQUNLOCK(fq);
 }

+void
+qman_ern_register_cb(qman_cb_free_mbuf cb)
+{
+	qman_free_mbuf_cb = cb;
+}
+
+
+void
+qman_ern_poll_free(void)
+{
+	struct qman_portal *p = get_affine_portal();
+	u8 verb, num = 0;
+	const struct qm_mr_entry *msg;
+	const struct qm_fd *fd;
+	struct qm_mr_entry swapped_msg;
+
+	qm_mr_pvb_update(&p->p);
+	msg = qm_mr_current(&p->p);
+
+	while (msg != NULL) {
+		swapped_msg = *msg;
+		hw_fd_to_cpu(&swapped_msg.ern.fd);
+		verb = msg->ern.verb & QM_MR_VERB_TYPE_MASK;
+		fd = &swapped_msg.ern.fd;
+
+		if (unlikely(verb & 0x20)) {
+			printf("HW ERN notification, Nothing to do\n");
+		} else {
+			if ((fd->bpid & 0xff) != 0xff)
+				qman_free_mbuf_cb(fd);
+		}
+
+		num++;
+		qm_mr_next(&p->p);
+		qm_mr_pvb_update(&p->p);
+		msg = qm_mr_current(&p->p);
+	}
+
+	qm_mr_cci_consume(&p->p, num);
+}
+
 static u32 __poll_portal_slow(struct qman_portal *p, u32 is)
 {
 	const struct qm_mr_entry *msg;
diff --git a/drivers/bus/dpaa/include/fsl_qman.h b/drivers/bus/dpaa/include/fsl_qman.h
index 4deea5e75..80fdb3950 100644
--- a/drivers/bus/dpaa/include/fsl_qman.h
+++ b/drivers/bus/dpaa/include/fsl_qman.h
@@ -1152,6 +1152,10 @@ typedef void (*qman_cb_mr)(struct qman_portal *qm, struct qman_fq *fq,
 /* This callback type is used when handling DCP ERNs */
 typedef void (*qman_cb_dc_ern)(struct qman_portal *qm,
 			       const struct qm_mr_entry *msg);
+
+/* This callback function will be used to free mbufs of ERN */
+typedef uint16_t (*qman_cb_free_mbuf)(const struct qm_fd *fd);
+
 /*
  * s/w-visible states. Ie. tentatively scheduled + truly scheduled + active +
  * held-active + held-suspended are just "sched". Things like "retired" will not
@@ -1778,6 +1782,19 @@ int qman_enqueue(struct qman_fq *fq, const struct qm_fd *fd, u32 flags);
 int qman_enqueue_multi(struct qman_fq *fq, const struct qm_fd *fd, u32 *flags,
 		       int frames_to_send);

+/**
+ * qman_ern_poll_free - Polling on MR and calling a callback function to free
+ * mbufs when SW ERNs received.
+ */
+__rte_experimental
+void qman_ern_poll_free(void);
+
+/**
+ * qman_ern_register_cb - Register a callback function to free buffers.
+ */
+__rte_experimental
+void qman_ern_register_cb(qman_cb_free_mbuf cb);
+
 /**
  * qman_enqueue_multi_fq - Enqueue multiple frames to their respective frame
  * queues.
diff --git a/drivers/bus/dpaa/rte_bus_dpaa_version.map b/drivers/bus/dpaa/rte_bus_dpaa_version.map
index e6ca4361e..86f5811b0 100644
--- a/drivers/bus/dpaa/rte_bus_dpaa_version.map
+++ b/drivers/bus/dpaa/rte_bus_dpaa_version.map
@@ -94,3 +94,10 @@ DPDK_20.0 {

 	local: *;
 };
+
+EXPERIMENTAL {
+	global:
+
+	qman_ern_poll_free;
+	qman_ern_register_cb;
+};
diff --git a/drivers/net/dpaa/dpaa_ethdev.c b/drivers/net/dpaa/dpaa_ethdev.c
index 0384532d2..2ae79c9f5 100644
--- a/drivers/net/dpaa/dpaa_ethdev.c
+++ b/drivers/net/dpaa/dpaa_ethdev.c
@@ -1,7 +1,7 @@
 /* SPDX-License-Identifier: BSD-3-Clause
  *
  *   Copyright 2016 Freescale Semiconductor, Inc. All rights reserved.
- *   Copyright 2017-2019 NXP
+ *   Copyright 2017-2020 NXP
  *
  */
 /* System headers */
@@ -86,9 +86,12 @@ static int dpaa_push_mode_max_queue = DPAA_DEFAULT_PUSH_MODE_QUEUE;
 static int dpaa_push_queue_idx; /* Queue index which are in push mode*/

-/* Per FQ Taildrop in frame count */
+/* Per RX FQ Taildrop in frame count */
 static unsigned int td_threshold = CGR_RX_PERFQ_THRESH;

+/* Per TX FQ Taildrop in frame count, disabled by default */
+static unsigned int td_tx_threshold;
+
 struct rte_dpaa_xstats_name_off {
 	char name[RTE_ETH_XSTATS_NAME_SIZE];
 	uint32_t offset;
@@ -275,7 +278,11 @@ static int dpaa_eth_dev_start(struct rte_eth_dev *dev)
 	PMD_INIT_FUNC_TRACE();

 	/* Change tx callback to the real one */
-	dev->tx_pkt_burst = dpaa_eth_queue_tx;
+	if (dpaa_intf->cgr_tx)
+		dev->tx_pkt_burst = dpaa_eth_queue_tx_slow;
+	else
+		dev->tx_pkt_burst = dpaa_eth_queue_tx;
+
 	fman_if_enable_rx(dpaa_intf->fif);

 	return 0;
@@ -869,6 +876,7 @@ int dpaa_eth_tx_queue_setup(struct rte_eth_dev *dev, uint16_t queue_idx,
 	DPAA_PMD_INFO("Tx queue setup for queue index: %d fq_id (0x%x)",
 			queue_idx, dpaa_intf->tx_queues[queue_idx].fqid);
 	dev->data->tx_queues[queue_idx] = &dpaa_intf->tx_queues[queue_idx];
+
 	return 0;
 }
@@ -1239,9 +1247,19 @@ static int dpaa_rx_queue_init(struct qman_fq *fq, struct qman_cgr *cgr_rx,
 /* Initialise a Tx FQ */
 static int dpaa_tx_queue_init(struct qman_fq *fq,
-			      struct fman_if *fman_intf)
+			      struct fman_if *fman_intf,
+			      struct qman_cgr *cgr_tx)
 {
 	struct qm_mcc_initfq opts = {0};
+	struct qm_mcc_initcgr cgr_opts = {
+		.we_mask = QM_CGR_WE_CS_THRES |
+			   QM_CGR_WE_CSTD_EN |
+			   QM_CGR_WE_MODE,
+		.cgr = {
+			.cstd_en = QM_CGR_EN,
+			.mode = QMAN_CGR_MODE_FRAME
+		}
+	};
 	int ret;

 	ret = qman_create_fq(0, QMAN_FQ_FLAG_DYNAMIC_FQID |
@@ -1260,6 +1278,27 @@ static int dpaa_tx_queue_init(struct qman_fq *fq,
 	opts.fqd.context_a.hi = 0x80000000 | fman_dealloc_bufs_mask_hi;
 	opts.fqd.context_a.lo = 0 | fman_dealloc_bufs_mask_lo;
 	DPAA_PMD_DEBUG("init tx fq %p, fqid 0x%x", fq, fq->fqid);
+
+	if (cgr_tx) {
+		/* Enable tail drop with cgr on this queue */
+		qm_cgr_cs_thres_set64(&cgr_opts.cgr.cs_thres,
+				      td_tx_threshold, 0);
+		cgr_tx->cb = NULL;
+		ret = qman_create_cgr(cgr_tx, QMAN_CGR_FLAG_USE_INIT,
+				      &cgr_opts);
+		if (ret) {
+			DPAA_PMD_WARN(
+				"rx taildrop init fail on rx fqid 0x%x(ret=%d)",
+				fq->fqid, ret);
+			goto without_cgr;
+		}
+		opts.we_mask |= QM_INITFQ_WE_CGID;
+		opts.fqd.cgid = cgr_tx->cgrid;
+		opts.fqd.fq_ctrl |= QM_FQCTRL_CGE;
+		DPAA_PMD_DEBUG("Tx FQ tail drop enabled, threshold = %d\n",
+			       td_tx_threshold);
+	}
+without_cgr:
 	ret = qman_init_fq(fq, QMAN_INITFQ_FLAG_SCHED, &opts);
 	if (ret)
 		DPAA_PMD_ERR("init tx fqid 0x%x failed %d", fq->fqid, ret);
@@ -1312,6 +1351,7 @@ dpaa_dev_init(struct rte_eth_dev *eth_dev)
 	struct fman_if *fman_intf;
 	struct fman_if_bpool *bp, *tmp_bp;
 	uint32_t cgrid[DPAA_MAX_NUM_PCD_QUEUES];
+	uint32_t cgrid_tx[MAX_DPAA_CORES];

 	PMD_INIT_FUNC_TRACE();

@@ -1321,7 +1361,10 @@ dpaa_dev_init(struct rte_eth_dev *eth_dev)
 	eth_dev->dev_ops = &dpaa_devops;
 	/* Plugging of UCODE burst API not supported in Secondary */
 	eth_dev->rx_pkt_burst = dpaa_eth_queue_rx;
-	eth_dev->tx_pkt_burst = dpaa_eth_queue_tx;
+	if (dpaa_intf->cgr_tx)
+		eth_dev->tx_pkt_burst = dpaa_eth_queue_tx_slow;
+	else
+		eth_dev->tx_pkt_burst = dpaa_eth_queue_tx;
 #ifdef CONFIG_FSL_QMAN_FQ_LOOKUP
 	qman_set_fq_lookup_table(
 		dpaa_intf->rx_queues->qman_fq_lookup_table);
@@ -1368,6 +1411,21 @@ dpaa_dev_init(struct rte_eth_dev *eth_dev)
 		return -ENOMEM;
 	}

+	memset(cgrid, 0, sizeof(cgrid));
+	memset(cgrid_tx, 0, sizeof(cgrid_tx));
+
+	/* if DPAA_TX_TAILDROP_THRESHOLD is set, use that value; if 0, it means
+	 * Tx tail drop is disabled.
+	 */
+	if (getenv("DPAA_TX_TAILDROP_THRESHOLD")) {
+		td_tx_threshold = atoi(getenv("DPAA_TX_TAILDROP_THRESHOLD"));
+		DPAA_PMD_DEBUG("Tail drop threshold env configured: %u",
+			       td_tx_threshold);
+		/* if a very large value is being configured */
+		if (td_tx_threshold > UINT16_MAX)
+			td_tx_threshold = CGR_RX_PERFQ_THRESH;
+	}
+
 	/* If congestion control is enabled globally*/
 	if (td_threshold) {
 		dpaa_intf->cgr_rx = rte_zmalloc(NULL,
@@ -1416,9 +1474,36 @@ dpaa_dev_init(struct rte_eth_dev *eth_dev)
 		goto free_rx;
 	}

+	/* If congestion control is enabled globally*/
+	if (td_tx_threshold) {
+		dpaa_intf->cgr_tx = rte_zmalloc(NULL,
+			sizeof(struct qman_cgr) * MAX_DPAA_CORES,
+			MAX_CACHELINE);
+		if (!dpaa_intf->cgr_tx) {
+			DPAA_PMD_ERR("Failed to alloc mem for cgr_tx\n");
+			ret = -ENOMEM;
+			goto free_rx;
+		}
+
+		ret = qman_alloc_cgrid_range(&cgrid_tx[0], MAX_DPAA_CORES,
+					     1, 0);
+		if (ret != MAX_DPAA_CORES) {
+			DPAA_PMD_WARN("insufficient CGRIDs available");
+			ret = -EINVAL;
+			goto free_rx;
+		}
+	} else {
+		dpaa_intf->cgr_tx = NULL;
+	}
+
 	for (loop = 0; loop < MAX_DPAA_CORES; loop++) {
+		if (dpaa_intf->cgr_tx)
+			dpaa_intf->cgr_tx[loop].cgrid = cgrid_tx[loop];
+
 		ret = dpaa_tx_queue_init(&dpaa_intf->tx_queues[loop],
-					 fman_intf);
+			fman_intf,
+			dpaa_intf->cgr_tx ? &dpaa_intf->cgr_tx[loop] : NULL);
 		if (ret)
 			goto free_tx;
 		dpaa_intf->tx_queues[loop].dpaa_intf = dpaa_intf;
@@ -1495,6 +1580,7 @@ dpaa_dev_init(struct rte_eth_dev *eth_dev)

 free_rx:
 	rte_free(dpaa_intf->cgr_rx);
+	rte_free(dpaa_intf->cgr_tx);
 	rte_free(dpaa_intf->rx_queues);
 	dpaa_intf->rx_queues = NULL;
 	dpaa_intf->nb_rx_queues = 0;
@@ -1535,6 +1621,17 @@ dpaa_dev_uninit(struct rte_eth_dev *dev)
 	rte_free(dpaa_intf->cgr_rx);
 	dpaa_intf->cgr_rx = NULL;

+	/* Release TX congestion Groups */
+	if (dpaa_intf->cgr_tx) {
+		for (loop = 0; loop < MAX_DPAA_CORES; loop++)
+			qman_delete_cgr(&dpaa_intf->cgr_tx[loop]);
+
+		qman_release_cgrid_range(dpaa_intf->cgr_tx[loop].cgrid,
+					 MAX_DPAA_CORES);
+		rte_free(dpaa_intf->cgr_tx);
+		dpaa_intf->cgr_tx = NULL;
+	}
+
 	rte_free(dpaa_intf->rx_queues);
 	dpaa_intf->rx_queues = NULL;

@@ -1640,6 +1737,8 @@ rte_dpaa_probe(struct rte_dpaa_driver *dpaa_drv __rte_unused,
 	eth_dev->device = &dpaa_dev->device;
 	dpaa_dev->eth_dev = eth_dev;

+	qman_ern_register_cb(dpaa_free_mbuf);
+
 	/* Invoke PMD device initialization function */
 	diag = dpaa_dev_init(eth_dev);
 	if (diag == 0) {
diff --git a/drivers/net/dpaa/dpaa_ethdev.h b/drivers/net/dpaa/dpaa_ethdev.h
index da06f1faa..3eab029fd 100644
--- a/drivers/net/dpaa/dpaa_ethdev.h
+++ b/drivers/net/dpaa/dpaa_ethdev.h
@@ -110,6 +110,7 @@ struct dpaa_if {
 	struct qman_fq *rx_queues;
 	struct qman_cgr *cgr_rx;
 	struct qman_fq *tx_queues;
+	struct qman_cgr *cgr_tx;
 	struct qman_fq debug_queues[2];
 	uint16_t nb_rx_queues;
 	uint16_t nb_tx_queues;
@@ -181,5 +182,4 @@ dpaa_rx_cb_atomic(void *event,
 		  struct qman_fq *fq,
 		  const struct qm_dqrr_entry *dqrr,
 		  void **bufs);
-
 #endif
diff --git a/drivers/net/dpaa/dpaa_rxtx.c b/drivers/net/dpaa/dpaa_rxtx.c
index 5dba1db8b..5dedff25e 100644
--- a/drivers/net/dpaa/dpaa_rxtx.c
+++ b/drivers/net/dpaa/dpaa_rxtx.c
@@ -398,6 +398,69 @@ dpaa_eth_fd_to_mbuf(const struct qm_fd *fd, uint32_t ifid)
 	return mbuf;
 }

+uint16_t
+dpaa_free_mbuf(const struct qm_fd *fd)
+{
+	struct rte_mbuf *mbuf;
+	struct dpaa_bp_info *bp_info;
+	uint8_t format;
+	void *ptr;
+
+	bp_info = DPAA_BPID_TO_POOL_INFO(fd->bpid);
+	format = (fd->opaque & DPAA_FD_FORMAT_MASK) >> DPAA_FD_FORMAT_SHIFT;
+	if (unlikely(format == qm_fd_sg)) {
+		struct rte_mbuf *first_seg, *prev_seg, *cur_seg, *temp;
+		struct qm_sg_entry *sgt, *sg_temp;
+		void *vaddr, *sg_vaddr;
+		int i = 0;
+		uint16_t fd_offset = fd->offset;
+
+		vaddr = DPAA_MEMPOOL_PTOV(bp_info, qm_fd_addr(fd));
+		if (!vaddr) {
+			DPAA_PMD_ERR("unable to convert physical address");
+			return -1;
+		}
+		sgt = vaddr + fd_offset;
+		sg_temp = &sgt[i++];
+		hw_sg_to_cpu(sg_temp);
+		temp = (struct rte_mbuf *)
+			((char *)vaddr - bp_info->meta_data_size);
+		sg_vaddr = DPAA_MEMPOOL_PTOV(bp_info,
+						qm_sg_entry_get64(sg_temp));
+
+		first_seg = (struct rte_mbuf *)((char *)sg_vaddr -
+						bp_info->meta_data_size);
+		first_seg->nb_segs = 1;
+		prev_seg = first_seg;
+		while (i < DPAA_SGT_MAX_ENTRIES) {
+			sg_temp = &sgt[i++];
+			hw_sg_to_cpu(sg_temp);
+			sg_vaddr = DPAA_MEMPOOL_PTOV(bp_info,
+					qm_sg_entry_get64(sg_temp));
+			cur_seg = (struct rte_mbuf *)((char *)sg_vaddr -
+						      bp_info->meta_data_size);
+			first_seg->nb_segs += 1;
+			prev_seg->next = cur_seg;
+			if (sg_temp->final) {
+				cur_seg->next = NULL;
+				break;
+			}
+			prev_seg = cur_seg;
+		}
+
+		rte_pktmbuf_free_seg(temp);
+		rte_pktmbuf_free_seg(first_seg);
+		return 0;
+	}
+
+	ptr = DPAA_MEMPOOL_PTOV(bp_info, qm_fd_addr(fd));
+	mbuf = (struct rte_mbuf *)((char *)ptr - bp_info->meta_data_size);
+
+	rte_pktmbuf_free(mbuf);
+
+	return 0;
+}
+
 /* Specific for LS1043 */
 void
 dpaa_rx_cb_no_prefetch(struct qman_fq **fq, struct qm_dqrr_entry **dqrr,
@@ -1011,6 +1074,14 @@ dpaa_eth_queue_tx(void *q, struct rte_mbuf **bufs, uint16_t nb_bufs)
 	return sent;
 }

+uint16_t
+dpaa_eth_queue_tx_slow(void *q, struct rte_mbuf **bufs, uint16_t nb_bufs)
+{
+	qman_ern_poll_free();
+
+	return dpaa_eth_queue_tx(q, bufs, nb_bufs);
+}
+
 uint16_t dpaa_eth_tx_drop_all(void *q __rte_unused,
 			      struct rte_mbuf **bufs __rte_unused,
 			      uint16_t nb_bufs __rte_unused)
diff --git a/drivers/net/dpaa/dpaa_rxtx.h b/drivers/net/dpaa/dpaa_rxtx.h
index 75b093c1e..d41add704 100644
--- a/drivers/net/dpaa/dpaa_rxtx.h
+++ b/drivers/net/dpaa/dpaa_rxtx.h
@@ -254,6 +254,8 @@ struct annotations_t {
 uint16_t dpaa_eth_queue_rx(void *q, struct rte_mbuf **bufs, uint16_t nb_bufs);

+uint16_t dpaa_eth_queue_tx_slow(void *q, struct rte_mbuf **bufs,
+				uint16_t nb_bufs);
 uint16_t dpaa_eth_queue_tx(void *q, struct rte_mbuf **bufs, uint16_t nb_bufs);

 uint16_t dpaa_eth_tx_drop_all(void *q __rte_unused,
@@ -266,6 +268,7 @@ int dpaa_eth_mbuf_to_sg_fd(struct rte_mbuf *mbuf,
 			   struct qm_fd *fd,
 			   uint32_t bpid);

+uint16_t dpaa_free_mbuf(const struct qm_fd *fd);
 void dpaa_rx_cb(struct qman_fq **fq,
 		struct qm_dqrr_entry **dqrr, void **bufs, int num_bufs);

From patchwork Mon Mar 2 14:58:25 2020
X-Patchwork-Id: 184085
From: Hemant Agrawal
To: ferruh.yigit@intel.com
Cc: dev@dpdk.org, Sachin Saxena, Gagandeep Singh
Date: Mon, 2 Mar 2020 20:28:25 +0530
Message-Id: <20200302145829.27808-13-hemant.agrawal@nxp.com>
In-Reply-To: <20200302145829.27808-1-hemant.agrawal@nxp.com>
References: <20200302145829.27808-1-hemant.agrawal@nxp.com>
Subject: [dpdk-dev] [PATCH 12/16] net/dpaa: add 2.5G support

From: Sachin Saxena

Handle 2.5Gbps ethernet ports as well.

Signed-off-by: Sachin Saxena
Signed-off-by: Gagandeep Singh
---
 drivers/bus/dpaa/base/fman/fman.c         | 6 ++++--
 drivers/bus/dpaa/base/fman/netcfg_layer.c | 3 ++-
 drivers/bus/dpaa/include/fman.h           | 1 +
 drivers/net/dpaa/dpaa_ethdev.c            | 9 ++++++++-
 4 files changed, 15 insertions(+), 4 deletions(-)

-- 
2.17.1

diff --git a/drivers/bus/dpaa/base/fman/fman.c b/drivers/bus/dpaa/base/fman/fman.c
index 6d77a7e39..ae26041ca 100644
--- a/drivers/bus/dpaa/base/fman/fman.c
+++ b/drivers/bus/dpaa/base/fman/fman.c
@@ -263,7 +263,7 @@ fman_if_init(const struct device_node *dpa_node)
 		fman_dealloc_bufs_mask_hi = 0;
 		fman_dealloc_bufs_mask_lo = 0;
 	}
-	/* Is the MAC node 1G, 10G? */
+	/* Is the MAC node 1G, 2.5G, 10G? */
 	__if->__if.is_memac = 0;

 	if (of_device_is_compatible(mac_node, "fsl,fman-1g-mac"))
@@ -279,7 +279,9 @@ fman_if_init(const struct device_node *dpa_node)
 			/* Right now forcing memac to 1g in case of error*/
 			__if->__if.mac_type = fman_mac_1g;
 		} else {
-			if (strstr(char_prop, "sgmii"))
+			if (strstr(char_prop, "sgmii-2500"))
+				__if->__if.mac_type = fman_mac_2_5g;
+			else if (strstr(char_prop, "sgmii"))
 				__if->__if.mac_type = fman_mac_1g;
 			else if (strstr(char_prop, "rgmii")) {
 				__if->__if.mac_type = fman_mac_1g;
diff --git a/drivers/bus/dpaa/base/fman/netcfg_layer.c b/drivers/bus/dpaa/base/fman/netcfg_layer.c
index 36eca88cd..b7009f229 100644
--- a/drivers/bus/dpaa/base/fman/netcfg_layer.c
+++ b/drivers/bus/dpaa/base/fman/netcfg_layer.c
@@ -44,7 +44,8 @@ dump_netcfg(struct netcfg_info *cfg_ptr)
 		printf("\n+ Fman %d, MAC %d (%s);\n",
 		       __if->fman_idx, __if->mac_idx,
-		       (__if->mac_type == fman_mac_1g) ? "1G" : "10G");
+		       (__if->mac_type == fman_mac_1g) ? "1G" :
+		       (__if->mac_type == fman_mac_2_5g) ? "2.5G" : "10G");

 		printf("\tmac_addr: %02x:%02x:%02x:%02x:%02x:%02x\n",
 		       (&__if->mac_addr)->addr_bytes[0],
diff --git a/drivers/bus/dpaa/include/fman.h b/drivers/bus/dpaa/include/fman.h
index c02d32d22..12e598b2d 100644
--- a/drivers/bus/dpaa/include/fman.h
+++ b/drivers/bus/dpaa/include/fman.h
@@ -71,6 +71,7 @@ TAILQ_HEAD(rte_fman_if_list, __fman_if);
 enum fman_mac_type {
 	fman_offline = 0,
 	fman_mac_1g,
+	fman_mac_2_5g,
 	fman_mac_10g,
 };

diff --git a/drivers/net/dpaa/dpaa_ethdev.c b/drivers/net/dpaa/dpaa_ethdev.c
index 2ae79c9f5..1d23fc674 100644
--- a/drivers/net/dpaa/dpaa_ethdev.c
+++ b/drivers/net/dpaa/dpaa_ethdev.c
@@ -356,8 +356,13 @@ static int dpaa_eth_dev_info(struct rte_eth_dev *dev,

 	if (dpaa_intf->fif->mac_type == fman_mac_1g) {
 		dev_info->speed_capa = ETH_LINK_SPEED_1G;
+	} else if (dpaa_intf->fif->mac_type == fman_mac_2_5g) {
+		dev_info->speed_capa = ETH_LINK_SPEED_1G
+					| ETH_LINK_SPEED_2_5G;
 	} else if (dpaa_intf->fif->mac_type == fman_mac_10g) {
-		dev_info->speed_capa = (ETH_LINK_SPEED_1G | ETH_LINK_SPEED_10G);
+		dev_info->speed_capa = ETH_LINK_SPEED_1G
+					| ETH_LINK_SPEED_2_5G
+					| ETH_LINK_SPEED_10G;
 	} else {
 		DPAA_PMD_ERR("invalid link_speed: %s, %d",
 			     dpaa_intf->name, dpaa_intf->fif->mac_type);
@@ -384,6 +389,8 @@ static int dpaa_eth_link_update(struct rte_eth_dev *dev,

 	if (dpaa_intf->fif->mac_type == fman_mac_1g)
 		link->link_speed = ETH_SPEED_NUM_1G;
+	else if (dpaa_intf->fif->mac_type == fman_mac_2_5g)
+		link->link_speed = ETH_SPEED_NUM_2_5G;
 	else if (dpaa_intf->fif->mac_type == fman_mac_10g)
 		link->link_speed = ETH_SPEED_NUM_10G;
 	else

From patchwork Mon Mar 2 14:58:26 2020
X-Patchwork-Id: 184084
nLatcrI5VnoygiE6TJsTfIQE40vnr1chKT5lIf9SX1QkO5QGsfAh1lq1qmuy7DkJ4mHO wYAbzZpn1gcR4RHp1rljbVVyLUYKT++vsVdnkdfEYAnMrb0NdaGhtAZOHp9/JEA1qIgj 8i/KQqWAUAlgDcbeZ2X7OvOiO2PivrFP+ANb9U/+jVLR0qKdI88u3GyizX6GpRaA1ykm aG7ixx8mP/9CMi9xGJnPwTj2NmtXtlGvlciRkx70pKiXAobkR4HXKiTf/6q2I67t8fLJ MA3g== ARC-Authentication-Results: i=1; mx.google.com; spf=pass (google.com: domain of dev-bounces@dpdk.org designates 92.243.14.124 as permitted sender) smtp.mailfrom=dev-bounces@dpdk.org; dmarc=fail (p=NONE sp=NONE dis=NONE) header.from=nxp.com Return-Path: Received: from dpdk.org (dpdk.org. [92.243.14.124]) by mx.google.com with ESMTP id f3si4805085edn.60.2020.03.02.01.28.46; Mon, 02 Mar 2020 01:28:46 -0800 (PST) Received-SPF: pass (google.com: domain of dev-bounces@dpdk.org designates 92.243.14.124 as permitted sender) client-ip=92.243.14.124; Authentication-Results: mx.google.com; spf=pass (google.com: domain of dev-bounces@dpdk.org designates 92.243.14.124 as permitted sender) smtp.mailfrom=dev-bounces@dpdk.org; dmarc=fail (p=NONE sp=NONE dis=NONE) header.from=nxp.com Received: from [92.243.14.124] (localhost [127.0.0.1]) by dpdk.org (Postfix) with ESMTP id D4CE71C0DA; Mon, 2 Mar 2020 10:26:52 +0100 (CET) Received: from inva021.nxp.com (inva021.nxp.com [92.121.34.21]) by dpdk.org (Postfix) with ESMTP id 277601C0B0 for ; Mon, 2 Mar 2020 10:26:45 +0100 (CET) Received: from inva021.nxp.com (localhost [127.0.0.1]) by inva021.eu-rdc02.nxp.com (Postfix) with ESMTP id EE01A200F55; Mon, 2 Mar 2020 10:26:44 +0100 (CET) Received: from invc005.ap-rdc01.nxp.com (invc005.ap-rdc01.nxp.com [165.114.16.14]) by inva021.eu-rdc02.nxp.com (Postfix) with ESMTP id 9D4D8200F22; Mon, 2 Mar 2020 10:26:42 +0100 (CET) Received: from bf-netperf1.ap.com (bf-netperf1.ap.freescale.net [10.232.133.63]) by invc005.ap-rdc01.nxp.com (Postfix) with ESMTP id D2B3A402F3; Mon, 2 Mar 2020 17:26:39 +0800 (SGT) From: Hemant Agrawal To: ferruh.yigit@intel.com Cc: dev@dpdk.org, Nipun Gupta Date: Mon, 2 Mar 2020 20:28:26 +0530 
Message-Id: <20200302145829.27808-14-hemant.agrawal@nxp.com> X-Mailer: git-send-email 2.17.1 In-Reply-To: <20200302145829.27808-1-hemant.agrawal@nxp.com> References: <20200302145829.27808-1-hemant.agrawal@nxp.com> X-Virus-Scanned: ClamAV using ClamSMTP Subject: [dpdk-dev] [PATCH 13/16] net/dpaa: update process specific device info X-BeenThere: dev@dpdk.org X-Mailman-Version: 2.1.15 Precedence: list List-Id: DPDK patches and discussions List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , Errors-To: dev-bounces@dpdk.org Sender: "dev" From: Nipun Gupta For DPAA devices the memory maps stored in the FMAN interface information is per process. Store them in the device process specific area. Signed-off-by: Nipun Gupta --- drivers/net/dpaa/dpaa_ethdev.c | 207 ++++++++++++++++----------------- drivers/net/dpaa/dpaa_ethdev.h | 1 - 2 files changed, 102 insertions(+), 106 deletions(-) -- 2.17.1 diff --git a/drivers/net/dpaa/dpaa_ethdev.c b/drivers/net/dpaa/dpaa_ethdev.c index 1d23fc674..abe247acd 100644 --- a/drivers/net/dpaa/dpaa_ethdev.c +++ b/drivers/net/dpaa/dpaa_ethdev.c @@ -149,7 +149,6 @@ dpaa_poll_queue_default_config(struct qm_mcc_initfq *opts) static int dpaa_mtu_set(struct rte_eth_dev *dev, uint16_t mtu) { - struct dpaa_if *dpaa_intf = dev->data->dev_private; uint32_t frame_size = mtu + RTE_ETHER_HDR_LEN + RTE_ETHER_CRC_LEN + VLAN_TAG_SIZE; uint32_t buffsz = dev->data->min_rx_buf_size - RTE_PKTMBUF_HEADROOM; @@ -185,7 +184,7 @@ dpaa_mtu_set(struct rte_eth_dev *dev, uint16_t mtu) dev->data->dev_conf.rxmode.max_rx_pkt_len = frame_size; - fman_if_set_maxfrm(dpaa_intf->fif, frame_size); + fman_if_set_maxfrm(dev->process_private, frame_size); return 0; } @@ -193,7 +192,6 @@ dpaa_mtu_set(struct rte_eth_dev *dev, uint16_t mtu) static int dpaa_eth_dev_configure(struct rte_eth_dev *dev) { - struct dpaa_if *dpaa_intf = dev->data->dev_private; struct rte_eth_conf *eth_conf = &dev->data->dev_conf; uint64_t rx_offloads = eth_conf->rxmode.offloads; uint64_t 
tx_offloads = eth_conf->txmode.offloads; @@ -232,14 +230,14 @@ dpaa_eth_dev_configure(struct rte_eth_dev *dev) max_len = DPAA_MAX_RX_PKT_LEN; } - fman_if_set_maxfrm(dpaa_intf->fif, max_len); + fman_if_set_maxfrm(dev->process_private, max_len); dev->data->mtu = max_len - RTE_ETHER_HDR_LEN - RTE_ETHER_CRC_LEN - VLAN_TAG_SIZE; } if (rx_offloads & DEV_RX_OFFLOAD_SCATTER) { DPAA_PMD_DEBUG("enabling scatter mode"); - fman_if_set_sg(dpaa_intf->fif, 1); + fman_if_set_sg(dev->process_private, 1); dev->data->scattered_rx = 1; } @@ -283,18 +281,18 @@ static int dpaa_eth_dev_start(struct rte_eth_dev *dev) else dev->tx_pkt_burst = dpaa_eth_queue_tx; - fman_if_enable_rx(dpaa_intf->fif); + fman_if_enable_rx(dev->process_private); return 0; } static void dpaa_eth_dev_stop(struct rte_eth_dev *dev) { - struct dpaa_if *dpaa_intf = dev->data->dev_private; + struct fman_if *fif = dev->process_private; PMD_INIT_FUNC_TRACE(); - fman_if_disable_rx(dpaa_intf->fif); + fman_if_disable_rx(fif); dev->tx_pkt_burst = dpaa_eth_tx_drop_all; } @@ -342,6 +340,7 @@ static int dpaa_eth_dev_info(struct rte_eth_dev *dev, struct rte_eth_dev_info *dev_info) { struct dpaa_if *dpaa_intf = dev->data->dev_private; + struct fman_if *fif = dev->process_private; DPAA_PMD_DEBUG(": %s", dpaa_intf->name); @@ -354,18 +353,18 @@ static int dpaa_eth_dev_info(struct rte_eth_dev *dev, dev_info->max_vmdq_pools = ETH_16_POOLS; dev_info->flow_type_rss_offloads = DPAA_RSS_OFFLOAD_ALL; - if (dpaa_intf->fif->mac_type == fman_mac_1g) { + if (fif->mac_type == fman_mac_1g) { dev_info->speed_capa = ETH_LINK_SPEED_1G; - } else if (dpaa_intf->fif->mac_type == fman_mac_2_5g) { + } else if (fif->mac_type == fman_mac_2_5g) { dev_info->speed_capa = ETH_LINK_SPEED_1G | ETH_LINK_SPEED_2_5G; - } else if (dpaa_intf->fif->mac_type == fman_mac_10g) { + } else if (fif->mac_type == fman_mac_10g) { dev_info->speed_capa = ETH_LINK_SPEED_1G | ETH_LINK_SPEED_2_5G | ETH_LINK_SPEED_10G; } else { DPAA_PMD_ERR("invalid link_speed: %s, %d", - 
dpaa_intf->name, dpaa_intf->fif->mac_type); + dpaa_intf->name, fif->mac_type); return -EINVAL; } @@ -384,18 +383,19 @@ static int dpaa_eth_link_update(struct rte_eth_dev *dev, { struct dpaa_if *dpaa_intf = dev->data->dev_private; struct rte_eth_link *link = &dev->data->dev_link; + struct fman_if *fif = dev->process_private; PMD_INIT_FUNC_TRACE(); - if (dpaa_intf->fif->mac_type == fman_mac_1g) + if (fif->mac_type == fman_mac_1g) link->link_speed = ETH_SPEED_NUM_1G; - else if (dpaa_intf->fif->mac_type == fman_mac_2_5g) + else if (fif->mac_type == fman_mac_2_5g) link->link_speed = ETH_SPEED_NUM_2_5G; - else if (dpaa_intf->fif->mac_type == fman_mac_10g) + else if (fif->mac_type == fman_mac_10g) link->link_speed = ETH_SPEED_NUM_10G; else DPAA_PMD_ERR("invalid link_speed: %s, %d", - dpaa_intf->name, dpaa_intf->fif->mac_type); + dpaa_intf->name, fif->mac_type); link->link_status = dpaa_intf->valid; link->link_duplex = ETH_LINK_FULL_DUPLEX; @@ -406,21 +406,17 @@ static int dpaa_eth_link_update(struct rte_eth_dev *dev, static int dpaa_eth_stats_get(struct rte_eth_dev *dev, struct rte_eth_stats *stats) { - struct dpaa_if *dpaa_intf = dev->data->dev_private; - PMD_INIT_FUNC_TRACE(); - fman_if_stats_get(dpaa_intf->fif, stats); + fman_if_stats_get(dev->process_private, stats); return 0; } static int dpaa_eth_stats_reset(struct rte_eth_dev *dev) { - struct dpaa_if *dpaa_intf = dev->data->dev_private; - PMD_INIT_FUNC_TRACE(); - fman_if_stats_reset(dpaa_intf->fif); + fman_if_stats_reset(dev->process_private); return 0; } @@ -429,7 +425,6 @@ static int dpaa_dev_xstats_get(struct rte_eth_dev *dev, struct rte_eth_xstat *xstats, unsigned int n) { - struct dpaa_if *dpaa_intf = dev->data->dev_private; unsigned int i = 0, num = RTE_DIM(dpaa_xstats_strings); uint64_t values[sizeof(struct dpaa_if_stats) / 8]; @@ -439,7 +434,7 @@ dpaa_dev_xstats_get(struct rte_eth_dev *dev, struct rte_eth_xstat *xstats, if (xstats == NULL) return 0; - fman_if_stats_get_all(dpaa_intf->fif, values, + 
fman_if_stats_get_all(dev->process_private, values, sizeof(struct dpaa_if_stats) / 8); for (i = 0; i < num; i++) { @@ -476,15 +471,13 @@ dpaa_xstats_get_by_id(struct rte_eth_dev *dev, const uint64_t *ids, uint64_t values_copy[sizeof(struct dpaa_if_stats) / 8]; if (!ids) { - struct dpaa_if *dpaa_intf = dev->data->dev_private; - if (n < stat_cnt) return stat_cnt; if (!values) return 0; - fman_if_stats_get_all(dpaa_intf->fif, values_copy, + fman_if_stats_get_all(dev->process_private, values_copy, sizeof(struct dpaa_if_stats) / 8); for (i = 0; i < stat_cnt; i++) @@ -533,44 +526,36 @@ dpaa_xstats_get_names_by_id( static int dpaa_eth_promiscuous_enable(struct rte_eth_dev *dev) { - struct dpaa_if *dpaa_intf = dev->data->dev_private; - PMD_INIT_FUNC_TRACE(); - fman_if_promiscuous_enable(dpaa_intf->fif); + fman_if_promiscuous_enable(dev->process_private); return 0; } static int dpaa_eth_promiscuous_disable(struct rte_eth_dev *dev) { - struct dpaa_if *dpaa_intf = dev->data->dev_private; - PMD_INIT_FUNC_TRACE(); - fman_if_promiscuous_disable(dpaa_intf->fif); + fman_if_promiscuous_disable(dev->process_private); return 0; } static int dpaa_eth_multicast_enable(struct rte_eth_dev *dev) { - struct dpaa_if *dpaa_intf = dev->data->dev_private; - PMD_INIT_FUNC_TRACE(); - fman_if_set_mcast_filter_table(dpaa_intf->fif); + fman_if_set_mcast_filter_table(dev->process_private); return 0; } static int dpaa_eth_multicast_disable(struct rte_eth_dev *dev) { - struct dpaa_if *dpaa_intf = dev->data->dev_private; - PMD_INIT_FUNC_TRACE(); - fman_if_reset_mcast_filter_table(dpaa_intf->fif); + fman_if_reset_mcast_filter_table(dev->process_private); return 0; } @@ -583,6 +568,7 @@ int dpaa_eth_rx_queue_setup(struct rte_eth_dev *dev, uint16_t queue_idx, struct rte_mempool *mp) { struct dpaa_if *dpaa_intf = dev->data->dev_private; + struct fman_if *fif = dev->process_private; struct qman_fq *rxq = &dpaa_intf->rx_queues[queue_idx]; struct qm_mcc_initfq opts = {0}; u32 flags = 0; @@ -645,22 +631,22 @@ 
int dpaa_eth_rx_queue_setup(struct rte_eth_dev *dev, uint16_t queue_idx, icp.iciof = DEFAULT_ICIOF; icp.iceof = DEFAULT_RX_ICEOF; icp.icsz = DEFAULT_ICSZ; - fman_if_set_ic_params(dpaa_intf->fif, &icp); + fman_if_set_ic_params(fif, &icp); fd_offset = RTE_PKTMBUF_HEADROOM + DPAA_HW_BUF_RESERVE; - fman_if_set_fdoff(dpaa_intf->fif, fd_offset); + fman_if_set_fdoff(fif, fd_offset); /* Buffer pool size should be equal to Dataroom Size*/ bp_size = rte_pktmbuf_data_room_size(mp); - fman_if_set_bp(dpaa_intf->fif, mp->size, + fman_if_set_bp(fif, mp->size, dpaa_intf->bp_info->bpid, bp_size); dpaa_intf->valid = 1; DPAA_PMD_DEBUG("if:%s fd_offset = %d offset = %d", dpaa_intf->name, fd_offset, - fman_if_get_fdoff(dpaa_intf->fif)); + fman_if_get_fdoff(fif)); } DPAA_PMD_DEBUG("if:%s sg_on = %d, max_frm =%d", dpaa_intf->name, - fman_if_get_sg_enable(dpaa_intf->fif), + fman_if_get_sg_enable(fif), dev->data->dev_conf.rxmode.max_rx_pkt_len); /* checking if push mode only, no error check for now */ if (!rxq->is_static && @@ -952,11 +938,12 @@ dpaa_flow_ctrl_set(struct rte_eth_dev *dev, return 0; } else if (fc_conf->mode == RTE_FC_TX_PAUSE || fc_conf->mode == RTE_FC_FULL) { - fman_if_set_fc_threshold(dpaa_intf->fif, fc_conf->high_water, + fman_if_set_fc_threshold(dev->process_private, + fc_conf->high_water, fc_conf->low_water, - dpaa_intf->bp_info->bpid); + dpaa_intf->bp_info->bpid); if (fc_conf->pause_time) - fman_if_set_fc_quanta(dpaa_intf->fif, + fman_if_set_fc_quanta(dev->process_private, fc_conf->pause_time); } @@ -992,10 +979,11 @@ dpaa_flow_ctrl_get(struct rte_eth_dev *dev, fc_conf->autoneg = net_fc->autoneg; return 0; } - ret = fman_if_get_fc_threshold(dpaa_intf->fif); + ret = fman_if_get_fc_threshold(dev->process_private); if (ret) { fc_conf->mode = RTE_FC_TX_PAUSE; - fc_conf->pause_time = fman_if_get_fc_quanta(dpaa_intf->fif); + fc_conf->pause_time = + fman_if_get_fc_quanta(dev->process_private); } else { fc_conf->mode = RTE_FC_NONE; } @@ -1010,11 +998,11 @@ 
dpaa_dev_add_mac_addr(struct rte_eth_dev *dev, __rte_unused uint32_t pool) { int ret; - struct dpaa_if *dpaa_intf = dev->data->dev_private; PMD_INIT_FUNC_TRACE(); - ret = fman_if_add_mac_addr(dpaa_intf->fif, addr->addr_bytes, index); + ret = fman_if_add_mac_addr(dev->process_private, + addr->addr_bytes, index); if (ret) RTE_LOG(ERR, PMD, "error: Adding the MAC ADDR failed:" @@ -1026,11 +1014,9 @@ static void dpaa_dev_remove_mac_addr(struct rte_eth_dev *dev, uint32_t index) { - struct dpaa_if *dpaa_intf = dev->data->dev_private; - PMD_INIT_FUNC_TRACE(); - fman_if_clear_mac_addr(dpaa_intf->fif, index); + fman_if_clear_mac_addr(dev->process_private, index); } static int @@ -1038,11 +1024,10 @@ dpaa_dev_set_mac_addr(struct rte_eth_dev *dev, struct rte_ether_addr *addr) { int ret; - struct dpaa_if *dpaa_intf = dev->data->dev_private; PMD_INIT_FUNC_TRACE(); - ret = fman_if_add_mac_addr(dpaa_intf->fif, addr->addr_bytes, 0); + ret = fman_if_add_mac_addr(dev->process_private, addr->addr_bytes, 0); if (ret) RTE_LOG(ERR, PMD, "error: Setting the MAC ADDR failed %d", ret); @@ -1145,7 +1130,6 @@ int rte_pmd_dpaa_set_tx_loopback(uint8_t port, uint8_t on) { struct rte_eth_dev *dev; - struct dpaa_if *dpaa_intf; RTE_ETH_VALID_PORTID_OR_ERR_RET(port, -ENODEV); @@ -1154,17 +1138,16 @@ rte_pmd_dpaa_set_tx_loopback(uint8_t port, uint8_t on) if (!is_dpaa_supported(dev)) return -ENOTSUP; - dpaa_intf = dev->data->dev_private; - if (on) - fman_if_loopback_enable(dpaa_intf->fif); + fman_if_loopback_enable(dev->process_private); else - fman_if_loopback_disable(dpaa_intf->fif); + fman_if_loopback_disable(dev->process_private); return 0; } -static int dpaa_fc_set_default(struct dpaa_if *dpaa_intf) +static int dpaa_fc_set_default(struct dpaa_if *dpaa_intf, + struct fman_if *fman_intf) { struct rte_eth_fc_conf *fc_conf; int ret; @@ -1180,10 +1163,10 @@ static int dpaa_fc_set_default(struct dpaa_if *dpaa_intf) } } fc_conf = dpaa_intf->fc_conf; - ret = fman_if_get_fc_threshold(dpaa_intf->fif); + 
ret = fman_if_get_fc_threshold(fman_intf); if (ret) { fc_conf->mode = RTE_FC_TX_PAUSE; - fc_conf->pause_time = fman_if_get_fc_quanta(dpaa_intf->fif); + fc_conf->pause_time = fman_if_get_fc_quanta(fman_intf); } else { fc_conf->mode = RTE_FC_NONE; } @@ -1345,6 +1328,39 @@ static int dpaa_debug_queue_init(struct qman_fq *fq, uint32_t fqid) } #endif +/* Initialise a network interface */ +static int +dpaa_dev_init_secondary(struct rte_eth_dev *eth_dev) +{ + struct rte_dpaa_device *dpaa_device; + struct fm_eth_port_cfg *cfg; + struct dpaa_if *dpaa_intf; + struct fman_if *fman_intf; + int dev_id; + + PMD_INIT_FUNC_TRACE(); + + dpaa_device = DEV_TO_DPAA_DEVICE(eth_dev->device); + dev_id = dpaa_device->id.dev_id; + cfg = &dpaa_netcfg->port_cfg[dev_id]; + fman_intf = cfg->fman_if; + eth_dev->process_private = fman_intf; + + /* Plugging of UCODE burst API not supported in Secondary */ + dpaa_intf = eth_dev->data->dev_private; + eth_dev->rx_pkt_burst = dpaa_eth_queue_rx; + if (dpaa_intf->cgr_tx) + eth_dev->tx_pkt_burst = dpaa_eth_queue_tx_slow; + else + eth_dev->tx_pkt_burst = dpaa_eth_queue_tx; +#ifdef CONFIG_FSL_QMAN_FQ_LOOKUP + qman_set_fq_lookup_table( + dpaa_intf->rx_queues->qman_fq_lookup_table); +#endif + + return 0; +} + /* Initialise a network interface */ static int dpaa_dev_init(struct rte_eth_dev *eth_dev) @@ -1362,23 +1378,6 @@ dpaa_dev_init(struct rte_eth_dev *eth_dev) PMD_INIT_FUNC_TRACE(); - dpaa_intf = eth_dev->data->dev_private; - /* For secondary processes, the primary has done all the work */ - if (rte_eal_process_type() != RTE_PROC_PRIMARY) { - eth_dev->dev_ops = &dpaa_devops; - /* Plugging of UCODE burst API not supported in Secondary */ - eth_dev->rx_pkt_burst = dpaa_eth_queue_rx; - if (dpaa_intf->cgr_tx) - eth_dev->tx_pkt_burst = dpaa_eth_queue_tx_slow; - else - eth_dev->tx_pkt_burst = dpaa_eth_queue_tx; -#ifdef CONFIG_FSL_QMAN_FQ_LOOKUP - qman_set_fq_lookup_table( - dpaa_intf->rx_queues->qman_fq_lookup_table); -#endif - return 0; - } - dpaa_device = 
DEV_TO_DPAA_DEVICE(eth_dev->device); dev_id = dpaa_device->id.dev_id; dpaa_intf = eth_dev->data->dev_private; @@ -1388,7 +1387,7 @@ dpaa_dev_init(struct rte_eth_dev *eth_dev) dpaa_intf->name = dpaa_device->name; /* save fman_if & cfg in the interface struture */ - dpaa_intf->fif = fman_intf; + eth_dev->process_private = fman_intf; dpaa_intf->ifid = dev_id; dpaa_intf->cfg = cfg; @@ -1457,7 +1456,7 @@ dpaa_dev_init(struct rte_eth_dev *eth_dev) if (default_q) fqid = cfg->rx_def; else - fqid = DPAA_PCD_FQID_START + dpaa_intf->fif->mac_idx * + fqid = DPAA_PCD_FQID_START + fman_intf->mac_idx * DPAA_PCD_FQID_MULTIPLIER + loop; if (dpaa_intf->cgr_rx) @@ -1529,7 +1528,7 @@ dpaa_dev_init(struct rte_eth_dev *eth_dev) DPAA_PMD_DEBUG("All frame queues created"); /* Get the initial configuration for flow control */ - dpaa_fc_set_default(dpaa_intf); + dpaa_fc_set_default(dpaa_intf, fman_intf); /* reset bpool list, initialize bpool dynamically */ list_for_each_entry_safe(bp, tmp_bp, &cfg->fman_if->bpool_list, node) { @@ -1682,6 +1681,13 @@ rte_dpaa_probe(struct rte_dpaa_driver *dpaa_drv __rte_unused, return -ENOMEM; eth_dev->device = &dpaa_dev->device; eth_dev->dev_ops = &dpaa_devops; + + ret = dpaa_dev_init_secondary(eth_dev); + if (ret != 0) { + RTE_LOG(ERR, PMD, "secondary dev init failed\n"); + return ret; + } + rte_eth_dev_probing_finish(eth_dev); return 0; } @@ -1718,29 +1724,20 @@ rte_dpaa_probe(struct rte_dpaa_driver *dpaa_drv __rte_unused, } } - /* In case of secondary process, the device is already configured - * and no further action is required, except portal initialization - * and verifying secondary attachment to port name. 
-	 */
-	if (rte_eal_process_type() != RTE_PROC_PRIMARY) {
-		eth_dev = rte_eth_dev_attach_secondary(dpaa_dev->name);
-		if (!eth_dev)
-			return -ENOMEM;
-	} else {
-		eth_dev = rte_eth_dev_allocate(dpaa_dev->name);
-		if (eth_dev == NULL)
-			return -ENOMEM;
+	eth_dev = rte_eth_dev_allocate(dpaa_dev->name);
+	if (!eth_dev)
+		return -ENOMEM;
 
-		eth_dev->data->dev_private = rte_zmalloc(
-						"ethdev private structure",
-						sizeof(struct dpaa_if),
-						RTE_CACHE_LINE_SIZE);
-		if (!eth_dev->data->dev_private) {
-			DPAA_PMD_ERR("Cannot allocate memzone for port data");
-			rte_eth_dev_release_port(eth_dev);
-			return -ENOMEM;
-		}
+	eth_dev->data->dev_private = rte_zmalloc(
+					"ethdev private structure",
+					sizeof(struct dpaa_if),
+					RTE_CACHE_LINE_SIZE);
+	if (!eth_dev->data->dev_private) {
+		DPAA_PMD_ERR("Cannot allocate memzone for port data");
+		rte_eth_dev_release_port(eth_dev);
+		return -ENOMEM;
 	}
+
 	eth_dev->device = &dpaa_dev->device;
 	dpaa_dev->eth_dev = eth_dev;
diff --git a/drivers/net/dpaa/dpaa_ethdev.h b/drivers/net/dpaa/dpaa_ethdev.h
index 3eab029fd..72a9c5910 100644
--- a/drivers/net/dpaa/dpaa_ethdev.h
+++ b/drivers/net/dpaa/dpaa_ethdev.h
@@ -115,7 +115,6 @@ struct dpaa_if {
 	uint16_t nb_rx_queues;
 	uint16_t nb_tx_queues;
 	uint32_t ifid;
-	struct fman_if *fif;
 	struct dpaa_bp_info *bp_info;
 	struct rte_eth_fc_conf *fc_conf;
 };

From patchwork Mon Mar 2 14:58:27 2020
X-Patchwork-Submitter: Hemant Agrawal
X-Patchwork-Id: 184086
From: Hemant Agrawal
To: ferruh.yigit@intel.com
Cc: dev@dpdk.org, Rohit Raj
Date: Mon, 2 Mar 2020 20:28:27 +0530
Message-Id: <20200302145829.27808-15-hemant.agrawal@nxp.com>
In-Reply-To: <20200302145829.27808-1-hemant.agrawal@nxp.com>
References: <20200302145829.27808-1-hemant.agrawal@nxp.com>
Subject: [dpdk-dev] [PATCH 14/16] bus/dpaa: enable link state interrupt

From: Rohit Raj
Enable/disable link state interrupt and get link state api is defined using IOCTL calls. Signed-off-by: Rohit Raj --- drivers/bus/dpaa/base/fman/fman.c | 4 +- drivers/bus/dpaa/base/qbman/process.c | 68 ++++++++++++++++++- drivers/bus/dpaa/dpaa_bus.c | 28 +++++++- drivers/bus/dpaa/include/fman.h | 2 + drivers/bus/dpaa/include/process.h | 20 ++++++ drivers/bus/dpaa/rte_bus_dpaa_version.map | 3 + drivers/bus/dpaa/rte_dpaa_bus.h | 6 +- drivers/common/dpaax/compat.h | 5 +- drivers/net/dpaa/dpaa_ethdev.c | 82 ++++++++++++++++++++++- 9 files changed, 212 insertions(+), 6 deletions(-) -- 2.17.1 diff --git a/drivers/bus/dpaa/base/fman/fman.c b/drivers/bus/dpaa/base/fman/fman.c index ae26041ca..33be9e5d7 100644 --- a/drivers/bus/dpaa/base/fman/fman.c +++ b/drivers/bus/dpaa/base/fman/fman.c @@ -1,7 +1,7 @@ /* SPDX-License-Identifier: (BSD-3-Clause OR GPL-2.0) * * Copyright 2010-2016 Freescale Semiconductor Inc. - * Copyright 2017-2019 NXP + * Copyright 2017-2020 NXP * */ @@ -185,6 +185,8 @@ fman_if_init(const struct device_node *dpa_node) } memset(__if, 0, sizeof(*__if)); INIT_LIST_HEAD(&__if->__if.bpool_list); + strlcpy(__if->node_name, dpa_node->name, IF_NAME_MAX_LEN - 1); + __if->node_name[IF_NAME_MAX_LEN - 1] = '\0'; strlcpy(__if->node_path, dpa_node->full_name, PATH_MAX - 1); __if->node_path[PATH_MAX - 1] = '\0'; diff --git a/drivers/bus/dpaa/base/qbman/process.c b/drivers/bus/dpaa/base/qbman/process.c index 2c23c98df..598b10661 100644 --- a/drivers/bus/dpaa/base/qbman/process.c +++ b/drivers/bus/dpaa/base/qbman/process.c @@ -1,7 +1,7 @@ /* SPDX-License-Identifier: (BSD-3-Clause OR GPL-2.0) * * Copyright 2011-2016 Freescale Semiconductor Inc. 
- * Copyright 2017 NXP + * Copyright 2017,2020 NXP * */ #include @@ -296,3 +296,69 @@ int bman_free_raw_portal(struct dpaa_raw_portal *portal) return process_portal_free(&input); } + +#define DPAA_IOCTL_ENABLE_LINK_STATUS_INTERRUPT \ + _IOW(DPAA_IOCTL_MAGIC, 0x0E, struct usdpaa_ioctl_link_status) + +#define DPAA_IOCTL_DISABLE_LINK_STATUS_INTERRUPT \ + _IOW(DPAA_IOCTL_MAGIC, 0x0F, char*) + +int rte_dpaa_intr_enable(char *if_name, int efd) +{ + struct usdpaa_ioctl_link_status args; + + int ret = check_fd(); + + if (ret) + return ret; + + args.efd = (uint32_t)efd; + strcpy(args.if_name, if_name); + + ret = ioctl(fd, DPAA_IOCTL_ENABLE_LINK_STATUS_INTERRUPT, &args); + if (ret) { + perror("Failed to enable interrupt\n"); + return ret; + } + + return 0; +} + +int rte_dpaa_intr_disable(char *if_name) +{ + int ret = check_fd(); + + if (ret) + return ret; + + ret = ioctl(fd, DPAA_IOCTL_DISABLE_LINK_STATUS_INTERRUPT, &if_name); + if (ret) { + perror("Failed to disable interrupt\n"); + return ret; + } + + return 0; +} + +#define DPAA_IOCTL_GET_LINK_STATUS \ + _IOWR(DPAA_IOCTL_MAGIC, 0x10, struct usdpaa_ioctl_link_status_args) + +int rte_dpaa_get_link_status(char *if_name) +{ + int ret = check_fd(); + struct usdpaa_ioctl_link_status_args args; + + if (ret) + return ret; + + strcpy(args.if_name, if_name); + args.link_status = 0; + + ret = ioctl(fd, DPAA_IOCTL_GET_LINK_STATUS, &args); + if (ret) { + perror("Failed to get link status\n"); + return ret; + } + + return args.link_status; +} diff --git a/drivers/bus/dpaa/dpaa_bus.c b/drivers/bus/dpaa/dpaa_bus.c index f27820db3..2dedb138d 100644 --- a/drivers/bus/dpaa/dpaa_bus.c +++ b/drivers/bus/dpaa/dpaa_bus.c @@ -1,6 +1,6 @@ /* SPDX-License-Identifier: BSD-3-Clause * - * Copyright 2017-2019 NXP + * Copyright 2017-2020 NXP * */ /* System headers */ @@ -13,6 +13,7 @@ #include #include #include +#include #include #include @@ -545,6 +546,23 @@ rte_dpaa_bus_dev_build(void) return 0; } +static int rte_dpaa_setup_intr(struct 
rte_intr_handle *intr_handle) +{ + int fd; + + fd = eventfd(0, EFD_NONBLOCK | EFD_CLOEXEC); + if (fd < 0) { + DPAA_BUS_ERR("Cannot set up eventfd, error %i (%s)", + errno, strerror(errno)); + return errno; + } + + intr_handle->fd = fd; + intr_handle->type = RTE_INTR_HANDLE_EXT; + + return 0; +} + static int rte_dpaa_bus_probe(void) { @@ -592,6 +610,14 @@ rte_dpaa_bus_probe(void) fclose(svr_file); } + TAILQ_FOREACH(dev, &rte_dpaa_bus.device_list, next) { + if (dev->device_type == FSL_DPAA_ETH) { + ret = rte_dpaa_setup_intr(&dev->intr_handle); + if (ret) + DPAA_PMD_ERR("Error setting up interrupt.\n"); + } + } + /* And initialize the PA->VA translation table */ dpaax_iova_table_populate(); diff --git a/drivers/bus/dpaa/include/fman.h b/drivers/bus/dpaa/include/fman.h index 12e598b2d..d90f2f5fc 100644 --- a/drivers/bus/dpaa/include/fman.h +++ b/drivers/bus/dpaa/include/fman.h @@ -2,6 +2,7 @@ * * Copyright 2010-2012 Freescale Semiconductor, Inc. * All rights reserved. + * Copyright 2019-2020 NXP * */ @@ -361,6 +362,7 @@ struct fman_if_ic_params { */ struct __fman_if { struct fman_if __if; + char node_name[IF_NAME_MAX_LEN]; char node_path[PATH_MAX]; uint64_t regs_size; void *ccsr_map; diff --git a/drivers/bus/dpaa/include/process.h b/drivers/bus/dpaa/include/process.h index d9ec94ee2..312da1245 100644 --- a/drivers/bus/dpaa/include/process.h +++ b/drivers/bus/dpaa/include/process.h @@ -2,6 +2,7 @@ * * Copyright 2010-2011 Freescale Semiconductor, Inc. * All rights reserved. 
+ * Copyright 2020 NXP * */ @@ -74,4 +75,23 @@ struct dpaa_ioctl_irq_map { int process_portal_irq_map(int fd, struct dpaa_ioctl_irq_map *irq); int process_portal_irq_unmap(int fd); +struct usdpaa_ioctl_link_status { + char if_name[IF_NAME_MAX_LEN]; + uint32_t efd; +}; + +__rte_experimental +int rte_dpaa_intr_enable(char *if_name, int efd); + +__rte_experimental +int rte_dpaa_intr_disable(char *if_name); + +struct usdpaa_ioctl_link_status_args { + /* network device node name */ + char if_name[IF_NAME_MAX_LEN]; + int link_status; +}; +__rte_experimental +int rte_dpaa_get_link_status(char *if_name); + #endif /* __PROCESS_H */ diff --git a/drivers/bus/dpaa/rte_bus_dpaa_version.map b/drivers/bus/dpaa/rte_bus_dpaa_version.map index 86f5811b0..e8fa8f569 100644 --- a/drivers/bus/dpaa/rte_bus_dpaa_version.map +++ b/drivers/bus/dpaa/rte_bus_dpaa_version.map @@ -100,4 +100,7 @@ EXPERIMENTAL { qman_ern_poll_free; qman_ern_register_cb; + rte_dpaa_get_link_status; + rte_dpaa_intr_disable; + rte_dpaa_intr_enable; }; diff --git a/drivers/bus/dpaa/rte_dpaa_bus.h b/drivers/bus/dpaa/rte_dpaa_bus.h index 373aca978..f385f6de8 100644 --- a/drivers/bus/dpaa/rte_dpaa_bus.h +++ b/drivers/bus/dpaa/rte_dpaa_bus.h @@ -1,6 +1,6 @@ /* SPDX-License-Identifier: BSD-3-Clause * - * Copyright 2017-2019 NXP + * Copyright 2017-2020 NXP * */ #ifndef __RTE_DPAA_BUS_H__ @@ -30,6 +30,9 @@ #define SVR_LS1046A_FAMILY 0x87070000 #define SVR_MASK 0xffff0000 +/** Device driver supports link state interrupt */ +#define RTE_DPAA_DRV_INTR_LSC 0x0008 + #define RTE_DEV_TO_DPAA_CONST(ptr) \ container_of(ptr, const struct rte_dpaa_device, device) @@ -88,6 +91,7 @@ struct rte_dpaa_driver { TAILQ_ENTRY(rte_dpaa_driver) next; struct rte_driver driver; struct rte_dpaa_bus *dpaa_bus; + uint32_t drv_flags; /**< Flags for controlling device.*/ enum rte_dpaa_type drv_type; rte_dpaa_probe_t probe; rte_dpaa_remove_t remove; diff --git a/drivers/common/dpaax/compat.h b/drivers/common/dpaax/compat.h index 12c9d9917..78e16fa2f 
100644 --- a/drivers/common/dpaax/compat.h +++ b/drivers/common/dpaax/compat.h @@ -2,7 +2,7 @@ * * Copyright 2011 Freescale Semiconductor, Inc. * All rights reserved. - * Copyright 2019 NXP + * Copyright 2019-2020 NXP * */ @@ -390,4 +390,7 @@ static inline unsigned long get_zeroed_page(gfp_t __foo __rte_unused) #define atomic_dec_return(v) rte_atomic32_sub_return(v, 1) #define atomic_sub_and_test(i, v) (rte_atomic32_sub_return(v, i) == 0) +/* Interface name len*/ +#define IF_NAME_MAX_LEN 16 + #endif /* __COMPAT_H */ diff --git a/drivers/net/dpaa/dpaa_ethdev.c b/drivers/net/dpaa/dpaa_ethdev.c index abe247acd..28c6b1c17 100644 --- a/drivers/net/dpaa/dpaa_ethdev.c +++ b/drivers/net/dpaa/dpaa_ethdev.c @@ -45,6 +45,7 @@ #include #include #include +#include /* Supported Rx offloads */ static uint64_t dev_rx_offloads_sup = @@ -131,6 +132,11 @@ static struct rte_dpaa_driver rte_dpaa_pmd; static int dpaa_eth_dev_info(struct rte_eth_dev *dev, struct rte_eth_dev_info *dev_info); +static int dpaa_eth_link_update(struct rte_eth_dev *dev, + int wait_to_complete __rte_unused); + +static void dpaa_interrupt_handler(void *param); + static inline void dpaa_poll_queue_default_config(struct qm_mcc_initfq *opts) { @@ -195,9 +201,18 @@ dpaa_eth_dev_configure(struct rte_eth_dev *dev) struct rte_eth_conf *eth_conf = &dev->data->dev_conf; uint64_t rx_offloads = eth_conf->rxmode.offloads; uint64_t tx_offloads = eth_conf->txmode.offloads; + struct rte_device *rdev = dev->device; + struct rte_dpaa_device *dpaa_dev; + struct fman_if *fif = dev->process_private; + struct __fman_if *__fif; + struct rte_intr_handle *intr_handle; PMD_INIT_FUNC_TRACE(); + dpaa_dev = container_of(rdev, struct rte_dpaa_device, device); + intr_handle = &dpaa_dev->intr_handle; + __fif = container_of(fif, struct __fman_if, __if); + /* Rx offloads which are enabled by default */ if (dev_rx_offloads_nodis & ~rx_offloads) { DPAA_PMD_INFO( @@ -241,6 +256,14 @@ dpaa_eth_dev_configure(struct rte_eth_dev *dev) 
		dev->data->scattered_rx = 1;
	}
 
+	/* if the interrupts were configured on this devices */
+	if (intr_handle && intr_handle->fd &&
+	    dev->data->dev_conf.intr_conf.lsc != 0)
+		rte_intr_callback_register(intr_handle, dpaa_interrupt_handler,
+					   (void *)dev);
+
+	rte_dpaa_intr_enable(__fif->node_name, intr_handle->fd);
+
	return 0;
 }
@@ -269,6 +292,25 @@ dpaa_supported_ptypes_get(struct rte_eth_dev *dev)
	return NULL;
 }
 
+static void dpaa_interrupt_handler(void *param)
+{
+	struct rte_eth_dev *dev = param;
+	struct rte_device *rdev = dev->device;
+	struct rte_dpaa_device *dpaa_dev;
+	struct rte_intr_handle *intr_handle;
+	uint64_t buf;
+	int bytes_read;
+
+	dpaa_dev = container_of(rdev, struct rte_dpaa_device, device);
+	intr_handle = &dpaa_dev->intr_handle;
+
+	bytes_read = read(intr_handle->fd, &buf, sizeof(uint64_t));
+	if (bytes_read < 0)
+		DPAA_PMD_ERR("Error reading eventfd\n");
+	dpaa_eth_link_update(dev, 0);
+	_rte_eth_dev_callback_process(dev, RTE_ETH_EVENT_INTR_LSC, NULL);
+}
+
 static int dpaa_eth_dev_start(struct rte_eth_dev *dev)
 {
	struct dpaa_if *dpaa_intf = dev->data->dev_private;
@@ -298,9 +340,27 @@ static void dpaa_eth_dev_stop(struct rte_eth_dev *dev)
 
 static void dpaa_eth_dev_close(struct rte_eth_dev *dev)
 {
+	struct fman_if *fif = dev->process_private;
+	struct __fman_if *__fif;
+	struct rte_device *rdev = dev->device;
+	struct rte_dpaa_device *dpaa_dev;
+	struct rte_intr_handle *intr_handle;
+
	PMD_INIT_FUNC_TRACE();
 
+	dpaa_dev = container_of(rdev, struct rte_dpaa_device, device);
+	intr_handle = &dpaa_dev->intr_handle;
+	__fif = container_of(fif, struct __fman_if, __if);
+
	dpaa_eth_dev_stop(dev);
+
+	rte_dpaa_intr_disable(__fif->node_name);
+
+	if (intr_handle && intr_handle->fd &&
+	    dev->data->dev_conf.intr_conf.lsc != 0)
+		rte_intr_callback_unregister(intr_handle,
+					     dpaa_interrupt_handler,
+					     (void *)dev);
 }
 
 static int
@@ -384,6 +444,8 @@ static int dpaa_eth_link_update(struct rte_eth_dev *dev,
	struct dpaa_if *dpaa_intf = dev->data->dev_private;
	struct rte_eth_link *link = &dev->data->dev_link;
	struct fman_if *fif = dev->process_private;
+	struct __fman_if *__fif = container_of(fif, struct __fman_if, __if);
+	int ret;
 
	PMD_INIT_FUNC_TRACE();
 
@@ -397,9 +459,23 @@ static int dpaa_eth_link_update(struct rte_eth_dev *dev,
		DPAA_PMD_ERR("invalid link_speed: %s, %d",
			     dpaa_intf->name, fif->mac_type);
 
-	link->link_status = dpaa_intf->valid;
+	ret = rte_dpaa_get_link_status(__fif->node_name);
+	if (ret < 0) {
+		if (ret == -EINVAL) {
+			DPAA_PMD_DEBUG("Using default link status-No Support");
+			ret = 1;
+		} else {
+			DPAA_PMD_ERR("rte_dpaa_get_link_status %d", ret);
+			return ret;
+		}
+	}
+
+	link->link_status = ret;
	link->link_duplex = ETH_LINK_FULL_DUPLEX;
	link->link_autoneg = ETH_LINK_AUTONEG;
+
+	DPAA_PMD_INFO("Port %d Link is %s\n", dev->data->port_id,
+		      link->link_status ? "Up" : "Down");
	return 0;
 }
@@ -1743,6 +1819,9 @@ rte_dpaa_probe(struct rte_dpaa_driver *dpaa_drv __rte_unused,
 
	qman_ern_register_cb(dpaa_free_mbuf);
 
+	if (dpaa_drv->drv_flags & RTE_DPAA_DRV_INTR_LSC)
+		eth_dev->data->dev_flags |= RTE_ETH_DEV_INTR_LSC;
+
	/* Invoke PMD device initialization function */
	diag = dpaa_dev_init(eth_dev);
	if (diag == 0) {
@@ -1770,6 +1849,7 @@ rte_dpaa_remove(struct rte_dpaa_device *dpaa_dev)
 }
 
 static struct rte_dpaa_driver rte_dpaa_pmd = {
+	.drv_flags = RTE_DPAA_DRV_INTR_LSC,
	.drv_type = FSL_DPAA_ETH,
	.probe = rte_dpaa_probe,
	.remove = rte_dpaa_remove,

From patchwork Mon Mar 2 14:58:28 2020
From: Hemant Agrawal
To: ferruh.yigit@intel.com
Cc: dev@dpdk.org, Rohit Raj
Date: Mon, 2 Mar 2020 20:28:28 +0530
Message-Id: <20200302145829.27808-16-hemant.agrawal@nxp.com>
In-Reply-To: <20200302145829.27808-1-hemant.agrawal@nxp.com>
Subject: [dpdk-dev] [PATCH 15/16] bus/dpaa: enable set link status

From: Rohit Raj

Enable the set-link-status API to start/stop the PHY device from the
application.

Signed-off-by: Rohit Raj
---
 drivers/bus/dpaa/base/qbman/process.c     | 35 ++++++++++++++++++++---
 drivers/bus/dpaa/include/process.h        |  3 ++
 drivers/bus/dpaa/rte_bus_dpaa_version.map |  1 +
 drivers/net/dpaa/dpaa_ethdev.c            | 14 +++++++--
 4 files changed, 47 insertions(+), 6 deletions(-)

-- 
2.17.1

diff --git a/drivers/bus/dpaa/base/qbman/process.c b/drivers/bus/dpaa/base/qbman/process.c
index 598b10661..8ab57f105 100644
--- a/drivers/bus/dpaa/base/qbman/process.c
+++ b/drivers/bus/dpaa/base/qbman/process.c
@@ -317,7 +317,7 @@ int rte_dpaa_intr_enable(char *if_name, int efd)
 
	ret = ioctl(fd, DPAA_IOCTL_ENABLE_LINK_STATUS_INTERRUPT, &args);
	if (ret) {
-		perror("Failed to enable interrupt\n");
+		printf("Failed to enable interrupt: Not Supported\n");
		return ret;
	}
 
@@ -333,7 +333,7 @@ int rte_dpaa_intr_disable(char *if_name)
 
	ret = ioctl(fd, DPAA_IOCTL_DISABLE_LINK_STATUS_INTERRUPT, &if_name);
	if (ret) {
-		perror("Failed to disable interrupt\n");
+		printf("Failed to disable interrupt: Not Supported\n");
		return ret;
	}
 
@@ -356,9 +356,36 @@ int rte_dpaa_get_link_status(char *if_name)
 
	ret = ioctl(fd, DPAA_IOCTL_GET_LINK_STATUS, &args);
	if (ret) {
-		perror("Failed to get link status\n");
-		return ret;
+		printf("Failed to get link status: Not Supported\n");
+		return -errno;
	}
 
	return args.link_status;
 }
+
+#define DPAA_IOCTL_UPDATE_LINK_STATUS \
+	_IOW(DPAA_IOCTL_MAGIC, 0x11, struct usdpaa_ioctl_link_status_args)
+
+int rte_dpaa_update_link_status(char *if_name, int link_status)
+{
+	struct usdpaa_ioctl_link_status_args args;
+	int ret;
+
+	ret = check_fd();
+	if (ret)
+		return ret;
+
+	strcpy(args.if_name, if_name);
+	args.link_status = link_status;
+
+	ret = ioctl(fd, DPAA_IOCTL_UPDATE_LINK_STATUS, &args);
+	if (ret) {
+		if (errno == EINVAL)
+			printf("Failed to set link status: Not Supported\n");
+		else
+			perror("Failed to set link status");
+		return ret;
+	}
+
+	return 0;
+}
diff --git a/drivers/bus/dpaa/include/process.h b/drivers/bus/dpaa/include/process.h
index 312da1245..9f8c85895 100644
--- a/drivers/bus/dpaa/include/process.h
+++ b/drivers/bus/dpaa/include/process.h
@@ -94,4 +94,7 @@ struct usdpaa_ioctl_link_status_args {
 __rte_experimental
 int rte_dpaa_get_link_status(char *if_name);
 
+__rte_experimental
+int rte_dpaa_update_link_status(char *if_name, int link_status);
+
 #endif /* __PROCESS_H */
diff --git a/drivers/bus/dpaa/rte_bus_dpaa_version.map b/drivers/bus/dpaa/rte_bus_dpaa_version.map
index e8fa8f569..3eac85429 100644
--- a/drivers/bus/dpaa/rte_bus_dpaa_version.map
+++ b/drivers/bus/dpaa/rte_bus_dpaa_version.map
@@ -103,4 +103,5 @@ EXPERIMENTAL {
 	rte_dpaa_get_link_status;
 	rte_dpaa_intr_disable;
 	rte_dpaa_intr_enable;
+	rte_dpaa_update_link_status;
 };
diff --git a/drivers/net/dpaa/dpaa_ethdev.c b/drivers/net/dpaa/dpaa_ethdev.c
index 28c6b1c17..c0a96dd47 100644
--- a/drivers/net/dpaa/dpaa_ethdev.c
+++ b/drivers/net/dpaa/dpaa_ethdev.c
@@ -972,17 +972,27 @@ dpaa_dev_rx_queue_count(struct rte_eth_dev *dev, uint16_t rx_queue_id)
 
 static int dpaa_link_down(struct rte_eth_dev *dev)
 {
+	struct fman_if *fif = dev->process_private;
+	struct __fman_if *__fif;
+
	PMD_INIT_FUNC_TRACE();
 
-	dpaa_eth_dev_stop(dev);
+	__fif = container_of(fif, struct __fman_if, __if);
+
+	rte_dpaa_update_link_status(__fif->node_name, ETH_LINK_DOWN);
	return 0;
 }
 
 static int dpaa_link_up(struct rte_eth_dev *dev)
 {
+	struct fman_if *fif = dev->process_private;
+	struct __fman_if *__fif;
+
	PMD_INIT_FUNC_TRACE();
 
-	dpaa_eth_dev_start(dev);
+	__fif = container_of(fif, struct __fman_if, __if);
+
+	rte_dpaa_update_link_status(__fif->node_name, ETH_LINK_UP);
	return 0;
 }

From patchwork Mon Mar 2 14:58:29 2020
From: Hemant Agrawal
To: ferruh.yigit@intel.com
Cc: dev@dpdk.org, Nipun Gupta
Date: Mon, 2 Mar 2020 20:28:29 +0530
Message-Id: <20200302145829.27808-17-hemant.agrawal@nxp.com>
In-Reply-To: <20200302145829.27808-1-hemant.agrawal@nxp.com>
Subject: [dpdk-dev] [PATCH 16/16] net/dpaa2: do not prefetch annotation for physical mode

From: Nipun Gupta

When IOVA is a physical address, do not prefetch the annotation of the
next frame, as there is a cost involved in converting the physical
address to a virtual address.

Signed-off-by: Nipun Gupta
---
 drivers/bus/fslmc/portal/dpaa2_hw_pvt.h |  6 ++--
 drivers/net/dpaa2/dpaa2_rxtx.c          | 40 +++++++++++++++----------
 2 files changed, 28 insertions(+), 18 deletions(-)

-- 
2.17.1

diff --git a/drivers/bus/fslmc/portal/dpaa2_hw_pvt.h b/drivers/bus/fslmc/portal/dpaa2_hw_pvt.h
index bde1441f4..6b07b628a 100644
--- a/drivers/bus/fslmc/portal/dpaa2_hw_pvt.h
+++ b/drivers/bus/fslmc/portal/dpaa2_hw_pvt.h
@@ -1,7 +1,7 @@
 /* SPDX-License-Identifier: BSD-3-Clause
  *
  * Copyright (c) 2016 Freescale Semiconductor, Inc. All rights reserved.
- * Copyright 2016-2019 NXP
+ * Copyright 2016-2020 NXP
  *
  */
@@ -403,8 +403,8 @@ static phys_addr_t dpaa2_mem_vtop(uint64_t vaddr)
 #else	/* RTE_LIBRTE_DPAA2_USE_PHYS_IOVA */
 
 #define DPAA2_MBUF_VADDR_TO_IOVA(mbuf) ((mbuf)->buf_addr)
-#define DPAA2_VADDR_TO_IOVA(_vaddr) (_vaddr)
-#define DPAA2_IOVA_TO_VADDR(_iova) (_iova)
+#define DPAA2_VADDR_TO_IOVA(_vaddr) (phys_addr_t)(_vaddr)
+#define DPAA2_IOVA_TO_VADDR(_iova) (void *)(_iova)
 #define DPAA2_MODIFY_IOVA_TO_VADDR(_mem, _type)
 
 #endif /* RTE_LIBRTE_DPAA2_USE_PHYS_IOVA */
diff --git a/drivers/net/dpaa2/dpaa2_rxtx.c b/drivers/net/dpaa2/dpaa2_rxtx.c
index d809e0f4b..4d024a85f 100644
--- a/drivers/net/dpaa2/dpaa2_rxtx.c
+++ b/drivers/net/dpaa2/dpaa2_rxtx.c
@@ -1,7 +1,7 @@
 /* SPDX-License-Identifier: BSD-3-Clause
  *
  * Copyright (c) 2016 Freescale Semiconductor, Inc. All rights reserved.
- * Copyright 2016-2019 NXP
+ * Copyright 2016-2020 NXP
  *
  */
@@ -324,8 +324,8 @@ static inline struct rte_mbuf *__attribute__((hot))
 eth_fd_to_mbuf(const struct qbman_fd *fd,
	       int port_id)
 {
-	struct rte_mbuf *mbuf = DPAA2_INLINE_MBUF_FROM_BUF(
-		DPAA2_IOVA_TO_VADDR(DPAA2_GET_FD_ADDR(fd)),
+	void *iova_addr = DPAA2_IOVA_TO_VADDR(DPAA2_GET_FD_ADDR(fd));
+	struct rte_mbuf *mbuf = DPAA2_INLINE_MBUF_FROM_BUF(iova_addr,
		rte_dpaa2_bpid_info[DPAA2_GET_FD_BPID(fd)].meta_data_size);
 
	/* need to repopulated some of the fields,
@@ -350,8 +350,7 @@ eth_fd_to_mbuf(const struct qbman_fd *fd,
		dpaa2_dev_rx_parse_new(mbuf, fd);
	else
		mbuf->packet_type = dpaa2_dev_rx_parse(mbuf,
-			(void *)((size_t)DPAA2_IOVA_TO_VADDR(DPAA2_GET_FD_ADDR(fd))
-			 + DPAA2_FD_PTA_SIZE));
+			(void *)((size_t)iova_addr + DPAA2_FD_PTA_SIZE));
 
	DPAA2_PMD_DP_DEBUG("to mbuf - mbuf =%p, mbuf->buf_addr =%p, off = %d,"
		"fd_off=%d fd =%" PRIx64 ", meta = %d  bpid =%d, len=%d\n",
@@ -518,7 +517,7 @@ dpaa2_dev_prefetch_rx(void *queue, struct rte_mbuf **bufs, uint16_t nb_pkts)
	int ret, num_rx = 0, pull_size;
	uint8_t pending, status;
	struct qbman_swp *swp;
-	const struct qbman_fd *fd, *next_fd;
+	const struct qbman_fd *fd;
	struct qbman_pull_desc pulldesc;
	struct queue_storage_info_t *q_storage = dpaa2_q->q_storage;
	struct rte_eth_dev_data *eth_data = dpaa2_q->eth_data;
@@ -617,12 +616,15 @@ dpaa2_dev_prefetch_rx(void *queue, struct rte_mbuf **bufs, uint16_t nb_pkts)
		}
		fd = qbman_result_DQ_fd(dq_storage);
 
+#ifndef RTE_LIBRTE_DPAA2_USE_PHYS_IOVA
		if (dpaa2_svr_family != SVR_LX2160A) {
-			next_fd = qbman_result_DQ_fd(dq_storage + 1);
+			const struct qbman_fd *next_fd =
+				qbman_result_DQ_fd(dq_storage + 1);
			/* Prefetch Annotation address for the parse results */
-			rte_prefetch0((void *)(size_t)(DPAA2_GET_FD_ADDR(
-				      next_fd) + DPAA2_FD_PTA_SIZE + 16));
+			rte_prefetch0(DPAA2_IOVA_TO_VADDR((DPAA2_GET_FD_ADDR(
+				      next_fd) + DPAA2_FD_PTA_SIZE + 16)));
		}
+#endif
 
		if (unlikely(DPAA2_FD_GET_FORMAT(fd) == qbman_fd_sg))
			bufs[num_rx] = eth_sg_fd_to_mbuf(fd,
					eth_data->port_id);
@@ -753,7 +755,7 @@ dpaa2_dev_rx(void *queue, struct rte_mbuf **bufs, uint16_t nb_pkts)
	int ret, num_rx = 0, next_pull = nb_pkts, num_pulled;
	uint8_t pending, status;
	struct qbman_swp *swp;
-	const struct qbman_fd *fd, *next_fd;
+	const struct qbman_fd *fd;
	struct qbman_pull_desc pulldesc;
	struct rte_eth_dev_data *eth_data = dpaa2_q->eth_data;
@@ -821,11 +823,19 @@ dpaa2_dev_rx(void *queue, struct rte_mbuf **bufs, uint16_t nb_pkts)
		}
		fd = qbman_result_DQ_fd(dq_storage);
 
-		next_fd = qbman_result_DQ_fd(dq_storage + 1);
-		/* Prefetch Annotation address for the parse results */
-		rte_prefetch0(
-			(void *)(size_t)(DPAA2_GET_FD_ADDR(next_fd)
-				+ DPAA2_FD_PTA_SIZE + 16));
+#ifndef RTE_LIBRTE_DPAA2_USE_PHYS_IOVA
+		if (dpaa2_svr_family != SVR_LX2160A) {
+			const struct qbman_fd *next_fd =
+				qbman_result_DQ_fd(dq_storage + 1);
+
+			/* Prefetch Annotation address for the parse
+			 * results.
+			 */
+			rte_prefetch0((DPAA2_IOVA_TO_VADDR(
+				DPAA2_GET_FD_ADDR(next_fd)
+				+ DPAA2_FD_PTA_SIZE + 16)));
+		}
+#endif
 
		if (unlikely(DPAA2_FD_GET_FORMAT(fd) == qbman_fd_sg))
			bufs[num_rx] = eth_sg_fd_to_mbuf(fd,