From patchwork Tue Jul 7 09:22:16 2020
X-Patchwork-Submitter: Hemant Agrawal
X-Patchwork-Id: 234954
From: Hemant Agrawal
To: dev@dpdk.org
Cc: ferruh.yigit@intel.com, stable@dpdk.org, Nipun Gupta
Date: Tue, 7 Jul 2020 14:52:16 +0530
Message-Id: <20200707092244.12791-2-hemant.agrawal@nxp.com>
In-Reply-To: <20200707092244.12791-1-hemant.agrawal@nxp.com>
References: <20200527132326.1382-1-hemant.agrawal@nxp.com> <20200707092244.12791-1-hemant.agrawal@nxp.com>
Subject: [dpdk-dev] [PATCH v2 01/29] bus/fslmc: fix getting the FD error

From: Nipun Gupta

Fix the incorrect register used for getting the FD error.

Fixes: 03e36408b9fb ("bus/fslmc: add macros required by QDMA for FLE and FD")
Cc: stable@dpdk.org

Signed-off-by: Nipun Gupta
Acked-by: Akhil Goyal
---
 drivers/bus/fslmc/portal/dpaa2_hw_pvt.h | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

--
2.17.1

diff --git a/drivers/bus/fslmc/portal/dpaa2_hw_pvt.h b/drivers/bus/fslmc/portal/dpaa2_hw_pvt.h
index 4682a5299..f1c70251a 100644
--- a/drivers/bus/fslmc/portal/dpaa2_hw_pvt.h
+++ b/drivers/bus/fslmc/portal/dpaa2_hw_pvt.h
@@ -286,7 +286,7 @@ enum qbman_fd_format {
 #define DPAA2_GET_FD_FRC(fd)	((fd)->simple.frc)
 #define DPAA2_GET_FD_FLC(fd) \
	(((uint64_t)((fd)->simple.flc_hi) << 32) + (fd)->simple.flc_lo)
-#define DPAA2_GET_FD_ERR(fd)	((fd)->simple.bpid_offset & 0x000000FF)
+#define DPAA2_GET_FD_ERR(fd)	((fd)->simple.ctrl & 0x000000FF)
 #define DPAA2_GET_FLE_OFFSET(fle) (((fle)->fin_bpid_offset & 0x0FFF0000) >> 16)
 #define DPAA2_SET_FLE_SG_EXT(fle) ((fle)->fin_bpid_offset |= (uint64_t)1 << 29)
 #define DPAA2_IS_SET_FLE_SG_EXT(fle)	\
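A minimal usage sketch of the fixed macro (illustrative only; fd_has_error() is a hypothetical helper, not part of this patch or the driver):

/* With the fix, DPAA2_GET_FD_ERR() reads the low 8 bits of the FD
 * "ctrl" word, where QBMan reports frame errors, instead of the
 * unrelated bpid/offset word it was reading before.
 */
static inline int fd_has_error(const struct qbman_fd *fd)
{
	return DPAA2_GET_FD_ERR(fd) != 0;
}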
From patchwork Tue Jul 7 09:22:17 2020
X-Patchwork-Submitter: Hemant Agrawal
X-Patchwork-Id: 234956
From: Hemant Agrawal
To: dev@dpdk.org
Cc: ferruh.yigit@intel.com, stable@dpdk.org, Nipun Gupta
Date: Tue, 7 Jul 2020 14:52:17 +0530
Message-Id: <20200707092244.12791-3-hemant.agrawal@nxp.com>
In-Reply-To: <20200707092244.12791-1-hemant.agrawal@nxp.com>
References: <20200527132326.1382-1-hemant.agrawal@nxp.com> <20200707092244.12791-1-hemant.agrawal@nxp.com>
Subject: [dpdk-dev] [PATCH v2 02/29] net/dpaa: fix fd offset data type

From: Nipun Gupta

On DPAA the FD offset is 9 bits wide, but the scatter-gather path was
storing it in a uint8_t. This patch fixes that.

Fixes: 8cffdcbe85aa ("net/dpaa: support scattered Rx")
Cc: stable@dpdk.org

Signed-off-by: Nipun Gupta
Acked-by: Akhil Goyal
---
 drivers/net/dpaa/dpaa_rxtx.c | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

--
2.17.1

diff --git a/drivers/net/dpaa/dpaa_rxtx.c b/drivers/net/dpaa/dpaa_rxtx.c
index 5dba1db8b..3aeecb7d2 100644
--- a/drivers/net/dpaa/dpaa_rxtx.c
+++ b/drivers/net/dpaa/dpaa_rxtx.c
@@ -305,7 +305,7 @@ dpaa_eth_sg_to_mbuf(const struct qm_fd *fd, uint32_t ifid)
 	struct qm_sg_entry *sgt, *sg_temp;
 	void *vaddr, *sg_vaddr;
 	int i = 0;
-	uint8_t fd_offset = fd->offset;
+	uint16_t fd_offset = fd->offset;
 
 	vaddr = DPAA_MEMPOOL_PTOV(bp_info, qm_fd_addr(fd));
 	if (!vaddr) {
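To illustrate the bug being fixed: the FD offset field is 9 bits wide (0..511), so any value above 255 was silently truncated by the old uint8_t. A standalone sketch with assumed values, not driver code:

#include <assert.h>
#include <stdint.h>

int main(void)
{
	uint16_t fd_offset = 384;          /* a valid 9-bit offset (0..511) */
	uint8_t old_offset = (uint8_t)fd_offset; /* what the old code kept */

	assert(fd_offset == 384);
	assert(old_offset == 128);         /* 384 & 0xFF: corrupted offset */
	return 0;
}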
From patchwork Tue Jul 7 09:22:18 2020
X-Patchwork-Submitter: Hemant Agrawal
X-Patchwork-Id: 234955
From: Hemant Agrawal
To: dev@dpdk.org
Cc: ferruh.yigit@intel.com, Gagandeep Singh
Date: Tue, 7 Jul 2020 14:52:18 +0530
Message-Id: <20200707092244.12791-4-hemant.agrawal@nxp.com>
In-Reply-To: <20200707092244.12791-1-hemant.agrawal@nxp.com>
References: <20200527132326.1382-1-hemant.agrawal@nxp.com> <20200707092244.12791-1-hemant.agrawal@nxp.com>
Subject: [dpdk-dev] [PATCH v2 03/29] net/dpaa2: enable timestamp for Rx offload case as well

From: Gagandeep Singh

This patch enables packet timestamping conditionally, when the Rx
timestamp offload is enabled.

Signed-off-by: Gagandeep Singh
---
 drivers/net/dpaa2/dpaa2_ethdev.c | 4 +++-
 1 file changed, 3 insertions(+), 1 deletion(-)

--
2.17.1

diff --git a/drivers/net/dpaa2/dpaa2_ethdev.c b/drivers/net/dpaa2/dpaa2_ethdev.c
index a1f19194d..8edd4b3cd 100644
--- a/drivers/net/dpaa2/dpaa2_ethdev.c
+++ b/drivers/net/dpaa2/dpaa2_ethdev.c
@@ -524,8 +524,10 @@ dpaa2_eth_dev_configure(struct rte_eth_dev *dev)
 		return ret;
 	}
 
+#if !defined(RTE_LIBRTE_IEEE1588)
 	if (rx_offloads & DEV_RX_OFFLOAD_TIMESTAMP)
-		dpaa2_enable_ts = true;
+#endif
+		dpaa2_enable_ts = true;
 
 	if (tx_offloads & DEV_TX_OFFLOAD_IPV4_CKSUM)
 		tx_l3_csum_offload = true;
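On the application side, this ties the PMD behaviour to the standard Rx offload flag. A minimal configuration sketch (assuming the DPDK 20.08-era offload API; queue setup omitted):

#include <string.h>
#include <rte_ethdev.h>

static int configure_with_rx_timestamp(uint16_t port_id)
{
	struct rte_eth_conf conf;

	memset(&conf, 0, sizeof(conf));
	/* requesting this offload is what turns dpaa2_enable_ts on
	 * in non-IEEE1588 builds after this patch
	 */
	conf.rxmode.offloads |= DEV_RX_OFFLOAD_TIMESTAMP;
	/* 1 Rx and 1 Tx queue; rte_eth_rx/tx_queue_setup() calls omitted */
	return rte_eth_dev_configure(port_id, 1, 1, &conf);
}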
From patchwork Tue Jul 7 09:22:19 2020
X-Patchwork-Submitter: Hemant Agrawal
X-Patchwork-Id: 234957
From: Hemant Agrawal
To: dev@dpdk.org
Cc: ferruh.yigit@intel.com, Gagandeep Singh
Date: Tue, 7 Jul 2020 14:52:19 +0530
Message-Id: <20200707092244.12791-5-hemant.agrawal@nxp.com>
In-Reply-To: <20200707092244.12791-1-hemant.agrawal@nxp.com>
References: <20200527132326.1382-1-hemant.agrawal@nxp.com> <20200707092244.12791-1-hemant.agrawal@nxp.com>
Subject: [dpdk-dev] [PATCH v2 04/29] bus/fslmc: combine thread specific variables

From: Gagandeep Singh

This is to reduce the thread-local storage.

Signed-off-by: Gagandeep Singh
---
 drivers/bus/fslmc/fslmc_bus.c               |  2 --
 drivers/bus/fslmc/portal/dpaa2_hw_dpio.h    |  7 +++++++
 drivers/bus/fslmc/portal/dpaa2_hw_pvt.h     |  8 ++++++++
 drivers/bus/fslmc/rte_bus_fslmc_version.map |  1 -
 drivers/bus/fslmc/rte_fslmc.h               | 18 ------------------
 5 files changed, 15 insertions(+), 21 deletions(-)

--
2.17.1

diff --git a/drivers/bus/fslmc/fslmc_bus.c b/drivers/bus/fslmc/fslmc_bus.c
index 25d364e81..beb3dd008 100644
--- a/drivers/bus/fslmc/fslmc_bus.c
+++ b/drivers/bus/fslmc/fslmc_bus.c
@@ -35,8 +35,6 @@ rte_fslmc_get_device_count(enum rte_dpaa2_dev_type device_type)
 	return rte_fslmc_bus.device_count[device_type];
 }
 
-RTE_DEFINE_PER_LCORE(struct dpaa2_portal_dqrr, dpaa2_held_bufs);
-
 static void
 cleanup_fslmc_device_list(void)
 {
diff --git a/drivers/bus/fslmc/portal/dpaa2_hw_dpio.h b/drivers/bus/fslmc/portal/dpaa2_hw_dpio.h
index 7c5966241..f6436f2e5 100644
--- a/drivers/bus/fslmc/portal/dpaa2_hw_dpio.h
+++ b/drivers/bus/fslmc/portal/dpaa2_hw_dpio.h
@@ -28,6 +28,13 @@ RTE_DECLARE_PER_LCORE(struct dpaa2_io_portal_t, _dpaa2_io);
 #define DPAA2_PER_LCORE_ETHRX_DPIO RTE_PER_LCORE(_dpaa2_io).ethrx_dpio_dev
 #define DPAA2_PER_LCORE_ETHRX_PORTAL DPAA2_PER_LCORE_ETHRX_DPIO->sw_portal
 
+#define DPAA2_PER_LCORE_DQRR_SIZE \
+	RTE_PER_LCORE(_dpaa2_io).dpio_dev->dpaa2_held_bufs.dqrr_size
+#define DPAA2_PER_LCORE_DQRR_HELD \
+	RTE_PER_LCORE(_dpaa2_io).dpio_dev->dpaa2_held_bufs.dqrr_held
+#define DPAA2_PER_LCORE_DQRR_MBUF(i) \
+	RTE_PER_LCORE(_dpaa2_io).dpio_dev->dpaa2_held_bufs.mbuf[i]
+
 /* Variable to store DPAA2 DQRR size */
 extern uint8_t dpaa2_dqrr_size;
 /* Variable to store DPAA2 EQCR size */
diff --git a/drivers/bus/fslmc/portal/dpaa2_hw_pvt.h b/drivers/bus/fslmc/portal/dpaa2_hw_pvt.h
index f1c70251a..be48462dd 100644
--- a/drivers/bus/fslmc/portal/dpaa2_hw_pvt.h
+++ b/drivers/bus/fslmc/portal/dpaa2_hw_pvt.h
@@ -87,6 +87,13 @@ struct eqresp_metadata {
 	struct rte_mempool *mp;
 };
 
+#define DPAA2_PORTAL_DEQUEUE_DEPTH	32
+struct dpaa2_portal_dqrr {
+	struct rte_mbuf *mbuf[DPAA2_PORTAL_DEQUEUE_DEPTH];
+	uint64_t dqrr_held;
+	uint8_t dqrr_size;
+};
+
 struct dpaa2_dpio_dev {
 	TAILQ_ENTRY(dpaa2_dpio_dev) next;
 		/**< Pointer to Next device instance */
@@ -112,6 +119,7 @@ struct dpaa2_dpio_dev {
 	struct rte_intr_handle intr_handle; /* Interrupt related info */
 	int32_t epoll_fd; /**< File descriptor created for interrupt polling */
 	int32_t hw_id; /**< An unique ID of this DPIO device instance */
+	struct dpaa2_portal_dqrr dpaa2_held_bufs;
 };
 
 struct dpaa2_dpbp_dev {
diff --git a/drivers/bus/fslmc/rte_bus_fslmc_version.map b/drivers/bus/fslmc/rte_bus_fslmc_version.map
index 69e7dc6ad..2a79f4518 100644
--- a/drivers/bus/fslmc/rte_bus_fslmc_version.map
+++ b/drivers/bus/fslmc/rte_bus_fslmc_version.map
@@ -57,7 +57,6 @@ INTERNAL {
 	mc_get_version;
 	mc_send_command;
 	per_lcore__dpaa2_io;
-	per_lcore_dpaa2_held_bufs;
 	qbman_check_command_complete;
 	qbman_check_new_result;
 	qbman_eq_desc_clear;
diff --git a/drivers/bus/fslmc/rte_fslmc.h b/drivers/bus/fslmc/rte_fslmc.h
index 5078b48ee..80873fffc 100644
--- a/drivers/bus/fslmc/rte_fslmc.h
+++ b/drivers/bus/fslmc/rte_fslmc.h
@@ -137,24 +137,6 @@ struct rte_fslmc_bus {
 		/**< Count of all devices scanned */
 };
 
-#define DPAA2_PORTAL_DEQUEUE_DEPTH	32
-
-/* Create storage for dqrr entries per lcore */
-struct dpaa2_portal_dqrr {
-	struct rte_mbuf *mbuf[DPAA2_PORTAL_DEQUEUE_DEPTH];
-	uint64_t dqrr_held;
-	uint8_t dqrr_size;
-};
-
-RTE_DECLARE_PER_LCORE(struct dpaa2_portal_dqrr, dpaa2_held_bufs);
-
-#define DPAA2_PER_LCORE_DQRR_SIZE \
-	RTE_PER_LCORE(dpaa2_held_bufs).dqrr_size
-#define DPAA2_PER_LCORE_DQRR_HELD \
-	RTE_PER_LCORE(dpaa2_held_bufs).dqrr_held
-#define DPAA2_PER_LCORE_DQRR_MBUF(i) \
-	RTE_PER_LCORE(dpaa2_held_bufs).mbuf[i]
-
 /**
  * Register a DPAA2 driver.
  *
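The underlying pattern: instead of a second thread-local object (the per-lcore dpaa2_held_bufs storage), the held-buffer state is embedded in the portal object that the one remaining thread-local pointer already reaches. A simplified sketch of the access path, using plain __thread in place of RTE_PER_LCORE and hypothetical, abbreviated names:

struct held_bufs {
	unsigned char dqrr_size;
};

struct dpio_dev {
	struct held_bufs dpaa2_held_bufs;	/* state moved in here */
};

struct io_portal {
	struct dpio_dev *dpio_dev;
};

static __thread struct io_portal io;		/* the single TLS object kept */

static unsigned char get_dqrr_size(void)
{
	/* one TLS lookup, then plain pointer chasing */
	return io.dpio_dev->dpaa2_held_bufs.dqrr_size;
}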
From patchwork Tue Jul 7 09:22:20 2020
X-Patchwork-Submitter: Hemant Agrawal
X-Patchwork-Id: 234958
From: Hemant Agrawal
To: dev@dpdk.org
Cc: ferruh.yigit@intel.com, Nipun Gupta
Date: Tue, 7 Jul 2020 14:52:20 +0530
Message-Id: <20200707092244.12791-6-hemant.agrawal@nxp.com>
In-Reply-To: <20200707092244.12791-1-hemant.agrawal@nxp.com>
References: <20200527132326.1382-1-hemant.agrawal@nxp.com> <20200707092244.12791-1-hemant.agrawal@nxp.com>
Subject: [dpdk-dev] [PATCH v2 05/29] bus/fslmc: rework portal allocation to a per thread basis

From: Nipun Gupta

This patch reworks the portal allocation, which was previously done on a
per-lcore basis, to a per-thread basis. Users can now also create their
own threads and use DPAA2 portals for packet I/O.
Signed-off-by: Nipun Gupta
---
 drivers/bus/fslmc/Makefile               |   1 +
 drivers/bus/fslmc/portal/dpaa2_hw_dpio.c | 220 +++++++++++++----------
 drivers/bus/fslmc/portal/dpaa2_hw_dpio.h |   3 -
 3 files changed, 124 insertions(+), 100 deletions(-)

--
2.17.1

diff --git a/drivers/bus/fslmc/Makefile b/drivers/bus/fslmc/Makefile
index c70e359c8..b98d758ee 100644
--- a/drivers/bus/fslmc/Makefile
+++ b/drivers/bus/fslmc/Makefile
@@ -17,6 +17,7 @@ CFLAGS += -I$(RTE_SDK)/drivers/bus/fslmc/mc
 CFLAGS += -I$(RTE_SDK)/drivers/bus/fslmc/qbman/include
 CFLAGS += -I$(RTE_SDK)/drivers/common/dpaax
 CFLAGS += -I$(RTE_SDK)/lib/librte_eal/common
+LDLIBS += -lpthread
 LDLIBS += -lrte_eal -lrte_mbuf -lrte_mempool -lrte_ring
 LDLIBS += -lrte_ethdev
 LDLIBS += -lrte_common_dpaax
diff --git a/drivers/bus/fslmc/portal/dpaa2_hw_dpio.c b/drivers/bus/fslmc/portal/dpaa2_hw_dpio.c
index 21c535f2f..47ae72749 100644
--- a/drivers/bus/fslmc/portal/dpaa2_hw_dpio.c
+++ b/drivers/bus/fslmc/portal/dpaa2_hw_dpio.c
@@ -62,6 +62,9 @@ uint8_t dpaa2_dqrr_size;
 /* Variable to store DPAA2 EQCR size */
 uint8_t dpaa2_eqcr_size;
 
+/* Variable to hold the portal_key, once created.*/
+static pthread_key_t dpaa2_portal_key;
+
 /*Stashing Macros default for LS208x*/
 static int dpaa2_core_cluster_base = 0x04;
 static int dpaa2_cluster_sz = 2;
@@ -87,6 +90,32 @@ static int dpaa2_cluster_sz = 2;
  * Cluster 4 (ID = x07) : CPU14, CPU15;
  */
 
+static int
+dpaa2_get_core_id(void)
+{
+	rte_cpuset_t cpuset;
+	int i, ret, cpu_id = -1;
+
+	ret = pthread_getaffinity_np(pthread_self(), sizeof(cpu_set_t),
+		&cpuset);
+	if (ret) {
+		DPAA2_BUS_ERR("pthread_getaffinity_np() failed");
+		return ret;
+	}
+
+	for (i = 0; i < RTE_MAX_LCORE; i++) {
+		if (CPU_ISSET(i, &cpuset)) {
+			if (cpu_id == -1)
+				cpu_id = i;
+			else
+				/* Multiple cpus are affined */
+				return -1;
+		}
+	}
+
+	return cpu_id;
+}
+
 static int
 dpaa2_core_cluster_sdest(int cpu_id)
 {
@@ -97,7 +126,7 @@ dpaa2_core_cluster_sdest(int cpu_id)
 
 #ifdef RTE_LIBRTE_PMD_DPAA2_EVENTDEV
 static void
-dpaa2_affine_dpio_intr_to_respective_core(int32_t dpio_id, int lcoreid)
+dpaa2_affine_dpio_intr_to_respective_core(int32_t dpio_id, int cpu_id)
 {
 #define STRING_LEN	28
 #define COMMAND_LEN	50
@@ -130,7 +159,7 @@ dpaa2_affine_dpio_intr_to_respective_core(int32_t dpio_id, int lcoreid)
 		return;
 	}
 
-	cpu_mask = cpu_mask << dpaa2_cpu[lcoreid];
+	cpu_mask = cpu_mask << dpaa2_cpu[cpu_id];
 	snprintf(command, COMMAND_LEN, "echo %X > /proc/irq/%s/smp_affinity",
 		 cpu_mask, token);
 	ret = system(command);
@@ -144,7 +173,7 @@ dpaa2_affine_dpio_intr_to_respective_core(int32_t dpio_id, int lcoreid)
 	fclose(file);
 }
 
-static int dpaa2_dpio_intr_init(struct dpaa2_dpio_dev *dpio_dev, int lcoreid)
+static int dpaa2_dpio_intr_init(struct dpaa2_dpio_dev *dpio_dev, int cpu_id)
 {
 	struct epoll_event epoll_ev;
 	int eventfd, dpio_epoll_fd, ret;
@@ -181,36 +210,42 @@ static int dpaa2_dpio_intr_init(struct dpaa2_dpio_dev *dpio_dev, int lcoreid)
 	}
 
 	dpio_dev->epoll_fd = dpio_epoll_fd;
 
-	dpaa2_affine_dpio_intr_to_respective_core(dpio_dev->hw_id, lcoreid);
+	dpaa2_affine_dpio_intr_to_respective_core(dpio_dev->hw_id, cpu_id);
 
 	return 0;
 }
+
+static void dpaa2_dpio_intr_deinit(struct dpaa2_dpio_dev *dpio_dev)
+{
+	int ret;
+
+	ret = rte_dpaa2_intr_disable(&dpio_dev->intr_handle, 0);
+	if (ret)
+		DPAA2_BUS_ERR("DPIO interrupt disable failed");
+
+	close(dpio_dev->epoll_fd);
+}
 #endif
 
 static int
-dpaa2_configure_stashing(struct dpaa2_dpio_dev *dpio_dev, int lcoreid)
+dpaa2_configure_stashing(struct dpaa2_dpio_dev *dpio_dev)
 {
 	int sdest, ret;
 	int cpu_id;
 
 	/* Set the Stashing Destination */
-	if (lcoreid < 0) {
-		lcoreid = rte_get_master_lcore();
-		if (lcoreid < 0) {
-			DPAA2_BUS_ERR("Getting CPU Index failed");
-			return -1;
-		}
+	cpu_id = dpaa2_get_core_id();
+	if (cpu_id < 0) {
+		DPAA2_BUS_ERR("Thread not affined to a single core");
+		return -1;
 	}
 
-	cpu_id = dpaa2_cpu[lcoreid];
-
 	/* Set the STASH Destination depending on Current CPU ID.
 	 * Valid values of SDEST are 4,5,6,7. Where,
 	 */
-
 	sdest = dpaa2_core_cluster_sdest(cpu_id);
-	DPAA2_BUS_DEBUG("Portal= %d  CPU= %u lcore id =%u SDEST= %d",
-			dpio_dev->index, cpu_id, lcoreid, sdest);
+	DPAA2_BUS_DEBUG("Portal= %d  CPU= %u SDEST= %d",
			dpio_dev->index, cpu_id, sdest);
 
 	ret = dpio_set_stashing_destination(dpio_dev->dpio, CMD_PRI_LOW,
 					    dpio_dev->token, sdest);
@@ -220,7 +255,7 @@ dpaa2_configure_stashing(struct dpaa2_dpio_dev *dpio_dev, int lcoreid)
 	}
 
 #ifdef RTE_LIBRTE_PMD_DPAA2_EVENTDEV
-	if (dpaa2_dpio_intr_init(dpio_dev, lcoreid)) {
+	if (dpaa2_dpio_intr_init(dpio_dev, cpu_id)) {
 		DPAA2_BUS_ERR("Interrupt registration failed for dpio");
 		return -1;
 	}
@@ -229,7 +264,17 @@ dpaa2_configure_stashing(struct dpaa2_dpio_dev *dpio_dev, int lcoreid)
 	return 0;
 }
 
-static struct dpaa2_dpio_dev *dpaa2_get_qbman_swp(int lcoreid)
+static void dpaa2_put_qbman_swp(struct dpaa2_dpio_dev *dpio_dev)
+{
+	if (dpio_dev) {
+#ifdef RTE_LIBRTE_PMD_DPAA2_EVENTDEV
+		dpaa2_dpio_intr_deinit(dpio_dev);
+#endif
+		rte_atomic16_clear(&dpio_dev->ref_count);
+	}
+}
+
+static struct dpaa2_dpio_dev *dpaa2_get_qbman_swp(void)
 {
 	struct dpaa2_dpio_dev *dpio_dev = NULL;
 	int ret;
@@ -245,9 +290,18 @@ static struct dpaa2_dpio_dev *dpaa2_get_qbman_swp(int lcoreid)
 	DPAA2_BUS_DEBUG("New Portal %p (%d) affined thread - %lu",
 			dpio_dev, dpio_dev->index, syscall(SYS_gettid));
 
-	ret = dpaa2_configure_stashing(dpio_dev, lcoreid);
-	if (ret)
+	ret = dpaa2_configure_stashing(dpio_dev);
+	if (ret) {
 		DPAA2_BUS_ERR("dpaa2_configure_stashing failed");
+		return NULL;
+	}
+
+	ret = pthread_setspecific(dpaa2_portal_key, (void *)dpio_dev);
+	if (ret) {
+		DPAA2_BUS_ERR("pthread_setspecific failed with ret: %d", ret);
+		dpaa2_put_qbman_swp(dpio_dev);
+		return NULL;
+	}
 
 	return dpio_dev;
 }
@@ -255,98 +309,55 @@ static struct dpaa2_dpio_dev *dpaa2_get_qbman_swp(int lcoreid)
 int
 dpaa2_affine_qbman_swp(void)
 {
-	unsigned int lcore_id = rte_lcore_id();
+	struct dpaa2_dpio_dev *dpio_dev;
 	uint64_t tid = syscall(SYS_gettid);
 
-	if (lcore_id == LCORE_ID_ANY)
-		lcore_id = rte_get_master_lcore();
-	/* if the core id is not supported */
-	else if (lcore_id >= RTE_MAX_LCORE)
-		return -1;
-
-	if (dpaa2_io_portal[lcore_id].dpio_dev) {
-		DPAA2_BUS_DP_INFO("DPAA Portal=%p (%d) is being shared"
-			    " between thread %" PRIu64 " and current "
-			    "%" PRIu64 "\n",
-			    dpaa2_io_portal[lcore_id].dpio_dev,
-			    dpaa2_io_portal[lcore_id].dpio_dev->index,
-			    dpaa2_io_portal[lcore_id].net_tid,
-			    tid);
-		RTE_PER_LCORE(_dpaa2_io).dpio_dev
-			= dpaa2_io_portal[lcore_id].dpio_dev;
-		rte_atomic16_inc(&dpaa2_io_portal
-				 [lcore_id].dpio_dev->ref_count);
-		dpaa2_io_portal[lcore_id].net_tid = tid;
-
-		DPAA2_BUS_DP_DEBUG("Old Portal=%p (%d) affined thread - "
-			    "%" PRIu64 "\n",
-			    dpaa2_io_portal[lcore_id].dpio_dev,
-			    dpaa2_io_portal[lcore_id].dpio_dev->index,
-			    tid);
-		return 0;
-	}
-
 	/* Populate the dpaa2_io_portal structure */
-	dpaa2_io_portal[lcore_id].dpio_dev = dpaa2_get_qbman_swp(lcore_id);
-
-	if (dpaa2_io_portal[lcore_id].dpio_dev) {
-		RTE_PER_LCORE(_dpaa2_io).dpio_dev
-			= dpaa2_io_portal[lcore_id].dpio_dev;
-		dpaa2_io_portal[lcore_id].net_tid = tid;
+	if (!RTE_PER_LCORE(_dpaa2_io).dpio_dev) {
+		dpio_dev = dpaa2_get_qbman_swp();
+		if (!dpio_dev) {
+			DPAA2_BUS_ERR("No software portal resource left");
+			return -1;
+		}
+		RTE_PER_LCORE(_dpaa2_io).dpio_dev = dpio_dev;
 
-		return 0;
-	} else {
-		return -1;
+		DPAA2_BUS_INFO(
+			"DPAA Portal=%p (%d) is affined to thread %" PRIu64,
+			dpio_dev, dpio_dev->index, tid);
 	}
+	return 0;
 }
 
 int
 dpaa2_affine_qbman_ethrx_swp(void)
 {
-	unsigned int lcore_id = rte_lcore_id();
+	struct dpaa2_dpio_dev *dpio_dev;
 	uint64_t tid = syscall(SYS_gettid);
 
-	if (lcore_id == LCORE_ID_ANY)
-		lcore_id = rte_get_master_lcore();
-	/* if the core id is not supported */
-	else if (lcore_id >= RTE_MAX_LCORE)
-		return -1;
+	/* Populate the dpaa2_io_portal structure */
+	if (!RTE_PER_LCORE(_dpaa2_io).ethrx_dpio_dev) {
+		dpio_dev = dpaa2_get_qbman_swp();
+		if (!dpio_dev) {
+			DPAA2_BUS_ERR("No software portal resource left");
+			return -1;
+		}
+		RTE_PER_LCORE(_dpaa2_io).ethrx_dpio_dev = dpio_dev;
 
-	if (dpaa2_io_portal[lcore_id].ethrx_dpio_dev) {
-		DPAA2_BUS_DP_INFO(
-			"DPAA Portal=%p (%d) is being shared between thread"
-			" %" PRIu64 " and current %" PRIu64 "\n",
-			dpaa2_io_portal[lcore_id].ethrx_dpio_dev,
-			dpaa2_io_portal[lcore_id].ethrx_dpio_dev->index,
-			dpaa2_io_portal[lcore_id].sec_tid,
-			tid);
-		RTE_PER_LCORE(_dpaa2_io).ethrx_dpio_dev
-			= dpaa2_io_portal[lcore_id].ethrx_dpio_dev;
-		rte_atomic16_inc(&dpaa2_io_portal
-				 [lcore_id].ethrx_dpio_dev->ref_count);
-		dpaa2_io_portal[lcore_id].sec_tid = tid;
-
-		DPAA2_BUS_DP_DEBUG(
-			"Old Portal=%p (%d) affined thread"
-			" - %" PRIu64 "\n",
-			dpaa2_io_portal[lcore_id].ethrx_dpio_dev,
-			dpaa2_io_portal[lcore_id].ethrx_dpio_dev->index,
-			tid);
-		return 0;
+		DPAA2_BUS_INFO(
+			"DPAA Portal=%p (%d) is affined for eth rx to thread %"
+			PRIu64, dpio_dev, dpio_dev->index, tid);
 	}
+	return 0;
+}
 
-	/* Populate the dpaa2_io_portal structure */
-	dpaa2_io_portal[lcore_id].ethrx_dpio_dev =
-		dpaa2_get_qbman_swp(lcore_id);
-
-	if (dpaa2_io_portal[lcore_id].ethrx_dpio_dev) {
-		RTE_PER_LCORE(_dpaa2_io).ethrx_dpio_dev
-			= dpaa2_io_portal[lcore_id].ethrx_dpio_dev;
-		dpaa2_io_portal[lcore_id].sec_tid = tid;
-		return 0;
-	} else {
-		return -1;
-	}
+static void dpaa2_portal_finish(void *arg)
+{
+	RTE_SET_USED(arg);
+
+	dpaa2_put_qbman_swp(RTE_PER_LCORE(_dpaa2_io).dpio_dev);
+	dpaa2_put_qbman_swp(RTE_PER_LCORE(_dpaa2_io).ethrx_dpio_dev);
+
+	pthread_setspecific(dpaa2_portal_key, NULL);
 }
 
 /*
@@ -398,6 +409,7 @@ dpaa2_create_dpio_device(int vdev_fd,
 	struct vfio_region_info reg_info = { .argsz = sizeof(reg_info)};
 	struct qbman_swp_desc p_des;
 	struct dpio_attr attr;
+	int ret;
 	static int check_lcore_cpuset;
 
 	if (obj_info->num_regions < NUM_DPIO_REGIONS) {
@@ -547,12 +559,26 @@ dpaa2_create_dpio_device(int vdev_fd,
 
 	TAILQ_INSERT_TAIL(&dpio_dev_list, dpio_dev, next);
 
+	if (!dpaa2_portal_key) {
+		/* create the key, supplying a function that'll be invoked
+		 * when a portal affined thread will be deleted.
+		 */
+		ret = pthread_key_create(&dpaa2_portal_key,
+					 dpaa2_portal_finish);
+		if (ret) {
+			DPAA2_BUS_DEBUG("Unable to create pthread key (%d)",
+					ret);
+			goto err;
+		}
+	}
+
 	return 0;
 
 err:
 	if (dpio_dev->dpio) {
 		dpio_disable(dpio_dev->dpio, CMD_PRI_LOW, dpio_dev->token);
 		dpio_close(dpio_dev->dpio, CMD_PRI_LOW, dpio_dev->token);
+		rte_free(dpio_dev->eqresp);
 		rte_free(dpio_dev->dpio);
 	}
diff --git a/drivers/bus/fslmc/portal/dpaa2_hw_dpio.h b/drivers/bus/fslmc/portal/dpaa2_hw_dpio.h
index f6436f2e5..b8eb8ee0a 100644
--- a/drivers/bus/fslmc/portal/dpaa2_hw_dpio.h
+++ b/drivers/bus/fslmc/portal/dpaa2_hw_dpio.h
@@ -14,9 +14,6 @@
 struct dpaa2_io_portal_t {
 	struct dpaa2_dpio_dev *dpio_dev;
 	struct dpaa2_dpio_dev *ethrx_dpio_dev;
-	uint64_t net_tid;
-	uint64_t sec_tid;
-	void *eventdev;
 };
 
 /*! Global per thread DPIO portal */
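The mechanism the rework leans on is the POSIX thread-specific-data destructor: pthread_key_create() registers a callback that runs at thread exit for every thread holding a non-NULL value under the key. A standalone sketch of that mechanism (simplified, not the driver code):

#include <pthread.h>
#include <stdio.h>

static pthread_key_t portal_key;

static void portal_finish(void *arg)
{
	/* runs automatically at thread exit for non-NULL values */
	printf("releasing portal %p\n", arg);
}

static void *worker(void *arg)
{
	pthread_setspecific(portal_key, arg);	/* "acquire" a portal */
	return NULL;
}

int main(void)
{
	pthread_t t;
	int portal = 42;

	pthread_key_create(&portal_key, portal_finish);
	pthread_create(&t, NULL, worker, &portal);
	pthread_join(t, NULL);	/* portal_finish already ran in t's context */
	pthread_key_delete(portal_key);
	return 0;
}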
From patchwork Tue Jul 7 09:22:21 2020
X-Patchwork-Submitter: Hemant Agrawal
X-Patchwork-Id: 234959
From: Hemant Agrawal
To: dev@dpdk.org
Cc: ferruh.yigit@intel.com, Nipun Gupta, Hemant Agrawal
Date: Tue, 7 Jul 2020 14:52:21 +0530
Message-Id: <20200707092244.12791-7-hemant.agrawal@nxp.com>
In-Reply-To: <20200707092244.12791-1-hemant.agrawal@nxp.com>
References: <20200527132326.1382-1-hemant.agrawal@nxp.com> <20200707092244.12791-1-hemant.agrawal@nxp.com>
Subject: [dpdk-dev] [PATCH v2 06/29] bus/fslmc: support handle portal alloc failure

Add error handling on portal allocation failure.
Signed-off-by: Nipun Gupta
Signed-off-by: Hemant Agrawal
---
 drivers/bus/fslmc/portal/dpaa2_hw_dpio.c | 9 ++++++---
 1 file changed, 6 insertions(+), 3 deletions(-)

--
2.17.1

diff --git a/drivers/bus/fslmc/portal/dpaa2_hw_dpio.c b/drivers/bus/fslmc/portal/dpaa2_hw_dpio.c
index 47ae72749..5a12ff35d 100644
--- a/drivers/bus/fslmc/portal/dpaa2_hw_dpio.c
+++ b/drivers/bus/fslmc/portal/dpaa2_hw_dpio.c
@@ -284,8 +284,10 @@ static struct dpaa2_dpio_dev *dpaa2_get_qbman_swp(void)
 		if (dpio_dev && rte_atomic16_test_and_set(&dpio_dev->ref_count))
 			break;
 	}
-	if (!dpio_dev)
+	if (!dpio_dev) {
+		DPAA2_BUS_ERR("No software portal resource left");
 		return NULL;
+	}
 
 	DPAA2_BUS_DEBUG("New Portal %p (%d) affined thread - %lu",
 			dpio_dev, dpio_dev->index, syscall(SYS_gettid));
@@ -293,6 +295,7 @@ static struct dpaa2_dpio_dev *dpaa2_get_qbman_swp(void)
 	ret = dpaa2_configure_stashing(dpio_dev);
 	if (ret) {
 		DPAA2_BUS_ERR("dpaa2_configure_stashing failed");
+		rte_atomic16_clear(&dpio_dev->ref_count);
 		return NULL;
 	}
 
@@ -316,7 +319,7 @@ dpaa2_affine_qbman_swp(void)
 	if (!RTE_PER_LCORE(_dpaa2_io).dpio_dev) {
 		dpio_dev = dpaa2_get_qbman_swp();
 		if (!dpio_dev) {
-			DPAA2_BUS_ERR("No software portal resource left");
+			DPAA2_BUS_ERR("Error in software portal allocation");
 			return -1;
 		}
 		RTE_PER_LCORE(_dpaa2_io).dpio_dev = dpio_dev;
@@ -338,7 +341,7 @@ dpaa2_affine_qbman_ethrx_swp(void)
 	if (!RTE_PER_LCORE(_dpaa2_io).ethrx_dpio_dev) {
 		dpio_dev = dpaa2_get_qbman_swp();
 		if (!dpio_dev) {
-			DPAA2_BUS_ERR("No software portal resource left");
+			DPAA2_BUS_ERR("Error in software portal allocation");
 			return -1;
 		}
 		RTE_PER_LCORE(_dpaa2_io).ethrx_dpio_dev = dpio_dev;
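The invariant the fix restores: the portal reference is taken with an atomic test-and-set, so every failure path after that point must clear it again or the portal leaks. A condensed sketch of the pairing (hypothetical names, using the real rte_atomic16 API):

#include <rte_atomic.h>

struct portal {
	rte_atomic16_t ref_count;
};

/* stand-in for dpaa2_configure_stashing(); may fail */
static int configure(struct portal *p)
{
	(void)p;
	return -1;
}

static struct portal *acquire(struct portal *p)
{
	/* takes the reference: returns 0 if it was already set */
	if (!rte_atomic16_test_and_set(&p->ref_count))
		return NULL;			/* portal already in use */

	if (configure(p) != 0) {
		/* the fix: drop the reference on every failure path */
		rte_atomic16_clear(&p->ref_count);
		return NULL;
	}
	return p;
}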
From patchwork Tue Jul 7 09:22:22 2020
X-Patchwork-Submitter: Hemant Agrawal
X-Patchwork-Id: 234960
From: Hemant Agrawal
To: dev@dpdk.org
Cc: ferruh.yigit@intel.com, Nipun Gupta
Date: Tue, 7 Jul 2020 14:52:22 +0530
Message-Id: <20200707092244.12791-8-hemant.agrawal@nxp.com>
In-Reply-To: <20200707092244.12791-1-hemant.agrawal@nxp.com>
References: <20200527132326.1382-1-hemant.agrawal@nxp.com> <20200707092244.12791-1-hemant.agrawal@nxp.com>
Subject: [dpdk-dev] [PATCH v2 07/29] bus/fslmc: support portal migration

From: Nipun Gupta

This patch adds support for portal migration. It disables stashing for
portals used by non-affined threads, or by threads affined to multiple
cores.

Signed-off-by: Nipun Gupta
---
 doc/guides/rel_notes/release_20_08.rst        |   5 +
 drivers/bus/fslmc/portal/dpaa2_hw_dpio.c      |  83 +----
 .../bus/fslmc/qbman/include/fsl_qbman_debug.h |   1 +
 .../fslmc/qbman/include/fsl_qbman_portal.h    |   8 +-
 drivers/bus/fslmc/qbman/qbman_portal.c        | 340 +++++++++++++++++-
 drivers/bus/fslmc/qbman/qbman_portal.h        |  19 +-
 drivers/bus/fslmc/qbman/qbman_sys.h           | 135 ++++++-
 7 files changed, 508 insertions(+), 83 deletions(-)

--
2.17.1

diff --git a/doc/guides/rel_notes/release_20_08.rst b/doc/guides/rel_notes/release_20_08.rst
index ffae463f4..d915fce12 100644
--- a/doc/guides/rel_notes/release_20_08.rst
+++ b/doc/guides/rel_notes/release_20_08.rst
@@ -119,6 +119,11 @@ New Features
   See the :doc:`../sample_app_ug/l2_forward_real_virtual` for more details of
   this parameter usage.
 
+* **Updated NXP dpaa2 ethdev PMD.**
+
+  Updated the NXP dpaa2 ethdev with new features and improvements, including:
+
+  * Added support to use datapath APIs from non-EAL pthread
 
 Removed Items
 -------------
diff --git a/drivers/bus/fslmc/portal/dpaa2_hw_dpio.c b/drivers/bus/fslmc/portal/dpaa2_hw_dpio.c
index 5a12ff35d..97be76116 100644
--- a/drivers/bus/fslmc/portal/dpaa2_hw_dpio.c
+++ b/drivers/bus/fslmc/portal/dpaa2_hw_dpio.c
@@ -53,10 +53,6 @@ static uint32_t io_space_count;
 /* Variable to store DPAA2 platform type */
 uint32_t dpaa2_svr_family;
 
-/* Physical core id for lcores running on dpaa2. */
-/* DPAA2 only support 1 lcore to 1 phy cpu mapping */
-static unsigned int dpaa2_cpu[RTE_MAX_LCORE];
-
 /* Variable to store DPAA2 DQRR size */
 uint8_t dpaa2_dqrr_size;
 /* Variable to store DPAA2 EQCR size */
@@ -159,7 +155,7 @@ dpaa2_affine_dpio_intr_to_respective_core(int32_t dpio_id, int cpu_id)
 		return;
 	}
 
-	cpu_mask = cpu_mask << dpaa2_cpu[cpu_id];
+	cpu_mask = cpu_mask << cpu_id;
 	snprintf(command, COMMAND_LEN, "echo %X > /proc/irq/%s/smp_affinity",
 		 cpu_mask, token);
 	ret = system(command);
@@ -228,17 +224,9 @@ static void dpaa2_dpio_intr_deinit(struct dpaa2_dpio_dev *dpio_dev)
 #endif
 
 static int
-dpaa2_configure_stashing(struct dpaa2_dpio_dev *dpio_dev)
+dpaa2_configure_stashing(struct dpaa2_dpio_dev *dpio_dev, int cpu_id)
 {
 	int sdest, ret;
-	int cpu_id;
-
-	/* Set the Stashing Destination */
-	cpu_id = dpaa2_get_core_id();
-	if (cpu_id < 0) {
-		DPAA2_BUS_ERR("Thread not affined to a single core");
-		return -1;
-	}
 
 	/* Set the STASH Destination depending on Current CPU ID.
 	 * Valid values of SDEST are 4,5,6,7. Where,
@@ -277,6 +265,7 @@ static void dpaa2_put_qbman_swp(struct dpaa2_dpio_dev *dpio_dev)
 static struct dpaa2_dpio_dev *dpaa2_get_qbman_swp(void)
 {
 	struct dpaa2_dpio_dev *dpio_dev = NULL;
+	int cpu_id;
 	int ret;
 
 	/* Get DPIO dev handle from list using index */
@@ -292,11 +281,19 @@ static struct dpaa2_dpio_dev *dpaa2_get_qbman_swp(void)
 	DPAA2_BUS_DEBUG("New Portal %p (%d) affined thread - %lu",
 			dpio_dev, dpio_dev->index, syscall(SYS_gettid));
 
-	ret = dpaa2_configure_stashing(dpio_dev);
-	if (ret) {
-		DPAA2_BUS_ERR("dpaa2_configure_stashing failed");
-		rte_atomic16_clear(&dpio_dev->ref_count);
-		return NULL;
+	/* Set the Stashing Destination */
+	cpu_id = dpaa2_get_core_id();
+	if (cpu_id < 0) {
+		DPAA2_BUS_WARN("Thread not affined to a single core");
+		if (dpaa2_svr_family != SVR_LX2160A)
+			qbman_swp_update(dpio_dev->sw_portal, 1);
+	} else {
+		ret = dpaa2_configure_stashing(dpio_dev, cpu_id);
+		if (ret) {
+			DPAA2_BUS_ERR("dpaa2_configure_stashing failed");
+			rte_atomic16_clear(&dpio_dev->ref_count);
+			return NULL;
+		}
 	}
 
 	ret = pthread_setspecific(dpaa2_portal_key, (void *)dpio_dev);
@@ -363,46 +360,6 @@ static void dpaa2_portal_finish(void *arg)
 	pthread_setspecific(dpaa2_portal_key, NULL);
 }
 
-/*
- * This checks for not supported lcore mappings as well as get the physical
- * cpuid for the lcore.
- * one lcore can only map to 1 cpu i.e. 1@10-14 not supported.
- * one cpu can be mapped to more than one lcores.
- */
-static int
-dpaa2_check_lcore_cpuset(void)
-{
-	unsigned int lcore_id, i;
-	int ret = 0;
-
-	for (lcore_id = 0; lcore_id < RTE_MAX_LCORE; lcore_id++)
-		dpaa2_cpu[lcore_id] = 0xffffffff;
-
-	for (lcore_id = 0; lcore_id < RTE_MAX_LCORE; lcore_id++) {
-		rte_cpuset_t cpuset = rte_lcore_cpuset(lcore_id);
-
-		for (i = 0; i < CPU_SETSIZE; i++) {
-			if (!CPU_ISSET(i, &cpuset))
-				continue;
-			if (i >= RTE_MAX_LCORE) {
-				DPAA2_BUS_ERR("ERR:lcore map to core %u (>= %u) not supported",
-					i, RTE_MAX_LCORE);
-				ret = -1;
-				continue;
-			}
-			RTE_LOG(DEBUG, EAL, "lcore id = %u cpu=%u\n",
-				lcore_id, i);
-			if (dpaa2_cpu[lcore_id] != 0xffffffff) {
-				DPAA2_BUS_ERR("ERR:lcore map to multi-cpu not supported");
-				ret = -1;
-				continue;
-			}
-			dpaa2_cpu[lcore_id] = i;
-		}
-	}
-	return ret;
-}
-
 static int
 dpaa2_create_dpio_device(int vdev_fd,
 			 struct vfio_device_info *obj_info,
@@ -413,7 +370,6 @@ dpaa2_create_dpio_device(int vdev_fd,
 	struct qbman_swp_desc p_des;
 	struct dpio_attr attr;
 	int ret;
-	static int check_lcore_cpuset;
 
 	if (obj_info->num_regions < NUM_DPIO_REGIONS) {
 		DPAA2_BUS_ERR("Not sufficient number of DPIO regions");
@@ -433,13 +389,6 @@ dpaa2_create_dpio_device(int vdev_fd,
 	/* Using single portal  for all devices */
 	dpio_dev->mc_portal = dpaa2_get_mcp_ptr(MC_PORTAL_INDEX);
 
-	if (!check_lcore_cpuset) {
-		check_lcore_cpuset = 1;
-
-		if (dpaa2_check_lcore_cpuset() < 0)
-			goto err;
-	}
-
 	dpio_dev->dpio = rte_zmalloc(NULL, sizeof(struct fsl_mc_io),
 				     RTE_CACHE_LINE_SIZE);
 	if (!dpio_dev->dpio) {
diff --git a/drivers/bus/fslmc/qbman/include/fsl_qbman_debug.h b/drivers/bus/fslmc/qbman/include/fsl_qbman_debug.h
index 11267d439..54096e877 100644
--- a/drivers/bus/fslmc/qbman/include/fsl_qbman_debug.h
+++ b/drivers/bus/fslmc/qbman/include/fsl_qbman_debug.h
@@ -1,5 +1,6 @@
 /* SPDX-License-Identifier: BSD-3-Clause
  * Copyright (C) 2015 Freescale Semiconductor, Inc.
+ * Copyright 2020 NXP
  */
 #ifndef _FSL_QBMAN_DEBUG_H
 #define _FSL_QBMAN_DEBUG_H
diff --git a/drivers/bus/fslmc/qbman/include/fsl_qbman_portal.h b/drivers/bus/fslmc/qbman/include/fsl_qbman_portal.h
index f820077d2..eb68c9cab 100644
--- a/drivers/bus/fslmc/qbman/include/fsl_qbman_portal.h
+++ b/drivers/bus/fslmc/qbman/include/fsl_qbman_portal.h
@@ -1,7 +1,7 @@
 /* SPDX-License-Identifier: BSD-3-Clause
  *
  * Copyright (C) 2014 Freescale Semiconductor, Inc.
- * Copyright 2015-2019 NXP
+ * Copyright 2015-2020 NXP
  *
  */
 #ifndef _FSL_QBMAN_PORTAL_H
@@ -44,6 +44,12 @@ extern uint32_t dpaa2_svr_family;
  */
 struct qbman_swp *qbman_swp_init(const struct qbman_swp_desc *d);
 
+/**
+ * qbman_swp_update() - Update portal cacheability attributes.
+ * @p: the given qbman swp portal
+ */
+int qbman_swp_update(struct qbman_swp *p, int stash_off);
+
 /**
  * qbman_swp_finish() - Create and destroy a functional object representing
  * the given QBMan portal descriptor.
diff --git a/drivers/bus/fslmc/qbman/qbman_portal.c b/drivers/bus/fslmc/qbman/qbman_portal.c
index d7ff74c7a..57f50b0d8 100644
--- a/drivers/bus/fslmc/qbman/qbman_portal.c
+++ b/drivers/bus/fslmc/qbman/qbman_portal.c
@@ -1,7 +1,7 @@
 /* SPDX-License-Identifier: BSD-3-Clause
  *
  * Copyright (C) 2014-2016 Freescale Semiconductor, Inc.
- * Copyright 2018-2019 NXP
+ * Copyright 2018-2020 NXP
  *
  */
 
@@ -82,6 +82,10 @@ qbman_swp_enqueue_ring_mode_cinh_direct(struct qbman_swp *s,
 		const struct qbman_eq_desc *d,
 		const struct qbman_fd *fd);
 static int
+qbman_swp_enqueue_ring_mode_cinh_direct(struct qbman_swp *s,
+		const struct qbman_eq_desc *d,
+		const struct qbman_fd *fd);
+static int
 qbman_swp_enqueue_ring_mode_mem_back(struct qbman_swp *s,
 		const struct qbman_eq_desc *d,
 		const struct qbman_fd *fd);
@@ -99,6 +103,12 @@ qbman_swp_enqueue_multiple_cinh_direct(struct qbman_swp *s,
 		uint32_t *flags,
 		int num_frames);
 static int
+qbman_swp_enqueue_multiple_cinh_direct(struct qbman_swp *s,
+		const struct qbman_eq_desc *d,
+		const struct qbman_fd *fd,
+		uint32_t *flags,
+		int num_frames);
+static int
 qbman_swp_enqueue_multiple_mem_back(struct qbman_swp *s,
 		const struct qbman_eq_desc *d,
 		const struct qbman_fd *fd,
@@ -118,6 +128,12 @@ qbman_swp_enqueue_multiple_fd_cinh_direct(struct qbman_swp *s,
 		uint32_t *flags,
 		int num_frames);
 static int
+qbman_swp_enqueue_multiple_fd_cinh_direct(struct qbman_swp *s,
+		const struct qbman_eq_desc *d,
+		struct qbman_fd **fd,
+		uint32_t *flags,
+		int num_frames);
+static int
 qbman_swp_enqueue_multiple_fd_mem_back(struct qbman_swp *s,
 		const struct qbman_eq_desc *d,
 		struct qbman_fd **fd,
@@ -135,6 +151,11 @@ qbman_swp_enqueue_multiple_desc_cinh_direct(struct qbman_swp *s,
 		const struct qbman_fd *fd,
 		int num_frames);
 static int
+qbman_swp_enqueue_multiple_desc_cinh_direct(struct qbman_swp *s,
+		const struct qbman_eq_desc *d,
+		const struct qbman_fd *fd,
+		int num_frames);
+static int
 qbman_swp_enqueue_multiple_desc_mem_back(struct qbman_swp *s,
 		const struct qbman_eq_desc *d,
 		const struct qbman_fd *fd,
@@ -143,9 +164,12 @@ qbman_swp_enqueue_multiple_desc_mem_back(struct qbman_swp *s,
 static int
 qbman_swp_pull_direct(struct qbman_swp *s, struct qbman_pull_desc *d);
 static int
+qbman_swp_pull_cinh_direct(struct qbman_swp *s, struct qbman_pull_desc *d);
+static int
 qbman_swp_pull_mem_back(struct qbman_swp *s, struct qbman_pull_desc *d);
 
 const struct qbman_result *qbman_swp_dqrr_next_direct(struct qbman_swp *s);
+const struct qbman_result *qbman_swp_dqrr_next_cinh_direct(struct qbman_swp *s);
 const struct qbman_result *qbman_swp_dqrr_next_mem_back(struct qbman_swp *s);
 
 static int
@@ -153,6 +177,10 @@ qbman_swp_release_direct(struct qbman_swp *s,
 		const struct qbman_release_desc *d,
 		const uint64_t *buffers, unsigned int num_buffers);
 static int
+qbman_swp_release_cinh_direct(struct qbman_swp *s,
+		const struct qbman_release_desc *d,
+		const uint64_t *buffers, unsigned int num_buffers);
+static int
 qbman_swp_release_mem_back(struct qbman_swp *s,
 		const struct qbman_release_desc *d,
 		const uint64_t *buffers, unsigned int num_buffers);
@@ -327,6 +355,28 @@ struct qbman_swp *qbman_swp_init(const struct qbman_swp_desc *d)
 	return p;
 }
 
+int qbman_swp_update(struct qbman_swp *p, int stash_off)
+{
+	const struct qbman_swp_desc *d = &p->desc;
+	struct qbman_swp_sys *s = &p->sys;
+	int ret;
+
+	/* Nothing needs to be done for QBMAN rev > 5000 with fast access */
+	if ((qman_version & QMAN_REV_MASK) >= QMAN_REV_5000
+			&& (d->cena_access_mode == qman_cena_fastest_access))
+		return 0;
+
+	ret = qbman_swp_sys_update(s, d, p->dqrr.dqrr_size, stash_off);
+	if (ret) {
+		pr_err("qbman_swp_sys_init() failed %d\n", ret);
+		return ret;
+	}
+
+	p->stash_off = stash_off;
+
+	return 0;
+}
+
 void qbman_swp_finish(struct qbman_swp *p)
 {
 #ifdef QBMAN_CHECKING
@@ -462,6 +512,27 @@ void qbman_swp_mc_submit(struct qbman_swp *p, void *cmd, uint8_t cmd_verb)
 #endif
 }
 
+void qbman_swp_mc_submit_cinh(struct qbman_swp *p, void *cmd, uint8_t cmd_verb)
+{
+	uint8_t *v = cmd;
+#ifdef QBMAN_CHECKING
+	QBMAN_BUG_ON(!(p->mc.check != swp_mc_can_submit));
+#endif
+	/* TBD: "|=" is going to hurt performance. Need to move as many fields
+	 * out of word zero, and for those that remain, the "OR" needs to occur
+	 * at the caller side. This debug check helps to catch cases where the
+	 * caller wants to OR but has forgotten to do so.
+	 */
+	QBMAN_BUG_ON((*v & cmd_verb) != *v);
+	dma_wmb();
+	*v = cmd_verb | p->mc.valid_bit;
+	qbman_cinh_write_complete(&p->sys, QBMAN_CENA_SWP_CR, cmd);
+	clean(cmd);
+#ifdef QBMAN_CHECKING
+	p->mc.check = swp_mc_can_poll;
+#endif
+}
+
 void *qbman_swp_mc_result(struct qbman_swp *p)
 {
 	uint32_t *ret, verb;
@@ -500,6 +571,27 @@ void *qbman_swp_mc_result(struct qbman_swp *p)
 	return ret;
 }
 
+void *qbman_swp_mc_result_cinh(struct qbman_swp *p)
+{
+	uint32_t *ret, verb;
+#ifdef QBMAN_CHECKING
+	QBMAN_BUG_ON(p->mc.check != swp_mc_can_poll);
+#endif
+	ret = qbman_cinh_read_shadow(&p->sys,
+			QBMAN_CENA_SWP_RR(p->mc.valid_bit));
+	/* Remove the valid-bit -
+	 * command completed iff the rest is non-zero
+	 */
+	verb = ret[0] & ~QB_VALID_BIT;
+	if (!verb)
+		return NULL;
+	p->mc.valid_bit ^= QB_VALID_BIT;
+#ifdef QBMAN_CHECKING
+	p->mc.check = swp_mc_can_start;
+#endif
+	return ret;
+}
+
 /***********/
 /* Enqueue */
 /***********/
@@ -640,6 +732,16 @@ static inline void qbman_write_eqcr_am_rt_register(struct qbman_swp *p,
 				     QMAN_RT_MODE);
 }
 
+static void memcpy_byte_by_byte(void *to, const void *from, size_t n)
+{
+	const uint8_t *src = from;
+	volatile uint8_t *dest = to;
+	size_t i;
+
+	for (i = 0; i < n; i++)
+		dest[i] = src[i];
+}
+
 
 static int qbman_swp_enqueue_array_mode_direct(struct qbman_swp *s,
 					       const struct qbman_eq_desc *d,
@@ -754,7 +856,7 @@ static int qbman_swp_enqueue_ring_mode_cinh_direct(
 		return -EBUSY;
 	}
 
-	p = qbman_cena_write_start_wo_shadow(&s->sys,
+	p = qbman_cinh_write_start_wo_shadow(&s->sys,
 			QBMAN_CENA_SWP_EQCR(s->eqcr.pi & half_mask));
 	memcpy(&p[1], &cl[1], 28);
 	memcpy(&p[8], fd, sizeof(*fd));
@@ -762,8 +864,6 @@ static int qbman_swp_enqueue_ring_mode_cinh_direct(
 
 	/* Set the verb byte, have to substitute in the valid-bit */
 	p[0] = cl[0] | s->eqcr.pi_vb;
-	qbman_cena_write_complete_wo_shadow(&s->sys,
-			QBMAN_CENA_SWP_EQCR(s->eqcr.pi & half_mask));
 	s->eqcr.pi++;
 	s->eqcr.pi &= full_mask;
 	s->eqcr.available--;
@@ -815,7 +915,10 @@ static int qbman_swp_enqueue_ring_mode(struct qbman_swp *s,
 				       const struct qbman_eq_desc *d,
 				       const struct qbman_fd *fd)
 {
-	return qbman_swp_enqueue_ring_mode_ptr(s, d, fd);
+	if (!s->stash_off)
+		return qbman_swp_enqueue_ring_mode_ptr(s, d, fd);
+	else
+		return qbman_swp_enqueue_ring_mode_cinh_direct(s, d, fd);
 }
 
 int qbman_swp_enqueue(struct qbman_swp *s, const struct qbman_eq_desc *d,
@@ -1025,7 +1128,12 @@ int qbman_swp_enqueue_multiple(struct qbman_swp *s,
 				      uint32_t *flags,
 				      int num_frames)
 {
-	return qbman_swp_enqueue_multiple_ptr(s, d, fd, flags, num_frames);
+	if (!s->stash_off)
+		return qbman_swp_enqueue_multiple_ptr(s, d, fd, flags,
+						num_frames);
+	else
+		return qbman_swp_enqueue_multiple_cinh_direct(s, d, fd, flags,
+						num_frames);
 }
 
 static int qbman_swp_enqueue_multiple_fd_direct(struct qbman_swp *s,
@@ -1233,7 +1341,12 @@ int qbman_swp_enqueue_multiple_fd(struct qbman_swp *s,
 				  uint32_t *flags,
 				  int num_frames)
 {
-	return qbman_swp_enqueue_multiple_fd_ptr(s, d, fd, flags, num_frames);
+	if (!s->stash_off)
+		return qbman_swp_enqueue_multiple_fd_ptr(s, d, fd, flags,
+					num_frames);
+	else
+		return qbman_swp_enqueue_multiple_fd_cinh_direct(s, d, fd,
+					flags, num_frames);
 }
 
 static int qbman_swp_enqueue_multiple_desc_direct(struct qbman_swp *s,
@@ -1426,7 +1539,13 @@ int qbman_swp_enqueue_multiple_desc(struct qbman_swp *s,
 				    const struct qbman_fd *fd,
 				    int num_frames)
 {
-	return qbman_swp_enqueue_multiple_desc_ptr(s, d, fd, num_frames);
+	if (!s->stash_off)
+		return qbman_swp_enqueue_multiple_desc_ptr(s, d, fd,
+					num_frames);
+	else
+		return qbman_swp_enqueue_multiple_desc_cinh_direct(s, d, fd,
+					num_frames);
+
 }
 
 /*************************/
@@ -1574,6 +1693,30 @@ static int qbman_swp_pull_direct(struct qbman_swp *s,
 	return 0;
 }
 
+static int qbman_swp_pull_cinh_direct(struct qbman_swp *s,
+				 struct qbman_pull_desc *d)
+{
+	uint32_t *p;
+	uint32_t *cl = qb_cl(d);
+
+	if (!atomic_dec_and_test(&s->vdq.busy)) {
+		atomic_inc(&s->vdq.busy);
+		return -EBUSY;
+	}
+
+	d->pull.tok = s->sys.idx + 1;
+	s->vdq.storage = (void *)(size_t)d->pull.rsp_addr_virt;
+	p = qbman_cinh_write_start_wo_shadow(&s->sys, QBMAN_CENA_SWP_VDQCR);
+	memcpy_byte_by_byte(&p[1], &cl[1], 12);
+
+	/* Set the verb byte, have to substitute in the valid-bit */
+	lwsync();
+	p[0] = cl[0] | s->vdq.valid_bit;
+	s->vdq.valid_bit ^= QB_VALID_BIT;
+
+	return 0;
+}
+
 static int qbman_swp_pull_mem_back(struct qbman_swp *s,
 				   struct qbman_pull_desc *d)
 {
@@ -1601,7 +1744,10 @@ static int qbman_swp_pull_mem_back(struct qbman_swp *s,
 
 int qbman_swp_pull(struct qbman_swp *s, struct qbman_pull_desc *d)
 {
-	return qbman_swp_pull_ptr(s, d);
+	if (!s->stash_off)
+		return qbman_swp_pull_ptr(s, d);
+	else
+		return qbman_swp_pull_cinh_direct(s, d);
 }
 
 /****************/
@@ -1638,7 +1784,10 @@ void qbman_swp_prefetch_dqrr_next(struct qbman_swp *s)
  */
 const struct qbman_result *qbman_swp_dqrr_next(struct qbman_swp *s)
 {
-	return qbman_swp_dqrr_next_ptr(s);
+	if (!s->stash_off)
+		return qbman_swp_dqrr_next_ptr(s);
+	else
+		return qbman_swp_dqrr_next_cinh_direct(s);
 }
 
 const struct qbman_result *qbman_swp_dqrr_next_direct(struct qbman_swp *s)
@@ -1718,6 +1867,81 @@ const struct qbman_result *qbman_swp_dqrr_next_direct(struct qbman_swp *s)
 	return p;
 }
 
+const struct qbman_result *qbman_swp_dqrr_next_cinh_direct(struct qbman_swp *s)
+{
+	uint32_t verb;
+	uint32_t response_verb;
+	uint32_t flags;
+	const struct qbman_result *p;
+
+	/* Before using valid-bit to detect if something is there, we have to
+	 * handle the case of the DQRR reset bug...
+	 */
+	if (s->dqrr.reset_bug) {
+		/* We pick up new entries by cache-inhibited producer index,
+		 * which means that a non-coherent mapping would require us to
+		 * invalidate and read *only* once that PI has indicated that
+		 * there's an entry here. The first trip around the DQRR ring
+		 * will be much less efficient than all subsequent trips around
+		 * it...
+		 */
+		uint8_t pi = qbman_cinh_read(&s->sys, QBMAN_CINH_SWP_DQPI) &
+			     QMAN_DQRR_PI_MASK;
+
+		/* there are new entries if pi != next_idx */
+		if (pi == s->dqrr.next_idx)
+			return NULL;
+
+		/* if next_idx is/was the last ring index, and 'pi' is
+		 * different, we can disable the workaround as all the ring
+		 * entries have now been DMA'd to so valid-bit checking is
+		 * repaired. Note: this logic needs to be based on next_idx
+		 * (which increments one at a time), rather than on pi (which
+		 * can burst and wrap-around between our snapshots of it).
+ */ + QBMAN_BUG_ON((s->dqrr.dqrr_size - 1) < 0); + if (s->dqrr.next_idx == (s->dqrr.dqrr_size - 1u)) { + pr_debug("DEBUG: next_idx=%d, pi=%d, clear reset bug\n", + s->dqrr.next_idx, pi); + s->dqrr.reset_bug = 0; + } + } + p = qbman_cinh_read_wo_shadow(&s->sys, + QBMAN_CENA_SWP_DQRR(s->dqrr.next_idx)); + + verb = p->dq.verb; + + /* If the valid-bit isn't of the expected polarity, nothing there. Note, + * in the DQRR reset bug workaround, we shouldn't need to skip these + * check, because we've already determined that a new entry is available + * and we've invalidated the cacheline before reading it, so the + * valid-bit behaviour is repaired and should tell us what we already + * knew from reading PI. + */ + if ((verb & QB_VALID_BIT) != s->dqrr.valid_bit) + return NULL; + + /* There's something there. Move "next_idx" attention to the next ring + * entry (and prefetch it) before returning what we found. + */ + s->dqrr.next_idx++; + if (s->dqrr.next_idx == s->dqrr.dqrr_size) { + s->dqrr.next_idx = 0; + s->dqrr.valid_bit ^= QB_VALID_BIT; + } + /* If this is the final response to a volatile dequeue command + * indicate that the vdq is no longer busy + */ + flags = p->dq.stat; + response_verb = verb & QBMAN_RESPONSE_VERB_MASK; + if ((response_verb == QBMAN_RESULT_DQ) && + (flags & QBMAN_DQ_STAT_VOLATILE) && + (flags & QBMAN_DQ_STAT_EXPIRED)) + atomic_inc(&s->vdq.busy); + + return p; +} + const struct qbman_result *qbman_swp_dqrr_next_mem_back(struct qbman_swp *s) { uint32_t verb; @@ -2096,6 +2320,37 @@ static int qbman_swp_release_direct(struct qbman_swp *s, return 0; } +static int qbman_swp_release_cinh_direct(struct qbman_swp *s, + const struct qbman_release_desc *d, + const uint64_t *buffers, + unsigned int num_buffers) +{ + uint32_t *p; + const uint32_t *cl = qb_cl(d); + uint32_t rar = qbman_cinh_read(&s->sys, QBMAN_CINH_SWP_RAR); + + pr_debug("RAR=%08x\n", rar); + if (!RAR_SUCCESS(rar)) + return -EBUSY; + + QBMAN_BUG_ON(!num_buffers || (num_buffers > 7)); + + /* Start the release command */ + p = qbman_cinh_write_start_wo_shadow(&s->sys, + QBMAN_CENA_SWP_RCR(RAR_IDX(rar))); + + /* Copy the caller's buffer pointers to the command */ + memcpy_byte_by_byte(&p[2], buffers, num_buffers * sizeof(uint64_t)); + + /* Set the verb byte, have to substitute in the valid-bit and the + * number of buffers. 
+ */ + lwsync(); + p[0] = cl[0] | RAR_VB(rar) | num_buffers; + + return 0; +} + static int qbman_swp_release_mem_back(struct qbman_swp *s, const struct qbman_release_desc *d, const uint64_t *buffers, @@ -2134,7 +2389,11 @@ int qbman_swp_release(struct qbman_swp *s, const uint64_t *buffers, unsigned int num_buffers) { - return qbman_swp_release_ptr(s, d, buffers, num_buffers); + if (!s->stash_off) + return qbman_swp_release_ptr(s, d, buffers, num_buffers); + else + return qbman_swp_release_cinh_direct(s, d, buffers, + num_buffers); } /*******************/ @@ -2157,8 +2416,8 @@ struct qbman_acquire_rslt { uint64_t buf[7]; }; -int qbman_swp_acquire(struct qbman_swp *s, uint16_t bpid, uint64_t *buffers, - unsigned int num_buffers) +static int qbman_swp_acquire_direct(struct qbman_swp *s, uint16_t bpid, + uint64_t *buffers, unsigned int num_buffers) { struct qbman_acquire_desc *p; struct qbman_acquire_rslt *r; @@ -2202,6 +2461,61 @@ int qbman_swp_acquire(struct qbman_swp *s, uint16_t bpid, uint64_t *buffers, return (int)r->num; } +static int qbman_swp_acquire_cinh_direct(struct qbman_swp *s, uint16_t bpid, + uint64_t *buffers, unsigned int num_buffers) +{ + struct qbman_acquire_desc *p; + struct qbman_acquire_rslt *r; + + if (!num_buffers || (num_buffers > 7)) + return -EINVAL; + + /* Start the management command */ + p = qbman_swp_mc_start(s); + + if (!p) + return -EBUSY; + + /* Encode the caller-provided attributes */ + p->bpid = bpid; + p->num = num_buffers; + + /* Complete the management command */ + r = qbman_swp_mc_complete_cinh(s, p, QBMAN_MC_ACQUIRE); + if (!r) { + pr_err("qbman: acquire from BPID %d failed, no response\n", + bpid); + return -EIO; + } + + /* Decode the outcome */ + QBMAN_BUG_ON((r->verb & QBMAN_RESPONSE_VERB_MASK) != QBMAN_MC_ACQUIRE); + + /* Determine success or failure */ + if (r->rslt != QBMAN_MC_RSLT_OK) { + pr_err("Acquire buffers from BPID 0x%x failed, code=0x%02x\n", + bpid, r->rslt); + return -EIO; + } + + QBMAN_BUG_ON(r->num > num_buffers); + + /* Copy the acquired buffers to the caller's array */ + u64_from_le32_copy(buffers, &r->buf[0], r->num); + + return (int)r->num; +} + +int qbman_swp_acquire(struct qbman_swp *s, uint16_t bpid, uint64_t *buffers, + unsigned int num_buffers) +{ + if (!s->stash_off) + return qbman_swp_acquire_direct(s, bpid, buffers, num_buffers); + else + return qbman_swp_acquire_cinh_direct(s, bpid, buffers, + num_buffers); +} + /*****************/ /* FQ management */ /*****************/ diff --git a/drivers/bus/fslmc/qbman/qbman_portal.h b/drivers/bus/fslmc/qbman/qbman_portal.h index 3aaacae52..1cf791830 100644 --- a/drivers/bus/fslmc/qbman/qbman_portal.h +++ b/drivers/bus/fslmc/qbman/qbman_portal.h @@ -1,7 +1,7 @@ /* SPDX-License-Identifier: BSD-3-Clause * * Copyright (C) 2014-2016 Freescale Semiconductor, Inc. 
- * Copyright 2018-2019 NXP + * Copyright 2018-2020 NXP * */ @@ -102,6 +102,7 @@ struct qbman_swp { uint32_t ci; int available; } eqcr; + uint8_t stash_off; }; /* -------------------------- */ @@ -118,7 +119,9 @@ struct qbman_swp { */ void *qbman_swp_mc_start(struct qbman_swp *p); void qbman_swp_mc_submit(struct qbman_swp *p, void *cmd, uint8_t cmd_verb); +void qbman_swp_mc_submit_cinh(struct qbman_swp *p, void *cmd, uint8_t cmd_verb); void *qbman_swp_mc_result(struct qbman_swp *p); +void *qbman_swp_mc_result_cinh(struct qbman_swp *p); /* Wraps up submit + poll-for-result */ static inline void *qbman_swp_mc_complete(struct qbman_swp *swp, void *cmd, @@ -135,6 +138,20 @@ static inline void *qbman_swp_mc_complete(struct qbman_swp *swp, void *cmd, return cmd; } +static inline void *qbman_swp_mc_complete_cinh(struct qbman_swp *swp, void *cmd, + uint8_t cmd_verb) +{ + int loopvar = 1000; + + qbman_swp_mc_submit_cinh(swp, cmd, cmd_verb); + do { + cmd = qbman_swp_mc_result_cinh(swp); + } while (!cmd && loopvar--); + QBMAN_BUG_ON(!loopvar); + + return cmd; +} + /* ---------------------- */ /* Descriptors/cachelines */ /* ---------------------- */ diff --git a/drivers/bus/fslmc/qbman/qbman_sys.h b/drivers/bus/fslmc/qbman/qbman_sys.h index 55449edf3..61f817c47 100644 --- a/drivers/bus/fslmc/qbman/qbman_sys.h +++ b/drivers/bus/fslmc/qbman/qbman_sys.h @@ -1,7 +1,7 @@ /* SPDX-License-Identifier: BSD-3-Clause * * Copyright (C) 2014-2016 Freescale Semiconductor, Inc. - * Copyright 2019 NXP + * Copyright 2019-2020 NXP */ /* qbman_sys_decl.h and qbman_sys.h are the two platform-specific files in the * driver. They are only included via qbman_private.h, which is itself a @@ -190,6 +190,34 @@ static inline void qbman_cinh_write(struct qbman_swp_sys *s, uint32_t offset, #endif } +static inline void *qbman_cinh_write_start_wo_shadow(struct qbman_swp_sys *s, + uint32_t offset) +{ +#ifdef QBMAN_CINH_TRACE + pr_info("qbman_cinh_write_start(%p:%d:0x%03x)\n", + s->addr_cinh, s->idx, offset); +#endif + QBMAN_BUG_ON(offset & 63); + return (s->addr_cinh + offset); +} + +static inline void qbman_cinh_write_complete(struct qbman_swp_sys *s, + uint32_t offset, void *cmd) +{ + const uint32_t *shadow = cmd; + int loop; +#ifdef QBMAN_CINH_TRACE + pr_info("qbman_cinh_write_complete(%p:%d:0x%03x) %p\n", + s->addr_cinh, s->idx, offset, shadow); + hexdump(cmd, 64); +#endif + for (loop = 15; loop >= 1; loop--) + __raw_writel(shadow[loop], s->addr_cinh + + offset + loop * 4); + lwsync(); + __raw_writel(shadow[0], s->addr_cinh + offset); +} + static inline uint32_t qbman_cinh_read(struct qbman_swp_sys *s, uint32_t offset) { uint32_t reg = __raw_readl(s->addr_cinh + offset); @@ -200,6 +228,35 @@ static inline uint32_t qbman_cinh_read(struct qbman_swp_sys *s, uint32_t offset) return reg; } +static inline void *qbman_cinh_read_shadow(struct qbman_swp_sys *s, + uint32_t offset) +{ + uint32_t *shadow = (uint32_t *)(s->cena + offset); + unsigned int loop; +#ifdef QBMAN_CINH_TRACE + pr_info(" %s (%p:%d:0x%03x) %p\n", __func__, + s->addr_cinh, s->idx, offset, shadow); +#endif + + for (loop = 0; loop < 16; loop++) + shadow[loop] = __raw_readl(s->addr_cinh + offset + + loop * 4); +#ifdef QBMAN_CINH_TRACE + hexdump(shadow, 64); +#endif + return shadow; +} + +static inline void *qbman_cinh_read_wo_shadow(struct qbman_swp_sys *s, + uint32_t offset) +{ +#ifdef QBMAN_CINH_TRACE + pr_info("qbman_cinh_read(%p:%d:0x%03x)\n", + s->addr_cinh, s->idx, offset); +#endif + return s->addr_cinh + offset; +} + static inline void 
*qbman_cena_write_start(struct qbman_swp_sys *s, uint32_t offset) { @@ -476,6 +533,82 @@ static inline int qbman_swp_sys_init(struct qbman_swp_sys *s, return 0; } +static inline int qbman_swp_sys_update(struct qbman_swp_sys *s, + const struct qbman_swp_desc *d, + uint8_t dqrr_size, + int stash_off) +{ + uint32_t reg; + int i; + int cena_region_size = 4*1024; + uint8_t est = 1; +#ifdef RTE_ARCH_64 + uint8_t wn = CENA_WRITE_ENABLE; +#else + uint8_t wn = CINH_WRITE_ENABLE; +#endif + + if (stash_off) + wn = CINH_WRITE_ENABLE; + + QBMAN_BUG_ON(d->idx < 0); +#ifdef QBMAN_CHECKING + /* We should never be asked to initialise for a portal that isn't in + * the power-on state. (Ie. don't forget to reset portals when they are + * decommissioned!) + */ + reg = qbman_cinh_read(s, QBMAN_CINH_SWP_CFG); + QBMAN_BUG_ON(reg); +#endif + if ((d->qman_version & QMAN_REV_MASK) >= QMAN_REV_5000 + && (d->cena_access_mode == qman_cena_fastest_access)) + memset(s->addr_cena, 0, cena_region_size); + else { + /* Invalidate the portal memory. + * This ensures no stale cache lines + */ + for (i = 0; i < cena_region_size; i += 64) + dccivac(s->addr_cena + i); + } + + if (dpaa2_svr_family == SVR_LS1080A) + est = 0; + + if (s->eqcr_mode == qman_eqcr_vb_array) { + reg = qbman_set_swp_cfg(dqrr_size, wn, + 0, 3, 2, 3, 1, 1, 1, 1, 1, 1); + } else { + if ((d->qman_version & QMAN_REV_MASK) >= QMAN_REV_5000 && + (d->cena_access_mode == qman_cena_fastest_access)) + reg = qbman_set_swp_cfg(dqrr_size, wn, + 1, 3, 2, 0, 1, 1, 1, 1, 1, 1); + else + reg = qbman_set_swp_cfg(dqrr_size, wn, + est, 3, 2, 2, 1, 1, 1, 1, 1, 1); + } + + if ((d->qman_version & QMAN_REV_MASK) >= QMAN_REV_5000 + && (d->cena_access_mode == qman_cena_fastest_access)) + reg |= 1 << SWP_CFG_CPBS_SHIFT | /* memory-backed mode */ + 1 << SWP_CFG_VPM_SHIFT | /* VDQCR read triggered mode */ + 1 << SWP_CFG_CPM_SHIFT; /* CR read triggered mode */ + + qbman_cinh_write(s, QBMAN_CINH_SWP_CFG, reg); + reg = qbman_cinh_read(s, QBMAN_CINH_SWP_CFG); + if (!reg) { + pr_err("The portal %d is not enabled!\n", s->idx); + return -1; + } + + if ((d->qman_version & QMAN_REV_MASK) >= QMAN_REV_5000 + && (d->cena_access_mode == qman_cena_fastest_access)) { + qbman_cinh_write(s, QBMAN_CINH_SWP_EQCR_PI, QMAN_RT_MODE); + qbman_cinh_write(s, QBMAN_CINH_SWP_RCR_PI, QMAN_RT_MODE); + } + + return 0; +} + static inline void qbman_swp_sys_finish(struct qbman_swp_sys *s) { free(s->cena); From patchwork Tue Jul 7 09:22:23 2020 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Hemant Agrawal X-Patchwork-Id: 234961 Delivered-To: patch@linaro.org Received: by 2002:a92:d244:0:0:0:0:0 with SMTP id v4csp736541ilg; Tue, 7 Jul 2020 02:28:27 -0700 (PDT) X-Google-Smtp-Source: ABdhPJw7AU+5u/nB9tk7Buc4X88XSouooFPD4+qrD1rkN7RbDxO+ERVS/kiqh5nyCB9WMYrNhw6q X-Received: by 2002:a5b:14a:: with SMTP id c10mr82322824ybp.493.1594114107536; Tue, 07 Jul 2020 02:28:27 -0700 (PDT) ARC-Seal: i=1; a=rsa-sha256; t=1594114107; cv=none; d=google.com; s=arc-20160816; b=ajzAyj8XBb+SmO+yqcaK8DGTS4w2sg8eWMUhObzAc5uLWKNibXoaQlRR+k63BWn7h7 0QTP+4++pyiIFfgpfJR/bwBIK6OAbRo8Nb5nyqp5ebkSpEMdbQQMl0+yWHfhcNlUQOyB dO3C2rmDN3pjRQXvcah3KqhH7GcB2A0XM5L07YhjNfwXiog0083TLXYZUvH94IDqjsQl ux5x/w+9P3c6gZ70zadtQTAsJh7RNtrrnWFd2PlvHu92bv2HAS6oevoxuOy9DebEnuu7 9u8Krqxf7NXSoheJYWxo559YQob+/E+B1PS96JXpb59HNH0CfSx+1O16201Qy06U1YpE R2yQ== ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=google.com; s=arc-20160816; 
From: Hemant Agrawal To: dev@dpdk.org Cc: ferruh.yigit@intel.com, Nipun Gupta Date: Tue, 7 Jul 2020 14:52:23 +0530 Message-Id: <20200707092244.12791-9-hemant.agrawal@nxp.com> Subject: [dpdk-dev] [PATCH v2 08/29] bus/fslmc: rename the cinh read functions used for ls1088 From: Nipun Gupta This patch renames the qbman I/O functions, as they only read from the cinh registers but still write through the cena registers.
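To make the distinction concrete, below is a self-contained model of the two access styles the names refer to; it is not the NXP driver code, and every name in it is illustrative. A cache-enabled (cena) mapping tolerates an ordinary bulk copy of the 64-byte command, while a cache-inhibited (cinh) mapping must be written as individual uncached stores, with the verb word last so the command only becomes valid once it is complete.

#include <stdint.h>
#include <stdio.h>
#include <string.h>

static uint32_t cena_window[16];          /* models the cacheable portal mapping */
static volatile uint32_t cinh_window[16]; /* models the uncached device mapping */

static void write_cmd_cena(const uint32_t *cmd)
{
	/* cacheable window: one bulk copy, hardware snoops the cacheline */
	memcpy(cena_window, cmd, 16 * sizeof(uint32_t));
}

static void write_cmd_cinh(const uint32_t *cmd)
{
	int i;

	/* uncached window: word-by-word stores, verb word (index 0) last */
	for (i = 15; i >= 1; i--)
		cinh_window[i] = cmd[i];
	cinh_window[0] = cmd[0];
}

int main(void)
{
	uint32_t cmd[16] = { 0x21 }; /* word 0 carries the verb */

	write_cmd_cena(cmd);
	write_cmd_cinh(cmd);
	printf("verbs: cena=0x%x cinh=0x%x\n",
	       (unsigned)cena_window[0], (unsigned)cinh_window[0]);
	return 0;
}

The functions renamed below follow the first pattern for the command payload and use the cache-inhibited window only to read producer/consumer indices, which is why the plain "cinh" name was misleading.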
This gives way to add functions which purely work in cinh mode Signed-off-by: Nipun Gupta --- drivers/bus/fslmc/qbman/qbman_portal.c | 250 +++++++++++++++++++++++-- 1 file changed, 233 insertions(+), 17 deletions(-) -- 2.17.1 diff --git a/drivers/bus/fslmc/qbman/qbman_portal.c b/drivers/bus/fslmc/qbman/qbman_portal.c index 57f50b0d8..0a2af7be4 100644 --- a/drivers/bus/fslmc/qbman/qbman_portal.c +++ b/drivers/bus/fslmc/qbman/qbman_portal.c @@ -78,7 +78,7 @@ qbman_swp_enqueue_ring_mode_direct(struct qbman_swp *s, const struct qbman_eq_desc *d, const struct qbman_fd *fd); static int -qbman_swp_enqueue_ring_mode_cinh_direct(struct qbman_swp *s, +qbman_swp_enqueue_ring_mode_cinh_read_direct(struct qbman_swp *s, const struct qbman_eq_desc *d, const struct qbman_fd *fd); static int @@ -97,7 +97,7 @@ qbman_swp_enqueue_multiple_direct(struct qbman_swp *s, uint32_t *flags, int num_frames); static int -qbman_swp_enqueue_multiple_cinh_direct(struct qbman_swp *s, +qbman_swp_enqueue_multiple_cinh_read_direct(struct qbman_swp *s, const struct qbman_eq_desc *d, const struct qbman_fd *fd, uint32_t *flags, @@ -122,7 +122,7 @@ qbman_swp_enqueue_multiple_fd_direct(struct qbman_swp *s, uint32_t *flags, int num_frames); static int -qbman_swp_enqueue_multiple_fd_cinh_direct(struct qbman_swp *s, +qbman_swp_enqueue_multiple_fd_cinh_read_direct(struct qbman_swp *s, const struct qbman_eq_desc *d, struct qbman_fd **fd, uint32_t *flags, @@ -146,7 +146,7 @@ qbman_swp_enqueue_multiple_desc_direct(struct qbman_swp *s, const struct qbman_fd *fd, int num_frames); static int -qbman_swp_enqueue_multiple_desc_cinh_direct(struct qbman_swp *s, +qbman_swp_enqueue_multiple_desc_cinh_read_direct(struct qbman_swp *s, const struct qbman_eq_desc *d, const struct qbman_fd *fd, int num_frames); @@ -309,15 +309,15 @@ struct qbman_swp *qbman_swp_init(const struct qbman_swp_desc *d) && (d->cena_access_mode == qman_cena_fastest_access)) { p->eqcr.pi_ring_size = 32; qbman_swp_enqueue_array_mode_ptr = - qbman_swp_enqueue_array_mode_mem_back; + qbman_swp_enqueue_array_mode_mem_back; qbman_swp_enqueue_ring_mode_ptr = - qbman_swp_enqueue_ring_mode_mem_back; + qbman_swp_enqueue_ring_mode_mem_back; qbman_swp_enqueue_multiple_ptr = - qbman_swp_enqueue_multiple_mem_back; + qbman_swp_enqueue_multiple_mem_back; qbman_swp_enqueue_multiple_fd_ptr = - qbman_swp_enqueue_multiple_fd_mem_back; + qbman_swp_enqueue_multiple_fd_mem_back; qbman_swp_enqueue_multiple_desc_ptr = - qbman_swp_enqueue_multiple_desc_mem_back; + qbman_swp_enqueue_multiple_desc_mem_back; qbman_swp_pull_ptr = qbman_swp_pull_mem_back; qbman_swp_dqrr_next_ptr = qbman_swp_dqrr_next_mem_back; qbman_swp_release_ptr = qbman_swp_release_mem_back; @@ -325,13 +325,13 @@ struct qbman_swp *qbman_swp_init(const struct qbman_swp_desc *d) if (dpaa2_svr_family == SVR_LS1080A) { qbman_swp_enqueue_ring_mode_ptr = - qbman_swp_enqueue_ring_mode_cinh_direct; + qbman_swp_enqueue_ring_mode_cinh_read_direct; qbman_swp_enqueue_multiple_ptr = - qbman_swp_enqueue_multiple_cinh_direct; + qbman_swp_enqueue_multiple_cinh_read_direct; qbman_swp_enqueue_multiple_fd_ptr = - qbman_swp_enqueue_multiple_fd_cinh_direct; + qbman_swp_enqueue_multiple_fd_cinh_read_direct; qbman_swp_enqueue_multiple_desc_ptr = - qbman_swp_enqueue_multiple_desc_cinh_direct; + qbman_swp_enqueue_multiple_desc_cinh_read_direct; } for (mask_size = p->eqcr.pi_ring_size; mask_size > 0; mask_size >>= 1) @@ -835,7 +835,7 @@ static int qbman_swp_enqueue_ring_mode_direct(struct qbman_swp *s, return 0; } -static int 
qbman_swp_enqueue_ring_mode_cinh_direct( +static int qbman_swp_enqueue_ring_mode_cinh_read_direct( struct qbman_swp *s, const struct qbman_eq_desc *d, const struct qbman_fd *fd) @@ -873,6 +873,44 @@ static int qbman_swp_enqueue_ring_mode_cinh_direct( return 0; } +static int qbman_swp_enqueue_ring_mode_cinh_direct( + struct qbman_swp *s, + const struct qbman_eq_desc *d, + const struct qbman_fd *fd) +{ + uint32_t *p; + const uint32_t *cl = qb_cl(d); + uint32_t eqcr_ci, full_mask, half_mask; + + half_mask = (s->eqcr.pi_ci_mask>>1); + full_mask = s->eqcr.pi_ci_mask; + if (!s->eqcr.available) { + eqcr_ci = s->eqcr.ci; + s->eqcr.ci = qbman_cinh_read(&s->sys, + QBMAN_CINH_SWP_EQCR_CI) & full_mask; + s->eqcr.available = qm_cyc_diff(s->eqcr.pi_ring_size, + eqcr_ci, s->eqcr.ci); + if (!s->eqcr.available) + return -EBUSY; + } + + p = qbman_cinh_write_start_wo_shadow(&s->sys, + QBMAN_CENA_SWP_EQCR(s->eqcr.pi & half_mask)); + memcpy_byte_by_byte(&p[1], &cl[1], 28); + memcpy_byte_by_byte(&p[8], fd, sizeof(*fd)); + lwsync(); + + /* Set the verb byte, have to substitute in the valid-bit */ + p[0] = cl[0] | s->eqcr.pi_vb; + s->eqcr.pi++; + s->eqcr.pi &= full_mask; + s->eqcr.available--; + if (!(s->eqcr.pi & half_mask)) + s->eqcr.pi_vb ^= QB_VALID_BIT; + + return 0; +} + static int qbman_swp_enqueue_ring_mode_mem_back(struct qbman_swp *s, const struct qbman_eq_desc *d, const struct qbman_fd *fd) @@ -999,7 +1037,7 @@ static int qbman_swp_enqueue_multiple_direct(struct qbman_swp *s, return num_enqueued; } -static int qbman_swp_enqueue_multiple_cinh_direct( +static int qbman_swp_enqueue_multiple_cinh_read_direct( struct qbman_swp *s, const struct qbman_eq_desc *d, const struct qbman_fd *fd, @@ -1069,6 +1107,67 @@ static int qbman_swp_enqueue_multiple_cinh_direct( return num_enqueued; } +static int qbman_swp_enqueue_multiple_cinh_direct( + struct qbman_swp *s, + const struct qbman_eq_desc *d, + const struct qbman_fd *fd, + uint32_t *flags, + int num_frames) +{ + uint32_t *p = NULL; + const uint32_t *cl = qb_cl(d); + uint32_t eqcr_ci, eqcr_pi, half_mask, full_mask; + int i, num_enqueued = 0; + + half_mask = (s->eqcr.pi_ci_mask>>1); + full_mask = s->eqcr.pi_ci_mask; + if (!s->eqcr.available) { + eqcr_ci = s->eqcr.ci; + s->eqcr.ci = qbman_cinh_read(&s->sys, + QBMAN_CINH_SWP_EQCR_CI) & full_mask; + s->eqcr.available = qm_cyc_diff(s->eqcr.pi_ring_size, + eqcr_ci, s->eqcr.ci); + if (!s->eqcr.available) + return 0; + } + + eqcr_pi = s->eqcr.pi; + num_enqueued = (s->eqcr.available < num_frames) ? 
+ s->eqcr.available : num_frames; + s->eqcr.available -= num_enqueued; + /* Fill in the EQCR ring */ + for (i = 0; i < num_enqueued; i++) { + p = qbman_cinh_write_start_wo_shadow(&s->sys, + QBMAN_CENA_SWP_EQCR(eqcr_pi & half_mask)); + memcpy_byte_by_byte(&p[1], &cl[1], 28); + memcpy_byte_by_byte(&p[8], &fd[i], sizeof(*fd)); + eqcr_pi++; + } + + lwsync(); + + /* Set the verb byte, have to substitute in the valid-bit */ + eqcr_pi = s->eqcr.pi; + for (i = 0; i < num_enqueued; i++) { + p = qbman_cinh_write_start_wo_shadow(&s->sys, + QBMAN_CENA_SWP_EQCR(eqcr_pi & half_mask)); + p[0] = cl[0] | s->eqcr.pi_vb; + if (flags && (flags[i] & QBMAN_ENQUEUE_FLAG_DCA)) { + struct qbman_eq_desc *d = (struct qbman_eq_desc *)p; + + d->eq.dca = (1 << QB_ENQUEUE_CMD_DCA_EN_SHIFT) | + ((flags[i]) & QBMAN_EQCR_DCA_IDXMASK); + } + eqcr_pi++; + if (!(eqcr_pi & half_mask)) + s->eqcr.pi_vb ^= QB_VALID_BIT; + } + + s->eqcr.pi = eqcr_pi & full_mask; + + return num_enqueued; +} + static int qbman_swp_enqueue_multiple_mem_back(struct qbman_swp *s, const struct qbman_eq_desc *d, const struct qbman_fd *fd, @@ -1205,7 +1304,7 @@ static int qbman_swp_enqueue_multiple_fd_direct(struct qbman_swp *s, return num_enqueued; } -static int qbman_swp_enqueue_multiple_fd_cinh_direct( +static int qbman_swp_enqueue_multiple_fd_cinh_read_direct( struct qbman_swp *s, const struct qbman_eq_desc *d, struct qbman_fd **fd, @@ -1275,6 +1374,67 @@ static int qbman_swp_enqueue_multiple_fd_cinh_direct( return num_enqueued; } +static int qbman_swp_enqueue_multiple_fd_cinh_direct( + struct qbman_swp *s, + const struct qbman_eq_desc *d, + struct qbman_fd **fd, + uint32_t *flags, + int num_frames) +{ + uint32_t *p = NULL; + const uint32_t *cl = qb_cl(d); + uint32_t eqcr_ci, eqcr_pi, half_mask, full_mask; + int i, num_enqueued = 0; + + half_mask = (s->eqcr.pi_ci_mask>>1); + full_mask = s->eqcr.pi_ci_mask; + if (!s->eqcr.available) { + eqcr_ci = s->eqcr.ci; + s->eqcr.ci = qbman_cinh_read(&s->sys, + QBMAN_CINH_SWP_EQCR_CI) & full_mask; + s->eqcr.available = qm_cyc_diff(s->eqcr.pi_ring_size, + eqcr_ci, s->eqcr.ci); + if (!s->eqcr.available) + return 0; + } + + eqcr_pi = s->eqcr.pi; + num_enqueued = (s->eqcr.available < num_frames) ? 
+ s->eqcr.available : num_frames; + s->eqcr.available -= num_enqueued; + /* Fill in the EQCR ring */ + for (i = 0; i < num_enqueued; i++) { + p = qbman_cinh_write_start_wo_shadow(&s->sys, + QBMAN_CENA_SWP_EQCR(eqcr_pi & half_mask)); + memcpy_byte_by_byte(&p[1], &cl[1], 28); + memcpy_byte_by_byte(&p[8], fd[i], sizeof(struct qbman_fd)); + eqcr_pi++; + } + + lwsync(); + + /* Set the verb byte, have to substitute in the valid-bit */ + eqcr_pi = s->eqcr.pi; + for (i = 0; i < num_enqueued; i++) { + p = qbman_cinh_write_start_wo_shadow(&s->sys, + QBMAN_CENA_SWP_EQCR(eqcr_pi & half_mask)); + p[0] = cl[0] | s->eqcr.pi_vb; + if (flags && (flags[i] & QBMAN_ENQUEUE_FLAG_DCA)) { + struct qbman_eq_desc *d = (struct qbman_eq_desc *)p; + + d->eq.dca = (1 << QB_ENQUEUE_CMD_DCA_EN_SHIFT) | + ((flags[i]) & QBMAN_EQCR_DCA_IDXMASK); + } + eqcr_pi++; + if (!(eqcr_pi & half_mask)) + s->eqcr.pi_vb ^= QB_VALID_BIT; + } + + s->eqcr.pi = eqcr_pi & full_mask; + + return num_enqueued; +} + static int qbman_swp_enqueue_multiple_fd_mem_back(struct qbman_swp *s, const struct qbman_eq_desc *d, struct qbman_fd **fd, @@ -1413,7 +1573,7 @@ static int qbman_swp_enqueue_multiple_desc_direct(struct qbman_swp *s, return num_enqueued; } -static int qbman_swp_enqueue_multiple_desc_cinh_direct( +static int qbman_swp_enqueue_multiple_desc_cinh_read_direct( struct qbman_swp *s, const struct qbman_eq_desc *d, const struct qbman_fd *fd, @@ -1478,6 +1638,62 @@ static int qbman_swp_enqueue_multiple_desc_cinh_direct( return num_enqueued; } +static int qbman_swp_enqueue_multiple_desc_cinh_direct( + struct qbman_swp *s, + const struct qbman_eq_desc *d, + const struct qbman_fd *fd, + int num_frames) +{ + uint32_t *p; + const uint32_t *cl; + uint32_t eqcr_ci, eqcr_pi, half_mask, full_mask; + int i, num_enqueued = 0; + + half_mask = (s->eqcr.pi_ci_mask>>1); + full_mask = s->eqcr.pi_ci_mask; + if (!s->eqcr.available) { + eqcr_ci = s->eqcr.ci; + s->eqcr.ci = qbman_cinh_read(&s->sys, + QBMAN_CINH_SWP_EQCR_CI) & full_mask; + s->eqcr.available = qm_cyc_diff(s->eqcr.pi_ring_size, + eqcr_ci, s->eqcr.ci); + if (!s->eqcr.available) + return 0; + } + + eqcr_pi = s->eqcr.pi; + num_enqueued = (s->eqcr.available < num_frames) ? 
+ s->eqcr.available : num_frames; + s->eqcr.available -= num_enqueued; + /* Fill in the EQCR ring */ + for (i = 0; i < num_enqueued; i++) { + p = qbman_cinh_write_start_wo_shadow(&s->sys, + QBMAN_CENA_SWP_EQCR(eqcr_pi & half_mask)); + cl = qb_cl(&d[i]); + memcpy_byte_by_byte(&p[1], &cl[1], 28); + memcpy_byte_by_byte(&p[8], &fd[i], sizeof(*fd)); + eqcr_pi++; + } + + lwsync(); + + /* Set the verb byte, have to substitute in the valid-bit */ + eqcr_pi = s->eqcr.pi; + for (i = 0; i < num_enqueued; i++) { + p = qbman_cinh_write_start_wo_shadow(&s->sys, + QBMAN_CENA_SWP_EQCR(eqcr_pi & half_mask)); + cl = qb_cl(&d[i]); + p[0] = cl[0] | s->eqcr.pi_vb; + eqcr_pi++; + if (!(eqcr_pi & half_mask)) + s->eqcr.pi_vb ^= QB_VALID_BIT; + } + + s->eqcr.pi = eqcr_pi & full_mask; + + return num_enqueued; +} + static int qbman_swp_enqueue_multiple_desc_mem_back(struct qbman_swp *s, const struct qbman_eq_desc *d, const struct qbman_fd *fd, From patchwork Tue Jul 7 09:22:24 2020 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Hemant Agrawal X-Patchwork-Id: 234962 Delivered-To: patch@linaro.org Received: by 2002:a92:d244:0:0:0:0:0 with SMTP id v4csp736719ilg; Tue, 7 Jul 2020 02:28:40 -0700 (PDT) X-Google-Smtp-Source: ABdhPJw29SZ0pSF9lPrG13aAeDUgh8yzizif2N6FOit9+1GF8dGMRsuvVd+jZZtSaBiuGf61FrIF X-Received: by 2002:a50:ee07:: with SMTP id g7mr46062875eds.320.1594114120495; Tue, 07 Jul 2020 02:28:40 -0700 (PDT) ARC-Seal: i=1; a=rsa-sha256; t=1594114120; cv=none; d=google.com; s=arc-20160816; b=lLb1vOdbscd8nEUinoAb+ZM1ewr2zDfT6OjABAar483ocgxcTxKT79sP3G6yomzggv 3xHsvjt7IZrzCWY3lYQLNNbcGUlqtWvhgRFNU4EK4Jcyj8ktv08NIq1tGj7LBFMJAeN4 gib1F2l8uHRKJHtfM0uT8qBLkyBpN1JmaAnZkjN1ytU4uYtB+c+1ktdShiDkmUZJd2Hu jCRZuwpCIJjHiYf/M+HlyaKZuFJ1BJ9H+Jawgmy/FGLMkMTZPu956OejGC+mbQKo/Ln7 Ha/vUzNYHpl7HNLd2vi95vBn0UYBMhbbkkWDH/XSrcwHrM6vkKfZm0BZMZ/ceUfqqiNI x5Uw== ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=google.com; s=arc-20160816; h=sender:errors-to:list-subscribe:list-help:list-post:list-archive :list-unsubscribe:list-id:precedence:subject:references:in-reply-to :message-id:date:cc:to:from; bh=VIKwzqzipHZUrcqzYNFWCRfU0NFe5Zhqyp5PS4mMoZ8=; b=DTJ1Wfu6KbyPQ38OlD7kZzi1v9kIvKeaRiaq0a2CppfqeibTxQhWsFKkcB1bF+OWhS pWeatf1jnJjd/GT1vuEEtyxJvx18HJqIQkXc5be50mQD1ZFMIEKD0ukoIuxBNLky3Gd6 8fH96ZuDUzdBqBauXyCY+OC+BEgROiR514x2+QRC1j6qAWIsUAuErePigrrTkt+BC63/ gCoPvA/Mk4MTGfLbrwiZxURhY9mA3fhDv1FFMEAFQkVgJSTIgkREOOLPpTx/+IT8D7Tx ilkUrHd23TxtQDfChQIMi0u/RKTEq3Fv6EMAi34KqlObiBTcrYvdUP/3+pVbrOIsSXQ8 Rdfw== ARC-Authentication-Results: i=1; mx.google.com; spf=pass (google.com: domain of dev-bounces@dpdk.org designates 92.243.14.124 as permitted sender) smtp.mailfrom=dev-bounces@dpdk.org; dmarc=fail (p=NONE sp=NONE dis=NONE) header.from=nxp.com Return-Path: Received: from dpdk.org (dpdk.org. 
[92.243.14.124]) by mx.google.com with ESMTP; Tue, 07 Jul 2020 02:28:40 -0700 (PDT) From: Hemant Agrawal To: dev@dpdk.org Cc: ferruh.yigit@intel.com, Gagandeep Singh Date: Tue, 7 Jul 2020 14:52:24 +0530 Message-Id: <20200707092244.12791-10-hemant.agrawal@nxp.com> Subject: [dpdk-dev] [PATCH v2 09/29] net/dpaa: enable Tx queue taildrop From: Gagandeep Singh Enable congestion handling/tail drop for TX queues.
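The per-queue threshold is taken from the DPAA_TX_TAILDROP_THRESHOLD environment variable, as the dpaa_dev_init() hunk further below shows. The following self-contained sketch mirrors only that selection logic; the fallback value is illustrative and merely stands in for the driver's CGR_RX_PERFQ_THRESH default.

#include <stdint.h>
#include <stdio.h>
#include <stdlib.h>

#define RX_PERFQ_THRESH_FALLBACK 256 /* stand-in for CGR_RX_PERFQ_THRESH */

/* Unset or "0" keeps TX taildrop disabled (no CGR is attached to the
 * TX FQs); a value beyond UINT16_MAX falls back to the per-FQ default.
 */
static unsigned int tx_taildrop_threshold(void)
{
	const char *env = getenv("DPAA_TX_TAILDROP_THRESHOLD");
	unsigned int td = env ? (unsigned int)atoi(env) : 0;

	if (td > UINT16_MAX)
		td = RX_PERFQ_THRESH_FALLBACK;
	return td;
}

int main(void)
{
	printf("TX taildrop threshold: %u frames (0 = disabled)\n",
	       tx_taildrop_threshold());
	return 0;
}

When the threshold is non-zero, each TX frame queue gets a congestion group whose tail-drop state discards frames past the threshold, and the slow TX path additionally polls the message ring to free the mbufs of rejected (ERN) frames.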
Signed-off-by: Gagandeep Singh --- drivers/bus/dpaa/base/qbman/qman.c | 43 +++++++++ drivers/bus/dpaa/include/fsl_qman.h | 17 ++++ drivers/bus/dpaa/rte_bus_dpaa_version.map | 2 + drivers/net/dpaa/dpaa_ethdev.c | 111 ++++++++++++++++++++-- drivers/net/dpaa/dpaa_ethdev.h | 1 + drivers/net/dpaa/dpaa_rxtx.c | 71 ++++++++++++++ drivers/net/dpaa/dpaa_rxtx.h | 3 + 7 files changed, 242 insertions(+), 6 deletions(-) -- 2.17.1 diff --git a/drivers/bus/dpaa/base/qbman/qman.c b/drivers/bus/dpaa/base/qbman/qman.c index b596e79c2..447c09177 100644 --- a/drivers/bus/dpaa/base/qbman/qman.c +++ b/drivers/bus/dpaa/base/qbman/qman.c @@ -40,6 +40,8 @@ spin_unlock(&__fq478->fqlock); \ } while (0) +static qman_cb_free_mbuf qman_free_mbuf_cb; + static inline void fq_set(struct qman_fq *fq, u32 mask) { dpaa_set_bits(mask, &fq->flags); @@ -790,6 +792,47 @@ static inline void fq_state_change(struct qman_portal *p, struct qman_fq *fq, FQUNLOCK(fq); } +void +qman_ern_register_cb(qman_cb_free_mbuf cb) +{ + qman_free_mbuf_cb = cb; +} + + +void +qman_ern_poll_free(void) +{ + struct qman_portal *p = get_affine_portal(); + u8 verb, num = 0; + const struct qm_mr_entry *msg; + const struct qm_fd *fd; + struct qm_mr_entry swapped_msg; + + qm_mr_pvb_update(&p->p); + msg = qm_mr_current(&p->p); + + while (msg != NULL) { + swapped_msg = *msg; + hw_fd_to_cpu(&swapped_msg.ern.fd); + verb = msg->ern.verb & QM_MR_VERB_TYPE_MASK; + fd = &swapped_msg.ern.fd; + + if (unlikely(verb & 0x20)) { + printf("HW ERN notification, Nothing to do\n"); + } else { + if ((fd->bpid & 0xff) != 0xff) + qman_free_mbuf_cb(fd); + } + + num++; + qm_mr_next(&p->p); + qm_mr_pvb_update(&p->p); + msg = qm_mr_current(&p->p); + } + + qm_mr_cci_consume(&p->p, num); +} + static u32 __poll_portal_slow(struct qman_portal *p, u32 is) { const struct qm_mr_entry *msg; diff --git a/drivers/bus/dpaa/include/fsl_qman.h b/drivers/bus/dpaa/include/fsl_qman.h index 78b698f39..0d9cfc339 100644 --- a/drivers/bus/dpaa/include/fsl_qman.h +++ b/drivers/bus/dpaa/include/fsl_qman.h @@ -1158,6 +1158,10 @@ typedef void (*qman_cb_mr)(struct qman_portal *qm, struct qman_fq *fq, /* This callback type is used when handling DCP ERNs */ typedef void (*qman_cb_dc_ern)(struct qman_portal *qm, const struct qm_mr_entry *msg); + +/* This callback function will be used to free mbufs of ERN */ +typedef uint16_t (*qman_cb_free_mbuf)(const struct qm_fd *fd); + /* * s/w-visible states. Ie. tentatively scheduled + truly scheduled + active + * held-active + held-suspended are just "sched". Things like "retired" will not @@ -1808,6 +1812,19 @@ __rte_internal int qman_enqueue_multi(struct qman_fq *fq, const struct qm_fd *fd, u32 *flags, int frames_to_send); +/** + * qman_ern_poll_free - Polling on MR and calling a callback function to free + * mbufs when SW ERNs received. + */ +__rte_internal +void qman_ern_poll_free(void); + +/** + * qman_ern_register_cb - Register a callback function to free buffers. + */ +__rte_internal +void qman_ern_register_cb(qman_cb_free_mbuf cb); + /** * qman_enqueue_multi_fq - Enqueue multiple frames to their respective frame * queues. 
diff --git a/drivers/bus/dpaa/rte_bus_dpaa_version.map b/drivers/bus/dpaa/rte_bus_dpaa_version.map index 46d42f7d6..8069b05af 100644 --- a/drivers/bus/dpaa/rte_bus_dpaa_version.map +++ b/drivers/bus/dpaa/rte_bus_dpaa_version.map @@ -61,6 +61,8 @@ INTERNAL { qman_enqueue; qman_enqueue_multi; qman_enqueue_multi_fq; + qman_ern_poll_free; + qman_ern_register_cb; qman_fq_fqid; qman_fq_portal_irqsource_add; qman_fq_portal_irqsource_remove; diff --git a/drivers/net/dpaa/dpaa_ethdev.c b/drivers/net/dpaa/dpaa_ethdev.c index f1c9a7151..fd2c0c681 100644 --- a/drivers/net/dpaa/dpaa_ethdev.c +++ b/drivers/net/dpaa/dpaa_ethdev.c @@ -1,7 +1,7 @@ /* SPDX-License-Identifier: BSD-3-Clause * * Copyright 2016 Freescale Semiconductor, Inc. All rights reserved. - * Copyright 2017-2019 NXP + * Copyright 2017-2020 NXP * */ /* System headers */ @@ -86,9 +86,12 @@ static int dpaa_push_mode_max_queue = DPAA_DEFAULT_PUSH_MODE_QUEUE; static int dpaa_push_queue_idx; /* Queue index which are in push mode*/ -/* Per FQ Taildrop in frame count */ +/* Per RX FQ Taildrop in frame count */ static unsigned int td_threshold = CGR_RX_PERFQ_THRESH; +/* Per TX FQ Taildrop in frame count, disabled by default */ +static unsigned int td_tx_threshold; + struct rte_dpaa_xstats_name_off { char name[RTE_ETH_XSTATS_NAME_SIZE]; uint32_t offset; @@ -275,7 +278,11 @@ static int dpaa_eth_dev_start(struct rte_eth_dev *dev) PMD_INIT_FUNC_TRACE(); /* Change tx callback to the real one */ - dev->tx_pkt_burst = dpaa_eth_queue_tx; + if (dpaa_intf->cgr_tx) + dev->tx_pkt_burst = dpaa_eth_queue_tx_slow; + else + dev->tx_pkt_burst = dpaa_eth_queue_tx; + fman_if_enable_rx(dpaa_intf->fif); return 0; @@ -867,6 +874,7 @@ int dpaa_eth_tx_queue_setup(struct rte_eth_dev *dev, uint16_t queue_idx, DPAA_PMD_INFO("Tx queue setup for queue index: %d fq_id (0x%x)", queue_idx, dpaa_intf->tx_queues[queue_idx].fqid); dev->data->tx_queues[queue_idx] = &dpaa_intf->tx_queues[queue_idx]; + return 0; } @@ -1236,9 +1244,19 @@ static int dpaa_rx_queue_init(struct qman_fq *fq, struct qman_cgr *cgr_rx, /* Initialise a Tx FQ */ static int dpaa_tx_queue_init(struct qman_fq *fq, - struct fman_if *fman_intf) + struct fman_if *fman_intf, + struct qman_cgr *cgr_tx) { struct qm_mcc_initfq opts = {0}; + struct qm_mcc_initcgr cgr_opts = { + .we_mask = QM_CGR_WE_CS_THRES | + QM_CGR_WE_CSTD_EN | + QM_CGR_WE_MODE, + .cgr = { + .cstd_en = QM_CGR_EN, + .mode = QMAN_CGR_MODE_FRAME + } + }; int ret; ret = qman_create_fq(0, QMAN_FQ_FLAG_DYNAMIC_FQID | @@ -1257,6 +1275,27 @@ static int dpaa_tx_queue_init(struct qman_fq *fq, opts.fqd.context_a.hi = 0x80000000 | fman_dealloc_bufs_mask_hi; opts.fqd.context_a.lo = 0 | fman_dealloc_bufs_mask_lo; DPAA_PMD_DEBUG("init tx fq %p, fqid 0x%x", fq, fq->fqid); + + if (cgr_tx) { + /* Enable tail drop with cgr on this queue */ + qm_cgr_cs_thres_set64(&cgr_opts.cgr.cs_thres, + td_tx_threshold, 0); + cgr_tx->cb = NULL; + ret = qman_create_cgr(cgr_tx, QMAN_CGR_FLAG_USE_INIT, + &cgr_opts); + if (ret) { + DPAA_PMD_WARN( + "rx taildrop init fail on rx fqid 0x%x(ret=%d)", + fq->fqid, ret); + goto without_cgr; + } + opts.we_mask |= QM_INITFQ_WE_CGID; + opts.fqd.cgid = cgr_tx->cgrid; + opts.fqd.fq_ctrl |= QM_FQCTRL_CGE; + DPAA_PMD_DEBUG("Tx FQ tail drop enabled, threshold = %d\n", + td_tx_threshold); + } +without_cgr: ret = qman_init_fq(fq, QMAN_INITFQ_FLAG_SCHED, &opts); if (ret) DPAA_PMD_ERR("init tx fqid 0x%x failed %d", fq->fqid, ret); @@ -1309,6 +1348,7 @@ dpaa_dev_init(struct rte_eth_dev *eth_dev) struct fman_if *fman_intf; struct fman_if_bpool *bp, *tmp_bp; 
uint32_t cgrid[DPAA_MAX_NUM_PCD_QUEUES]; + uint32_t cgrid_tx[MAX_DPAA_CORES]; char eth_buf[RTE_ETHER_ADDR_FMT_SIZE]; PMD_INIT_FUNC_TRACE(); @@ -1319,7 +1359,10 @@ dpaa_dev_init(struct rte_eth_dev *eth_dev) eth_dev->dev_ops = &dpaa_devops; /* Plugging of UCODE burst API not supported in Secondary */ eth_dev->rx_pkt_burst = dpaa_eth_queue_rx; - eth_dev->tx_pkt_burst = dpaa_eth_queue_tx; + if (dpaa_intf->cgr_tx) + eth_dev->tx_pkt_burst = dpaa_eth_queue_tx_slow; + else + eth_dev->tx_pkt_burst = dpaa_eth_queue_tx; #ifdef CONFIG_FSL_QMAN_FQ_LOOKUP qman_set_fq_lookup_table( dpaa_intf->rx_queues->qman_fq_lookup_table); @@ -1366,6 +1409,21 @@ dpaa_dev_init(struct rte_eth_dev *eth_dev) return -ENOMEM; } + memset(cgrid, 0, sizeof(cgrid)); + memset(cgrid_tx, 0, sizeof(cgrid_tx)); + + /* if DPAA_TX_TAILDROP_THRESHOLD is set, use that value; if 0, it means + * Tx tail drop is disabled. + */ + if (getenv("DPAA_TX_TAILDROP_THRESHOLD")) { + td_tx_threshold = atoi(getenv("DPAA_TX_TAILDROP_THRESHOLD")); + DPAA_PMD_DEBUG("Tail drop threshold env configured: %u", + td_tx_threshold); + /* if a very large value is being configured */ + if (td_tx_threshold > UINT16_MAX) + td_tx_threshold = CGR_RX_PERFQ_THRESH; + } + /* If congestion control is enabled globally*/ if (td_threshold) { dpaa_intf->cgr_rx = rte_zmalloc(NULL, @@ -1414,9 +1472,36 @@ dpaa_dev_init(struct rte_eth_dev *eth_dev) goto free_rx; } + /* If congestion control is enabled globally*/ + if (td_tx_threshold) { + dpaa_intf->cgr_tx = rte_zmalloc(NULL, + sizeof(struct qman_cgr) * MAX_DPAA_CORES, + MAX_CACHELINE); + if (!dpaa_intf->cgr_tx) { + DPAA_PMD_ERR("Failed to alloc mem for cgr_tx\n"); + ret = -ENOMEM; + goto free_rx; + } + + ret = qman_alloc_cgrid_range(&cgrid_tx[0], MAX_DPAA_CORES, + 1, 0); + if (ret != MAX_DPAA_CORES) { + DPAA_PMD_WARN("insufficient CGRIDs available"); + ret = -EINVAL; + goto free_rx; + } + } else { + dpaa_intf->cgr_tx = NULL; + } + + for (loop = 0; loop < MAX_DPAA_CORES; loop++) { + if (dpaa_intf->cgr_tx) + dpaa_intf->cgr_tx[loop].cgrid = cgrid_tx[loop]; + ret = dpaa_tx_queue_init(&dpaa_intf->tx_queues[loop], - fman_intf); + fman_intf, + dpaa_intf->cgr_tx ? 
&dpaa_intf->cgr_tx[loop] : NULL); if (ret) goto free_tx; dpaa_intf->tx_queues[loop].dpaa_intf = dpaa_intf; @@ -1487,6 +1572,7 @@ dpaa_dev_init(struct rte_eth_dev *eth_dev) free_rx: rte_free(dpaa_intf->cgr_rx); + rte_free(dpaa_intf->cgr_tx); rte_free(dpaa_intf->rx_queues); dpaa_intf->rx_queues = NULL; dpaa_intf->nb_rx_queues = 0; @@ -1527,6 +1613,17 @@ dpaa_dev_uninit(struct rte_eth_dev *dev) rte_free(dpaa_intf->cgr_rx); dpaa_intf->cgr_rx = NULL; + /* Release TX congestion Groups */ + if (dpaa_intf->cgr_tx) { + for (loop = 0; loop < MAX_DPAA_CORES; loop++) + qman_delete_cgr(&dpaa_intf->cgr_tx[loop]); + + qman_release_cgrid_range(dpaa_intf->cgr_tx[loop].cgrid, + MAX_DPAA_CORES); + rte_free(dpaa_intf->cgr_tx); + dpaa_intf->cgr_tx = NULL; + } + rte_free(dpaa_intf->rx_queues); dpaa_intf->rx_queues = NULL; @@ -1631,6 +1728,8 @@ rte_dpaa_probe(struct rte_dpaa_driver *dpaa_drv __rte_unused, eth_dev->device = &dpaa_dev->device; dpaa_dev->eth_dev = eth_dev; + qman_ern_register_cb(dpaa_free_mbuf); + /* Invoke PMD device initialization function */ diag = dpaa_dev_init(eth_dev); if (diag == 0) { diff --git a/drivers/net/dpaa/dpaa_ethdev.h b/drivers/net/dpaa/dpaa_ethdev.h index 6a6477ac8..d4261f885 100644 --- a/drivers/net/dpaa/dpaa_ethdev.h +++ b/drivers/net/dpaa/dpaa_ethdev.h @@ -111,6 +111,7 @@ struct dpaa_if { struct qman_fq *rx_queues; struct qman_cgr *cgr_rx; struct qman_fq *tx_queues; + struct qman_cgr *cgr_tx; struct qman_fq debug_queues[2]; uint16_t nb_rx_queues; uint16_t nb_tx_queues; diff --git a/drivers/net/dpaa/dpaa_rxtx.c b/drivers/net/dpaa/dpaa_rxtx.c index 3aeecb7d2..819cad7c6 100644 --- a/drivers/net/dpaa/dpaa_rxtx.c +++ b/drivers/net/dpaa/dpaa_rxtx.c @@ -398,6 +398,69 @@ dpaa_eth_fd_to_mbuf(const struct qm_fd *fd, uint32_t ifid) return mbuf; } +uint16_t +dpaa_free_mbuf(const struct qm_fd *fd) +{ + struct rte_mbuf *mbuf; + struct dpaa_bp_info *bp_info; + uint8_t format; + void *ptr; + + bp_info = DPAA_BPID_TO_POOL_INFO(fd->bpid); + format = (fd->opaque & DPAA_FD_FORMAT_MASK) >> DPAA_FD_FORMAT_SHIFT; + if (unlikely(format == qm_fd_sg)) { + struct rte_mbuf *first_seg, *prev_seg, *cur_seg, *temp; + struct qm_sg_entry *sgt, *sg_temp; + void *vaddr, *sg_vaddr; + int i = 0; + uint16_t fd_offset = fd->offset; + + vaddr = DPAA_MEMPOOL_PTOV(bp_info, qm_fd_addr(fd)); + if (!vaddr) { + DPAA_PMD_ERR("unable to convert physical address"); + return -1; + } + sgt = vaddr + fd_offset; + sg_temp = &sgt[i++]; + hw_sg_to_cpu(sg_temp); + temp = (struct rte_mbuf *) + ((char *)vaddr - bp_info->meta_data_size); + sg_vaddr = DPAA_MEMPOOL_PTOV(bp_info, + qm_sg_entry_get64(sg_temp)); + + first_seg = (struct rte_mbuf *)((char *)sg_vaddr - + bp_info->meta_data_size); + first_seg->nb_segs = 1; + prev_seg = first_seg; + while (i < DPAA_SGT_MAX_ENTRIES) { + sg_temp = &sgt[i++]; + hw_sg_to_cpu(sg_temp); + sg_vaddr = DPAA_MEMPOOL_PTOV(bp_info, + qm_sg_entry_get64(sg_temp)); + cur_seg = (struct rte_mbuf *)((char *)sg_vaddr - + bp_info->meta_data_size); + first_seg->nb_segs += 1; + prev_seg->next = cur_seg; + if (sg_temp->final) { + cur_seg->next = NULL; + break; + } + prev_seg = cur_seg; + } + + rte_pktmbuf_free_seg(temp); + rte_pktmbuf_free_seg(first_seg); + return 0; + } + + ptr = DPAA_MEMPOOL_PTOV(bp_info, qm_fd_addr(fd)); + mbuf = (struct rte_mbuf *)((char *)ptr - bp_info->meta_data_size); + + rte_pktmbuf_free(mbuf); + + return 0; +} + /* Specific for LS1043 */ void dpaa_rx_cb_no_prefetch(struct qman_fq **fq, struct qm_dqrr_entry **dqrr, @@ -1011,6 +1074,14 @@ dpaa_eth_queue_tx(void *q, struct rte_mbuf **bufs, 
uint16_t nb_bufs) return sent; } +uint16_t +dpaa_eth_queue_tx_slow(void *q, struct rte_mbuf **bufs, uint16_t nb_bufs) +{ + qman_ern_poll_free(); + + return dpaa_eth_queue_tx(q, bufs, nb_bufs); +} + uint16_t dpaa_eth_tx_drop_all(void *q __rte_unused, struct rte_mbuf **bufs __rte_unused, uint16_t nb_bufs __rte_unused) diff --git a/drivers/net/dpaa/dpaa_rxtx.h b/drivers/net/dpaa/dpaa_rxtx.h index 4f896fba1..fe8eb6dc7 100644 --- a/drivers/net/dpaa/dpaa_rxtx.h +++ b/drivers/net/dpaa/dpaa_rxtx.h @@ -254,6 +254,8 @@ struct annotations_t { uint16_t dpaa_eth_queue_rx(void *q, struct rte_mbuf **bufs, uint16_t nb_bufs); +uint16_t dpaa_eth_queue_tx_slow(void *q, struct rte_mbuf **bufs, + uint16_t nb_bufs); uint16_t dpaa_eth_queue_tx(void *q, struct rte_mbuf **bufs, uint16_t nb_bufs); uint16_t dpaa_eth_tx_drop_all(void *q __rte_unused, @@ -266,6 +268,7 @@ int dpaa_eth_mbuf_to_sg_fd(struct rte_mbuf *mbuf, struct qm_fd *fd, uint32_t bpid); +uint16_t dpaa_free_mbuf(const struct qm_fd *fd); void dpaa_rx_cb(struct qman_fq **fq, struct qm_dqrr_entry **dqrr, void **bufs, int num_bufs); From patchwork Tue Jul 7 09:22:25 2020 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Hemant Agrawal X-Patchwork-Id: 234963 Delivered-To: patch@linaro.org Received: by 2002:a92:d244:0:0:0:0:0 with SMTP id v4csp736859ilg; Tue, 7 Jul 2020 02:28:51 -0700 (PDT) X-Google-Smtp-Source: ABdhPJxXWZ2l3egtHjOqieHHcBRnkJD3WDHq6nEff0Y3WpuqriWef1fWh1O7bV6EZhLXWegEMsmn X-Received: by 2002:a17:906:d116:: with SMTP id b22mr22834058ejz.250.1594114131395; Tue, 07 Jul 2020 02:28:51 -0700 (PDT) ARC-Seal: i=1; a=rsa-sha256; t=1594114131; cv=none; d=google.com; s=arc-20160816; b=YRq/xwMwZ78FdUBI+6/DozMe0Z5CxQLCQZEEiWr2gwHBjZGmTceTveXzG5+UJuI9ry GZTVCKK1ytHWR6yjuH0ZAM8tMKUeFU8uFfkv/8j7W/HEvlm1nl87fave9EvA50Le6/3Y /pT6AE2zvhGKAN8LPOwuho529/bt1psdcE/OdM4nheu7DpLBXZHxdP4Ov8q9kuSYhEhp usUpEMb8LYbRC3jqz107VnrvZnOsPl7pmlthsm8e0UQvFb+yQEx1IObxVNIXUednWI5N FPZqDpacldks14bQlDJxwYe5xZMZGPxbmq84ayTWUwjz9XEiBgmIN4Ol4K8FjKKsyqgl RfuA== ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=google.com; s=arc-20160816; h=sender:errors-to:list-subscribe:list-help:list-post:list-archive :list-unsubscribe:list-id:precedence:subject:references:in-reply-to :message-id:date:cc:to:from; bh=Zga92JVegTJoA4OtrM4iEJQ5kg7LFdxNUfsnQqRTXRI=; b=JagvG1kjaq8ej3pxgI9mhk3TBUqWnKYc4lfkhoUscfCP2POcPnObqf+tEvFzImgxtK FugTTm1CveKp2f+//GwNvZMupqqwwKeq5MlziLFRuxSAlbmrSyjKvJGq344pl3/+sR57 nwxV0OFbbU2ASL57fDNT3X9yNXGNsVrrFZhCeaF1+QQvzBDKFr0wNTyxtOEXR6RpslmF xxXGGLDFymSg4ZGnjOrfiSn+TVycEZiF+NFnC7UH4P6a8WmNTwnJ411/elvZiIipUJ51 LJbphiu1bBOM1WfGbfxjw9/zh8WQb+pVSJLdHeMgPsV2FE+kjG3Ffmg/gDOEYBVinaGB crxg== ARC-Authentication-Results: i=1; mx.google.com; spf=pass (google.com: domain of dev-bounces@dpdk.org designates 92.243.14.124 as permitted sender) smtp.mailfrom=dev-bounces@dpdk.org; dmarc=fail (p=NONE sp=NONE dis=NONE) header.from=nxp.com Return-Path: Received: from dpdk.org (dpdk.org. 
[92.243.14.124]) by mx.google.com with ESMTP; Tue, 07 Jul 2020 02:28:51 -0700 (PDT) From: Hemant Agrawal To: dev@dpdk.org Cc: ferruh.yigit@intel.com, Sachin Saxena, Gagandeep Singh Date: Tue, 7 Jul 2020 14:52:25 +0530 Message-Id: <20200707092244.12791-11-hemant.agrawal@nxp.com> Subject: [dpdk-dev] [PATCH v2 10/29] net/dpaa: add 2.5G support From: Sachin Saxena Handle 2.5Gbps Ethernet ports as well. Signed-off-by: Sachin Saxena Signed-off-by: Gagandeep Singh --- doc/guides/nics/features/dpaa.ini | 2 +- drivers/bus/dpaa/base/fman/fman.c | 6 ++++-- drivers/bus/dpaa/base/fman/netcfg_layer.c | 3 ++- drivers/bus/dpaa/include/fman.h | 1 + drivers/net/dpaa/dpaa_ethdev.c | 9 ++++++++- 5 files changed, 16 insertions(+), 5 deletions(-) -- 2.17.1 diff --git a/doc/guides/nics/features/dpaa.ini b/doc/guides/nics/features/dpaa.ini index 24cfd8566..b00f46a97 100644 --- a/doc/guides/nics/features/dpaa.ini +++ b/doc/guides/nics/features/dpaa.ini @@ -4,7 +4,7 @@ ; Refer to default.ini for the full list of available PMD features. ; [Features] -Speed capabilities = P +Speed capabilities = Y Link status = Y Jumbo frame = Y MTU update = Y diff --git a/drivers/bus/dpaa/base/fman/fman.c b/drivers/bus/dpaa/base/fman/fman.c index 6d77a7e39..ae26041ca 100644 --- a/drivers/bus/dpaa/base/fman/fman.c +++ b/drivers/bus/dpaa/base/fman/fman.c @@ -263,7 +263,7 @@ fman_if_init(const struct device_node *dpa_node) fman_dealloc_bufs_mask_hi = 0; fman_dealloc_bufs_mask_lo = 0; } - /* Is the MAC node 1G, 10G? */ + /* Is the MAC node 1G, 2.5G, 10G?
__if->__if.is_memac = 0; if (of_device_is_compatible(mac_node, "fsl,fman-1g-mac")) @@ -279,7 +279,9 @@ fman_if_init(const struct device_node *dpa_node) /* Right now forcing memac to 1g in case of error*/ __if->__if.mac_type = fman_mac_1g; } else { - if (strstr(char_prop, "sgmii")) + if (strstr(char_prop, "sgmii-2500")) + __if->__if.mac_type = fman_mac_2_5g; + else if (strstr(char_prop, "sgmii")) __if->__if.mac_type = fman_mac_1g; else if (strstr(char_prop, "rgmii")) { __if->__if.mac_type = fman_mac_1g; diff --git a/drivers/bus/dpaa/base/fman/netcfg_layer.c b/drivers/bus/dpaa/base/fman/netcfg_layer.c index 36eca88cd..b7009f229 100644 --- a/drivers/bus/dpaa/base/fman/netcfg_layer.c +++ b/drivers/bus/dpaa/base/fman/netcfg_layer.c @@ -44,7 +44,8 @@ dump_netcfg(struct netcfg_info *cfg_ptr) printf("\n+ Fman %d, MAC %d (%s);\n", __if->fman_idx, __if->mac_idx, - (__if->mac_type == fman_mac_1g) ? "1G" : "10G"); + (__if->mac_type == fman_mac_1g) ? "1G" : + (__if->mac_type == fman_mac_2_5g) ? "2.5G" : "10G"); printf("\tmac_addr: %02x:%02x:%02x:%02x:%02x:%02x\n", (&__if->mac_addr)->addr_bytes[0], diff --git a/drivers/bus/dpaa/include/fman.h b/drivers/bus/dpaa/include/fman.h index c02d32d22..b6293b61c 100644 --- a/drivers/bus/dpaa/include/fman.h +++ b/drivers/bus/dpaa/include/fman.h @@ -72,6 +72,7 @@ enum fman_mac_type { fman_offline = 0, fman_mac_1g, fman_mac_10g, + fman_mac_2_5g, }; struct mac_addr { diff --git a/drivers/net/dpaa/dpaa_ethdev.c b/drivers/net/dpaa/dpaa_ethdev.c index fd2c0c681..c0ded9086 100644 --- a/drivers/net/dpaa/dpaa_ethdev.c +++ b/drivers/net/dpaa/dpaa_ethdev.c @@ -356,8 +356,13 @@ static int dpaa_eth_dev_info(struct rte_eth_dev *dev, if (dpaa_intf->fif->mac_type == fman_mac_1g) { dev_info->speed_capa = ETH_LINK_SPEED_1G; + } else if (dpaa_intf->fif->mac_type == fman_mac_2_5g) { + dev_info->speed_capa = ETH_LINK_SPEED_1G + | ETH_LINK_SPEED_2_5G; } else if (dpaa_intf->fif->mac_type == fman_mac_10g) { - dev_info->speed_capa = (ETH_LINK_SPEED_1G | ETH_LINK_SPEED_10G); + dev_info->speed_capa = ETH_LINK_SPEED_1G + | ETH_LINK_SPEED_2_5G + | ETH_LINK_SPEED_10G; } else { DPAA_PMD_ERR("invalid link_speed: %s, %d", dpaa_intf->name, dpaa_intf->fif->mac_type); @@ -388,6 +393,8 @@ static int dpaa_eth_link_update(struct rte_eth_dev *dev, if (dpaa_intf->fif->mac_type == fman_mac_1g) link->link_speed = ETH_SPEED_NUM_1G; + else if (dpaa_intf->fif->mac_type == fman_mac_2_5g) + link->link_speed = ETH_SPEED_NUM_2_5G; else if (dpaa_intf->fif->mac_type == fman_mac_10g) link->link_speed = ETH_SPEED_NUM_10G; else

From patchwork Tue Jul 7 09:22:26 2020
X-Patchwork-Id: 234964

From: Hemant Agrawal
To: dev@dpdk.org
Cc: ferruh.yigit@intel.com, Nipun Gupta
Date: Tue, 7 Jul 2020 14:52:26 +0530
Message-Id: <20200707092244.12791-12-hemant.agrawal@nxp.com>
Subject: [dpdk-dev] [PATCH v2 11/29] net/dpaa: update process specific device info

From: Nipun Gupta

For DPAA devices, the memory maps stored in the FMAN interface information are per process. Store them in the device's process-specific area instead. This is required to support multi-process applications.
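For illustration, a minimal sketch (not part of this patch) of the split this change relies on: dev_private lives in shared memory and is common to all processes, while process_private is filled in independently by each process, so it is the right home for process-local pointers such as the FMAN memory maps. The lookup_local_fman_if() helper below is hypothetical.

	#include <errno.h>
	#include <rte_ethdev_driver.h>

	/* Hypothetical per-process lookup of the local FMAN mapping. */
	static struct fman_if *lookup_local_fman_if(uint32_t ifid);

	static int
	example_dev_init(struct rte_eth_dev *eth_dev)
	{
		/* Shared area: visible to primary and secondary processes. */
		struct dpaa_if *dpaa_intf = eth_dev->data->dev_private;

		/*
		 * Process-local area: each process must resolve its own
		 * mapping; pointers stored here are only valid locally.
		 */
		eth_dev->process_private = lookup_local_fman_if(dpaa_intf->ifid);
		if (eth_dev->process_private == NULL)
			return -ENODEV;

		return 0;
	}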
Signed-off-by: Nipun Gupta --- drivers/net/dpaa/dpaa_ethdev.c | 207 ++++++++++++++++----------------- drivers/net/dpaa/dpaa_ethdev.h | 1 - 2 files changed, 102 insertions(+), 106 deletions(-) -- 2.17.1 diff --git a/drivers/net/dpaa/dpaa_ethdev.c b/drivers/net/dpaa/dpaa_ethdev.c index c0ded9086..6c94fd396 100644 --- a/drivers/net/dpaa/dpaa_ethdev.c +++ b/drivers/net/dpaa/dpaa_ethdev.c @@ -149,7 +149,6 @@ dpaa_poll_queue_default_config(struct qm_mcc_initfq *opts) static int dpaa_mtu_set(struct rte_eth_dev *dev, uint16_t mtu) { - struct dpaa_if *dpaa_intf = dev->data->dev_private; uint32_t frame_size = mtu + RTE_ETHER_HDR_LEN + RTE_ETHER_CRC_LEN + VLAN_TAG_SIZE; uint32_t buffsz = dev->data->min_rx_buf_size - RTE_PKTMBUF_HEADROOM; @@ -185,7 +184,7 @@ dpaa_mtu_set(struct rte_eth_dev *dev, uint16_t mtu) dev->data->dev_conf.rxmode.max_rx_pkt_len = frame_size; - fman_if_set_maxfrm(dpaa_intf->fif, frame_size); + fman_if_set_maxfrm(dev->process_private, frame_size); return 0; } @@ -193,7 +192,6 @@ dpaa_mtu_set(struct rte_eth_dev *dev, uint16_t mtu) static int dpaa_eth_dev_configure(struct rte_eth_dev *dev) { - struct dpaa_if *dpaa_intf = dev->data->dev_private; struct rte_eth_conf *eth_conf = &dev->data->dev_conf; uint64_t rx_offloads = eth_conf->rxmode.offloads; uint64_t tx_offloads = eth_conf->txmode.offloads; @@ -232,14 +230,14 @@ dpaa_eth_dev_configure(struct rte_eth_dev *dev) max_len = DPAA_MAX_RX_PKT_LEN; } - fman_if_set_maxfrm(dpaa_intf->fif, max_len); + fman_if_set_maxfrm(dev->process_private, max_len); dev->data->mtu = max_len - RTE_ETHER_HDR_LEN - RTE_ETHER_CRC_LEN - VLAN_TAG_SIZE; } if (rx_offloads & DEV_RX_OFFLOAD_SCATTER) { DPAA_PMD_DEBUG("enabling scatter mode"); - fman_if_set_sg(dpaa_intf->fif, 1); + fman_if_set_sg(dev->process_private, 1); dev->data->scattered_rx = 1; } @@ -283,18 +281,18 @@ static int dpaa_eth_dev_start(struct rte_eth_dev *dev) else dev->tx_pkt_burst = dpaa_eth_queue_tx; - fman_if_enable_rx(dpaa_intf->fif); + fman_if_enable_rx(dev->process_private); return 0; } static void dpaa_eth_dev_stop(struct rte_eth_dev *dev) { - struct dpaa_if *dpaa_intf = dev->data->dev_private; + struct fman_if *fif = dev->process_private; PMD_INIT_FUNC_TRACE(); - fman_if_disable_rx(dpaa_intf->fif); + fman_if_disable_rx(fif); dev->tx_pkt_burst = dpaa_eth_tx_drop_all; } @@ -342,6 +340,7 @@ static int dpaa_eth_dev_info(struct rte_eth_dev *dev, struct rte_eth_dev_info *dev_info) { struct dpaa_if *dpaa_intf = dev->data->dev_private; + struct fman_if *fif = dev->process_private; DPAA_PMD_DEBUG(": %s", dpaa_intf->name); @@ -354,18 +353,18 @@ static int dpaa_eth_dev_info(struct rte_eth_dev *dev, dev_info->max_vmdq_pools = ETH_16_POOLS; dev_info->flow_type_rss_offloads = DPAA_RSS_OFFLOAD_ALL; - if (dpaa_intf->fif->mac_type == fman_mac_1g) { + if (fif->mac_type == fman_mac_1g) { dev_info->speed_capa = ETH_LINK_SPEED_1G; - } else if (dpaa_intf->fif->mac_type == fman_mac_2_5g) { + } else if (fif->mac_type == fman_mac_2_5g) { dev_info->speed_capa = ETH_LINK_SPEED_1G | ETH_LINK_SPEED_2_5G; - } else if (dpaa_intf->fif->mac_type == fman_mac_10g) { + } else if (fif->mac_type == fman_mac_10g) { dev_info->speed_capa = ETH_LINK_SPEED_1G | ETH_LINK_SPEED_2_5G | ETH_LINK_SPEED_10G; } else { DPAA_PMD_ERR("invalid link_speed: %s, %d", - dpaa_intf->name, dpaa_intf->fif->mac_type); + dpaa_intf->name, fif->mac_type); return -EINVAL; } @@ -388,18 +387,19 @@ static int dpaa_eth_link_update(struct rte_eth_dev *dev, { struct dpaa_if *dpaa_intf = dev->data->dev_private; struct rte_eth_link *link = &dev->data->dev_link; + 
struct fman_if *fif = dev->process_private; PMD_INIT_FUNC_TRACE(); - if (dpaa_intf->fif->mac_type == fman_mac_1g) + if (fif->mac_type == fman_mac_1g) link->link_speed = ETH_SPEED_NUM_1G; - else if (dpaa_intf->fif->mac_type == fman_mac_2_5g) + else if (fif->mac_type == fman_mac_2_5g) link->link_speed = ETH_SPEED_NUM_2_5G; - else if (dpaa_intf->fif->mac_type == fman_mac_10g) + else if (fif->mac_type == fman_mac_10g) link->link_speed = ETH_SPEED_NUM_10G; else DPAA_PMD_ERR("invalid link_speed: %s, %d", - dpaa_intf->name, dpaa_intf->fif->mac_type); + dpaa_intf->name, fif->mac_type); link->link_status = dpaa_intf->valid; link->link_duplex = ETH_LINK_FULL_DUPLEX; @@ -410,21 +410,17 @@ static int dpaa_eth_link_update(struct rte_eth_dev *dev, static int dpaa_eth_stats_get(struct rte_eth_dev *dev, struct rte_eth_stats *stats) { - struct dpaa_if *dpaa_intf = dev->data->dev_private; - PMD_INIT_FUNC_TRACE(); - fman_if_stats_get(dpaa_intf->fif, stats); + fman_if_stats_get(dev->process_private, stats); return 0; } static int dpaa_eth_stats_reset(struct rte_eth_dev *dev) { - struct dpaa_if *dpaa_intf = dev->data->dev_private; - PMD_INIT_FUNC_TRACE(); - fman_if_stats_reset(dpaa_intf->fif); + fman_if_stats_reset(dev->process_private); return 0; } @@ -433,7 +429,6 @@ static int dpaa_dev_xstats_get(struct rte_eth_dev *dev, struct rte_eth_xstat *xstats, unsigned int n) { - struct dpaa_if *dpaa_intf = dev->data->dev_private; unsigned int i = 0, num = RTE_DIM(dpaa_xstats_strings); uint64_t values[sizeof(struct dpaa_if_stats) / 8]; @@ -443,7 +438,7 @@ dpaa_dev_xstats_get(struct rte_eth_dev *dev, struct rte_eth_xstat *xstats, if (xstats == NULL) return 0; - fman_if_stats_get_all(dpaa_intf->fif, values, + fman_if_stats_get_all(dev->process_private, values, sizeof(struct dpaa_if_stats) / 8); for (i = 0; i < num; i++) { @@ -480,15 +475,13 @@ dpaa_xstats_get_by_id(struct rte_eth_dev *dev, const uint64_t *ids, uint64_t values_copy[sizeof(struct dpaa_if_stats) / 8]; if (!ids) { - struct dpaa_if *dpaa_intf = dev->data->dev_private; - if (n < stat_cnt) return stat_cnt; if (!values) return 0; - fman_if_stats_get_all(dpaa_intf->fif, values_copy, + fman_if_stats_get_all(dev->process_private, values_copy, sizeof(struct dpaa_if_stats) / 8); for (i = 0; i < stat_cnt; i++) @@ -537,44 +530,36 @@ dpaa_xstats_get_names_by_id( static int dpaa_eth_promiscuous_enable(struct rte_eth_dev *dev) { - struct dpaa_if *dpaa_intf = dev->data->dev_private; - PMD_INIT_FUNC_TRACE(); - fman_if_promiscuous_enable(dpaa_intf->fif); + fman_if_promiscuous_enable(dev->process_private); return 0; } static int dpaa_eth_promiscuous_disable(struct rte_eth_dev *dev) { - struct dpaa_if *dpaa_intf = dev->data->dev_private; - PMD_INIT_FUNC_TRACE(); - fman_if_promiscuous_disable(dpaa_intf->fif); + fman_if_promiscuous_disable(dev->process_private); return 0; } static int dpaa_eth_multicast_enable(struct rte_eth_dev *dev) { - struct dpaa_if *dpaa_intf = dev->data->dev_private; - PMD_INIT_FUNC_TRACE(); - fman_if_set_mcast_filter_table(dpaa_intf->fif); + fman_if_set_mcast_filter_table(dev->process_private); return 0; } static int dpaa_eth_multicast_disable(struct rte_eth_dev *dev) { - struct dpaa_if *dpaa_intf = dev->data->dev_private; - PMD_INIT_FUNC_TRACE(); - fman_if_reset_mcast_filter_table(dpaa_intf->fif); + fman_if_reset_mcast_filter_table(dev->process_private); return 0; } @@ -587,6 +572,7 @@ int dpaa_eth_rx_queue_setup(struct rte_eth_dev *dev, uint16_t queue_idx, struct rte_mempool *mp) { struct dpaa_if *dpaa_intf = dev->data->dev_private; + struct fman_if 
*fif = dev->process_private; struct qman_fq *rxq = &dpaa_intf->rx_queues[queue_idx]; struct qm_mcc_initfq opts = {0}; u32 flags = 0; @@ -643,22 +629,22 @@ int dpaa_eth_rx_queue_setup(struct rte_eth_dev *dev, uint16_t queue_idx, icp.iciof = DEFAULT_ICIOF; icp.iceof = DEFAULT_RX_ICEOF; icp.icsz = DEFAULT_ICSZ; - fman_if_set_ic_params(dpaa_intf->fif, &icp); + fman_if_set_ic_params(fif, &icp); fd_offset = RTE_PKTMBUF_HEADROOM + DPAA_HW_BUF_RESERVE; - fman_if_set_fdoff(dpaa_intf->fif, fd_offset); + fman_if_set_fdoff(fif, fd_offset); /* Buffer pool size should be equal to Dataroom Size*/ bp_size = rte_pktmbuf_data_room_size(mp); - fman_if_set_bp(dpaa_intf->fif, mp->size, + fman_if_set_bp(fif, mp->size, dpaa_intf->bp_info->bpid, bp_size); dpaa_intf->valid = 1; DPAA_PMD_DEBUG("if:%s fd_offset = %d offset = %d", dpaa_intf->name, fd_offset, - fman_if_get_fdoff(dpaa_intf->fif)); + fman_if_get_fdoff(fif)); } DPAA_PMD_DEBUG("if:%s sg_on = %d, max_frm =%d", dpaa_intf->name, - fman_if_get_sg_enable(dpaa_intf->fif), + fman_if_get_sg_enable(fif), dev->data->dev_conf.rxmode.max_rx_pkt_len); /* checking if push mode only, no error check for now */ if (!rxq->is_static && @@ -950,11 +936,12 @@ dpaa_flow_ctrl_set(struct rte_eth_dev *dev, return 0; } else if (fc_conf->mode == RTE_FC_TX_PAUSE || fc_conf->mode == RTE_FC_FULL) { - fman_if_set_fc_threshold(dpaa_intf->fif, fc_conf->high_water, + fman_if_set_fc_threshold(dev->process_private, + fc_conf->high_water, fc_conf->low_water, - dpaa_intf->bp_info->bpid); + dpaa_intf->bp_info->bpid); if (fc_conf->pause_time) - fman_if_set_fc_quanta(dpaa_intf->fif, + fman_if_set_fc_quanta(dev->process_private, fc_conf->pause_time); } @@ -990,10 +977,11 @@ dpaa_flow_ctrl_get(struct rte_eth_dev *dev, fc_conf->autoneg = net_fc->autoneg; return 0; } - ret = fman_if_get_fc_threshold(dpaa_intf->fif); + ret = fman_if_get_fc_threshold(dev->process_private); if (ret) { fc_conf->mode = RTE_FC_TX_PAUSE; - fc_conf->pause_time = fman_if_get_fc_quanta(dpaa_intf->fif); + fc_conf->pause_time = + fman_if_get_fc_quanta(dev->process_private); } else { fc_conf->mode = RTE_FC_NONE; } @@ -1008,11 +996,11 @@ dpaa_dev_add_mac_addr(struct rte_eth_dev *dev, __rte_unused uint32_t pool) { int ret; - struct dpaa_if *dpaa_intf = dev->data->dev_private; PMD_INIT_FUNC_TRACE(); - ret = fman_if_add_mac_addr(dpaa_intf->fif, addr->addr_bytes, index); + ret = fman_if_add_mac_addr(dev->process_private, + addr->addr_bytes, index); if (ret) DPAA_PMD_ERR("Adding the MAC ADDR failed: err = %d", ret); @@ -1023,11 +1011,9 @@ static void dpaa_dev_remove_mac_addr(struct rte_eth_dev *dev, uint32_t index) { - struct dpaa_if *dpaa_intf = dev->data->dev_private; - PMD_INIT_FUNC_TRACE(); - fman_if_clear_mac_addr(dpaa_intf->fif, index); + fman_if_clear_mac_addr(dev->process_private, index); } static int @@ -1035,11 +1021,10 @@ dpaa_dev_set_mac_addr(struct rte_eth_dev *dev, struct rte_ether_addr *addr) { int ret; - struct dpaa_if *dpaa_intf = dev->data->dev_private; PMD_INIT_FUNC_TRACE(); - ret = fman_if_add_mac_addr(dpaa_intf->fif, addr->addr_bytes, 0); + ret = fman_if_add_mac_addr(dev->process_private, addr->addr_bytes, 0); if (ret) DPAA_PMD_ERR("Setting the MAC ADDR failed %d", ret); @@ -1142,7 +1127,6 @@ int rte_pmd_dpaa_set_tx_loopback(uint8_t port, uint8_t on) { struct rte_eth_dev *dev; - struct dpaa_if *dpaa_intf; RTE_ETH_VALID_PORTID_OR_ERR_RET(port, -ENODEV); @@ -1151,17 +1135,16 @@ rte_pmd_dpaa_set_tx_loopback(uint8_t port, uint8_t on) if (!is_dpaa_supported(dev)) return -ENOTSUP; - dpaa_intf = dev->data->dev_private; - 
if (on) - fman_if_loopback_enable(dpaa_intf->fif); + fman_if_loopback_enable(dev->process_private); else - fman_if_loopback_disable(dpaa_intf->fif); + fman_if_loopback_disable(dev->process_private); return 0; } -static int dpaa_fc_set_default(struct dpaa_if *dpaa_intf) +static int dpaa_fc_set_default(struct dpaa_if *dpaa_intf, + struct fman_if *fman_intf) { struct rte_eth_fc_conf *fc_conf; int ret; @@ -1177,10 +1160,10 @@ static int dpaa_fc_set_default(struct dpaa_if *dpaa_intf) } } fc_conf = dpaa_intf->fc_conf; - ret = fman_if_get_fc_threshold(dpaa_intf->fif); + ret = fman_if_get_fc_threshold(fman_intf); if (ret) { fc_conf->mode = RTE_FC_TX_PAUSE; - fc_conf->pause_time = fman_if_get_fc_quanta(dpaa_intf->fif); + fc_conf->pause_time = fman_if_get_fc_quanta(fman_intf); } else { fc_conf->mode = RTE_FC_NONE; } @@ -1342,6 +1325,39 @@ static int dpaa_debug_queue_init(struct qman_fq *fq, uint32_t fqid) } #endif +/* Initialise a network interface */ +static int +dpaa_dev_init_secondary(struct rte_eth_dev *eth_dev) +{ + struct rte_dpaa_device *dpaa_device; + struct fm_eth_port_cfg *cfg; + struct dpaa_if *dpaa_intf; + struct fman_if *fman_intf; + int dev_id; + + PMD_INIT_FUNC_TRACE(); + + dpaa_device = DEV_TO_DPAA_DEVICE(eth_dev->device); + dev_id = dpaa_device->id.dev_id; + cfg = dpaa_get_eth_port_cfg(dev_id); + fman_intf = cfg->fman_if; + eth_dev->process_private = fman_intf; + + /* Plugging of UCODE burst API not supported in Secondary */ + dpaa_intf = eth_dev->data->dev_private; + eth_dev->rx_pkt_burst = dpaa_eth_queue_rx; + if (dpaa_intf->cgr_tx) + eth_dev->tx_pkt_burst = dpaa_eth_queue_tx_slow; + else + eth_dev->tx_pkt_burst = dpaa_eth_queue_tx; +#ifdef CONFIG_FSL_QMAN_FQ_LOOKUP + qman_set_fq_lookup_table( + dpaa_intf->rx_queues->qman_fq_lookup_table); +#endif + + return 0; +} + /* Initialise a network interface */ static int dpaa_dev_init(struct rte_eth_dev *eth_dev) @@ -1360,23 +1376,6 @@ dpaa_dev_init(struct rte_eth_dev *eth_dev) PMD_INIT_FUNC_TRACE(); - dpaa_intf = eth_dev->data->dev_private; - /* For secondary processes, the primary has done all the work */ - if (rte_eal_process_type() != RTE_PROC_PRIMARY) { - eth_dev->dev_ops = &dpaa_devops; - /* Plugging of UCODE burst API not supported in Secondary */ - eth_dev->rx_pkt_burst = dpaa_eth_queue_rx; - if (dpaa_intf->cgr_tx) - eth_dev->tx_pkt_burst = dpaa_eth_queue_tx_slow; - else - eth_dev->tx_pkt_burst = dpaa_eth_queue_tx; -#ifdef CONFIG_FSL_QMAN_FQ_LOOKUP - qman_set_fq_lookup_table( - dpaa_intf->rx_queues->qman_fq_lookup_table); -#endif - return 0; - } - dpaa_device = DEV_TO_DPAA_DEVICE(eth_dev->device); dev_id = dpaa_device->id.dev_id; dpaa_intf = eth_dev->data->dev_private; @@ -1386,7 +1385,7 @@ dpaa_dev_init(struct rte_eth_dev *eth_dev) dpaa_intf->name = dpaa_device->name; /* save fman_if & cfg in the interface struture */ - dpaa_intf->fif = fman_intf; + eth_dev->process_private = fman_intf; dpaa_intf->ifid = dev_id; dpaa_intf->cfg = cfg; @@ -1455,7 +1454,7 @@ dpaa_dev_init(struct rte_eth_dev *eth_dev) if (default_q) fqid = cfg->rx_def; else - fqid = DPAA_PCD_FQID_START + dpaa_intf->fif->mac_idx * + fqid = DPAA_PCD_FQID_START + fman_intf->mac_idx * DPAA_PCD_FQID_MULTIPLIER + loop; if (dpaa_intf->cgr_rx) @@ -1527,7 +1526,7 @@ dpaa_dev_init(struct rte_eth_dev *eth_dev) DPAA_PMD_DEBUG("All frame queues created"); /* Get the initial configuration for flow control */ - dpaa_fc_set_default(dpaa_intf); + dpaa_fc_set_default(dpaa_intf, fman_intf); /* reset bpool list, initialize bpool dynamically */ list_for_each_entry_safe(bp, tmp_bp, 
&cfg->fman_if->bpool_list, node) { @@ -1674,6 +1673,13 @@ rte_dpaa_probe(struct rte_dpaa_driver *dpaa_drv __rte_unused, return -ENOMEM; eth_dev->device = &dpaa_dev->device; eth_dev->dev_ops = &dpaa_devops; + + ret = dpaa_dev_init_secondary(eth_dev); + if (ret != 0) { + RTE_LOG(ERR, PMD, "secondary dev init failed\n"); + return ret; + } + rte_eth_dev_probing_finish(eth_dev); return 0; } @@ -1709,29 +1715,20 @@ rte_dpaa_probe(struct rte_dpaa_driver *dpaa_drv __rte_unused, } } - /* In case of secondary process, the device is already configured - * and no further action is required, except portal initialization - * and verifying secondary attachment to port name. - */ - if (rte_eal_process_type() != RTE_PROC_PRIMARY) { - eth_dev = rte_eth_dev_attach_secondary(dpaa_dev->name); - if (!eth_dev) - return -ENOMEM; - } else { - eth_dev = rte_eth_dev_allocate(dpaa_dev->name); - if (eth_dev == NULL) - return -ENOMEM; + eth_dev = rte_eth_dev_allocate(dpaa_dev->name); + if (!eth_dev) + return -ENOMEM; - eth_dev->data->dev_private = rte_zmalloc( - "ethdev private structure", - sizeof(struct dpaa_if), - RTE_CACHE_LINE_SIZE); - if (!eth_dev->data->dev_private) { - DPAA_PMD_ERR("Cannot allocate memzone for port data"); - rte_eth_dev_release_port(eth_dev); - return -ENOMEM; - } + eth_dev->data->dev_private = + rte_zmalloc("ethdev private structure", + sizeof(struct dpaa_if), + RTE_CACHE_LINE_SIZE); + if (!eth_dev->data->dev_private) { + DPAA_PMD_ERR("Cannot allocate memzone for port data"); + rte_eth_dev_release_port(eth_dev); + return -ENOMEM; } + eth_dev->device = &dpaa_dev->device; dpaa_dev->eth_dev = eth_dev; diff --git a/drivers/net/dpaa/dpaa_ethdev.h b/drivers/net/dpaa/dpaa_ethdev.h index d4261f885..4c40ff86a 100644 --- a/drivers/net/dpaa/dpaa_ethdev.h +++ b/drivers/net/dpaa/dpaa_ethdev.h @@ -116,7 +116,6 @@ struct dpaa_if { uint16_t nb_rx_queues; uint16_t nb_tx_queues; uint32_t ifid; - struct fman_if *fif; struct dpaa_bp_info *bp_info; struct rte_eth_fc_conf *fc_conf; };

From patchwork Tue Jul 7 09:22:27 2020
X-Patchwork-Id: 234965

From: Hemant Agrawal
To: dev@dpdk.org
Cc: ferruh.yigit@intel.com, Rohit Raj
Date: Tue, 7 Jul 2020 14:52:27 +0530
Message-Id: <20200707092244.12791-13-hemant.agrawal@nxp.com>
Subject: [dpdk-dev] [PATCH v2 12/29] drivers: optimize thread local storage for dpaa

From: Rohit Raj

Minimize the number of distinct per-thread variables: move all the thread-specific state into the dpaa_portal structure to optimize TLS usage.

Signed-off-by: Rohit Raj
Acked-by: Akhil Goyal
---
doc/guides/rel_notes/release_20_08.rst | 6 ++++ drivers/bus/dpaa/dpaa_bus.c | 24 ++++++------- drivers/bus/dpaa/rte_bus_dpaa_version.map | 1 - drivers/bus/dpaa/rte_dpaa_bus.h | 42 ++++++++++++++--------- drivers/crypto/dpaa_sec/dpaa_sec.c | 11 +++--- drivers/event/dpaa/dpaa_eventdev.c | 4 +-- drivers/mempool/dpaa/dpaa_mempool.c | 6 ++-- drivers/net/dpaa/dpaa_ethdev.c | 2 +- drivers/net/dpaa/dpaa_rxtx.c | 4 +-- 9 files changed, 54 insertions(+), 46 deletions(-) -- 2.17.1

diff --git a/doc/guides/rel_notes/release_20_08.rst b/doc/guides/rel_notes/release_20_08.rst index d915fce12..b1e039d03 100644 --- a/doc/guides/rel_notes/release_20_08.rst +++ b/doc/guides/rel_notes/release_20_08.rst @@ -119,6 +119,12 @@ New Features See the :doc:`../sample_app_ug/l2_forward_real_virtual` for more details of this parameter usage.
+* **Updated NXP dpaa ethdev PMD.** + + Updated the NXP dpaa ethdev with new features and improvements, including: + + * Added support to use datapath APIs from non-EAL pthread + * **Updated NXP dpaa2 ethdev PMD.** Updated the NXP dpaa2 ethdev with new features and improvements, including: diff --git a/drivers/bus/dpaa/dpaa_bus.c b/drivers/bus/dpaa/dpaa_bus.c index 6770fbc52..aa906c34e 100644 --- a/drivers/bus/dpaa/dpaa_bus.c +++ b/drivers/bus/dpaa/dpaa_bus.c @@ -52,8 +52,7 @@ unsigned int dpaa_svr_family; #define FSL_DPAA_BUS_NAME dpaa_bus -RTE_DEFINE_PER_LCORE(bool, dpaa_io); -RTE_DEFINE_PER_LCORE(struct dpaa_portal_dqrr, held_bufs); +RTE_DEFINE_PER_LCORE(struct dpaa_portal *, dpaa_io); struct fm_eth_port_cfg * dpaa_get_eth_port_cfg(int dev_id) @@ -253,7 +252,6 @@ int rte_dpaa_portal_init(void *arg) { unsigned int cpu, lcore = rte_lcore_id(); int ret; - struct dpaa_portal *dpaa_io_portal; BUS_INIT_FUNC_TRACE(); @@ -288,20 +286,21 @@ int rte_dpaa_portal_init(void *arg) DPAA_BUS_LOG(DEBUG, "QMAN thread initialized - CPU=%d lcore=%d", cpu, lcore); - dpaa_io_portal = rte_malloc(NULL, sizeof(struct dpaa_portal), + DPAA_PER_LCORE_PORTAL = rte_malloc(NULL, sizeof(struct dpaa_portal), RTE_CACHE_LINE_SIZE); - if (!dpaa_io_portal) { + if (!DPAA_PER_LCORE_PORTAL) { DPAA_BUS_LOG(ERR, "Unable to allocate memory"); bman_thread_finish(); qman_thread_finish(); return -ENOMEM; } - dpaa_io_portal->qman_idx = qman_get_portal_index(); - dpaa_io_portal->bman_idx = bman_get_portal_index(); - dpaa_io_portal->tid = syscall(SYS_gettid); + DPAA_PER_LCORE_PORTAL->qman_idx = qman_get_portal_index(); + DPAA_PER_LCORE_PORTAL->bman_idx = bman_get_portal_index(); + DPAA_PER_LCORE_PORTAL->tid = syscall(SYS_gettid); - ret = pthread_setspecific(dpaa_portal_key, (void *)dpaa_io_portal); + ret = pthread_setspecific(dpaa_portal_key, + (void *)DPAA_PER_LCORE_PORTAL); if (ret) { DPAA_BUS_LOG(ERR, "pthread_setspecific failed on core %u" " (lcore=%u) with ret: %d", cpu, lcore, ret); @@ -310,8 +309,6 @@ int rte_dpaa_portal_init(void *arg) return ret; } - RTE_PER_LCORE(dpaa_io) = true; - DPAA_BUS_LOG(DEBUG, "QMAN thread initialized"); return 0; @@ -324,7 +321,7 @@ rte_dpaa_portal_fq_init(void *arg, struct qman_fq *fq) u32 sdqcr; int ret; - if (unlikely(!RTE_PER_LCORE(dpaa_io))) { + if (unlikely(!DPAA_PER_LCORE_PORTAL)) { ret = rte_dpaa_portal_init(arg); if (ret < 0) { DPAA_BUS_LOG(ERR, "portal initialization failure"); @@ -367,8 +364,7 @@ dpaa_portal_finish(void *arg) rte_free(dpaa_io_portal); dpaa_io_portal = NULL; - - RTE_PER_LCORE(dpaa_io) = false; + DPAA_PER_LCORE_PORTAL = NULL; } static int diff --git a/drivers/bus/dpaa/rte_bus_dpaa_version.map b/drivers/bus/dpaa/rte_bus_dpaa_version.map index 8069b05af..2defa7992 100644 --- a/drivers/bus/dpaa/rte_bus_dpaa_version.map +++ b/drivers/bus/dpaa/rte_bus_dpaa_version.map @@ -48,7 +48,6 @@ INTERNAL { netcfg_acquire; netcfg_release; per_lcore_dpaa_io; - per_lcore_held_bufs; qman_alloc_cgrid_range; qman_alloc_pool_range; qman_clear_irq; diff --git a/drivers/bus/dpaa/rte_dpaa_bus.h b/drivers/bus/dpaa/rte_dpaa_bus.h index 2a186d83f..25aff2d30 100644 --- a/drivers/bus/dpaa/rte_dpaa_bus.h +++ b/drivers/bus/dpaa/rte_dpaa_bus.h @@ -35,8 +35,6 @@ extern unsigned int dpaa_svr_family; -extern RTE_DEFINE_PER_LCORE(bool, dpaa_io); - struct rte_dpaa_device; struct rte_dpaa_driver; @@ -90,12 +88,38 @@ struct rte_dpaa_driver { rte_dpaa_remove_t remove; }; +/* Create storage for dqrr entries per lcore */ +#define DPAA_PORTAL_DEQUEUE_DEPTH 16 +struct dpaa_portal_dqrr { + void 
*mbuf[DPAA_PORTAL_DEQUEUE_DEPTH]; + uint64_t dqrr_held; + uint8_t dqrr_size; +}; + struct dpaa_portal { uint32_t bman_idx; /**< BMAN Portal ID*/ uint32_t qman_idx; /**< QMAN Portal ID*/ + struct dpaa_portal_dqrr dpaa_held_bufs; + struct rte_crypto_op **dpaa_sec_ops; + int dpaa_sec_op_nb; uint64_t tid;/**< Parent Thread id for this portal */ }; +RTE_DECLARE_PER_LCORE(struct dpaa_portal *, dpaa_io); + +#define DPAA_PER_LCORE_PORTAL \ + RTE_PER_LCORE(dpaa_io) +#define DPAA_PER_LCORE_DQRR_SIZE \ + RTE_PER_LCORE(dpaa_io)->dpaa_held_bufs.dqrr_size +#define DPAA_PER_LCORE_DQRR_HELD \ + RTE_PER_LCORE(dpaa_io)->dpaa_held_bufs.dqrr_held +#define DPAA_PER_LCORE_DQRR_MBUF(i) \ + RTE_PER_LCORE(dpaa_io)->dpaa_held_bufs.mbuf[i] +#define DPAA_PER_LCORE_RTE_CRYPTO_OP \ + RTE_PER_LCORE(dpaa_io)->dpaa_sec_ops +#define DPAA_PER_LCORE_DPAA_SEC_OP_NB \ + RTE_PER_LCORE(dpaa_io)->dpaa_sec_op_nb + /* Various structures representing contiguous memory maps */ struct dpaa_memseg { TAILQ_ENTRY(dpaa_memseg) next; @@ -200,20 +224,6 @@ RTE_INIT(dpaainitfn_ ##nm) \ } \ RTE_PMD_EXPORT_NAME(nm, __COUNTER__) -/* Create storage for dqrr entries per lcore */ -#define DPAA_PORTAL_DEQUEUE_DEPTH 16 -struct dpaa_portal_dqrr { - void *mbuf[DPAA_PORTAL_DEQUEUE_DEPTH]; - uint64_t dqrr_held; - uint8_t dqrr_size; -}; - -RTE_DECLARE_PER_LCORE(struct dpaa_portal_dqrr, held_bufs); - -#define DPAA_PER_LCORE_DQRR_SIZE RTE_PER_LCORE(held_bufs).dqrr_size -#define DPAA_PER_LCORE_DQRR_HELD RTE_PER_LCORE(held_bufs).dqrr_held -#define DPAA_PER_LCORE_DQRR_MBUF(i) RTE_PER_LCORE(held_bufs).mbuf[i] - __rte_internal struct fm_eth_port_cfg *dpaa_get_eth_port_cfg(int dev_id); diff --git a/drivers/crypto/dpaa_sec/dpaa_sec.c b/drivers/crypto/dpaa_sec/dpaa_sec.c index d9fa8bb36..8fcd57373 100644 --- a/drivers/crypto/dpaa_sec/dpaa_sec.c +++ b/drivers/crypto/dpaa_sec/dpaa_sec.c @@ -45,9 +45,6 @@ static uint8_t cryptodev_driver_id; -static __thread struct rte_crypto_op **dpaa_sec_ops; -static __thread int dpaa_sec_op_nb; - static int dpaa_sec_attach_sess_q(struct dpaa_sec_qp *qp, dpaa_sec_session *sess); @@ -143,7 +140,7 @@ dqrr_out_fq_cb_rx(struct qman_portal *qm __always_unused, struct dpaa_sec_job *job; struct dpaa_sec_op_ctx *ctx; - if (dpaa_sec_op_nb >= DPAA_SEC_BURST) + if (DPAA_PER_LCORE_DPAA_SEC_OP_NB >= DPAA_SEC_BURST) return qman_cb_dqrr_defer; if (!(dqrr->stat & QM_DQRR_STAT_FD_VALID)) @@ -174,7 +171,7 @@ dqrr_out_fq_cb_rx(struct qman_portal *qm __always_unused, } mbuf->data_len = len; } - dpaa_sec_ops[dpaa_sec_op_nb++] = ctx->op; + DPAA_PER_LCORE_RTE_CRYPTO_OP[DPAA_PER_LCORE_DPAA_SEC_OP_NB++] = ctx->op; dpaa_sec_op_ending(ctx); return qman_cb_dqrr_consume; @@ -2301,7 +2298,7 @@ dpaa_sec_attach_sess_q(struct dpaa_sec_qp *qp, dpaa_sec_session *sess) DPAA_SEC_ERR("Unable to prepare sec cdb"); return ret; } - if (unlikely(!RTE_PER_LCORE(dpaa_io))) { + if (unlikely(!DPAA_PER_LCORE_PORTAL)) { ret = rte_dpaa_portal_init((void *)0); if (ret) { DPAA_SEC_ERR("Failure in affining portal"); @@ -3463,7 +3460,7 @@ cryptodev_dpaa_sec_probe(struct rte_dpaa_driver *dpaa_drv __rte_unused, } } - if (unlikely(!RTE_PER_LCORE(dpaa_io))) { + if (unlikely(!DPAA_PER_LCORE_PORTAL)) { retval = rte_dpaa_portal_init((void *)1); if (retval) { DPAA_SEC_ERR("Unable to initialize portal"); diff --git a/drivers/event/dpaa/dpaa_eventdev.c b/drivers/event/dpaa/dpaa_eventdev.c index e78728b7e..a3c138b7a 100644 --- a/drivers/event/dpaa/dpaa_eventdev.c +++ b/drivers/event/dpaa/dpaa_eventdev.c @@ -179,7 +179,7 @@ dpaa_event_dequeue_burst(void *port, struct rte_event ev[], struct 
dpaa_port *portal = (struct dpaa_port *)port; struct rte_mbuf *mbuf; - if (unlikely(!RTE_PER_LCORE(dpaa_io))) { + if (unlikely(!DPAA_PER_LCORE_PORTAL)) { /* Affine current thread context to a qman portal */ ret = rte_dpaa_portal_init((void *)0); if (ret) { @@ -251,7 +251,7 @@ dpaa_event_dequeue_burst_intr(void *port, struct rte_event ev[], struct dpaa_port *portal = (struct dpaa_port *)port; struct rte_mbuf *mbuf; - if (unlikely(!RTE_PER_LCORE(dpaa_io))) { + if (unlikely(!DPAA_PER_LCORE_PORTAL)) { /* Affine current thread context to a qman portal */ ret = rte_dpaa_portal_init((void *)0); if (ret) { diff --git a/drivers/mempool/dpaa/dpaa_mempool.c b/drivers/mempool/dpaa/dpaa_mempool.c index 8d1da8028..e6b06f057 100644 --- a/drivers/mempool/dpaa/dpaa_mempool.c +++ b/drivers/mempool/dpaa/dpaa_mempool.c @@ -53,7 +53,7 @@ dpaa_mbuf_create_pool(struct rte_mempool *mp) MEMPOOL_INIT_FUNC_TRACE(); - if (unlikely(!RTE_PER_LCORE(dpaa_io))) { + if (unlikely(!DPAA_PER_LCORE_PORTAL)) { ret = rte_dpaa_portal_init((void *)0); if (ret) { DPAA_MEMPOOL_ERR( @@ -169,7 +169,7 @@ dpaa_mbuf_free_bulk(struct rte_mempool *pool, DPAA_MEMPOOL_DPDEBUG("Request to free %d buffers in bpid = %d", n, bp_info->bpid); - if (unlikely(!RTE_PER_LCORE(dpaa_io))) { + if (unlikely(!DPAA_PER_LCORE_PORTAL)) { ret = rte_dpaa_portal_init((void *)0); if (ret) { DPAA_MEMPOOL_ERR("rte_dpaa_portal_init failed with ret: %d", @@ -224,7 +224,7 @@ dpaa_mbuf_alloc_bulk(struct rte_mempool *pool, return -1; } - if (unlikely(!RTE_PER_LCORE(dpaa_io))) { + if (unlikely(!DPAA_PER_LCORE_PORTAL)) { ret = rte_dpaa_portal_init((void *)0); if (ret) { DPAA_MEMPOOL_ERR("rte_dpaa_portal_init failed with ret: %d", diff --git a/drivers/net/dpaa/dpaa_ethdev.c b/drivers/net/dpaa/dpaa_ethdev.c index 6c94fd396..c9f828a7c 100644 --- a/drivers/net/dpaa/dpaa_ethdev.c +++ b/drivers/net/dpaa/dpaa_ethdev.c @@ -1707,7 +1707,7 @@ rte_dpaa_probe(struct rte_dpaa_driver *dpaa_drv __rte_unused, is_global_init = 1; } - if (unlikely(!RTE_PER_LCORE(dpaa_io))) { + if (unlikely(!DPAA_PER_LCORE_PORTAL)) { ret = rte_dpaa_portal_init((void *)1); if (ret) { DPAA_PMD_ERR("Unable to initialize portal"); diff --git a/drivers/net/dpaa/dpaa_rxtx.c b/drivers/net/dpaa/dpaa_rxtx.c index 819cad7c6..5303c9b76 100644 --- a/drivers/net/dpaa/dpaa_rxtx.c +++ b/drivers/net/dpaa/dpaa_rxtx.c @@ -670,7 +670,7 @@ uint16_t dpaa_eth_queue_rx(void *q, if (likely(fq->is_static)) return dpaa_eth_queue_portal_rx(fq, bufs, nb_bufs); - if (unlikely(!RTE_PER_LCORE(dpaa_io))) { + if (unlikely(!DPAA_PER_LCORE_PORTAL)) { ret = rte_dpaa_portal_init((void *)0); if (ret) { DPAA_PMD_ERR("Failure in affining portal"); @@ -970,7 +970,7 @@ dpaa_eth_queue_tx(void *q, struct rte_mbuf **bufs, uint16_t nb_bufs) int ret, realloc_mbuf = 0; uint32_t seqn, index, flags[DPAA_TX_BURST_SIZE] = {0}; - if (unlikely(!RTE_PER_LCORE(dpaa_io))) { + if (unlikely(!DPAA_PER_LCORE_PORTAL)) { ret = rte_dpaa_portal_init((void *)0); if (ret) { DPAA_PMD_ERR("Failure in affining portal");

From patchwork Tue Jul 7 09:22:28 2020
X-Patchwork-Id: 234966

From: Hemant Agrawal
To: dev@dpdk.org
Cc: ferruh.yigit@intel.com, Rohit Raj
Date: Tue, 7 Jul 2020 14:52:28 +0530
Message-Id: <20200707092244.12791-14-hemant.agrawal@nxp.com>
Subject: [dpdk-dev] [PATCH v2 13/29] bus/dpaa: enable link state interrupt

From: Rohit Raj

The APIs to enable/disable the link state interrupt and to get the link state are implemented using IOCTL calls into the kernel driver.

Signed-off-by: Rohit Raj
---
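As background, a minimal sketch (not part of this patch) of how an application consumes the link-state events this interrupt ultimately feeds, through the standard ethdev callback API; it assumes dev_conf.intr_conf.lsc was set to 1 at configure time:

	#include <stdio.h>
	#include <rte_common.h>
	#include <rte_ethdev.h>

	/* Runs in the host interrupt thread whenever link status changes. */
	static int
	example_lsc_cb(uint16_t port_id, enum rte_eth_event_type event,
		       void *cb_arg, void *ret_param)
	{
		struct rte_eth_link link;

		RTE_SET_USED(event);
		RTE_SET_USED(cb_arg);
		RTE_SET_USED(ret_param);

		rte_eth_link_get_nowait(port_id, &link);
		printf("port %u link is %s\n", port_id,
		       link.link_status ? "up" : "down");
		return 0;
	}

	static void
	example_register_lsc(uint16_t port_id)
	{
		rte_eth_dev_callback_register(port_id, RTE_ETH_EVENT_INTR_LSC,
					      example_lsc_cb, NULL);
	}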
doc/guides/nics/features/dpaa.ini | 1 + doc/guides/rel_notes/release_20_08.rst | 1 + drivers/bus/dpaa/base/fman/fman.c | 4 +- drivers/bus/dpaa/base/qbman/process.c | 72 ++++++++++++++++- drivers/bus/dpaa/dpaa_bus.c | 28 ++++++- drivers/bus/dpaa/include/fman.h | 2 + drivers/bus/dpaa/include/process.h | 20 +++++ drivers/bus/dpaa/rte_bus_dpaa_version.map | 3 + drivers/bus/dpaa/rte_dpaa_bus.h | 6 +- drivers/common/dpaax/compat.h | 5 +- drivers/net/dpaa/dpaa_ethdev.c | 97 ++++++++++++++++++++++- 11 files changed, 233 insertions(+), 6 deletions(-) -- 2.17.1 diff --git a/doc/guides/nics/features/dpaa.ini b/doc/guides/nics/features/dpaa.ini index b00f46a97..816a6e08e 100644 --- a/doc/guides/nics/features/dpaa.ini +++ b/doc/guides/nics/features/dpaa.ini @@ -6,6 +6,7 @@ [Features] Speed capabilities = Y Link status = Y +Link status event = Y Jumbo frame = Y MTU update = Y Scattered Rx = Y diff --git a/doc/guides/rel_notes/release_20_08.rst b/doc/guides/rel_notes/release_20_08.rst index b1e039d03..e5bc5cfd8 100644 --- a/doc/guides/rel_notes/release_20_08.rst +++ b/doc/guides/rel_notes/release_20_08.rst @@ -123,6 +123,7 @@ New Features Updated the NXP dpaa ethdev with new features and improvements, including: + * Added support for link status and interrupt * Added support to use datapath APIs from non-EAL pthread * **Updated NXP dpaa2 ethdev PMD.** diff --git a/drivers/bus/dpaa/base/fman/fman.c b/drivers/bus/dpaa/base/fman/fman.c index ae26041ca..33be9e5d7 100644 --- a/drivers/bus/dpaa/base/fman/fman.c +++ b/drivers/bus/dpaa/base/fman/fman.c @@ -1,7 +1,7 @@ /* SPDX-License-Identifier: (BSD-3-Clause OR GPL-2.0) * * Copyright 2010-2016 Freescale Semiconductor Inc. - * Copyright 2017-2019 NXP + * Copyright 2017-2020 NXP * */ @@ -185,6 +185,8 @@ fman_if_init(const struct device_node *dpa_node) } memset(__if, 0, sizeof(*__if)); INIT_LIST_HEAD(&__if->__if.bpool_list); + strlcpy(__if->node_name, dpa_node->name, IF_NAME_MAX_LEN - 1); + __if->node_name[IF_NAME_MAX_LEN - 1] = '\0'; strlcpy(__if->node_path, dpa_node->full_name, PATH_MAX - 1); __if->node_path[PATH_MAX - 1] = '\0'; diff --git a/drivers/bus/dpaa/base/qbman/process.c b/drivers/bus/dpaa/base/qbman/process.c index 2c23c98df..68b7af243 100644 --- a/drivers/bus/dpaa/base/qbman/process.c +++ b/drivers/bus/dpaa/base/qbman/process.c @@ -1,7 +1,7 @@ /* SPDX-License-Identifier: (BSD-3-Clause OR GPL-2.0) * * Copyright 2011-2016 Freescale Semiconductor Inc. 
- * Copyright 2017 NXP + * Copyright 2017,2020 NXP * */ #include @@ -296,3 +296,73 @@ int bman_free_raw_portal(struct dpaa_raw_portal *portal) return process_portal_free(&input); } + +#define DPAA_IOCTL_ENABLE_LINK_STATUS_INTERRUPT \ + _IOW(DPAA_IOCTL_MAGIC, 0x0E, struct usdpaa_ioctl_link_status) + +#define DPAA_IOCTL_DISABLE_LINK_STATUS_INTERRUPT \ + _IOW(DPAA_IOCTL_MAGIC, 0x0F, char*) + +int dpaa_intr_enable(char *if_name, int efd) +{ + struct usdpaa_ioctl_link_status args; + + int ret = check_fd(); + + if (ret) + return ret; + + args.efd = (uint32_t)efd; + strcpy(args.if_name, if_name); + + ret = ioctl(fd, DPAA_IOCTL_ENABLE_LINK_STATUS_INTERRUPT, &args); + if (ret) + return errno; + + return 0; +} + +int dpaa_intr_disable(char *if_name) +{ + int ret = check_fd(); + + if (ret) + return ret; + + ret = ioctl(fd, DPAA_IOCTL_DISABLE_LINK_STATUS_INTERRUPT, &if_name); + if (ret) { + if (errno == EINVAL) + printf("Failed to disable interrupt: Not Supported\n"); + else + printf("Failed to disable interrupt\n"); + return ret; + } + + return 0; +} + +#define DPAA_IOCTL_GET_LINK_STATUS \ + _IOWR(DPAA_IOCTL_MAGIC, 0x10, struct usdpaa_ioctl_link_status_args) + +int dpaa_get_link_status(char *if_name) +{ + int ret = check_fd(); + struct usdpaa_ioctl_link_status_args args; + + if (ret) + return ret; + + strcpy(args.if_name, if_name); + args.link_status = 0; + + ret = ioctl(fd, DPAA_IOCTL_GET_LINK_STATUS, &args); + if (ret) { + if (errno == EINVAL) + printf("Failed to get link status: Not Supported\n"); + else + printf("Failed to get link status\n"); + return ret; + } + + return args.link_status; +} diff --git a/drivers/bus/dpaa/dpaa_bus.c b/drivers/bus/dpaa/dpaa_bus.c index aa906c34e..32e872da5 100644 --- a/drivers/bus/dpaa/dpaa_bus.c +++ b/drivers/bus/dpaa/dpaa_bus.c @@ -1,6 +1,6 @@ /* SPDX-License-Identifier: BSD-3-Clause * - * Copyright 2017-2019 NXP + * Copyright 2017-2020 NXP * */ /* System headers */ @@ -13,6 +13,7 @@ #include #include #include +#include #include #include @@ -542,6 +543,23 @@ rte_dpaa_bus_dev_build(void) return 0; } +static int rte_dpaa_setup_intr(struct rte_intr_handle *intr_handle) +{ + int fd; + + fd = eventfd(0, EFD_NONBLOCK | EFD_CLOEXEC); + if (fd < 0) { + DPAA_BUS_ERR("Cannot set up eventfd, error %i (%s)", + errno, strerror(errno)); + return errno; + } + + intr_handle->fd = fd; + intr_handle->type = RTE_INTR_HANDLE_EXT; + + return 0; +} + static int rte_dpaa_bus_probe(void) { @@ -589,6 +607,14 @@ rte_dpaa_bus_probe(void) fclose(svr_file); } + TAILQ_FOREACH(dev, &rte_dpaa_bus.device_list, next) { + if (dev->device_type == FSL_DPAA_ETH) { + ret = rte_dpaa_setup_intr(&dev->intr_handle); + if (ret) + DPAA_BUS_ERR("Error setting up interrupt.\n"); + } + } + /* And initialize the PA->VA translation table */ dpaax_iova_table_populate(); diff --git a/drivers/bus/dpaa/include/fman.h b/drivers/bus/dpaa/include/fman.h index b6293b61c..7a0a7d405 100644 --- a/drivers/bus/dpaa/include/fman.h +++ b/drivers/bus/dpaa/include/fman.h @@ -2,6 +2,7 @@ * * Copyright 2010-2012 Freescale Semiconductor, Inc. * All rights reserved. 
+ * Copyright 2019-2020 NXP * */ @@ -361,6 +362,7 @@ struct fman_if_ic_params { */ struct __fman_if { struct fman_if __if; + char node_name[IF_NAME_MAX_LEN]; char node_path[PATH_MAX]; uint64_t regs_size; void *ccsr_map; diff --git a/drivers/bus/dpaa/include/process.h b/drivers/bus/dpaa/include/process.h index d9ec94ee2..7305762c2 100644 --- a/drivers/bus/dpaa/include/process.h +++ b/drivers/bus/dpaa/include/process.h @@ -2,6 +2,7 @@ * * Copyright 2010-2011 Freescale Semiconductor, Inc. * All rights reserved. + * Copyright 2020 NXP * */ @@ -74,4 +75,23 @@ struct dpaa_ioctl_irq_map { int process_portal_irq_map(int fd, struct dpaa_ioctl_irq_map *irq); int process_portal_irq_unmap(int fd); +struct usdpaa_ioctl_link_status { + char if_name[IF_NAME_MAX_LEN]; + uint32_t efd; +}; + +__rte_internal +int dpaa_intr_enable(char *if_name, int efd); + +__rte_internal +int dpaa_intr_disable(char *if_name); + +struct usdpaa_ioctl_link_status_args { + /* network device node name */ + char if_name[IF_NAME_MAX_LEN]; + int link_status; +}; +__rte_internal +int dpaa_get_link_status(char *if_name); + #endif /* __PROCESS_H */ diff --git a/drivers/bus/dpaa/rte_bus_dpaa_version.map b/drivers/bus/dpaa/rte_bus_dpaa_version.map index 2defa7992..96662d7be 100644 --- a/drivers/bus/dpaa/rte_bus_dpaa_version.map +++ b/drivers/bus/dpaa/rte_bus_dpaa_version.map @@ -15,6 +15,9 @@ INTERNAL { dpaa_get_eth_port_cfg; dpaa_get_qm_channel_caam; dpaa_get_qm_channel_pool; + dpaa_get_link_status; + dpaa_intr_disable; + dpaa_intr_enable; dpaa_svr_family; fman_dealloc_bufs_mask_hi; fman_dealloc_bufs_mask_lo; diff --git a/drivers/bus/dpaa/rte_dpaa_bus.h b/drivers/bus/dpaa/rte_dpaa_bus.h index 25aff2d30..fdaa63a09 100644 --- a/drivers/bus/dpaa/rte_dpaa_bus.h +++ b/drivers/bus/dpaa/rte_dpaa_bus.h @@ -1,6 +1,6 @@ /* SPDX-License-Identifier: BSD-3-Clause * - * Copyright 2017-2019 NXP + * Copyright 2017-2020 NXP * */ #ifndef __RTE_DPAA_BUS_H__ @@ -30,6 +30,9 @@ #define SVR_LS1046A_FAMILY 0x87070000 #define SVR_MASK 0xffff0000 +/** Device driver supports link state interrupt */ +#define RTE_DPAA_DRV_INTR_LSC 0x0008 + #define RTE_DEV_TO_DPAA_CONST(ptr) \ container_of(ptr, const struct rte_dpaa_device, device) @@ -86,6 +89,7 @@ struct rte_dpaa_driver { enum rte_dpaa_type drv_type; rte_dpaa_probe_t probe; rte_dpaa_remove_t remove; + uint32_t drv_flags; /**< Flags for controlling device.*/ }; /* Create storage for dqrr entries per lcore */ diff --git a/drivers/common/dpaax/compat.h b/drivers/common/dpaax/compat.h index 90db68ce7..6793cb256 100644 --- a/drivers/common/dpaax/compat.h +++ b/drivers/common/dpaax/compat.h @@ -2,7 +2,7 @@ * * Copyright 2011 Freescale Semiconductor, Inc. * All rights reserved. 
- * Copyright 2019 NXP + * Copyright 2019-2020 NXP * */ @@ -390,4 +390,7 @@ static inline unsigned long get_zeroed_page(gfp_t __foo __rte_unused) #define atomic_dec_return(v) rte_atomic32_sub_return(v, 1) #define atomic_sub_and_test(i, v) (rte_atomic32_sub_return(v, i) == 0) +/* Interface name len*/ +#define IF_NAME_MAX_LEN 16 + #endif /* __COMPAT_H */ diff --git a/drivers/net/dpaa/dpaa_ethdev.c b/drivers/net/dpaa/dpaa_ethdev.c index c9f828a7c..3f805b2b0 100644 --- a/drivers/net/dpaa/dpaa_ethdev.c +++ b/drivers/net/dpaa/dpaa_ethdev.c @@ -45,6 +45,7 @@ #include #include #include +#include /* Supported Rx offloads */ static uint64_t dev_rx_offloads_sup = @@ -131,6 +132,11 @@ static struct rte_dpaa_driver rte_dpaa_pmd; static int dpaa_eth_dev_info(struct rte_eth_dev *dev, struct rte_eth_dev_info *dev_info); +static int dpaa_eth_link_update(struct rte_eth_dev *dev, + int wait_to_complete __rte_unused); + +static void dpaa_interrupt_handler(void *param); + static inline void dpaa_poll_queue_default_config(struct qm_mcc_initfq *opts) { @@ -195,9 +201,19 @@ dpaa_eth_dev_configure(struct rte_eth_dev *dev) struct rte_eth_conf *eth_conf = &dev->data->dev_conf; uint64_t rx_offloads = eth_conf->rxmode.offloads; uint64_t tx_offloads = eth_conf->txmode.offloads; + struct rte_device *rdev = dev->device; + struct rte_dpaa_device *dpaa_dev; + struct fman_if *fif = dev->process_private; + struct __fman_if *__fif; + struct rte_intr_handle *intr_handle; + int ret; PMD_INIT_FUNC_TRACE(); + dpaa_dev = container_of(rdev, struct rte_dpaa_device, device); + intr_handle = &dpaa_dev->intr_handle; + __fif = container_of(fif, struct __fman_if, __if); + /* Rx offloads which are enabled by default */ if (dev_rx_offloads_nodis & ~rx_offloads) { DPAA_PMD_INFO( @@ -241,6 +257,28 @@ dpaa_eth_dev_configure(struct rte_eth_dev *dev) dev->data->scattered_rx = 1; } + /* if the interrupts were configured on this devices*/ + if (intr_handle && intr_handle->fd) { + if (dev->data->dev_conf.intr_conf.lsc != 0) + rte_intr_callback_register(intr_handle, + dpaa_interrupt_handler, + (void *)dev); + + ret = dpaa_intr_enable(__fif->node_name, intr_handle->fd); + if (ret) { + if (dev->data->dev_conf.intr_conf.lsc != 0) { + rte_intr_callback_unregister(intr_handle, + dpaa_interrupt_handler, + (void *)dev); + if (ret == EINVAL) + printf("Failed to enable interrupt: Not Supported\n"); + else + printf("Failed to enable interrupt\n"); + } + dev->data->dev_conf.intr_conf.lsc = 0; + dev->data->dev_flags &= ~RTE_ETH_DEV_INTR_LSC; + } + } return 0; } @@ -269,6 +307,25 @@ dpaa_supported_ptypes_get(struct rte_eth_dev *dev) return NULL; } +static void dpaa_interrupt_handler(void *param) +{ + struct rte_eth_dev *dev = param; + struct rte_device *rdev = dev->device; + struct rte_dpaa_device *dpaa_dev; + struct rte_intr_handle *intr_handle; + uint64_t buf; + int bytes_read; + + dpaa_dev = container_of(rdev, struct rte_dpaa_device, device); + intr_handle = &dpaa_dev->intr_handle; + + bytes_read = read(intr_handle->fd, &buf, sizeof(uint64_t)); + if (bytes_read < 0) + DPAA_PMD_ERR("Error reading eventfd\n"); + dpaa_eth_link_update(dev, 0); + _rte_eth_dev_callback_process(dev, RTE_ETH_EVENT_INTR_LSC, NULL); +} + static int dpaa_eth_dev_start(struct rte_eth_dev *dev) { struct dpaa_if *dpaa_intf = dev->data->dev_private; @@ -298,9 +355,27 @@ static void dpaa_eth_dev_stop(struct rte_eth_dev *dev) static void dpaa_eth_dev_close(struct rte_eth_dev *dev) { + struct fman_if *fif = dev->process_private; + struct __fman_if *__fif; + struct rte_device *rdev = 
dev->device; + struct rte_dpaa_device *dpaa_dev; + struct rte_intr_handle *intr_handle; + PMD_INIT_FUNC_TRACE(); + dpaa_dev = container_of(rdev, struct rte_dpaa_device, device); + intr_handle = &dpaa_dev->intr_handle; + __fif = container_of(fif, struct __fman_if, __if); + dpaa_eth_dev_stop(dev); + + if (intr_handle && intr_handle->fd && + dev->data->dev_conf.intr_conf.lsc != 0) { + dpaa_intr_disable(__fif->node_name); + rte_intr_callback_unregister(intr_handle, + dpaa_interrupt_handler, + (void *)dev); + } } static int @@ -388,6 +463,8 @@ static int dpaa_eth_link_update(struct rte_eth_dev *dev, struct dpaa_if *dpaa_intf = dev->data->dev_private; struct rte_eth_link *link = &dev->data->dev_link; struct fman_if *fif = dev->process_private; + struct __fman_if *__fif = container_of(fif, struct __fman_if, __if); + int ret; PMD_INIT_FUNC_TRACE(); @@ -401,9 +478,23 @@ static int dpaa_eth_link_update(struct rte_eth_dev *dev, DPAA_PMD_ERR("invalid link_speed: %s, %d", dpaa_intf->name, fif->mac_type); - link->link_status = dpaa_intf->valid; + ret = dpaa_get_link_status(__fif->node_name); + if (ret < 0) { + if (ret == -EINVAL) { + DPAA_PMD_DEBUG("Using default link status-No Support"); + ret = 1; + } else { + DPAA_PMD_ERR("rte_dpaa_get_link_status %d", ret); + return ret; + } + } + + link->link_status = ret; link->link_duplex = ETH_LINK_FULL_DUPLEX; link->link_autoneg = ETH_LINK_AUTONEG; + + DPAA_PMD_INFO("Port %d Link is %s\n", dev->data->port_id, + link->link_status ? "Up" : "Down"); return 0; } @@ -1734,6 +1825,9 @@ rte_dpaa_probe(struct rte_dpaa_driver *dpaa_drv __rte_unused, qman_ern_register_cb(dpaa_free_mbuf); + if (dpaa_drv->drv_flags & RTE_DPAA_DRV_INTR_LSC) + eth_dev->data->dev_flags |= RTE_ETH_DEV_INTR_LSC; + /* Invoke PMD device initialization function */ diag = dpaa_dev_init(eth_dev); if (diag == 0) { @@ -1761,6 +1855,7 @@ rte_dpaa_remove(struct rte_dpaa_device *dpaa_dev) } static struct rte_dpaa_driver rte_dpaa_pmd = { + .drv_flags = RTE_DPAA_DRV_INTR_LSC, .drv_type = FSL_DPAA_ETH, .probe = rte_dpaa_probe, .remove = rte_dpaa_remove,

From patchwork Tue Jul 7 09:22:29 2020
X-Patchwork-Id: 234967

From: Hemant Agrawal
To: dev@dpdk.org
Cc: ferruh.yigit@intel.com, Rohit Raj
Date: Tue, 7 Jul 2020 14:52:29 +0530
Message-Id: <20200707092244.12791-15-hemant.agrawal@nxp.com>
Subject: [dpdk-dev] [PATCH v2 14/29] bus/dpaa: enable set link status

From: Rohit Raj

Enable the set-link-status API so that an application can bring the PHY device up or down.
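As background, a minimal sketch (not part of this patch) of the application-facing calls that land in dpaa_link_down()/dpaa_link_up(); with link-state interrupt support the PMD forwards these to the kernel driver instead of stopping and starting the whole port:

	#include <rte_ethdev.h>

	/*
	 * Toggle the PHY from the application. With RTE_ETH_DEV_INTR_LSC
	 * set, the dpaa PMD passes these requests to the kernel driver
	 * via dpaa_update_link_status() rather than stopping the port.
	 */
	static int
	example_toggle_link(uint16_t port_id)
	{
		int ret;

		ret = rte_eth_dev_set_link_down(port_id);
		if (ret != 0)
			return ret;

		return rte_eth_dev_set_link_up(port_id);
	}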
Signed-off-by: Rohit Raj --- drivers/bus/dpaa/base/qbman/process.c | 27 +++++++++++++++++ drivers/bus/dpaa/include/process.h | 11 +++++++ drivers/bus/dpaa/rte_bus_dpaa_version.map | 1 + drivers/net/dpaa/dpaa_ethdev.c | 35 ++++++++++++++++------- 4 files changed, 63 insertions(+), 11 deletions(-) -- 2.17.1 diff --git a/drivers/bus/dpaa/base/qbman/process.c b/drivers/bus/dpaa/base/qbman/process.c index 68b7af243..6f7e37957 100644 --- a/drivers/bus/dpaa/base/qbman/process.c +++ b/drivers/bus/dpaa/base/qbman/process.c @@ -366,3 +366,30 @@ int dpaa_get_link_status(char *if_name) return args.link_status; } + +#define DPAA_IOCTL_UPDATE_LINK_STATUS \ + _IOW(DPAA_IOCTL_MAGIC, 0x11, struct usdpaa_ioctl_update_link_status_args) + +int dpaa_update_link_status(char *if_name, int link_status) +{ + struct usdpaa_ioctl_update_link_status_args args; + int ret; + + ret = check_fd(); + if (ret) + return ret; + + strcpy(args.if_name, if_name); + args.link_status = link_status; + + ret = ioctl(fd, DPAA_IOCTL_UPDATE_LINK_STATUS, &args); + if (ret) { + if (errno == EINVAL) + printf("Failed to set link status: Not Supported\n"); + else + printf("Failed to set link status"); + return ret; + } + + return 0; +} diff --git a/drivers/bus/dpaa/include/process.h b/drivers/bus/dpaa/include/process.h index 7305762c2..f52ea1635 100644 --- a/drivers/bus/dpaa/include/process.h +++ b/drivers/bus/dpaa/include/process.h @@ -91,7 +91,18 @@ struct usdpaa_ioctl_link_status_args { char if_name[IF_NAME_MAX_LEN]; int link_status; }; + +struct usdpaa_ioctl_update_link_status_args { + /* network device node name */ + char if_name[IF_NAME_MAX_LEN]; + /* link status(ETH_LINK_UP/DOWN) */ + int link_status; +}; + __rte_internal int dpaa_get_link_status(char *if_name); +__rte_internal +int dpaa_update_link_status(char *if_name, int link_status); + #endif /* __PROCESS_H */ diff --git a/drivers/bus/dpaa/rte_bus_dpaa_version.map b/drivers/bus/dpaa/rte_bus_dpaa_version.map index 96662d7be..5dec8d9e5 100644 --- a/drivers/bus/dpaa/rte_bus_dpaa_version.map +++ b/drivers/bus/dpaa/rte_bus_dpaa_version.map @@ -19,6 +19,7 @@ INTERNAL { dpaa_intr_disable; dpaa_intr_enable; dpaa_svr_family; + dpaa_update_link_status; fman_dealloc_bufs_mask_hi; fman_dealloc_bufs_mask_lo; fman_if_add_mac_addr; diff --git a/drivers/net/dpaa/dpaa_ethdev.c b/drivers/net/dpaa/dpaa_ethdev.c index 3f805b2b0..3a5b319d4 100644 --- a/drivers/net/dpaa/dpaa_ethdev.c +++ b/drivers/net/dpaa/dpaa_ethdev.c @@ -478,18 +478,15 @@ static int dpaa_eth_link_update(struct rte_eth_dev *dev, DPAA_PMD_ERR("invalid link_speed: %s, %d", dpaa_intf->name, fif->mac_type); - ret = dpaa_get_link_status(__fif->node_name); - if (ret < 0) { - if (ret == -EINVAL) { - DPAA_PMD_DEBUG("Using default link status-No Support"); - ret = 1; - } else { - DPAA_PMD_ERR("rte_dpaa_get_link_status %d", ret); + if (dev->data->dev_flags & RTE_ETH_DEV_INTR_LSC) { + ret = dpaa_get_link_status(__fif->node_name); + if (ret < 0) return ret; - } + link->link_status = ret; + } else { + link->link_status = dpaa_intf->valid; } - link->link_status = ret; link->link_duplex = ETH_LINK_FULL_DUPLEX; link->link_autoneg = ETH_LINK_AUTONEG; @@ -985,17 +982,33 @@ dpaa_dev_rx_queue_count(struct rte_eth_dev *dev, uint16_t rx_queue_id) static int dpaa_link_down(struct rte_eth_dev *dev) { + struct fman_if *fif = dev->process_private; + struct __fman_if *__fif; + PMD_INIT_FUNC_TRACE(); - dpaa_eth_dev_stop(dev); + __fif = container_of(fif, struct __fman_if, __if); + + if (dev->data->dev_flags & RTE_ETH_DEV_INTR_LSC) + 
dpaa_update_link_status(__fif->node_name, ETH_LINK_DOWN); + else + dpaa_eth_dev_stop(dev); return 0; } static int dpaa_link_up(struct rte_eth_dev *dev) { + struct fman_if *fif = dev->process_private; + struct __fman_if *__fif; + PMD_INIT_FUNC_TRACE(); - dpaa_eth_dev_start(dev); + __fif = container_of(fif, struct __fman_if, __if); + + if (dev->data->dev_flags & RTE_ETH_DEV_INTR_LSC) + dpaa_update_link_status(__fif->node_name, ETH_LINK_UP); + else + dpaa_eth_dev_start(dev); return 0; }
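One note on dpaa_update_link_status() above: it copies the interface name with strcpy(). A bounded copy, sketched under the assumption that if_name may originate outside the driver (the submitted patch itself keeps strcpy()), would look like:

	/* Hypothetical hardened variant, not part of the patch. */
	strncpy(args.if_name, if_name, sizeof(args.if_name) - 1);
	args.if_name[sizeof(args.if_name) - 1] = '\0';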
From patchwork Tue Jul 7 09:22:30 2020
X-Patchwork-Submitter: Hemant Agrawal
X-Patchwork-Id: 234968
From: Hemant Agrawal
To: dev@dpdk.org
Cc: ferruh.yigit@intel.com, Jun Yang
Date: Tue, 7 Jul 2020 14:52:30 +0530
Message-Id: <20200707092244.12791-16-hemant.agrawal@nxp.com>
In-Reply-To: <20200707092244.12791-1-hemant.agrawal@nxp.com>
References: <20200527132326.1382-1-hemant.agrawal@nxp.com> <20200707092244.12791-1-hemant.agrawal@nxp.com>
Subject: [dpdk-dev] [PATCH v2 15/29] net/dpaa2: support dynamic flow control

From: Jun Yang

Build the extract key dynamically instead of using a predefined layout: the actual key/mask size now depends on the protocols and/or fields of the patterns specified. Also, the key and mask start from the beginning of the IOVA buffer.
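To illustrate, a hedged example through the generic rte_flow API (the port_id, queue index, and UDP port are arbitrary values for the sketch). With the dynamic build-up, the key/mask generated for this rule covers only the Ethernet/IPv4/UDP fields actually matched, rather than a fixed worst-case layout:

#include <rte_byteorder.h>
#include <rte_flow.h>

/* Match UDP destination port 4789 on ingress, steer to queue 1. */
static struct rte_flow *
add_udp_rule(uint16_t port_id)
{
	struct rte_flow_attr attr = { .ingress = 1 };
	struct rte_flow_item_udp udp_spec = {
		.hdr.dst_port = RTE_BE16(4789),
	};
	struct rte_flow_item_udp udp_mask = {
		.hdr.dst_port = RTE_BE16(0xffff),
	};
	struct rte_flow_item pattern[] = {
		{ .type = RTE_FLOW_ITEM_TYPE_ETH },
		{ .type = RTE_FLOW_ITEM_TYPE_IPV4 },
		{ .type = RTE_FLOW_ITEM_TYPE_UDP,
		  .spec = &udp_spec, .mask = &udp_mask },
		{ .type = RTE_FLOW_ITEM_TYPE_END },
	};
	struct rte_flow_action_queue queue = { .index = 1 };
	struct rte_flow_action actions[] = {
		{ .type = RTE_FLOW_ACTION_TYPE_QUEUE, .conf = &queue },
		{ .type = RTE_FLOW_ACTION_TYPE_END },
	};
	struct rte_flow_error err;

	return rte_flow_create(port_id, &attr, pattern, actions, &err);
}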
Signed-off-by: Jun Yang --- doc/guides/nics/features/dpaa2.ini | 1 + doc/guides/rel_notes/release_20_08.rst | 1 + drivers/net/dpaa2/dpaa2_flow.c | 146 ++++++------------------- 3 files changed, 36 insertions(+), 112 deletions(-) -- 2.17.1 diff --git a/doc/guides/nics/features/dpaa2.ini b/doc/guides/nics/features/dpaa2.ini index c2214fbd5..3685e2e02 100644 --- a/doc/guides/nics/features/dpaa2.ini +++ b/doc/guides/nics/features/dpaa2.ini @@ -16,6 +16,7 @@ Unicast MAC filter = Y RSS hash = Y VLAN filter = Y Flow control = Y +Flow API = Y VLAN offload = Y L3 checksum offload = Y L4 checksum offload = Y diff --git a/doc/guides/rel_notes/release_20_08.rst b/doc/guides/rel_notes/release_20_08.rst index e5bc5cfd8..97267f7b7 100644 --- a/doc/guides/rel_notes/release_20_08.rst +++ b/doc/guides/rel_notes/release_20_08.rst @@ -131,6 +131,7 @@ New Features Updated the NXP dpaa2 ethdev with new features and improvements, including: * Added support to use datapath APIs from non-EAL pthread + * Added support for dynamic flow management Removed Items ------------- diff --git a/drivers/net/dpaa2/dpaa2_flow.c b/drivers/net/dpaa2/dpaa2_flow.c index 8aa65db30..05d115c78 100644 --- a/drivers/net/dpaa2/dpaa2_flow.c +++ b/drivers/net/dpaa2/dpaa2_flow.c @@ -33,29 +33,6 @@ struct rte_flow { uint16_t flow_id; }; -/* Layout for rule compositions for supported patterns */ -/* TODO: Current design only supports Ethernet + IPv4 based classification. */ -/* So corresponding offset macros are valid only. Rest are placeholder for */ -/* now. Once support for other netwrok headers will be added then */ -/* corresponding macros will be updated with correct values*/ -#define DPAA2_CLS_RULE_OFFSET_ETH 0 /*Start of buffer*/ -#define DPAA2_CLS_RULE_OFFSET_VLAN 14 /* DPAA2_CLS_RULE_OFFSET_ETH */ - /* + Sizeof Eth fields */ -#define DPAA2_CLS_RULE_OFFSET_IPV4 14 /* DPAA2_CLS_RULE_OFFSET_VLAN */ - /* + Sizeof VLAN fields */ -#define DPAA2_CLS_RULE_OFFSET_IPV6 25 /* DPAA2_CLS_RULE_OFFSET_IPV4 */ - /* + Sizeof IPV4 fields */ -#define DPAA2_CLS_RULE_OFFSET_ICMP 58 /* DPAA2_CLS_RULE_OFFSET_IPV6 */ - /* + Sizeof IPV6 fields */ -#define DPAA2_CLS_RULE_OFFSET_UDP 60 /* DPAA2_CLS_RULE_OFFSET_ICMP */ - /* + Sizeof ICMP fields */ -#define DPAA2_CLS_RULE_OFFSET_TCP 64 /* DPAA2_CLS_RULE_OFFSET_UDP */ - /* + Sizeof UDP fields */ -#define DPAA2_CLS_RULE_OFFSET_SCTP 68 /* DPAA2_CLS_RULE_OFFSET_TCP */ - /* + Sizeof TCP fields */ -#define DPAA2_CLS_RULE_OFFSET_GRE 72 /* DPAA2_CLS_RULE_OFFSET_SCTP */ - /* + Sizeof SCTP fields */ - static const enum rte_flow_item_type dpaa2_supported_pattern_type[] = { RTE_FLOW_ITEM_TYPE_END, @@ -212,7 +189,7 @@ dpaa2_configure_flow_eth(struct rte_flow *flow, (pattern->mask ? 
pattern->mask : default_mask); /* Key rule */ - key_iova = flow->rule.key_iova + DPAA2_CLS_RULE_OFFSET_ETH; + key_iova = flow->rule.key_iova + flow->key_size; memcpy((void *)key_iova, (const void *)(spec->src.addr_bytes), sizeof(struct rte_ether_addr)); key_iova += sizeof(struct rte_ether_addr); @@ -223,7 +200,7 @@ dpaa2_configure_flow_eth(struct rte_flow *flow, sizeof(rte_be16_t)); /* Key mask */ - mask_iova = flow->rule.mask_iova + DPAA2_CLS_RULE_OFFSET_ETH; + mask_iova = flow->rule.mask_iova + flow->key_size; memcpy((void *)mask_iova, (const void *)(mask->src.addr_bytes), sizeof(struct rte_ether_addr)); mask_iova += sizeof(struct rte_ether_addr); @@ -233,9 +210,9 @@ dpaa2_configure_flow_eth(struct rte_flow *flow, memcpy((void *)mask_iova, (const void *)(&mask->type), sizeof(rte_be16_t)); - flow->rule.key_size = (DPAA2_CLS_RULE_OFFSET_ETH + - ((2 * sizeof(struct rte_ether_addr)) + - sizeof(rte_be16_t))); + flow->key_size += ((2 * sizeof(struct rte_ether_addr)) + + sizeof(rte_be16_t)); + return device_configured; } @@ -335,15 +312,15 @@ dpaa2_configure_flow_vlan(struct rte_flow *flow, mask = (const struct rte_flow_item_vlan *) (pattern->mask ? pattern->mask : default_mask); - key_iova = flow->rule.key_iova + DPAA2_CLS_RULE_OFFSET_VLAN; + key_iova = flow->rule.key_iova + flow->key_size; memcpy((void *)key_iova, (const void *)(&spec->tci), sizeof(rte_be16_t)); - mask_iova = flow->rule.mask_iova + DPAA2_CLS_RULE_OFFSET_VLAN; + mask_iova = flow->rule.mask_iova + flow->key_size; memcpy((void *)mask_iova, (const void *)(&mask->tci), sizeof(rte_be16_t)); - flow->rule.key_size = (DPAA2_CLS_RULE_OFFSET_VLAN + sizeof(rte_be16_t)); + flow->key_size += sizeof(rte_be16_t); return device_configured; } @@ -474,7 +451,7 @@ dpaa2_configure_flow_ipv4(struct rte_flow *flow, mask = (const struct rte_flow_item_ipv4 *) (pattern->mask ? pattern->mask : default_mask); - key_iova = flow->rule.key_iova + DPAA2_CLS_RULE_OFFSET_IPV4; + key_iova = flow->rule.key_iova + flow->key_size; memcpy((void *)key_iova, (const void *)&spec->hdr.src_addr, sizeof(uint32_t)); key_iova += sizeof(uint32_t); @@ -484,7 +461,7 @@ dpaa2_configure_flow_ipv4(struct rte_flow *flow, memcpy((void *)key_iova, (const void *)&spec->hdr.next_proto_id, sizeof(uint8_t)); - mask_iova = flow->rule.mask_iova + DPAA2_CLS_RULE_OFFSET_IPV4; + mask_iova = flow->rule.mask_iova + flow->key_size; memcpy((void *)mask_iova, (const void *)&mask->hdr.src_addr, sizeof(uint32_t)); mask_iova += sizeof(uint32_t); @@ -494,9 +471,7 @@ dpaa2_configure_flow_ipv4(struct rte_flow *flow, memcpy((void *)mask_iova, (const void *)&mask->hdr.next_proto_id, sizeof(uint8_t)); - flow->rule.key_size = (DPAA2_CLS_RULE_OFFSET_IPV4 + - (2 * sizeof(uint32_t)) + sizeof(uint8_t)); - + flow->key_size += (2 * sizeof(uint32_t)) + sizeof(uint8_t); return device_configured; } @@ -613,23 +588,22 @@ dpaa2_configure_flow_ipv6(struct rte_flow *flow, mask = (const struct rte_flow_item_ipv6 *) (pattern->mask ? 
pattern->mask : default_mask); - key_iova = flow->rule.key_iova + DPAA2_CLS_RULE_OFFSET_IPV6; + key_iova = flow->rule.key_iova + flow->key_size; memcpy((void *)key_iova, (const void *)(spec->hdr.src_addr), sizeof(spec->hdr.src_addr)); key_iova += sizeof(spec->hdr.src_addr); memcpy((void *)key_iova, (const void *)(spec->hdr.dst_addr), sizeof(spec->hdr.dst_addr)); - mask_iova = flow->rule.mask_iova + DPAA2_CLS_RULE_OFFSET_IPV6; + mask_iova = flow->rule.mask_iova + flow->key_size; memcpy((void *)mask_iova, (const void *)(mask->hdr.src_addr), sizeof(mask->hdr.src_addr)); mask_iova += sizeof(mask->hdr.src_addr); memcpy((void *)mask_iova, (const void *)(mask->hdr.dst_addr), sizeof(mask->hdr.dst_addr)); - flow->rule.key_size = (DPAA2_CLS_RULE_OFFSET_IPV6 + - sizeof(spec->hdr.src_addr) + - sizeof(mask->hdr.dst_addr)); + flow->key_size += sizeof(spec->hdr.src_addr) + + sizeof(mask->hdr.dst_addr); return device_configured; } @@ -746,22 +720,21 @@ dpaa2_configure_flow_icmp(struct rte_flow *flow, mask = (const struct rte_flow_item_icmp *) (pattern->mask ? pattern->mask : default_mask); - key_iova = flow->rule.key_iova + DPAA2_CLS_RULE_OFFSET_ICMP; + key_iova = flow->rule.key_iova + flow->key_size; memcpy((void *)key_iova, (const void *)&spec->hdr.icmp_type, sizeof(uint8_t)); key_iova += sizeof(uint8_t); memcpy((void *)key_iova, (const void *)&spec->hdr.icmp_code, sizeof(uint8_t)); - mask_iova = flow->rule.mask_iova + DPAA2_CLS_RULE_OFFSET_ICMP; + mask_iova = flow->rule.mask_iova + flow->key_size; memcpy((void *)mask_iova, (const void *)&mask->hdr.icmp_type, sizeof(uint8_t)); key_iova += sizeof(uint8_t); memcpy((void *)mask_iova, (const void *)&mask->hdr.icmp_code, sizeof(uint8_t)); - flow->rule.key_size = (DPAA2_CLS_RULE_OFFSET_ICMP + - (2 * sizeof(uint8_t))); + flow->key_size += 2 * sizeof(uint8_t); return device_configured; } @@ -837,13 +810,6 @@ dpaa2_configure_flow_udp(struct rte_flow *flow, if (device_configured & DPAA2_QOS_TABLE_RECONFIGURE) { index = priv->extract.qos_key_cfg.num_extracts; - priv->extract.qos_key_cfg.extracts[index].type = - DPKG_EXTRACT_FROM_HDR; - priv->extract.qos_key_cfg.extracts[index].extract.from_hdr.type = DPKG_FULL_FIELD; - priv->extract.qos_key_cfg.extracts[index].extract.from_hdr.prot = NET_PROT_IP; - priv->extract.qos_key_cfg.extracts[index].extract.from_hdr.field = NH_FLD_IP_PROTO; - index++; - priv->extract.qos_key_cfg.extracts[index].type = DPKG_EXTRACT_FROM_HDR; priv->extract.qos_key_cfg.extracts[index].extract.from_hdr.type = DPKG_FULL_FIELD; @@ -862,13 +828,6 @@ dpaa2_configure_flow_udp(struct rte_flow *flow, if (device_configured & DPAA2_FS_TABLE_RECONFIGURE) { index = priv->extract.fs_key_cfg[group].num_extracts; - priv->extract.fs_key_cfg[group].extracts[index].type = - DPKG_EXTRACT_FROM_HDR; - priv->extract.fs_key_cfg[group].extracts[index].extract.from_hdr.type = DPKG_FULL_FIELD; - priv->extract.fs_key_cfg[group].extracts[index].extract.from_hdr.prot = NET_PROT_IP; - priv->extract.fs_key_cfg[group].extracts[index].extract.from_hdr.field = NH_FLD_IP_PROTO; - index++; - priv->extract.fs_key_cfg[group].extracts[index].type = DPKG_EXTRACT_FROM_HDR; priv->extract.fs_key_cfg[group].extracts[index].extract.from_hdr.type = DPKG_FULL_FIELD; @@ -892,25 +851,21 @@ dpaa2_configure_flow_udp(struct rte_flow *flow, mask = (const struct rte_flow_item_udp *) (pattern->mask ? 
pattern->mask : default_mask); - key_iova = flow->rule.key_iova + DPAA2_CLS_RULE_OFFSET_IPV4 + - (2 * sizeof(uint32_t)); - memset((void *)key_iova, 0x11, sizeof(uint8_t)); - key_iova = flow->rule.key_iova + DPAA2_CLS_RULE_OFFSET_UDP; + key_iova = flow->rule.key_iova + flow->key_size; memcpy((void *)key_iova, (const void *)(&spec->hdr.src_port), sizeof(uint16_t)); key_iova += sizeof(uint16_t); memcpy((void *)key_iova, (const void *)(&spec->hdr.dst_port), sizeof(uint16_t)); - mask_iova = flow->rule.mask_iova + DPAA2_CLS_RULE_OFFSET_UDP; + mask_iova = flow->rule.mask_iova + flow->key_size; memcpy((void *)mask_iova, (const void *)(&mask->hdr.src_port), sizeof(uint16_t)); mask_iova += sizeof(uint16_t); memcpy((void *)mask_iova, (const void *)(&mask->hdr.dst_port), sizeof(uint16_t)); - flow->rule.key_size = (DPAA2_CLS_RULE_OFFSET_UDP + - (2 * sizeof(uint16_t))); + flow->key_size += (2 * sizeof(uint16_t)); return device_configured; } @@ -986,13 +941,6 @@ dpaa2_configure_flow_tcp(struct rte_flow *flow, if (device_configured & DPAA2_QOS_TABLE_RECONFIGURE) { index = priv->extract.qos_key_cfg.num_extracts; - priv->extract.qos_key_cfg.extracts[index].type = - DPKG_EXTRACT_FROM_HDR; - priv->extract.qos_key_cfg.extracts[index].extract.from_hdr.type = DPKG_FULL_FIELD; - priv->extract.qos_key_cfg.extracts[index].extract.from_hdr.prot = NET_PROT_IP; - priv->extract.qos_key_cfg.extracts[index].extract.from_hdr.field = NH_FLD_IP_PROTO; - index++; - priv->extract.qos_key_cfg.extracts[index].type = DPKG_EXTRACT_FROM_HDR; priv->extract.qos_key_cfg.extracts[index].extract.from_hdr.type = DPKG_FULL_FIELD; @@ -1012,13 +960,6 @@ dpaa2_configure_flow_tcp(struct rte_flow *flow, if (device_configured & DPAA2_FS_TABLE_RECONFIGURE) { index = priv->extract.fs_key_cfg[group].num_extracts; - priv->extract.fs_key_cfg[group].extracts[index].type = - DPKG_EXTRACT_FROM_HDR; - priv->extract.fs_key_cfg[group].extracts[index].extract.from_hdr.type = DPKG_FULL_FIELD; - priv->extract.fs_key_cfg[group].extracts[index].extract.from_hdr.prot = NET_PROT_IP; - priv->extract.fs_key_cfg[group].extracts[index].extract.from_hdr.field = NH_FLD_IP_PROTO; - index++; - priv->extract.fs_key_cfg[group].extracts[index].type = DPKG_EXTRACT_FROM_HDR; priv->extract.fs_key_cfg[group].extracts[index].extract.from_hdr.type = DPKG_FULL_FIELD; @@ -1042,25 +983,21 @@ dpaa2_configure_flow_tcp(struct rte_flow *flow, mask = (const struct rte_flow_item_tcp *) (pattern->mask ? 
pattern->mask : default_mask); - key_iova = flow->rule.key_iova + DPAA2_CLS_RULE_OFFSET_IPV4 + - (2 * sizeof(uint32_t)); - memset((void *)key_iova, 0x06, sizeof(uint8_t)); - key_iova = flow->rule.key_iova + DPAA2_CLS_RULE_OFFSET_TCP; + key_iova = flow->rule.key_iova + flow->key_size; memcpy((void *)key_iova, (const void *)(&spec->hdr.src_port), sizeof(uint16_t)); key_iova += sizeof(uint16_t); memcpy((void *)key_iova, (const void *)(&spec->hdr.dst_port), sizeof(uint16_t)); - mask_iova = flow->rule.mask_iova + DPAA2_CLS_RULE_OFFSET_TCP; + mask_iova = flow->rule.mask_iova + flow->key_size; memcpy((void *)mask_iova, (const void *)(&mask->hdr.src_port), sizeof(uint16_t)); mask_iova += sizeof(uint16_t); memcpy((void *)mask_iova, (const void *)(&mask->hdr.dst_port), sizeof(uint16_t)); - flow->rule.key_size = (DPAA2_CLS_RULE_OFFSET_TCP + - (2 * sizeof(uint16_t))); + flow->key_size += 2 * sizeof(uint16_t); return device_configured; } @@ -1136,13 +1073,6 @@ dpaa2_configure_flow_sctp(struct rte_flow *flow, if (device_configured & DPAA2_QOS_TABLE_RECONFIGURE) { index = priv->extract.qos_key_cfg.num_extracts; - priv->extract.qos_key_cfg.extracts[index].type = - DPKG_EXTRACT_FROM_HDR; - priv->extract.qos_key_cfg.extracts[index].extract.from_hdr.type = DPKG_FULL_FIELD; - priv->extract.qos_key_cfg.extracts[index].extract.from_hdr.prot = NET_PROT_IP; - priv->extract.qos_key_cfg.extracts[index].extract.from_hdr.field = NH_FLD_IP_PROTO; - index++; - priv->extract.qos_key_cfg.extracts[index].type = DPKG_EXTRACT_FROM_HDR; priv->extract.qos_key_cfg.extracts[index].extract.from_hdr.type = DPKG_FULL_FIELD; @@ -1162,13 +1092,6 @@ dpaa2_configure_flow_sctp(struct rte_flow *flow, if (device_configured & DPAA2_FS_TABLE_RECONFIGURE) { index = priv->extract.fs_key_cfg[group].num_extracts; - priv->extract.fs_key_cfg[group].extracts[index].type = - DPKG_EXTRACT_FROM_HDR; - priv->extract.fs_key_cfg[group].extracts[index].extract.from_hdr.type = DPKG_FULL_FIELD; - priv->extract.fs_key_cfg[group].extracts[index].extract.from_hdr.prot = NET_PROT_IP; - priv->extract.fs_key_cfg[group].extracts[index].extract.from_hdr.field = NH_FLD_IP_PROTO; - index++; - priv->extract.fs_key_cfg[group].extracts[index].type = DPKG_EXTRACT_FROM_HDR; priv->extract.fs_key_cfg[group].extracts[index].extract.from_hdr.type = DPKG_FULL_FIELD; @@ -1192,25 +1115,22 @@ dpaa2_configure_flow_sctp(struct rte_flow *flow, mask = (const struct rte_flow_item_sctp *) (pattern->mask ? pattern->mask : default_mask); - key_iova = flow->rule.key_iova + DPAA2_CLS_RULE_OFFSET_IPV4 + - (2 * sizeof(uint32_t)); - memset((void *)key_iova, 0x84, sizeof(uint8_t)); - key_iova = flow->rule.key_iova + DPAA2_CLS_RULE_OFFSET_SCTP; + key_iova = flow->rule.key_iova + flow->key_size; memcpy((void *)key_iova, (const void *)(&spec->hdr.src_port), sizeof(uint16_t)); key_iova += sizeof(uint16_t); memcpy((void *)key_iova, (const void *)(&spec->hdr.dst_port), sizeof(uint16_t)); - mask_iova = flow->rule.mask_iova + DPAA2_CLS_RULE_OFFSET_SCTP; + mask_iova = flow->rule.mask_iova + flow->key_size; memcpy((void *)mask_iova, (const void *)(&mask->hdr.src_port), sizeof(uint16_t)); mask_iova += sizeof(uint16_t); memcpy((void *)mask_iova, (const void *)(&mask->hdr.dst_port), sizeof(uint16_t)); - flow->rule.key_size = (DPAA2_CLS_RULE_OFFSET_SCTP + - (2 * sizeof(uint16_t))); + flow->key_size += 2 * sizeof(uint16_t); + return device_configured; } @@ -1313,15 +1233,15 @@ dpaa2_configure_flow_gre(struct rte_flow *flow, mask = (const struct rte_flow_item_gre *) (pattern->mask ? 
pattern->mask : default_mask); - key_iova = flow->rule.key_iova + DPAA2_CLS_RULE_OFFSET_GRE; + key_iova = flow->rule.key_iova + flow->key_size; memcpy((void *)key_iova, (const void *)(&spec->protocol), sizeof(rte_be16_t)); - mask_iova = flow->rule.mask_iova + DPAA2_CLS_RULE_OFFSET_GRE; + mask_iova = flow->rule.mask_iova + flow->key_size; memcpy((void *)mask_iova, (const void *)(&mask->protocol), sizeof(rte_be16_t)); - flow->rule.key_size = (DPAA2_CLS_RULE_OFFSET_GRE + sizeof(rte_be16_t)); + flow->key_size += sizeof(rte_be16_t); return device_configured; } @@ -1503,6 +1423,7 @@ dpaa2_generic_flow_set(struct rte_flow *flow, action.flow_id = action.flow_id % nic_attr.num_rx_tcs; index = flow->index + (flow->tc_id * nic_attr.fs_entries); + flow->rule.key_size = flow->key_size; ret = dpni_add_qos_entry(dpni, CMD_PRI_LOW, priv->token, &flow->rule, flow->tc_id, index, @@ -1606,6 +1527,7 @@ dpaa2_generic_flow_set(struct rte_flow *flow, /* Add Rule into QoS table */ index = flow->index + (flow->tc_id * nic_attr.fs_entries); + flow->rule.key_size = flow->key_size; ret = dpni_add_qos_entry(dpni, CMD_PRI_LOW, priv->token, &flow->rule, flow->tc_id, index, 0, 0); @@ -1862,7 +1784,7 @@ struct rte_flow *dpaa2_flow_create(struct rte_eth_dev *dev, flow->rule.key_iova = key_iova; flow->rule.mask_iova = mask_iova; - flow->rule.key_size = 0; + flow->key_size = 0; switch (dpaa2_filter_type) { case RTE_ETH_FILTER_GENERIC:
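Taken together, the hunks above replace the fixed DPAA2_CLS_RULE_OFFSET_* layout with a running flow->key_size cursor: each matched field is appended at the current end of the key/mask buffers and the cursor advances, so only matched fields occupy key space. A self-contained sketch of that packing scheme (the helper names and buffer sizes are assumptions, not the driver's API):

#include <stdint.h>
#include <string.h>

/* Append one field's spec and mask at the current key offset and
 * advance the shared cursor, mirroring the flow->key_size accumulation.
 */
static void
pack_rule_field(uint8_t *key, uint8_t *mask, size_t *off,
		const void *spec, const void *msk, size_t len)
{
	memcpy(key + *off, spec, len);
	memcpy(mask + *off, msk, len);
	*off += len;
}

/* Example: a rule matching only two UDP ports packs into 4 bytes. */
static size_t
pack_udp_ports(uint8_t key[64], uint8_t mask[64],
	       uint16_t sport, uint16_t dport)
{
	size_t off = 0;
	uint16_t full_mask = 0xffff;

	pack_rule_field(key, mask, &off, &sport, &full_mask, sizeof(sport));
	pack_rule_field(key, mask, &off, &dport, &full_mask, sizeof(dport));
	return off; /* final key size handed to the table-entry add call */
}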
From patchwork Tue Jul 7 09:22:31 2020
X-Patchwork-Submitter: Hemant Agrawal
X-Patchwork-Id: 234970
From: Hemant Agrawal
To: dev@dpdk.org
Cc: ferruh.yigit@intel.com, Jun Yang
Date: Tue, 7 Jul 2020 14:52:31 +0530
Message-Id: <20200707092244.12791-17-hemant.agrawal@nxp.com>
In-Reply-To: <20200707092244.12791-1-hemant.agrawal@nxp.com>
References: <20200527132326.1382-1-hemant.agrawal@nxp.com> <20200707092244.12791-1-hemant.agrawal@nxp.com>
Subject: [dpdk-dev] [PATCH v2 16/29] net/dpaa2: support key extracts of flow API

From: Jun Yang

1) Support QoS extracts and TC extracts for multiple TCs.
2) The protocol type of the L2 extract is used to parse L3; the next protocol of the L3 extract is used to parse L4.
3) Use generic IP key extracts instead of separate IPv4 and IPv6 ones.
4) Special handling for IP address extracts: put the IP(v4/v6) address extract(s)/rule(s) at the end of the extracts array so that the remaining fields stay at fixed positions.

Signed-off-by: Jun Yang --- drivers/net/dpaa2/dpaa2_ethdev.c | 35 +- drivers/net/dpaa2/dpaa2_ethdev.h | 43 +- drivers/net/dpaa2/dpaa2_flow.c | 3628 +++++++++++++++++++++--------- 3 files changed, 2665 insertions(+), 1041 deletions(-) -- 2.17.1 diff --git a/drivers/net/dpaa2/dpaa2_ethdev.c b/drivers/net/dpaa2/dpaa2_ethdev.c index 8edd4b3cd..492b65840 100644 --- a/drivers/net/dpaa2/dpaa2_ethdev.c +++ b/drivers/net/dpaa2/dpaa2_ethdev.c @@ -1,7 +1,7 @@ /* * SPDX-License-Identifier: BSD-3-Clause * * Copyright (c) 2016 Freescale Semiconductor, Inc. All rights reserved. - * Copyright 2016 NXP + * Copyright 2016-2020 NXP * */ @@ -2501,23 +2501,41 @@ dpaa2_dev_init(struct rte_eth_dev *eth_dev) eth_dev->tx_pkt_burst = dpaa2_dev_tx; /*Init fields w.r.t.
classficaition*/ - memset(&priv->extract.qos_key_cfg, 0, sizeof(struct dpkg_profile_cfg)); + memset(&priv->extract.qos_key_extract, 0, + sizeof(struct dpaa2_key_extract)); priv->extract.qos_extract_param = (size_t)rte_malloc(NULL, 256, 64); if (!priv->extract.qos_extract_param) { DPAA2_PMD_ERR(" Error(%d) in allocation resources for flow " " classificaiton ", ret); goto init_err; } + priv->extract.qos_key_extract.key_info.ipv4_src_offset = + IP_ADDRESS_OFFSET_INVALID; + priv->extract.qos_key_extract.key_info.ipv4_dst_offset = + IP_ADDRESS_OFFSET_INVALID; + priv->extract.qos_key_extract.key_info.ipv6_src_offset = + IP_ADDRESS_OFFSET_INVALID; + priv->extract.qos_key_extract.key_info.ipv6_dst_offset = + IP_ADDRESS_OFFSET_INVALID; + for (i = 0; i < MAX_TCS; i++) { - memset(&priv->extract.fs_key_cfg[i], 0, - sizeof(struct dpkg_profile_cfg)); - priv->extract.fs_extract_param[i] = + memset(&priv->extract.tc_key_extract[i], 0, + sizeof(struct dpaa2_key_extract)); + priv->extract.tc_extract_param[i] = (size_t)rte_malloc(NULL, 256, 64); - if (!priv->extract.fs_extract_param[i]) { + if (!priv->extract.tc_extract_param[i]) { DPAA2_PMD_ERR(" Error(%d) in allocation resources for flow classificaiton", ret); goto init_err; } + priv->extract.tc_key_extract[i].key_info.ipv4_src_offset = + IP_ADDRESS_OFFSET_INVALID; + priv->extract.tc_key_extract[i].key_info.ipv4_dst_offset = + IP_ADDRESS_OFFSET_INVALID; + priv->extract.tc_key_extract[i].key_info.ipv6_src_offset = + IP_ADDRESS_OFFSET_INVALID; + priv->extract.tc_key_extract[i].key_info.ipv6_dst_offset = + IP_ADDRESS_OFFSET_INVALID; } ret = dpni_set_max_frame_length(dpni_dev, CMD_PRI_LOW, priv->token, @@ -2593,8 +2611,9 @@ dpaa2_dev_uninit(struct rte_eth_dev *eth_dev) rte_free(dpni); for (i = 0; i < MAX_TCS; i++) { - if (priv->extract.fs_extract_param[i]) - rte_free((void *)(size_t)priv->extract.fs_extract_param[i]); + if (priv->extract.tc_extract_param[i]) + rte_free((void *) + (size_t)priv->extract.tc_extract_param[i]); } if (priv->extract.qos_extract_param) diff --git a/drivers/net/dpaa2/dpaa2_ethdev.h b/drivers/net/dpaa2/dpaa2_ethdev.h index c7fb6539f..030c625e3 100644 --- a/drivers/net/dpaa2/dpaa2_ethdev.h +++ b/drivers/net/dpaa2/dpaa2_ethdev.h @@ -96,10 +96,39 @@ extern enum pmd_dpaa2_ts dpaa2_enable_ts; #define DPAA2_QOS_TABLE_RECONFIGURE 1 #define DPAA2_FS_TABLE_RECONFIGURE 2 +#define DPAA2_QOS_TABLE_IPADDR_EXTRACT 4 +#define DPAA2_FS_TABLE_IPADDR_EXTRACT 8 + + /*Externaly defined*/ extern const struct rte_flow_ops dpaa2_flow_ops; extern enum rte_filter_type dpaa2_filter_type; +#define IP_ADDRESS_OFFSET_INVALID (-1) + +struct dpaa2_key_info { + uint8_t key_offset[DPKG_MAX_NUM_OF_EXTRACTS]; + uint8_t key_size[DPKG_MAX_NUM_OF_EXTRACTS]; + /* Special for IP address. 
*/ + int ipv4_src_offset; + int ipv4_dst_offset; + int ipv6_src_offset; + int ipv6_dst_offset; + uint8_t key_total_size; +}; + +struct dpaa2_key_extract { + struct dpkg_profile_cfg dpkg; + struct dpaa2_key_info key_info; +}; + +struct extract_s { + struct dpaa2_key_extract qos_key_extract; + struct dpaa2_key_extract tc_key_extract[MAX_TCS]; + uint64_t qos_extract_param; + uint64_t tc_extract_param[MAX_TCS]; +}; + struct dpaa2_dev_priv { void *hw; int32_t hw_id; @@ -122,17 +151,9 @@ struct dpaa2_dev_priv { uint8_t max_cgs; uint8_t cgid_in_use[MAX_RX_QUEUES]; - struct pattern_s { - uint8_t item_count; - uint8_t pattern_type[DPKG_MAX_NUM_OF_EXTRACTS]; - } pattern[MAX_TCS + 1]; - - struct extract_s { - struct dpkg_profile_cfg qos_key_cfg; - struct dpkg_profile_cfg fs_key_cfg[MAX_TCS]; - uint64_t qos_extract_param; - uint64_t fs_extract_param[MAX_TCS]; - } extract; + struct extract_s extract; + uint8_t *qos_index; + uint8_t *fs_index; uint16_t ss_offset; uint64_t ss_iova; diff --git a/drivers/net/dpaa2/dpaa2_flow.c b/drivers/net/dpaa2/dpaa2_flow.c index 05d115c78..779cb64ab 100644 --- a/drivers/net/dpaa2/dpaa2_flow.c +++ b/drivers/net/dpaa2/dpaa2_flow.c @@ -1,5 +1,5 @@ -/* * SPDX-License-Identifier: BSD-3-Clause - * Copyright 2018 NXP +/* SPDX-License-Identifier: BSD-3-Clause + * Copyright 2018-2020 NXP */ #include @@ -22,15 +22,44 @@ #include #include +/* Workaround to discriminate the UDP/TCP/SCTP + * with next protocol of l3. + * MC/WRIOP are not able to identify + * the l4 protocol with l4 ports. + */ +int mc_l4_port_identification; + +enum flow_rule_ipaddr_type { + FLOW_NONE_IPADDR, + FLOW_IPV4_ADDR, + FLOW_IPV6_ADDR +}; + +struct flow_rule_ipaddr { + enum flow_rule_ipaddr_type ipaddr_type; + int qos_ipsrc_offset; + int qos_ipdst_offset; + int fs_ipsrc_offset; + int fs_ipdst_offset; +}; + struct rte_flow { LIST_ENTRY(rte_flow) next; /**< Pointer to the next flow structure. */ - struct dpni_rule_cfg rule; + struct dpni_rule_cfg qos_rule; + struct dpni_rule_cfg fs_rule; + uint16_t qos_index; + uint16_t fs_index; uint8_t key_size; - uint8_t tc_id; + uint8_t tc_id; /** Traffic Class ID. */ uint8_t flow_type; - uint8_t index; + uint8_t tc_index; /** index within this Traffic Class. */ enum rte_flow_action_type action; uint16_t flow_id; + /* Special for IP address to specify the offset + * in key/mask. 
+ */ + struct flow_rule_ipaddr ipaddr_rule; + struct dpni_fs_action_cfg action_cfg; }; static const @@ -54,166 +83,717 @@ enum rte_flow_action_type dpaa2_supported_action_type[] = { RTE_FLOW_ACTION_TYPE_RSS }; +/* Max of enum rte_flow_item_type + 1, for both IPv4 and IPv6*/ +#define DPAA2_FLOW_ITEM_TYPE_GENERIC_IP (RTE_FLOW_ITEM_TYPE_META + 1) + enum rte_filter_type dpaa2_filter_type = RTE_ETH_FILTER_NONE; static const void *default_mask; +static inline void dpaa2_flow_extract_key_set( + struct dpaa2_key_info *key_info, int index, uint8_t size) +{ + key_info->key_size[index] = size; + if (index > 0) { + key_info->key_offset[index] = + key_info->key_offset[index - 1] + + key_info->key_size[index - 1]; + } else { + key_info->key_offset[index] = 0; + } + key_info->key_total_size += size; +} + +static int dpaa2_flow_extract_add( + struct dpaa2_key_extract *key_extract, + enum net_prot prot, + uint32_t field, uint8_t field_size) +{ + int index, ip_src = -1, ip_dst = -1; + struct dpkg_profile_cfg *dpkg = &key_extract->dpkg; + struct dpaa2_key_info *key_info = &key_extract->key_info; + + if (dpkg->num_extracts >= + DPKG_MAX_NUM_OF_EXTRACTS) { + DPAA2_PMD_WARN("Number of extracts overflows"); + return -1; + } + /* Before reorder, the IP SRC and IP DST are already last + * extract(s). + */ + for (index = 0; index < dpkg->num_extracts; index++) { + if (dpkg->extracts[index].extract.from_hdr.prot == + NET_PROT_IP) { + if (dpkg->extracts[index].extract.from_hdr.field == + NH_FLD_IP_SRC) { + ip_src = index; + } + if (dpkg->extracts[index].extract.from_hdr.field == + NH_FLD_IP_DST) { + ip_dst = index; + } + } + } + + if (ip_src >= 0) + RTE_ASSERT((ip_src + 2) >= dpkg->num_extracts); + + if (ip_dst >= 0) + RTE_ASSERT((ip_dst + 2) >= dpkg->num_extracts); + + if (prot == NET_PROT_IP && + (field == NH_FLD_IP_SRC || + field == NH_FLD_IP_DST)) { + index = dpkg->num_extracts; + } else { + if (ip_src >= 0 && ip_dst >= 0) + index = dpkg->num_extracts - 2; + else if (ip_src >= 0 || ip_dst >= 0) + index = dpkg->num_extracts - 1; + else + index = dpkg->num_extracts; + } + + dpkg->extracts[index].type = DPKG_EXTRACT_FROM_HDR; + dpkg->extracts[index].extract.from_hdr.type = DPKG_FULL_FIELD; + dpkg->extracts[index].extract.from_hdr.prot = prot; + dpkg->extracts[index].extract.from_hdr.field = field; + if (prot == NET_PROT_IP && + (field == NH_FLD_IP_SRC || + field == NH_FLD_IP_DST)) { + dpaa2_flow_extract_key_set(key_info, index, 0); + } else { + dpaa2_flow_extract_key_set(key_info, index, field_size); + } + + if (prot == NET_PROT_IP) { + if (field == NH_FLD_IP_SRC) { + if (key_info->ipv4_dst_offset >= 0) { + key_info->ipv4_src_offset = + key_info->ipv4_dst_offset + + NH_FLD_IPV4_ADDR_SIZE; + } else { + key_info->ipv4_src_offset = + key_info->key_offset[index - 1] + + key_info->key_size[index - 1]; + } + if (key_info->ipv6_dst_offset >= 0) { + key_info->ipv6_src_offset = + key_info->ipv6_dst_offset + + NH_FLD_IPV6_ADDR_SIZE; + } else { + key_info->ipv6_src_offset = + key_info->key_offset[index - 1] + + key_info->key_size[index - 1]; + } + } else if (field == NH_FLD_IP_DST) { + if (key_info->ipv4_src_offset >= 0) { + key_info->ipv4_dst_offset = + key_info->ipv4_src_offset + + NH_FLD_IPV4_ADDR_SIZE; + } else { + key_info->ipv4_dst_offset = + key_info->key_offset[index - 1] + + key_info->key_size[index - 1]; + } + if (key_info->ipv6_src_offset >= 0) { + key_info->ipv6_dst_offset = + key_info->ipv6_src_offset + + NH_FLD_IPV6_ADDR_SIZE; + } else { + key_info->ipv6_dst_offset = + key_info->key_offset[index - 1] + + 
key_info->key_size[index - 1]; + } + } + } + + if (index == dpkg->num_extracts) { + dpkg->num_extracts++; + return 0; + } + + if (ip_src >= 0) { + ip_src++; + dpkg->extracts[ip_src].type = + DPKG_EXTRACT_FROM_HDR; + dpkg->extracts[ip_src].extract.from_hdr.type = + DPKG_FULL_FIELD; + dpkg->extracts[ip_src].extract.from_hdr.prot = + NET_PROT_IP; + dpkg->extracts[ip_src].extract.from_hdr.field = + NH_FLD_IP_SRC; + dpaa2_flow_extract_key_set(key_info, ip_src, 0); + key_info->ipv4_src_offset += field_size; + key_info->ipv6_src_offset += field_size; + } + if (ip_dst >= 0) { + ip_dst++; + dpkg->extracts[ip_dst].type = + DPKG_EXTRACT_FROM_HDR; + dpkg->extracts[ip_dst].extract.from_hdr.type = + DPKG_FULL_FIELD; + dpkg->extracts[ip_dst].extract.from_hdr.prot = + NET_PROT_IP; + dpkg->extracts[ip_dst].extract.from_hdr.field = + NH_FLD_IP_DST; + dpaa2_flow_extract_key_set(key_info, ip_dst, 0); + key_info->ipv4_dst_offset += field_size; + key_info->ipv6_dst_offset += field_size; + } + + dpkg->num_extracts++; + + return 0; +} + +/* Protocol discrimination. + * Discriminate IPv4/IPv6/vLan by Eth type. + * Discriminate UDP/TCP/ICMP by next proto of IP. + */ +static inline int +dpaa2_flow_proto_discrimination_extract( + struct dpaa2_key_extract *key_extract, + enum rte_flow_item_type type) +{ + if (type == RTE_FLOW_ITEM_TYPE_ETH) { + return dpaa2_flow_extract_add( + key_extract, NET_PROT_ETH, + NH_FLD_ETH_TYPE, + sizeof(rte_be16_t)); + } else if (type == (enum rte_flow_item_type) + DPAA2_FLOW_ITEM_TYPE_GENERIC_IP) { + return dpaa2_flow_extract_add( + key_extract, NET_PROT_IP, + NH_FLD_IP_PROTO, + NH_FLD_IP_PROTO_SIZE); + } + + return -1; +} + +static inline int dpaa2_flow_extract_search( + struct dpkg_profile_cfg *dpkg, + enum net_prot prot, uint32_t field) +{ + int i; + + for (i = 0; i < dpkg->num_extracts; i++) { + if (dpkg->extracts[i].extract.from_hdr.prot == prot && + dpkg->extracts[i].extract.from_hdr.field == field) { + return i; + } + } + + return -1; +} + +static inline int dpaa2_flow_extract_key_offset( + struct dpaa2_key_extract *key_extract, + enum net_prot prot, uint32_t field) +{ + int i; + struct dpkg_profile_cfg *dpkg = &key_extract->dpkg; + struct dpaa2_key_info *key_info = &key_extract->key_info; + + if (prot == NET_PROT_IPV4 || + prot == NET_PROT_IPV6) + i = dpaa2_flow_extract_search(dpkg, NET_PROT_IP, field); + else + i = dpaa2_flow_extract_search(dpkg, prot, field); + + if (i >= 0) { + if (prot == NET_PROT_IPV4 && field == NH_FLD_IP_SRC) + return key_info->ipv4_src_offset; + else if (prot == NET_PROT_IPV4 && field == NH_FLD_IP_DST) + return key_info->ipv4_dst_offset; + else if (prot == NET_PROT_IPV6 && field == NH_FLD_IP_SRC) + return key_info->ipv6_src_offset; + else if (prot == NET_PROT_IPV6 && field == NH_FLD_IP_DST) + return key_info->ipv6_dst_offset; + else + return key_info->key_offset[i]; + } else { + return -1; + } +} + +struct proto_discrimination { + enum rte_flow_item_type type; + union { + rte_be16_t eth_type; + uint8_t ip_proto; + }; +}; + +static int +dpaa2_flow_proto_discrimination_rule( + struct dpaa2_dev_priv *priv, struct rte_flow *flow, + struct proto_discrimination proto, int group) +{ + enum net_prot prot; + uint32_t field; + int offset; + size_t key_iova; + size_t mask_iova; + rte_be16_t eth_type; + uint8_t ip_proto; + + if (proto.type == RTE_FLOW_ITEM_TYPE_ETH) { + prot = NET_PROT_ETH; + field = NH_FLD_ETH_TYPE; + } else if (proto.type == DPAA2_FLOW_ITEM_TYPE_GENERIC_IP) { + prot = NET_PROT_IP; + field = NH_FLD_IP_PROTO; + } else { + DPAA2_PMD_ERR( + "Only Eth and 
IP support to discriminate next proto."); + return -1; + } + + offset = dpaa2_flow_extract_key_offset(&priv->extract.qos_key_extract, + prot, field); + if (offset < 0) { + DPAA2_PMD_ERR("QoS prot %d field %d extract failed", + prot, field); + return -1; + } + key_iova = flow->qos_rule.key_iova + offset; + mask_iova = flow->qos_rule.mask_iova + offset; + if (proto.type == RTE_FLOW_ITEM_TYPE_ETH) { + eth_type = proto.eth_type; + memcpy((void *)key_iova, (const void *)(ð_type), + sizeof(rte_be16_t)); + eth_type = 0xffff; + memcpy((void *)mask_iova, (const void *)(ð_type), + sizeof(rte_be16_t)); + } else { + ip_proto = proto.ip_proto; + memcpy((void *)key_iova, (const void *)(&ip_proto), + sizeof(uint8_t)); + ip_proto = 0xff; + memcpy((void *)mask_iova, (const void *)(&ip_proto), + sizeof(uint8_t)); + } + + offset = dpaa2_flow_extract_key_offset( + &priv->extract.tc_key_extract[group], + prot, field); + if (offset < 0) { + DPAA2_PMD_ERR("FS prot %d field %d extract failed", + prot, field); + return -1; + } + key_iova = flow->fs_rule.key_iova + offset; + mask_iova = flow->fs_rule.mask_iova + offset; + + if (proto.type == RTE_FLOW_ITEM_TYPE_ETH) { + eth_type = proto.eth_type; + memcpy((void *)key_iova, (const void *)(ð_type), + sizeof(rte_be16_t)); + eth_type = 0xffff; + memcpy((void *)mask_iova, (const void *)(ð_type), + sizeof(rte_be16_t)); + } else { + ip_proto = proto.ip_proto; + memcpy((void *)key_iova, (const void *)(&ip_proto), + sizeof(uint8_t)); + ip_proto = 0xff; + memcpy((void *)mask_iova, (const void *)(&ip_proto), + sizeof(uint8_t)); + } + + return 0; +} + +static inline int +dpaa2_flow_rule_data_set( + struct dpaa2_key_extract *key_extract, + struct dpni_rule_cfg *rule, + enum net_prot prot, uint32_t field, + const void *key, const void *mask, int size) +{ + int offset = dpaa2_flow_extract_key_offset(key_extract, + prot, field); + + if (offset < 0) { + DPAA2_PMD_ERR("prot %d, field %d extract failed", + prot, field); + return -1; + } + memcpy((void *)(size_t)(rule->key_iova + offset), key, size); + memcpy((void *)(size_t)(rule->mask_iova + offset), mask, size); + + return 0; +} + +static inline int +_dpaa2_flow_rule_move_ipaddr_tail( + struct dpaa2_key_extract *key_extract, + struct dpni_rule_cfg *rule, int src_offset, + uint32_t field, bool ipv4) +{ + size_t key_src; + size_t mask_src; + size_t key_dst; + size_t mask_dst; + int dst_offset, len; + enum net_prot prot; + char tmp[NH_FLD_IPV6_ADDR_SIZE]; + + if (field != NH_FLD_IP_SRC && + field != NH_FLD_IP_DST) { + DPAA2_PMD_ERR("Field of IP addr reorder must be IP SRC/DST"); + return -1; + } + if (ipv4) + prot = NET_PROT_IPV4; + else + prot = NET_PROT_IPV6; + dst_offset = dpaa2_flow_extract_key_offset(key_extract, + prot, field); + if (dst_offset < 0) { + DPAA2_PMD_ERR("Field %d reorder extract failed", field); + return -1; + } + key_src = rule->key_iova + src_offset; + mask_src = rule->mask_iova + src_offset; + key_dst = rule->key_iova + dst_offset; + mask_dst = rule->mask_iova + dst_offset; + if (ipv4) + len = sizeof(rte_be32_t); + else + len = NH_FLD_IPV6_ADDR_SIZE; + + memcpy(tmp, (char *)key_src, len); + memcpy((char *)key_dst, tmp, len); + + memcpy(tmp, (char *)mask_src, len); + memcpy((char *)mask_dst, tmp, len); + + return 0; +} + +static inline int +dpaa2_flow_rule_move_ipaddr_tail( + struct rte_flow *flow, struct dpaa2_dev_priv *priv, + int fs_group) +{ + int ret; + enum net_prot prot; + + if (flow->ipaddr_rule.ipaddr_type == FLOW_NONE_IPADDR) + return 0; + + if (flow->ipaddr_rule.ipaddr_type == FLOW_IPV4_ADDR) + prot = 
NET_PROT_IPV4; + else + prot = NET_PROT_IPV6; + + if (flow->ipaddr_rule.qos_ipsrc_offset >= 0) { + ret = _dpaa2_flow_rule_move_ipaddr_tail( + &priv->extract.qos_key_extract, + &flow->qos_rule, + flow->ipaddr_rule.qos_ipsrc_offset, + NH_FLD_IP_SRC, prot == NET_PROT_IPV4); + if (ret) { + DPAA2_PMD_ERR("QoS src address reorder failed"); + return -1; + } + flow->ipaddr_rule.qos_ipsrc_offset = + dpaa2_flow_extract_key_offset( + &priv->extract.qos_key_extract, + prot, NH_FLD_IP_SRC); + } + + if (flow->ipaddr_rule.qos_ipdst_offset >= 0) { + ret = _dpaa2_flow_rule_move_ipaddr_tail( + &priv->extract.qos_key_extract, + &flow->qos_rule, + flow->ipaddr_rule.qos_ipdst_offset, + NH_FLD_IP_DST, prot == NET_PROT_IPV4); + if (ret) { + DPAA2_PMD_ERR("QoS dst address reorder failed"); + return -1; + } + flow->ipaddr_rule.qos_ipdst_offset = + dpaa2_flow_extract_key_offset( + &priv->extract.qos_key_extract, + prot, NH_FLD_IP_DST); + } + + if (flow->ipaddr_rule.fs_ipsrc_offset >= 0) { + ret = _dpaa2_flow_rule_move_ipaddr_tail( + &priv->extract.tc_key_extract[fs_group], + &flow->fs_rule, + flow->ipaddr_rule.fs_ipsrc_offset, + NH_FLD_IP_SRC, prot == NET_PROT_IPV4); + if (ret) { + DPAA2_PMD_ERR("FS src address reorder failed"); + return -1; + } + flow->ipaddr_rule.fs_ipsrc_offset = + dpaa2_flow_extract_key_offset( + &priv->extract.tc_key_extract[fs_group], + prot, NH_FLD_IP_SRC); + } + if (flow->ipaddr_rule.fs_ipdst_offset >= 0) { + ret = _dpaa2_flow_rule_move_ipaddr_tail( + &priv->extract.tc_key_extract[fs_group], + &flow->fs_rule, + flow->ipaddr_rule.fs_ipdst_offset, + NH_FLD_IP_DST, prot == NET_PROT_IPV4); + if (ret) { + DPAA2_PMD_ERR("FS dst address reorder failed"); + return -1; + } + flow->ipaddr_rule.fs_ipdst_offset = + dpaa2_flow_extract_key_offset( + &priv->extract.tc_key_extract[fs_group], + prot, NH_FLD_IP_DST); + } + + return 0; +} + static int dpaa2_configure_flow_eth(struct rte_flow *flow, struct rte_eth_dev *dev, const struct rte_flow_attr *attr, const struct rte_flow_item *pattern, const struct rte_flow_action actions[] __rte_unused, - struct rte_flow_error *error __rte_unused) + struct rte_flow_error *error __rte_unused, + int *device_configured) { - int index, j = 0; - size_t key_iova; - size_t mask_iova; - int device_configured = 0, entry_found = 0; + int index, ret; + int local_cfg = 0; uint32_t group; const struct rte_flow_item_eth *spec, *mask; /* TODO: Currently upper bound of range parameter is not implemented */ const struct rte_flow_item_eth *last __rte_unused; struct dpaa2_dev_priv *priv = dev->data->dev_private; + const char zero_cmp[RTE_ETHER_ADDR_LEN] = {0}; group = attr->group; - /* DPAA2 platform has a limitation that extract parameter can not be */ - /* more than DPKG_MAX_NUM_OF_EXTRACTS. Verify this limitation too.*/ - /* TODO: pattern is an array of 9 elements where 9th pattern element */ - /* is for QoS table and 1-8th pattern element is for FS tables. */ - /* It can be changed to macro. */ - if (priv->pattern[8].item_count >= DPKG_MAX_NUM_OF_EXTRACTS) { - DPAA2_PMD_ERR("Maximum limit for different pattern type = %d\n", - DPKG_MAX_NUM_OF_EXTRACTS); - return -ENOTSUP; + /* Parse pattern list to get the matching parameters */ + spec = (const struct rte_flow_item_eth *)pattern->spec; + last = (const struct rte_flow_item_eth *)pattern->last; + mask = (const struct rte_flow_item_eth *) + (pattern->mask ? pattern->mask : default_mask); + if (!spec) { + /* Don't care any field of eth header, + * only care eth protocol. 
+ */ + DPAA2_PMD_WARN("No pattern spec for Eth flow, just skip"); + return 0; } - if (priv->pattern[group].item_count >= DPKG_MAX_NUM_OF_EXTRACTS) { - DPAA2_PMD_ERR("Maximum limit for different pattern type = %d\n", - DPKG_MAX_NUM_OF_EXTRACTS); - return -ENOTSUP; - } + /* Get traffic class index and flow id to be configured */ + flow->tc_id = group; + flow->tc_index = attr->priority; + + if (memcmp((const char *)&mask->src, zero_cmp, RTE_ETHER_ADDR_LEN)) { + index = dpaa2_flow_extract_search( + &priv->extract.qos_key_extract.dpkg, + NET_PROT_ETH, NH_FLD_ETH_SA); + if (index < 0) { + ret = dpaa2_flow_extract_add( + &priv->extract.qos_key_extract, + NET_PROT_ETH, NH_FLD_ETH_SA, + RTE_ETHER_ADDR_LEN); + if (ret) { + DPAA2_PMD_ERR("QoS Extract add ETH_SA failed."); - for (j = 0; j < priv->pattern[8].item_count; j++) { - if (priv->pattern[8].pattern_type[j] != pattern->type) { - continue; - } else { - entry_found = 1; - break; + return -1; + } + local_cfg |= DPAA2_QOS_TABLE_RECONFIGURE; + } + index = dpaa2_flow_extract_search( + &priv->extract.tc_key_extract[group].dpkg, + NET_PROT_ETH, NH_FLD_ETH_SA); + if (index < 0) { + ret = dpaa2_flow_extract_add( + &priv->extract.tc_key_extract[group], + NET_PROT_ETH, NH_FLD_ETH_SA, + RTE_ETHER_ADDR_LEN); + if (ret) { + DPAA2_PMD_ERR("FS Extract add ETH_SA failed."); + return -1; + } + local_cfg |= DPAA2_FS_TABLE_RECONFIGURE; } - } - if (!entry_found) { - priv->pattern[8].pattern_type[j] = pattern->type; - priv->pattern[8].item_count++; - device_configured |= DPAA2_QOS_TABLE_RECONFIGURE; - } + ret = dpaa2_flow_rule_move_ipaddr_tail(flow, priv, group); + if (ret) { + DPAA2_PMD_ERR( + "Move ipaddr before ETH_SA rule set failed"); + return -1; + } - entry_found = 0; - for (j = 0; j < priv->pattern[group].item_count; j++) { - if (priv->pattern[group].pattern_type[j] != pattern->type) { - continue; - } else { - entry_found = 1; - break; + ret = dpaa2_flow_rule_data_set( + &priv->extract.qos_key_extract, + &flow->qos_rule, + NET_PROT_ETH, + NH_FLD_ETH_SA, + &spec->src.addr_bytes, + &mask->src.addr_bytes, + sizeof(struct rte_ether_addr)); + if (ret) { + DPAA2_PMD_ERR("QoS NH_FLD_ETH_SA rule data set failed"); + return -1; } - } - if (!entry_found) { - priv->pattern[group].pattern_type[j] = pattern->type; - priv->pattern[group].item_count++; - device_configured |= DPAA2_FS_TABLE_RECONFIGURE; + ret = dpaa2_flow_rule_data_set( + &priv->extract.tc_key_extract[group], + &flow->fs_rule, + NET_PROT_ETH, + NH_FLD_ETH_SA, + &spec->src.addr_bytes, + &mask->src.addr_bytes, + sizeof(struct rte_ether_addr)); + if (ret) { + DPAA2_PMD_ERR("FS NH_FLD_ETH_SA rule data set failed"); + return -1; + } } - /* Get traffic class index and flow id to be configured */ - flow->tc_id = group; - flow->index = attr->priority; - - if (device_configured & DPAA2_QOS_TABLE_RECONFIGURE) { - index = priv->extract.qos_key_cfg.num_extracts; - priv->extract.qos_key_cfg.extracts[index].type = - DPKG_EXTRACT_FROM_HDR; - priv->extract.qos_key_cfg.extracts[index].extract.from_hdr.type = DPKG_FULL_FIELD; - priv->extract.qos_key_cfg.extracts[index].extract.from_hdr.prot = NET_PROT_ETH; - priv->extract.qos_key_cfg.extracts[index].extract.from_hdr.field = NH_FLD_ETH_SA; - index++; - - priv->extract.qos_key_cfg.extracts[index].type = - DPKG_EXTRACT_FROM_HDR; - priv->extract.qos_key_cfg.extracts[index].extract.from_hdr.type = DPKG_FULL_FIELD; - priv->extract.qos_key_cfg.extracts[index].extract.from_hdr.prot = NET_PROT_ETH; - priv->extract.qos_key_cfg.extracts[index].extract.from_hdr.field = NH_FLD_ETH_DA; - 
index++; - - priv->extract.qos_key_cfg.extracts[index].type = - DPKG_EXTRACT_FROM_HDR; - priv->extract.qos_key_cfg.extracts[index].extract.from_hdr.type = DPKG_FULL_FIELD; - priv->extract.qos_key_cfg.extracts[index].extract.from_hdr.prot = NET_PROT_ETH; - priv->extract.qos_key_cfg.extracts[index].extract.from_hdr.field = NH_FLD_ETH_TYPE; - index++; - - priv->extract.qos_key_cfg.num_extracts = index; - } - - if (device_configured & DPAA2_FS_TABLE_RECONFIGURE) { - index = priv->extract.fs_key_cfg[group].num_extracts; - priv->extract.fs_key_cfg[group].extracts[index].type = - DPKG_EXTRACT_FROM_HDR; - priv->extract.fs_key_cfg[group].extracts[index].extract.from_hdr.type = DPKG_FULL_FIELD; - priv->extract.fs_key_cfg[group].extracts[index].extract.from_hdr.prot = NET_PROT_ETH; - priv->extract.fs_key_cfg[group].extracts[index].extract.from_hdr.field = NH_FLD_ETH_SA; - index++; - - priv->extract.fs_key_cfg[group].extracts[index].type = - DPKG_EXTRACT_FROM_HDR; - priv->extract.fs_key_cfg[group].extracts[index].extract.from_hdr.type = DPKG_FULL_FIELD; - priv->extract.fs_key_cfg[group].extracts[index].extract.from_hdr.prot = NET_PROT_ETH; - priv->extract.fs_key_cfg[group].extracts[index].extract.from_hdr.field = NH_FLD_ETH_DA; - index++; - - priv->extract.fs_key_cfg[group].extracts[index].type = - DPKG_EXTRACT_FROM_HDR; - priv->extract.fs_key_cfg[group].extracts[index].extract.from_hdr.type = DPKG_FULL_FIELD; - priv->extract.fs_key_cfg[group].extracts[index].extract.from_hdr.prot = NET_PROT_ETH; - priv->extract.fs_key_cfg[group].extracts[index].extract.from_hdr.field = NH_FLD_ETH_TYPE; - index++; - - priv->extract.fs_key_cfg[group].num_extracts = index; + if (memcmp((const char *)&mask->dst, zero_cmp, RTE_ETHER_ADDR_LEN)) { + index = dpaa2_flow_extract_search( + &priv->extract.qos_key_extract.dpkg, + NET_PROT_ETH, NH_FLD_ETH_DA); + if (index < 0) { + ret = dpaa2_flow_extract_add( + &priv->extract.qos_key_extract, + NET_PROT_ETH, NH_FLD_ETH_DA, + RTE_ETHER_ADDR_LEN); + if (ret) { + DPAA2_PMD_ERR("QoS Extract add ETH_DA failed."); + + return -1; + } + local_cfg |= DPAA2_QOS_TABLE_RECONFIGURE; + } + + index = dpaa2_flow_extract_search( + &priv->extract.tc_key_extract[group].dpkg, + NET_PROT_ETH, NH_FLD_ETH_DA); + if (index < 0) { + ret = dpaa2_flow_extract_add( + &priv->extract.tc_key_extract[group], + NET_PROT_ETH, NH_FLD_ETH_DA, + RTE_ETHER_ADDR_LEN); + if (ret) { + DPAA2_PMD_ERR("FS Extract add ETH_DA failed."); + + return -1; + } + local_cfg |= DPAA2_FS_TABLE_RECONFIGURE; + } + + ret = dpaa2_flow_rule_move_ipaddr_tail(flow, priv, group); + if (ret) { + DPAA2_PMD_ERR( + "Move ipaddr before ETH DA rule set failed"); + return -1; + } + + ret = dpaa2_flow_rule_data_set( + &priv->extract.qos_key_extract, + &flow->qos_rule, + NET_PROT_ETH, + NH_FLD_ETH_DA, + &spec->dst.addr_bytes, + &mask->dst.addr_bytes, + sizeof(struct rte_ether_addr)); + if (ret) { + DPAA2_PMD_ERR("QoS NH_FLD_ETH_DA rule data set failed"); + return -1; + } + + ret = dpaa2_flow_rule_data_set( + &priv->extract.tc_key_extract[group], + &flow->fs_rule, + NET_PROT_ETH, + NH_FLD_ETH_DA, + &spec->dst.addr_bytes, + &mask->dst.addr_bytes, + sizeof(struct rte_ether_addr)); + if (ret) { + DPAA2_PMD_ERR("FS NH_FLD_ETH_DA rule data set failed"); + return -1; + } } - /* Parse pattern list to get the matching parameters */ - spec = (const struct rte_flow_item_eth *)pattern->spec; - last = (const struct rte_flow_item_eth *)pattern->last; - mask = (const struct rte_flow_item_eth *) - (pattern->mask ? 
pattern->mask : default_mask); + if (memcmp((const char *)&mask->type, zero_cmp, sizeof(rte_be16_t))) { + index = dpaa2_flow_extract_search( + &priv->extract.qos_key_extract.dpkg, + NET_PROT_ETH, NH_FLD_ETH_TYPE); + if (index < 0) { + ret = dpaa2_flow_extract_add( + &priv->extract.qos_key_extract, + NET_PROT_ETH, NH_FLD_ETH_TYPE, + RTE_ETHER_TYPE_LEN); + if (ret) { + DPAA2_PMD_ERR("QoS Extract add ETH_TYPE failed."); - /* Key rule */ - key_iova = flow->rule.key_iova + flow->key_size; - memcpy((void *)key_iova, (const void *)(spec->src.addr_bytes), - sizeof(struct rte_ether_addr)); - key_iova += sizeof(struct rte_ether_addr); - memcpy((void *)key_iova, (const void *)(spec->dst.addr_bytes), - sizeof(struct rte_ether_addr)); - key_iova += sizeof(struct rte_ether_addr); - memcpy((void *)key_iova, (const void *)(&spec->type), - sizeof(rte_be16_t)); + return -1; + } + local_cfg |= DPAA2_QOS_TABLE_RECONFIGURE; + } + index = dpaa2_flow_extract_search( + &priv->extract.tc_key_extract[group].dpkg, + NET_PROT_ETH, NH_FLD_ETH_TYPE); + if (index < 0) { + ret = dpaa2_flow_extract_add( + &priv->extract.tc_key_extract[group], + NET_PROT_ETH, NH_FLD_ETH_TYPE, + RTE_ETHER_TYPE_LEN); + if (ret) { + DPAA2_PMD_ERR("FS Extract add ETH_TYPE failed."); - /* Key mask */ - mask_iova = flow->rule.mask_iova + flow->key_size; - memcpy((void *)mask_iova, (const void *)(mask->src.addr_bytes), - sizeof(struct rte_ether_addr)); - mask_iova += sizeof(struct rte_ether_addr); - memcpy((void *)mask_iova, (const void *)(mask->dst.addr_bytes), - sizeof(struct rte_ether_addr)); - mask_iova += sizeof(struct rte_ether_addr); - memcpy((void *)mask_iova, (const void *)(&mask->type), - sizeof(rte_be16_t)); + return -1; + } + local_cfg |= DPAA2_FS_TABLE_RECONFIGURE; + } + + ret = dpaa2_flow_rule_move_ipaddr_tail(flow, priv, group); + if (ret) { + DPAA2_PMD_ERR( + "Move ipaddr before ETH TYPE rule set failed"); + return -1; + } - flow->key_size += ((2 * sizeof(struct rte_ether_addr)) + - sizeof(rte_be16_t)); + ret = dpaa2_flow_rule_data_set( + &priv->extract.qos_key_extract, + &flow->qos_rule, + NET_PROT_ETH, + NH_FLD_ETH_TYPE, + &spec->type, + &mask->type, + sizeof(rte_be16_t)); + if (ret) { + DPAA2_PMD_ERR("QoS NH_FLD_ETH_TYPE rule data set failed"); + return -1; + } - return device_configured; + ret = dpaa2_flow_rule_data_set( + &priv->extract.tc_key_extract[group], + &flow->fs_rule, + NET_PROT_ETH, + NH_FLD_ETH_TYPE, + &spec->type, + &mask->type, + sizeof(rte_be16_t)); + if (ret) { + DPAA2_PMD_ERR("FS NH_FLD_ETH_TYPE rule data set failed"); + return -1; + } + } + + (*device_configured) |= local_cfg; + + return 0; } static int @@ -222,12 +802,11 @@ dpaa2_configure_flow_vlan(struct rte_flow *flow, const struct rte_flow_attr *attr, const struct rte_flow_item *pattern, const struct rte_flow_action actions[] __rte_unused, - struct rte_flow_error *error __rte_unused) + struct rte_flow_error *error __rte_unused, + int *device_configured) { - int index, j = 0; - size_t key_iova; - size_t mask_iova; - int device_configured = 0, entry_found = 0; + int index, ret; + int local_cfg = 0; uint32_t group; const struct rte_flow_item_vlan *spec, *mask; @@ -236,375 +815,524 @@ dpaa2_configure_flow_vlan(struct rte_flow *flow, group = attr->group; - /* DPAA2 platform has a limitation that extract parameter can not be */ - /* more than DPKG_MAX_NUM_OF_EXTRACTS. 
Verify this limitation too.*/ - if (priv->pattern[8].item_count >= DPKG_MAX_NUM_OF_EXTRACTS) { - DPAA2_PMD_ERR("Maximum limit for different pattern type = %d\n", - DPKG_MAX_NUM_OF_EXTRACTS); - return -ENOTSUP; - } + /* Parse pattern list to get the matching parameters */ + spec = (const struct rte_flow_item_vlan *)pattern->spec; + last = (const struct rte_flow_item_vlan *)pattern->last; + mask = (const struct rte_flow_item_vlan *) + (pattern->mask ? pattern->mask : default_mask); - if (priv->pattern[group].item_count >= DPKG_MAX_NUM_OF_EXTRACTS) { - DPAA2_PMD_ERR("Maximum limit for different pattern type = %d\n", - DPKG_MAX_NUM_OF_EXTRACTS); - return -ENOTSUP; - } + /* Get traffic class index and flow id to be configured */ + flow->tc_id = group; + flow->tc_index = attr->priority; + + if (!spec) { + /* Don't care any field of vlan header, + * only care vlan protocol. + */ + /* Eth type is actually used for vLan classification. + */ + struct proto_discrimination proto; + + index = dpaa2_flow_extract_search( + &priv->extract.qos_key_extract.dpkg, + NET_PROT_ETH, NH_FLD_ETH_TYPE); + if (index < 0) { + ret = dpaa2_flow_proto_discrimination_extract( + &priv->extract.qos_key_extract, + RTE_FLOW_ITEM_TYPE_ETH); + if (ret) { + DPAA2_PMD_ERR( + "QoS Ext ETH_TYPE to discriminate vLan failed"); - for (j = 0; j < priv->pattern[8].item_count; j++) { - if (priv->pattern[8].pattern_type[j] != pattern->type) { - continue; - } else { - entry_found = 1; - break; + return -1; + } + local_cfg |= DPAA2_QOS_TABLE_RECONFIGURE; } - } - if (!entry_found) { - priv->pattern[8].pattern_type[j] = pattern->type; - priv->pattern[8].item_count++; - device_configured |= DPAA2_QOS_TABLE_RECONFIGURE; - } + index = dpaa2_flow_extract_search( + &priv->extract.tc_key_extract[group].dpkg, + NET_PROT_ETH, NH_FLD_ETH_TYPE); + if (index < 0) { + ret = dpaa2_flow_proto_discrimination_extract( + &priv->extract.tc_key_extract[group], + RTE_FLOW_ITEM_TYPE_ETH); + if (ret) { + DPAA2_PMD_ERR( + "FS Ext ETH_TYPE to discriminate vLan failed."); - entry_found = 0; - for (j = 0; j < priv->pattern[group].item_count; j++) { - if (priv->pattern[group].pattern_type[j] != pattern->type) { - continue; - } else { - entry_found = 1; - break; + return -1; + } + local_cfg |= DPAA2_FS_TABLE_RECONFIGURE; + } + + ret = dpaa2_flow_rule_move_ipaddr_tail(flow, priv, group); + if (ret) { + DPAA2_PMD_ERR( + "Move ipaddr before vLan discrimination set failed"); + return -1; + } + + proto.type = RTE_FLOW_ITEM_TYPE_ETH; + proto.eth_type = rte_cpu_to_be_16(RTE_ETHER_TYPE_VLAN); + ret = dpaa2_flow_proto_discrimination_rule(priv, flow, + proto, group); + if (ret) { + DPAA2_PMD_ERR("vLan discrimination rule set failed"); + return -1; } - } - if (!entry_found) { - priv->pattern[group].pattern_type[j] = pattern->type; - priv->pattern[group].item_count++; - device_configured |= DPAA2_FS_TABLE_RECONFIGURE; + (*device_configured) |= local_cfg; + + return 0; } + if (!mask->tci) + return 0; - /* Get traffic class index and flow id to be configured */ - flow->tc_id = group; - flow->index = attr->priority; + index = dpaa2_flow_extract_search( + &priv->extract.qos_key_extract.dpkg, + NET_PROT_VLAN, NH_FLD_VLAN_TCI); + if (index < 0) { + ret = dpaa2_flow_extract_add( + &priv->extract.qos_key_extract, + NET_PROT_VLAN, + NH_FLD_VLAN_TCI, + sizeof(rte_be16_t)); + if (ret) { + DPAA2_PMD_ERR("QoS Extract add VLAN_TCI failed."); - if (device_configured & DPAA2_QOS_TABLE_RECONFIGURE) { - index = priv->extract.qos_key_cfg.num_extracts; - priv->extract.qos_key_cfg.extracts[index].type 
= - DPKG_EXTRACT_FROM_HDR; - priv->extract.qos_key_cfg.extracts[index].extract.from_hdr.type = DPKG_FULL_FIELD; - priv->extract.qos_key_cfg.extracts[index].extract.from_hdr.prot = NET_PROT_VLAN; - priv->extract.qos_key_cfg.extracts[index].extract.from_hdr.field = NH_FLD_VLAN_TCI; - priv->extract.qos_key_cfg.num_extracts++; + return -1; + } + local_cfg |= DPAA2_QOS_TABLE_RECONFIGURE; + } + + index = dpaa2_flow_extract_search( + &priv->extract.tc_key_extract[group].dpkg, + NET_PROT_VLAN, NH_FLD_VLAN_TCI); + if (index < 0) { + ret = dpaa2_flow_extract_add( + &priv->extract.tc_key_extract[group], + NET_PROT_VLAN, + NH_FLD_VLAN_TCI, + sizeof(rte_be16_t)); + if (ret) { + DPAA2_PMD_ERR("FS Extract add VLAN_TCI failed."); + + return -1; + } + local_cfg |= DPAA2_FS_TABLE_RECONFIGURE; } - if (device_configured & DPAA2_FS_TABLE_RECONFIGURE) { - index = priv->extract.fs_key_cfg[group].num_extracts; - priv->extract.fs_key_cfg[group].extracts[index].type = - DPKG_EXTRACT_FROM_HDR; - priv->extract.fs_key_cfg[group].extracts[index].extract.from_hdr.type = DPKG_FULL_FIELD; - priv->extract.fs_key_cfg[group].extracts[index].extract.from_hdr.prot = NET_PROT_VLAN; - priv->extract.fs_key_cfg[group].extracts[index].extract.from_hdr.field = NH_FLD_VLAN_TCI; - priv->extract.fs_key_cfg[group].num_extracts++; + ret = dpaa2_flow_rule_move_ipaddr_tail(flow, priv, group); + if (ret) { + DPAA2_PMD_ERR( + "Move ipaddr before VLAN TCI rule set failed"); + return -1; } - /* Parse pattern list to get the matching parameters */ - spec = (const struct rte_flow_item_vlan *)pattern->spec; - last = (const struct rte_flow_item_vlan *)pattern->last; - mask = (const struct rte_flow_item_vlan *) - (pattern->mask ? pattern->mask : default_mask); + ret = dpaa2_flow_rule_data_set(&priv->extract.qos_key_extract, + &flow->qos_rule, + NET_PROT_VLAN, + NH_FLD_VLAN_TCI, + &spec->tci, + &mask->tci, + sizeof(rte_be16_t)); + if (ret) { + DPAA2_PMD_ERR("QoS NH_FLD_VLAN_TCI rule data set failed"); + return -1; + } - key_iova = flow->rule.key_iova + flow->key_size; - memcpy((void *)key_iova, (const void *)(&spec->tci), - sizeof(rte_be16_t)); + ret = dpaa2_flow_rule_data_set( + &priv->extract.tc_key_extract[group], + &flow->fs_rule, + NET_PROT_VLAN, + NH_FLD_VLAN_TCI, + &spec->tci, + &mask->tci, + sizeof(rte_be16_t)); + if (ret) { + DPAA2_PMD_ERR("FS NH_FLD_VLAN_TCI rule data set failed"); + return -1; + } - mask_iova = flow->rule.mask_iova + flow->key_size; - memcpy((void *)mask_iova, (const void *)(&mask->tci), - sizeof(rte_be16_t)); + (*device_configured) |= local_cfg; - flow->key_size += sizeof(rte_be16_t); - return device_configured; + return 0; } static int -dpaa2_configure_flow_ipv4(struct rte_flow *flow, - struct rte_eth_dev *dev, - const struct rte_flow_attr *attr, - const struct rte_flow_item *pattern, - const struct rte_flow_action actions[] __rte_unused, - struct rte_flow_error *error __rte_unused) +dpaa2_configure_flow_generic_ip( + struct rte_flow *flow, + struct rte_eth_dev *dev, + const struct rte_flow_attr *attr, + const struct rte_flow_item *pattern, + const struct rte_flow_action actions[] __rte_unused, + struct rte_flow_error *error __rte_unused, + int *device_configured) { - int index, j = 0; - size_t key_iova; - size_t mask_iova; - int device_configured = 0, entry_found = 0; + int index, ret; + int local_cfg = 0; uint32_t group; - const struct rte_flow_item_ipv4 *spec, *mask; + const struct rte_flow_item_ipv4 *spec_ipv4 = 0, + *mask_ipv4 = 0; + const struct rte_flow_item_ipv6 *spec_ipv6 = 0, + *mask_ipv6 = 0; + const void 
*key, *mask; + enum net_prot prot; - const struct rte_flow_item_ipv4 *last __rte_unused; struct dpaa2_dev_priv *priv = dev->data->dev_private; + const char zero_cmp[NH_FLD_IPV6_ADDR_SIZE] = {0}; + int size; group = attr->group; - /* DPAA2 platform has a limitation that extract parameter can not be */ - /* more than DPKG_MAX_NUM_OF_EXTRACTS. Verify this limitation too.*/ - if (priv->pattern[8].item_count >= DPKG_MAX_NUM_OF_EXTRACTS) { - DPAA2_PMD_ERR("Maximum limit for different pattern type = %d\n", - DPKG_MAX_NUM_OF_EXTRACTS); - return -ENOTSUP; + /* Parse pattern list to get the matching parameters */ + if (pattern->type == RTE_FLOW_ITEM_TYPE_IPV4) { + spec_ipv4 = (const struct rte_flow_item_ipv4 *)pattern->spec; + mask_ipv4 = (const struct rte_flow_item_ipv4 *) + (pattern->mask ? pattern->mask : default_mask); + } else { + spec_ipv6 = (const struct rte_flow_item_ipv6 *)pattern->spec; + mask_ipv6 = (const struct rte_flow_item_ipv6 *) + (pattern->mask ? pattern->mask : default_mask); } - if (priv->pattern[group].item_count >= DPKG_MAX_NUM_OF_EXTRACTS) { - DPAA2_PMD_ERR("Maximum limit for different pattern type = %d\n", - DPKG_MAX_NUM_OF_EXTRACTS); - return -ENOTSUP; - } + /* Get traffic class index and flow id to be configured */ + flow->tc_id = group; + flow->tc_index = attr->priority; + + if (!spec_ipv4 && !spec_ipv6) { + /* Don't care any field of IP header, + * only care IP protocol. + * Example: flow create 0 ingress pattern ipv6 / + */ + /* Eth type is actually used for IP identification. + */ + /* TODO: Current design only supports Eth + IP, + * Eth + vLan + IP needs to add. + */ + struct proto_discrimination proto; + + index = dpaa2_flow_extract_search( + &priv->extract.qos_key_extract.dpkg, + NET_PROT_ETH, NH_FLD_ETH_TYPE); + if (index < 0) { + ret = dpaa2_flow_proto_discrimination_extract( + &priv->extract.qos_key_extract, + RTE_FLOW_ITEM_TYPE_ETH); + if (ret) { + DPAA2_PMD_ERR( + "QoS Ext ETH_TYPE to discriminate IP failed."); - for (j = 0; j < priv->pattern[8].item_count; j++) { - if (priv->pattern[8].pattern_type[j] != pattern->type) { - continue; - } else { - entry_found = 1; - break; + return -1; + } + local_cfg |= DPAA2_QOS_TABLE_RECONFIGURE; } - } - if (!entry_found) { - priv->pattern[8].pattern_type[j] = pattern->type; - priv->pattern[8].item_count++; - device_configured |= DPAA2_QOS_TABLE_RECONFIGURE; - } + index = dpaa2_flow_extract_search( + &priv->extract.tc_key_extract[group].dpkg, + NET_PROT_ETH, NH_FLD_ETH_TYPE); + if (index < 0) { + ret = dpaa2_flow_proto_discrimination_extract( + &priv->extract.tc_key_extract[group], + RTE_FLOW_ITEM_TYPE_ETH); + if (ret) { + DPAA2_PMD_ERR( + "FS Ext ETH_TYPE to discriminate IP failed"); - entry_found = 0; - for (j = 0; j < priv->pattern[group].item_count; j++) { - if (priv->pattern[group].pattern_type[j] != pattern->type) { - continue; - } else { - entry_found = 1; - break; + return -1; + } + local_cfg |= DPAA2_FS_TABLE_RECONFIGURE; } - } - if (!entry_found) { - priv->pattern[group].pattern_type[j] = pattern->type; - priv->pattern[group].item_count++; - device_configured |= DPAA2_FS_TABLE_RECONFIGURE; - } + ret = dpaa2_flow_rule_move_ipaddr_tail(flow, priv, group); + if (ret) { + DPAA2_PMD_ERR( + "Move ipaddr before IP discrimination set failed"); + return -1; + } - /* Get traffic class index and flow id to be configured */ - flow->tc_id = group; - flow->index = attr->priority; - - if (device_configured & DPAA2_QOS_TABLE_RECONFIGURE) { - index = priv->extract.qos_key_cfg.num_extracts; - 
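The control flow repeated for every field in these handlers reduces to a search-or-add update of the key-generation profile. A minimal sketch of that pattern, with illustrative names and an eight-slot cap standing in for the driver's actual helpers and DPKG_MAX_NUM_OF_EXTRACTS:

#define MAX_EXTRACTS 8	/* stands in for DPKG_MAX_NUM_OF_EXTRACTS */

struct key_profile {
	int prot[MAX_EXTRACTS];
	int field[MAX_EXTRACTS];
	int num;
};

/* Look a field up in the key profile, append it only if absent, and
 * flag the hardware table for reconfiguration when the key grows.
 */
static int
profile_field_index(struct key_profile *kp, int prot, int field,
		    unsigned int *cfg_flags, unsigned int table_flag)
{
	int i;

	for (i = 0; i < kp->num; i++)
		if (kp->prot[i] == prot && kp->field[i] == field)
			return i;		/* already part of the key */
	if (kp->num >= MAX_EXTRACTS)
		return -1;			/* profile is full */
	kp->prot[kp->num] = prot;
	kp->field[kp->num] = field;
	*cfg_flags |= table_flag;		/* table must be rebuilt */
	return kp->num++;
}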
priv->extract.qos_key_cfg.extracts[index].type = - DPKG_EXTRACT_FROM_HDR; - priv->extract.qos_key_cfg.extracts[index].extract.from_hdr.type = DPKG_FULL_FIELD; - priv->extract.qos_key_cfg.extracts[index].extract.from_hdr.prot = NET_PROT_IP; - priv->extract.qos_key_cfg.extracts[index].extract.from_hdr.field = NH_FLD_IP_SRC; - index++; - - priv->extract.qos_key_cfg.extracts[index].type = - DPKG_EXTRACT_FROM_HDR; - priv->extract.qos_key_cfg.extracts[index].extract.from_hdr.type = DPKG_FULL_FIELD; - priv->extract.qos_key_cfg.extracts[index].extract.from_hdr.prot = NET_PROT_IP; - priv->extract.qos_key_cfg.extracts[index].extract.from_hdr.field = NH_FLD_IP_DST; - index++; - - priv->extract.qos_key_cfg.extracts[index].type = - DPKG_EXTRACT_FROM_HDR; - priv->extract.qos_key_cfg.extracts[index].extract.from_hdr.type = DPKG_FULL_FIELD; - priv->extract.qos_key_cfg.extracts[index].extract.from_hdr.prot = NET_PROT_IP; - priv->extract.qos_key_cfg.extracts[index].extract.from_hdr.field = NH_FLD_IP_PROTO; - index++; - - priv->extract.qos_key_cfg.num_extracts = index; - } - - if (device_configured & DPAA2_FS_TABLE_RECONFIGURE) { - index = priv->extract.fs_key_cfg[group].num_extracts; - priv->extract.fs_key_cfg[group].extracts[index].type = - DPKG_EXTRACT_FROM_HDR; - priv->extract.fs_key_cfg[group].extracts[index].extract.from_hdr.type = DPKG_FULL_FIELD; - priv->extract.fs_key_cfg[group].extracts[index].extract.from_hdr.prot = NET_PROT_IP; - priv->extract.fs_key_cfg[group].extracts[index].extract.from_hdr.field = NH_FLD_IP_SRC; - index++; - - priv->extract.fs_key_cfg[group].extracts[index].type = - DPKG_EXTRACT_FROM_HDR; - priv->extract.fs_key_cfg[group].extracts[index].extract.from_hdr.type = DPKG_FULL_FIELD; - priv->extract.fs_key_cfg[group].extracts[index].extract.from_hdr.prot = NET_PROT_IP; - priv->extract.fs_key_cfg[group].extracts[index].extract.from_hdr.field = NH_FLD_IP_DST; - index++; - - priv->extract.fs_key_cfg[group].extracts[index].type = - DPKG_EXTRACT_FROM_HDR; - priv->extract.fs_key_cfg[group].extracts[index].extract.from_hdr.type = DPKG_FULL_FIELD; - priv->extract.fs_key_cfg[group].extracts[index].extract.from_hdr.prot = NET_PROT_IP; - priv->extract.fs_key_cfg[group].extracts[index].extract.from_hdr.field = NH_FLD_IP_PROTO; - index++; - - priv->extract.fs_key_cfg[group].num_extracts = index; - } + proto.type = RTE_FLOW_ITEM_TYPE_ETH; + if (pattern->type == RTE_FLOW_ITEM_TYPE_IPV4) + proto.eth_type = rte_cpu_to_be_16(RTE_ETHER_TYPE_IPV4); + else + proto.eth_type = rte_cpu_to_be_16(RTE_ETHER_TYPE_IPV6); + ret = dpaa2_flow_proto_discrimination_rule(priv, flow, + proto, group); + if (ret) { + DPAA2_PMD_ERR("IP discrimination rule set failed"); + return -1; + } - /* Parse pattern list to get the matching parameters */ - spec = (const struct rte_flow_item_ipv4 *)pattern->spec; - last = (const struct rte_flow_item_ipv4 *)pattern->last; - mask = (const struct rte_flow_item_ipv4 *) - (pattern->mask ? 
pattern->mask : default_mask); + (*device_configured) |= local_cfg; + + return 0; + } + + if (mask_ipv4 && (mask_ipv4->hdr.src_addr || + mask_ipv4->hdr.dst_addr)) { + flow->ipaddr_rule.ipaddr_type = FLOW_IPV4_ADDR; + } else if (mask_ipv6 && + (memcmp((const char *)mask_ipv6->hdr.src_addr, + zero_cmp, NH_FLD_IPV6_ADDR_SIZE) || + memcmp((const char *)mask_ipv6->hdr.dst_addr, + zero_cmp, NH_FLD_IPV6_ADDR_SIZE))) { + flow->ipaddr_rule.ipaddr_type = FLOW_IPV6_ADDR; + } + + if ((mask_ipv4 && mask_ipv4->hdr.src_addr) || + (mask_ipv6 && + memcmp((const char *)mask_ipv6->hdr.src_addr, + zero_cmp, NH_FLD_IPV6_ADDR_SIZE))) { + index = dpaa2_flow_extract_search( + &priv->extract.qos_key_extract.dpkg, + NET_PROT_IP, NH_FLD_IP_SRC); + if (index < 0) { + ret = dpaa2_flow_extract_add( + &priv->extract.qos_key_extract, + NET_PROT_IP, + NH_FLD_IP_SRC, + 0); + if (ret) { + DPAA2_PMD_ERR("QoS Extract add IP_SRC failed."); - key_iova = flow->rule.key_iova + flow->key_size; - memcpy((void *)key_iova, (const void *)&spec->hdr.src_addr, - sizeof(uint32_t)); - key_iova += sizeof(uint32_t); - memcpy((void *)key_iova, (const void *)&spec->hdr.dst_addr, - sizeof(uint32_t)); - key_iova += sizeof(uint32_t); - memcpy((void *)key_iova, (const void *)&spec->hdr.next_proto_id, - sizeof(uint8_t)); - - mask_iova = flow->rule.mask_iova + flow->key_size; - memcpy((void *)mask_iova, (const void *)&mask->hdr.src_addr, - sizeof(uint32_t)); - mask_iova += sizeof(uint32_t); - memcpy((void *)mask_iova, (const void *)&mask->hdr.dst_addr, - sizeof(uint32_t)); - mask_iova += sizeof(uint32_t); - memcpy((void *)mask_iova, (const void *)&mask->hdr.next_proto_id, - sizeof(uint8_t)); - - flow->key_size += (2 * sizeof(uint32_t)) + sizeof(uint8_t); - return device_configured; -} + return -1; + } + local_cfg |= (DPAA2_QOS_TABLE_RECONFIGURE | + DPAA2_QOS_TABLE_IPADDR_EXTRACT); + } -static int -dpaa2_configure_flow_ipv6(struct rte_flow *flow, - struct rte_eth_dev *dev, - const struct rte_flow_attr *attr, - const struct rte_flow_item *pattern, - const struct rte_flow_action actions[] __rte_unused, - struct rte_flow_error *error __rte_unused) -{ - int index, j = 0; - size_t key_iova; - size_t mask_iova; - int device_configured = 0, entry_found = 0; - uint32_t group; - const struct rte_flow_item_ipv6 *spec, *mask; + index = dpaa2_flow_extract_search( + &priv->extract.tc_key_extract[group].dpkg, + NET_PROT_IP, NH_FLD_IP_SRC); + if (index < 0) { + ret = dpaa2_flow_extract_add( + &priv->extract.tc_key_extract[group], + NET_PROT_IP, + NH_FLD_IP_SRC, + 0); + if (ret) { + DPAA2_PMD_ERR("FS Extract add IP_SRC failed."); - const struct rte_flow_item_ipv6 *last __rte_unused; - struct dpaa2_dev_priv *priv = dev->data->dev_private; + return -1; + } + local_cfg |= (DPAA2_FS_TABLE_RECONFIGURE | + DPAA2_FS_TABLE_IPADDR_EXTRACT); + } - group = attr->group; + if (spec_ipv4) + key = &spec_ipv4->hdr.src_addr; + else + key = &spec_ipv6->hdr.src_addr[0]; + if (mask_ipv4) { + mask = &mask_ipv4->hdr.src_addr; + size = NH_FLD_IPV4_ADDR_SIZE; + prot = NET_PROT_IPV4; + } else { + mask = &mask_ipv6->hdr.src_addr[0]; + size = NH_FLD_IPV6_ADDR_SIZE; + prot = NET_PROT_IPV6; + } - /* DPAA2 platform has a limitation that extract parameter can not be */ - /* more than DPKG_MAX_NUM_OF_EXTRACTS. 
Verify this limitation too.*/ - if (priv->pattern[8].item_count >= DPKG_MAX_NUM_OF_EXTRACTS) { - DPAA2_PMD_ERR("Maximum limit for different pattern type = %d\n", - DPKG_MAX_NUM_OF_EXTRACTS); - return -ENOTSUP; - } + ret = dpaa2_flow_rule_data_set( + &priv->extract.qos_key_extract, + &flow->qos_rule, + prot, NH_FLD_IP_SRC, + key, mask, size); + if (ret) { + DPAA2_PMD_ERR("QoS NH_FLD_IP_SRC rule data set failed"); + return -1; + } - if (priv->pattern[group].item_count >= DPKG_MAX_NUM_OF_EXTRACTS) { - DPAA2_PMD_ERR("Maximum limit for different pattern type = %d\n", - DPKG_MAX_NUM_OF_EXTRACTS); - return -ENOTSUP; - } + ret = dpaa2_flow_rule_data_set( + &priv->extract.tc_key_extract[group], + &flow->fs_rule, + prot, NH_FLD_IP_SRC, + key, mask, size); + if (ret) { + DPAA2_PMD_ERR("FS NH_FLD_IP_SRC rule data set failed"); + return -1; + } - for (j = 0; j < priv->pattern[8].item_count; j++) { - if (priv->pattern[8].pattern_type[j] != pattern->type) { - continue; - } else { - entry_found = 1; - break; + flow->ipaddr_rule.qos_ipsrc_offset = + dpaa2_flow_extract_key_offset( + &priv->extract.qos_key_extract, + prot, NH_FLD_IP_SRC); + flow->ipaddr_rule.fs_ipsrc_offset = + dpaa2_flow_extract_key_offset( + &priv->extract.tc_key_extract[group], + prot, NH_FLD_IP_SRC); + } + + if ((mask_ipv4 && mask_ipv4->hdr.dst_addr) || + (mask_ipv6 && + memcmp((const char *)mask_ipv6->hdr.dst_addr, + zero_cmp, NH_FLD_IPV6_ADDR_SIZE))) { + index = dpaa2_flow_extract_search( + &priv->extract.qos_key_extract.dpkg, + NET_PROT_IP, NH_FLD_IP_DST); + if (index < 0) { + if (mask_ipv4) + size = NH_FLD_IPV4_ADDR_SIZE; + else + size = NH_FLD_IPV6_ADDR_SIZE; + ret = dpaa2_flow_extract_add( + &priv->extract.qos_key_extract, + NET_PROT_IP, + NH_FLD_IP_DST, + size); + if (ret) { + DPAA2_PMD_ERR("QoS Extract add IP_DST failed."); + + return -1; + } + local_cfg |= (DPAA2_QOS_TABLE_RECONFIGURE | + DPAA2_QOS_TABLE_IPADDR_EXTRACT); } - } - if (!entry_found) { - priv->pattern[8].pattern_type[j] = pattern->type; - priv->pattern[8].item_count++; - device_configured |= DPAA2_QOS_TABLE_RECONFIGURE; - } + index = dpaa2_flow_extract_search( + &priv->extract.tc_key_extract[group].dpkg, + NET_PROT_IP, NH_FLD_IP_DST); + if (index < 0) { + if (mask_ipv4) + size = NH_FLD_IPV4_ADDR_SIZE; + else + size = NH_FLD_IPV6_ADDR_SIZE; + ret = dpaa2_flow_extract_add( + &priv->extract.tc_key_extract[group], + NET_PROT_IP, + NH_FLD_IP_DST, + size); + if (ret) { + DPAA2_PMD_ERR("FS Extract add IP_DST failed."); - entry_found = 0; - for (j = 0; j < priv->pattern[group].item_count; j++) { - if (priv->pattern[group].pattern_type[j] != pattern->type) { - continue; + return -1; + } + local_cfg |= (DPAA2_FS_TABLE_RECONFIGURE | + DPAA2_FS_TABLE_IPADDR_EXTRACT); + } + + if (spec_ipv4) + key = &spec_ipv4->hdr.dst_addr; + else + key = spec_ipv6->hdr.dst_addr; + if (mask_ipv4) { + mask = &mask_ipv4->hdr.dst_addr; + size = NH_FLD_IPV4_ADDR_SIZE; + prot = NET_PROT_IPV4; } else { - entry_found = 1; - break; + mask = &mask_ipv6->hdr.dst_addr[0]; + size = NH_FLD_IPV6_ADDR_SIZE; + prot = NET_PROT_IPV6; } - } - if (!entry_found) { - priv->pattern[group].pattern_type[j] = pattern->type; - priv->pattern[group].item_count++; - device_configured |= DPAA2_FS_TABLE_RECONFIGURE; - } + ret = dpaa2_flow_rule_data_set( + &priv->extract.qos_key_extract, + &flow->qos_rule, + prot, NH_FLD_IP_DST, + key, mask, size); + if (ret) { + DPAA2_PMD_ERR("QoS NH_FLD_IP_DST rule data set failed"); + return -1; + } - /* Get traffic class index and flow id to be configured */ - flow->tc_id = group; - 
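An application rule exercising this combined IPv4/IPv6 path might look like the following sketch; the address, protocol, and masks are placeholders:

#include <rte_flow.h>
#include <rte_ip.h>
#include <rte_byteorder.h>
#include <netinet/in.h>

/* Sketch: match IPv4 src 192.0.2.1/32 carrying UDP (proto 17). */
struct rte_flow_item_ipv4 ip_spec = {
	.hdr = {
		.src_addr = RTE_BE32(RTE_IPV4(192, 0, 2, 1)),
		.next_proto_id = IPPROTO_UDP,
	},
};
struct rte_flow_item_ipv4 ip_mask = {
	.hdr = {
		.src_addr = RTE_BE32(0xffffffff),
		.next_proto_id = 0xff,
	},
};
struct rte_flow_item ip_pattern[] = {
	{ .type = RTE_FLOW_ITEM_TYPE_ETH },
	{ .type = RTE_FLOW_ITEM_TYPE_IPV4,
	  .spec = &ip_spec, .mask = &ip_mask },
	{ .type = RTE_FLOW_ITEM_TYPE_END },
};

A bare "pattern ipv6 /" with no spec, per the comment above, instead classifies purely on the Ethernet type.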
flow->index = attr->priority;
-
-	if (device_configured & DPAA2_QOS_TABLE_RECONFIGURE) {
-		index = priv->extract.qos_key_cfg.num_extracts;
-		priv->extract.qos_key_cfg.extracts[index].type =
-			DPKG_EXTRACT_FROM_HDR;
-		priv->extract.qos_key_cfg.extracts[index].extract.from_hdr.type = DPKG_FULL_FIELD;
-		priv->extract.qos_key_cfg.extracts[index].extract.from_hdr.prot = NET_PROT_IP;
-		priv->extract.qos_key_cfg.extracts[index].extract.from_hdr.field = NH_FLD_IP_SRC;
-		index++;
-
-		priv->extract.qos_key_cfg.extracts[index].type =
-			DPKG_EXTRACT_FROM_HDR;
-		priv->extract.qos_key_cfg.extracts[index].extract.from_hdr.type = DPKG_FULL_FIELD;
-		priv->extract.qos_key_cfg.extracts[index].extract.from_hdr.prot = NET_PROT_IP;
-		priv->extract.qos_key_cfg.extracts[index].extract.from_hdr.field = NH_FLD_IP_DST;
-		index++;
-
-		priv->extract.qos_key_cfg.num_extracts = index;
-	}
-
-	if (device_configured & DPAA2_QOS_TABLE_RECONFIGURE) {
-		index = priv->extract.fs_key_cfg[group].num_extracts;
-		priv->extract.fs_key_cfg[group].extracts[index].type =
-			DPKG_EXTRACT_FROM_HDR;
-		priv->extract.fs_key_cfg[group].extracts[index].extract.from_hdr.type = DPKG_FULL_FIELD;
-		priv->extract.fs_key_cfg[group].extracts[index].extract.from_hdr.prot = NET_PROT_IP;
-		priv->extract.fs_key_cfg[group].extracts[index].extract.from_hdr.field = NH_FLD_IP_SRC;
-		index++;
-
-		priv->extract.fs_key_cfg[group].extracts[index].type =
-			DPKG_EXTRACT_FROM_HDR;
-		priv->extract.fs_key_cfg[group].extracts[index].extract.from_hdr.type = DPKG_FULL_FIELD;
-		priv->extract.fs_key_cfg[group].extracts[index].extract.from_hdr.prot = NET_PROT_IP;
-		priv->extract.fs_key_cfg[group].extracts[index].extract.from_hdr.field = NH_FLD_IP_DST;
-		index++;
-
-		priv->extract.fs_key_cfg[group].num_extracts = index;
+		ret = dpaa2_flow_rule_data_set(
+			&priv->extract.tc_key_extract[group],
+			&flow->fs_rule,
+			prot, NH_FLD_IP_DST,
+			key, mask, size);
+		if (ret) {
+			DPAA2_PMD_ERR("FS NH_FLD_IP_DST rule data set failed");
+			return -1;
+		}
+		flow->ipaddr_rule.qos_ipdst_offset =
+			dpaa2_flow_extract_key_offset(
+				&priv->extract.qos_key_extract,
+				prot, NH_FLD_IP_DST);
+		flow->ipaddr_rule.fs_ipdst_offset =
+			dpaa2_flow_extract_key_offset(
+				&priv->extract.tc_key_extract[group],
+				prot, NH_FLD_IP_DST);
+	}
+
+	if ((mask_ipv4 && mask_ipv4->hdr.next_proto_id) ||
+		(mask_ipv6 && mask_ipv6->hdr.proto)) {
+		index = dpaa2_flow_extract_search(
+				&priv->extract.qos_key_extract.dpkg,
+				NET_PROT_IP, NH_FLD_IP_PROTO);
+		if (index < 0) {
+			ret = dpaa2_flow_extract_add(
+				&priv->extract.qos_key_extract,
+				NET_PROT_IP,
+				NH_FLD_IP_PROTO,
+				NH_FLD_IP_PROTO_SIZE);
+			if (ret) {
+				DPAA2_PMD_ERR("QoS Extract add IP_PROTO failed.");
+
+				return -1;
+			}
+			local_cfg |= DPAA2_QOS_TABLE_RECONFIGURE;
+		}
+
+		index = dpaa2_flow_extract_search(
+				&priv->extract.tc_key_extract[group].dpkg,
+				NET_PROT_IP, NH_FLD_IP_PROTO);
+		if (index < 0) {
+			ret = dpaa2_flow_extract_add(
+				&priv->extract.tc_key_extract[group],
+				NET_PROT_IP,
+				NH_FLD_IP_PROTO,
+				NH_FLD_IP_PROTO_SIZE);
+			if (ret) {
+				DPAA2_PMD_ERR("FS Extract add IP_PROTO failed.");
+
+				return -1;
+			}
+			local_cfg |= DPAA2_FS_TABLE_RECONFIGURE;
+		}
+
+		ret = dpaa2_flow_rule_move_ipaddr_tail(flow, priv, group);
+		if (ret) {
+			DPAA2_PMD_ERR(
+				"Move ipaddr before NH_FLD_IP_PROTO rule set failed");
+			return -1;
+		}
+
+		if (spec_ipv4)
+			key = &spec_ipv4->hdr.next_proto_id;
+		else
+			key = &spec_ipv6->hdr.proto;
+		if (mask_ipv4)
+			mask = &mask_ipv4->hdr.next_proto_id;
+		else
+			mask = &mask_ipv6->hdr.proto;
+
+		ret = dpaa2_flow_rule_data_set(
+			&priv->extract.qos_key_extract,
+			&flow->qos_rule,
+			NET_PROT_IP,
+			NH_FLD_IP_PROTO,
+			key, mask, NH_FLD_IP_PROTO_SIZE);
+		if (ret) {
+			DPAA2_PMD_ERR("QoS NH_FLD_IP_PROTO rule data set failed");
+			return -1;
+		}
+
+		ret = dpaa2_flow_rule_data_set(
+			&priv->extract.tc_key_extract[group],
+			&flow->fs_rule,
+			NET_PROT_IP,
+			NH_FLD_IP_PROTO,
+			key, mask, NH_FLD_IP_PROTO_SIZE);
+		if (ret) {
+			DPAA2_PMD_ERR("FS NH_FLD_IP_PROTO rule data set failed");
+			return -1;
+		}
 	}
-	/* Parse pattern list to get the matching parameters */
-	spec = (const struct rte_flow_item_ipv6 *)pattern->spec;
-	last = (const struct rte_flow_item_ipv6 *)pattern->last;
-	mask = (const struct rte_flow_item_ipv6 *)
-		(pattern->mask ? pattern->mask : default_mask);
+	(*device_configured) |= local_cfg;
-	key_iova = flow->rule.key_iova + flow->key_size;
-	memcpy((void *)key_iova, (const void *)(spec->hdr.src_addr),
-		sizeof(spec->hdr.src_addr));
-	key_iova += sizeof(spec->hdr.src_addr);
-	memcpy((void *)key_iova, (const void *)(spec->hdr.dst_addr),
-		sizeof(spec->hdr.dst_addr));
-
-	mask_iova = flow->rule.mask_iova + flow->key_size;
-	memcpy((void *)mask_iova, (const void *)(mask->hdr.src_addr),
-		sizeof(mask->hdr.src_addr));
-	mask_iova += sizeof(mask->hdr.src_addr);
-	memcpy((void *)mask_iova, (const void *)(mask->hdr.dst_addr),
-		sizeof(mask->hdr.dst_addr));
-
-	flow->key_size += sizeof(spec->hdr.src_addr) +
-		sizeof(mask->hdr.dst_addr);
-	return device_configured;
+	return 0;
 }
 
 static int
@@ -613,12 +1341,11 @@ dpaa2_configure_flow_icmp(struct rte_flow *flow,
 			const struct rte_flow_attr *attr,
 			const struct rte_flow_item *pattern,
 			const struct rte_flow_action actions[] __rte_unused,
-			struct rte_flow_error *error __rte_unused)
+			struct rte_flow_error *error __rte_unused,
+			int *device_configured)
 {
-	int index, j = 0;
-	size_t key_iova;
-	size_t mask_iova;
-	int device_configured = 0, entry_found = 0;
+	int index, ret;
+	int local_cfg = 0;
 	uint32_t group;
 	const struct rte_flow_item_icmp *spec, *mask;
@@ -627,116 +1354,220 @@ dpaa2_configure_flow_icmp(struct rte_flow *flow,
 	group = attr->group;
 
-	/* DPAA2 platform has a limitation that extract parameter can not be */
-	/* more than DPKG_MAX_NUM_OF_EXTRACTS. Verify this limitation too.*/
-	if (priv->pattern[8].item_count >= DPKG_MAX_NUM_OF_EXTRACTS) {
-		DPAA2_PMD_ERR("Maximum limit for different pattern type = %d\n",
-			DPKG_MAX_NUM_OF_EXTRACTS);
-		return -ENOTSUP;
-	}
+	/* Parse pattern list to get the matching parameters */
+	spec = (const struct rte_flow_item_icmp *)pattern->spec;
+	last = (const struct rte_flow_item_icmp *)pattern->last;
+	mask = (const struct rte_flow_item_icmp *)
+		(pattern->mask ? pattern->mask : default_mask);
-	if (priv->pattern[group].item_count >= DPKG_MAX_NUM_OF_EXTRACTS) {
-		DPAA2_PMD_ERR("Maximum limit for different pattern type = %d\n",
-			DPKG_MAX_NUM_OF_EXTRACTS);
-		return -ENOTSUP;
-	}
+	/* Get traffic class index and flow id to be configured */
+	flow->tc_id = group;
+	flow->tc_index = attr->priority;
+
+	if (!spec) {
+		/* Don't care about any field of the ICMP header;
+		 * only the ICMP protocol matters.
+		 * Example: flow create 0 ingress pattern icmp /
+		 */
+		/* The next-proto field of the generic IP header is
+		 * actually used for ICMP identification.
+ */ + struct proto_discrimination proto; + + index = dpaa2_flow_extract_search( + &priv->extract.qos_key_extract.dpkg, + NET_PROT_IP, NH_FLD_IP_PROTO); + if (index < 0) { + ret = dpaa2_flow_proto_discrimination_extract( + &priv->extract.qos_key_extract, + DPAA2_FLOW_ITEM_TYPE_GENERIC_IP); + if (ret) { + DPAA2_PMD_ERR( + "QoS Extract IP protocol to discriminate ICMP failed."); - for (j = 0; j < priv->pattern[8].item_count; j++) { - if (priv->pattern[8].pattern_type[j] != pattern->type) { - continue; - } else { - entry_found = 1; - break; + return -1; + } + local_cfg |= DPAA2_QOS_TABLE_RECONFIGURE; } - } - if (!entry_found) { - priv->pattern[8].pattern_type[j] = pattern->type; - priv->pattern[8].item_count++; - device_configured |= DPAA2_QOS_TABLE_RECONFIGURE; - } + index = dpaa2_flow_extract_search( + &priv->extract.tc_key_extract[group].dpkg, + NET_PROT_IP, NH_FLD_IP_PROTO); + if (index < 0) { + ret = dpaa2_flow_proto_discrimination_extract( + &priv->extract.tc_key_extract[group], + DPAA2_FLOW_ITEM_TYPE_GENERIC_IP); + if (ret) { + DPAA2_PMD_ERR( + "FS Extract IP protocol to discriminate ICMP failed."); - entry_found = 0; - for (j = 0; j < priv->pattern[group].item_count; j++) { - if (priv->pattern[group].pattern_type[j] != pattern->type) { - continue; - } else { - entry_found = 1; - break; + return -1; + } + local_cfg |= DPAA2_FS_TABLE_RECONFIGURE; } - } - if (!entry_found) { - priv->pattern[group].pattern_type[j] = pattern->type; - priv->pattern[group].item_count++; - device_configured |= DPAA2_FS_TABLE_RECONFIGURE; + ret = dpaa2_flow_rule_move_ipaddr_tail(flow, priv, group); + if (ret) { + DPAA2_PMD_ERR( + "Move IP addr before ICMP discrimination set failed"); + return -1; + } + + proto.type = DPAA2_FLOW_ITEM_TYPE_GENERIC_IP; + proto.ip_proto = IPPROTO_ICMP; + ret = dpaa2_flow_proto_discrimination_rule(priv, flow, + proto, group); + if (ret) { + DPAA2_PMD_ERR("ICMP discrimination rule set failed"); + return -1; + } + + (*device_configured) |= local_cfg; + + return 0; } - /* Get traffic class index and flow id to be configured */ - flow->tc_id = group; - flow->index = attr->priority; - - if (device_configured & DPAA2_QOS_TABLE_RECONFIGURE) { - index = priv->extract.qos_key_cfg.num_extracts; - priv->extract.qos_key_cfg.extracts[index].type = - DPKG_EXTRACT_FROM_HDR; - priv->extract.qos_key_cfg.extracts[index].extract.from_hdr.type = DPKG_FULL_FIELD; - priv->extract.qos_key_cfg.extracts[index].extract.from_hdr.prot = NET_PROT_ICMP; - priv->extract.qos_key_cfg.extracts[index].extract.from_hdr.field = NH_FLD_ICMP_TYPE; - index++; - - priv->extract.qos_key_cfg.extracts[index].type = - DPKG_EXTRACT_FROM_HDR; - priv->extract.qos_key_cfg.extracts[index].extract.from_hdr.type = DPKG_FULL_FIELD; - priv->extract.qos_key_cfg.extracts[index].extract.from_hdr.prot = NET_PROT_ICMP; - priv->extract.qos_key_cfg.extracts[index].extract.from_hdr.field = NH_FLD_ICMP_CODE; - index++; - - priv->extract.qos_key_cfg.num_extracts = index; - } - - if (device_configured & DPAA2_FS_TABLE_RECONFIGURE) { - index = priv->extract.fs_key_cfg[group].num_extracts; - priv->extract.fs_key_cfg[group].extracts[index].type = - DPKG_EXTRACT_FROM_HDR; - priv->extract.fs_key_cfg[group].extracts[index].extract.from_hdr.type = DPKG_FULL_FIELD; - priv->extract.fs_key_cfg[group].extracts[index].extract.from_hdr.prot = NET_PROT_ICMP; - priv->extract.fs_key_cfg[group].extracts[index].extract.from_hdr.field = NH_FLD_ICMP_TYPE; - index++; - - priv->extract.fs_key_cfg[group].extracts[index].type = - DPKG_EXTRACT_FROM_HDR; - 
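When the ICMP item carries no spec, the handler above degenerates to matching the IP protocol field alone; in rte_flow terms, a sketch:

/* "flow create 0 ingress pattern eth / ipv4 / icmp / end
 *  actions queue index 1 / end"
 * No ICMP spec/mask is given, so classification falls back to
 * NH_FLD_IP_PROTO == IPPROTO_ICMP (1).
 */
struct rte_flow_item icmp_pattern[] = {
	{ .type = RTE_FLOW_ITEM_TYPE_ETH },
	{ .type = RTE_FLOW_ITEM_TYPE_IPV4 },
	{ .type = RTE_FLOW_ITEM_TYPE_ICMP },	/* spec/mask left NULL */
	{ .type = RTE_FLOW_ITEM_TYPE_END },
};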
priv->extract.fs_key_cfg[group].extracts[index].extract.from_hdr.type = DPKG_FULL_FIELD; - priv->extract.fs_key_cfg[group].extracts[index].extract.from_hdr.prot = NET_PROT_ICMP; - priv->extract.fs_key_cfg[group].extracts[index].extract.from_hdr.field = NH_FLD_ICMP_CODE; - index++; - - priv->extract.fs_key_cfg[group].num_extracts = index; + if (mask->hdr.icmp_type) { + index = dpaa2_flow_extract_search( + &priv->extract.qos_key_extract.dpkg, + NET_PROT_ICMP, NH_FLD_ICMP_TYPE); + if (index < 0) { + ret = dpaa2_flow_extract_add( + &priv->extract.qos_key_extract, + NET_PROT_ICMP, + NH_FLD_ICMP_TYPE, + NH_FLD_ICMP_TYPE_SIZE); + if (ret) { + DPAA2_PMD_ERR("QoS Extract add ICMP_TYPE failed."); + + return -1; + } + local_cfg |= DPAA2_QOS_TABLE_RECONFIGURE; + } + + index = dpaa2_flow_extract_search( + &priv->extract.tc_key_extract[group].dpkg, + NET_PROT_ICMP, NH_FLD_ICMP_TYPE); + if (index < 0) { + ret = dpaa2_flow_extract_add( + &priv->extract.tc_key_extract[group], + NET_PROT_ICMP, + NH_FLD_ICMP_TYPE, + NH_FLD_ICMP_TYPE_SIZE); + if (ret) { + DPAA2_PMD_ERR("FS Extract add ICMP_TYPE failed."); + + return -1; + } + local_cfg |= DPAA2_FS_TABLE_RECONFIGURE; + } + + ret = dpaa2_flow_rule_move_ipaddr_tail(flow, priv, group); + if (ret) { + DPAA2_PMD_ERR( + "Move ipaddr before ICMP TYPE set failed"); + return -1; + } + + ret = dpaa2_flow_rule_data_set( + &priv->extract.qos_key_extract, + &flow->qos_rule, + NET_PROT_ICMP, + NH_FLD_ICMP_TYPE, + &spec->hdr.icmp_type, + &mask->hdr.icmp_type, + NH_FLD_ICMP_TYPE_SIZE); + if (ret) { + DPAA2_PMD_ERR("QoS NH_FLD_ICMP_TYPE rule data set failed"); + return -1; + } + + ret = dpaa2_flow_rule_data_set( + &priv->extract.tc_key_extract[group], + &flow->fs_rule, + NET_PROT_ICMP, + NH_FLD_ICMP_TYPE, + &spec->hdr.icmp_type, + &mask->hdr.icmp_type, + NH_FLD_ICMP_TYPE_SIZE); + if (ret) { + DPAA2_PMD_ERR("FS NH_FLD_ICMP_TYPE rule data set failed"); + return -1; + } } - /* Parse pattern list to get the matching parameters */ - spec = (const struct rte_flow_item_icmp *)pattern->spec; - last = (const struct rte_flow_item_icmp *)pattern->last; - mask = (const struct rte_flow_item_icmp *) - (pattern->mask ? 
pattern->mask : default_mask); + if (mask->hdr.icmp_code) { + index = dpaa2_flow_extract_search( + &priv->extract.qos_key_extract.dpkg, + NET_PROT_ICMP, NH_FLD_ICMP_CODE); + if (index < 0) { + ret = dpaa2_flow_extract_add( + &priv->extract.qos_key_extract, + NET_PROT_ICMP, + NH_FLD_ICMP_CODE, + NH_FLD_ICMP_CODE_SIZE); + if (ret) { + DPAA2_PMD_ERR("QoS Extract add ICMP_CODE failed."); + + return -1; + } + local_cfg |= DPAA2_QOS_TABLE_RECONFIGURE; + } - key_iova = flow->rule.key_iova + flow->key_size; - memcpy((void *)key_iova, (const void *)&spec->hdr.icmp_type, - sizeof(uint8_t)); - key_iova += sizeof(uint8_t); - memcpy((void *)key_iova, (const void *)&spec->hdr.icmp_code, - sizeof(uint8_t)); + index = dpaa2_flow_extract_search( + &priv->extract.tc_key_extract[group].dpkg, + NET_PROT_ICMP, NH_FLD_ICMP_CODE); + if (index < 0) { + ret = dpaa2_flow_extract_add( + &priv->extract.tc_key_extract[group], + NET_PROT_ICMP, + NH_FLD_ICMP_CODE, + NH_FLD_ICMP_CODE_SIZE); + if (ret) { + DPAA2_PMD_ERR("FS Extract add ICMP_CODE failed."); - mask_iova = flow->rule.mask_iova + flow->key_size; - memcpy((void *)mask_iova, (const void *)&mask->hdr.icmp_type, - sizeof(uint8_t)); - key_iova += sizeof(uint8_t); - memcpy((void *)mask_iova, (const void *)&mask->hdr.icmp_code, - sizeof(uint8_t)); + return -1; + } + local_cfg |= DPAA2_FS_TABLE_RECONFIGURE; + } - flow->key_size += 2 * sizeof(uint8_t); + ret = dpaa2_flow_rule_move_ipaddr_tail(flow, priv, group); + if (ret) { + DPAA2_PMD_ERR( + "Move ipaddr after ICMP CODE set failed"); + return -1; + } - return device_configured; + ret = dpaa2_flow_rule_data_set( + &priv->extract.qos_key_extract, + &flow->qos_rule, + NET_PROT_ICMP, + NH_FLD_ICMP_CODE, + &spec->hdr.icmp_code, + &mask->hdr.icmp_code, + NH_FLD_ICMP_CODE_SIZE); + if (ret) { + DPAA2_PMD_ERR("QoS NH_FLD_ICMP_CODE rule data set failed"); + return -1; + } + + ret = dpaa2_flow_rule_data_set( + &priv->extract.tc_key_extract[group], + &flow->fs_rule, + NET_PROT_ICMP, + NH_FLD_ICMP_CODE, + &spec->hdr.icmp_code, + &mask->hdr.icmp_code, + NH_FLD_ICMP_CODE_SIZE); + if (ret) { + DPAA2_PMD_ERR("FS NH_FLD_ICMP_CODE rule data set failed"); + return -1; + } + } + + (*device_configured) |= local_cfg; + + return 0; } static int @@ -745,12 +1576,11 @@ dpaa2_configure_flow_udp(struct rte_flow *flow, const struct rte_flow_attr *attr, const struct rte_flow_item *pattern, const struct rte_flow_action actions[] __rte_unused, - struct rte_flow_error *error __rte_unused) + struct rte_flow_error *error __rte_unused, + int *device_configured) { - int index, j = 0; - size_t key_iova; - size_t mask_iova; - int device_configured = 0, entry_found = 0; + int index, ret; + int local_cfg = 0; uint32_t group; const struct rte_flow_item_udp *spec, *mask; @@ -759,115 +1589,217 @@ dpaa2_configure_flow_udp(struct rte_flow *flow, group = attr->group; - /* DPAA2 platform has a limitation that extract parameter can not be */ - /* more than DPKG_MAX_NUM_OF_EXTRACTS. Verify this limitation too.*/ - if (priv->pattern[8].item_count >= DPKG_MAX_NUM_OF_EXTRACTS) { - DPAA2_PMD_ERR("Maximum limit for different pattern type = %d\n", - DPKG_MAX_NUM_OF_EXTRACTS); - return -ENOTSUP; - } + /* Parse pattern list to get the matching parameters */ + spec = (const struct rte_flow_item_udp *)pattern->spec; + last = (const struct rte_flow_item_udp *)pattern->last; + mask = (const struct rte_flow_item_udp *) + (pattern->mask ? 
pattern->mask : default_mask); - if (priv->pattern[group].item_count >= DPKG_MAX_NUM_OF_EXTRACTS) { - DPAA2_PMD_ERR("Maximum limit for different pattern type = %d\n", - DPKG_MAX_NUM_OF_EXTRACTS); - return -ENOTSUP; - } + /* Get traffic class index and flow id to be configured */ + flow->tc_id = group; + flow->tc_index = attr->priority; + + if (!spec || !mc_l4_port_identification) { + struct proto_discrimination proto; + + index = dpaa2_flow_extract_search( + &priv->extract.qos_key_extract.dpkg, + NET_PROT_IP, NH_FLD_IP_PROTO); + if (index < 0) { + ret = dpaa2_flow_proto_discrimination_extract( + &priv->extract.qos_key_extract, + DPAA2_FLOW_ITEM_TYPE_GENERIC_IP); + if (ret) { + DPAA2_PMD_ERR( + "QoS Extract IP protocol to discriminate UDP failed."); - for (j = 0; j < priv->pattern[8].item_count; j++) { - if (priv->pattern[8].pattern_type[j] != pattern->type) { - continue; - } else { - entry_found = 1; - break; + return -1; + } + local_cfg |= DPAA2_QOS_TABLE_RECONFIGURE; } - } - if (!entry_found) { - priv->pattern[8].pattern_type[j] = pattern->type; - priv->pattern[8].item_count++; - device_configured |= DPAA2_QOS_TABLE_RECONFIGURE; - } + index = dpaa2_flow_extract_search( + &priv->extract.tc_key_extract[group].dpkg, + NET_PROT_IP, NH_FLD_IP_PROTO); + if (index < 0) { + ret = dpaa2_flow_proto_discrimination_extract( + &priv->extract.tc_key_extract[group], + DPAA2_FLOW_ITEM_TYPE_GENERIC_IP); + if (ret) { + DPAA2_PMD_ERR( + "FS Extract IP protocol to discriminate UDP failed."); - entry_found = 0; - for (j = 0; j < priv->pattern[group].item_count; j++) { - if (priv->pattern[group].pattern_type[j] != pattern->type) { - continue; - } else { - entry_found = 1; - break; + return -1; + } + local_cfg |= DPAA2_FS_TABLE_RECONFIGURE; } - } - if (!entry_found) { - priv->pattern[group].pattern_type[j] = pattern->type; - priv->pattern[group].item_count++; - device_configured |= DPAA2_FS_TABLE_RECONFIGURE; + ret = dpaa2_flow_rule_move_ipaddr_tail(flow, priv, group); + if (ret) { + DPAA2_PMD_ERR( + "Move IP addr before UDP discrimination set failed"); + return -1; + } + + proto.type = DPAA2_FLOW_ITEM_TYPE_GENERIC_IP; + proto.ip_proto = IPPROTO_UDP; + ret = dpaa2_flow_proto_discrimination_rule(priv, flow, + proto, group); + if (ret) { + DPAA2_PMD_ERR("UDP discrimination rule set failed"); + return -1; + } + + (*device_configured) |= local_cfg; + + if (!spec) + return 0; } - /* Get traffic class index and flow id to be configured */ - flow->tc_id = group; - flow->index = attr->priority; + if (mask->hdr.src_port) { + index = dpaa2_flow_extract_search( + &priv->extract.qos_key_extract.dpkg, + NET_PROT_UDP, NH_FLD_UDP_PORT_SRC); + if (index < 0) { + ret = dpaa2_flow_extract_add( + &priv->extract.qos_key_extract, + NET_PROT_UDP, + NH_FLD_UDP_PORT_SRC, + NH_FLD_UDP_PORT_SIZE); + if (ret) { + DPAA2_PMD_ERR("QoS Extract add UDP_SRC failed."); - if (device_configured & DPAA2_QOS_TABLE_RECONFIGURE) { - index = priv->extract.qos_key_cfg.num_extracts; - priv->extract.qos_key_cfg.extracts[index].type = - DPKG_EXTRACT_FROM_HDR; - priv->extract.qos_key_cfg.extracts[index].extract.from_hdr.type = DPKG_FULL_FIELD; - priv->extract.qos_key_cfg.extracts[index].extract.from_hdr.prot = NET_PROT_UDP; - priv->extract.qos_key_cfg.extracts[index].extract.from_hdr.field = NH_FLD_UDP_PORT_SRC; - index++; + return -1; + } + local_cfg |= DPAA2_QOS_TABLE_RECONFIGURE; + } - priv->extract.qos_key_cfg.extracts[index].type = DPKG_EXTRACT_FROM_HDR; - priv->extract.qos_key_cfg.extracts[index].extract.from_hdr.type = DPKG_FULL_FIELD; - 
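A rule reaching the port-match branches of this UDP handler could be sketched as follows; the values are illustrative, and note that ports in the item are big-endian:

#include <rte_flow.h>
#include <rte_byteorder.h>

/* Sketch: match UDP dst port 4789, any src port. */
struct rte_flow_item_udp udp_spec = {
	.hdr = { .dst_port = RTE_BE16(4789) },
};
struct rte_flow_item_udp udp_mask = {
	.hdr = { .dst_port = RTE_BE16(0xffff) },	/* src_port mask 0 */
};
struct rte_flow_item udp_pattern[] = {
	{ .type = RTE_FLOW_ITEM_TYPE_ETH },
	{ .type = RTE_FLOW_ITEM_TYPE_IPV4 },
	{ .type = RTE_FLOW_ITEM_TYPE_UDP,
	  .spec = &udp_spec, .mask = &udp_mask },
	{ .type = RTE_FLOW_ITEM_TYPE_END },
};

With a NULL spec, or when the MC firmware cannot identify L4 ports (mc_l4_port_identification unset), only the IP-proto discrimination applies.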
priv->extract.qos_key_cfg.extracts[index].extract.from_hdr.prot = NET_PROT_UDP; - priv->extract.qos_key_cfg.extracts[index].extract.from_hdr.field = NH_FLD_UDP_PORT_DST; - index++; + index = dpaa2_flow_extract_search( + &priv->extract.tc_key_extract[group].dpkg, + NET_PROT_UDP, NH_FLD_UDP_PORT_SRC); + if (index < 0) { + ret = dpaa2_flow_extract_add( + &priv->extract.tc_key_extract[group], + NET_PROT_UDP, + NH_FLD_UDP_PORT_SRC, + NH_FLD_UDP_PORT_SIZE); + if (ret) { + DPAA2_PMD_ERR("FS Extract add UDP_SRC failed."); - priv->extract.qos_key_cfg.num_extracts = index; - } + return -1; + } + local_cfg |= DPAA2_FS_TABLE_RECONFIGURE; + } - if (device_configured & DPAA2_FS_TABLE_RECONFIGURE) { - index = priv->extract.fs_key_cfg[group].num_extracts; - priv->extract.fs_key_cfg[group].extracts[index].type = - DPKG_EXTRACT_FROM_HDR; - priv->extract.fs_key_cfg[group].extracts[index].extract.from_hdr.type = DPKG_FULL_FIELD; - priv->extract.fs_key_cfg[group].extracts[index].extract.from_hdr.prot = NET_PROT_UDP; - priv->extract.fs_key_cfg[group].extracts[index].extract.from_hdr.field = NH_FLD_UDP_PORT_SRC; - index++; + ret = dpaa2_flow_rule_move_ipaddr_tail(flow, priv, group); + if (ret) { + DPAA2_PMD_ERR( + "Move ipaddr before UDP_PORT_SRC set failed"); + return -1; + } - priv->extract.fs_key_cfg[group].extracts[index].type = - DPKG_EXTRACT_FROM_HDR; - priv->extract.fs_key_cfg[group].extracts[index].extract.from_hdr.type = DPKG_FULL_FIELD; - priv->extract.fs_key_cfg[group].extracts[index].extract.from_hdr.prot = NET_PROT_UDP; - priv->extract.fs_key_cfg[group].extracts[index].extract.from_hdr.field = NH_FLD_UDP_PORT_DST; - index++; + ret = dpaa2_flow_rule_data_set(&priv->extract.qos_key_extract, + &flow->qos_rule, + NET_PROT_UDP, + NH_FLD_UDP_PORT_SRC, + &spec->hdr.src_port, + &mask->hdr.src_port, + NH_FLD_UDP_PORT_SIZE); + if (ret) { + DPAA2_PMD_ERR( + "QoS NH_FLD_UDP_PORT_SRC rule data set failed"); + return -1; + } - priv->extract.fs_key_cfg[group].num_extracts = index; + ret = dpaa2_flow_rule_data_set( + &priv->extract.tc_key_extract[group], + &flow->fs_rule, + NET_PROT_UDP, + NH_FLD_UDP_PORT_SRC, + &spec->hdr.src_port, + &mask->hdr.src_port, + NH_FLD_UDP_PORT_SIZE); + if (ret) { + DPAA2_PMD_ERR( + "FS NH_FLD_UDP_PORT_SRC rule data set failed"); + return -1; + } } - /* Parse pattern list to get the matching parameters */ - spec = (const struct rte_flow_item_udp *)pattern->spec; - last = (const struct rte_flow_item_udp *)pattern->last; - mask = (const struct rte_flow_item_udp *) - (pattern->mask ? 
pattern->mask : default_mask); + if (mask->hdr.dst_port) { + index = dpaa2_flow_extract_search( + &priv->extract.qos_key_extract.dpkg, + NET_PROT_UDP, NH_FLD_UDP_PORT_DST); + if (index < 0) { + ret = dpaa2_flow_extract_add( + &priv->extract.qos_key_extract, + NET_PROT_UDP, + NH_FLD_UDP_PORT_DST, + NH_FLD_UDP_PORT_SIZE); + if (ret) { + DPAA2_PMD_ERR("QoS Extract add UDP_DST failed."); - key_iova = flow->rule.key_iova + flow->key_size; - memcpy((void *)key_iova, (const void *)(&spec->hdr.src_port), - sizeof(uint16_t)); - key_iova += sizeof(uint16_t); - memcpy((void *)key_iova, (const void *)(&spec->hdr.dst_port), - sizeof(uint16_t)); + return -1; + } + local_cfg |= DPAA2_QOS_TABLE_RECONFIGURE; + } - mask_iova = flow->rule.mask_iova + flow->key_size; - memcpy((void *)mask_iova, (const void *)(&mask->hdr.src_port), - sizeof(uint16_t)); - mask_iova += sizeof(uint16_t); - memcpy((void *)mask_iova, (const void *)(&mask->hdr.dst_port), - sizeof(uint16_t)); + index = dpaa2_flow_extract_search( + &priv->extract.tc_key_extract[group].dpkg, + NET_PROT_UDP, NH_FLD_UDP_PORT_DST); + if (index < 0) { + ret = dpaa2_flow_extract_add( + &priv->extract.tc_key_extract[group], + NET_PROT_UDP, + NH_FLD_UDP_PORT_DST, + NH_FLD_UDP_PORT_SIZE); + if (ret) { + DPAA2_PMD_ERR("FS Extract add UDP_DST failed."); + + return -1; + } + local_cfg |= DPAA2_FS_TABLE_RECONFIGURE; + } + + ret = dpaa2_flow_rule_move_ipaddr_tail(flow, priv, group); + if (ret) { + DPAA2_PMD_ERR( + "Move ipaddr before UDP_PORT_DST set failed"); + return -1; + } + + ret = dpaa2_flow_rule_data_set( + &priv->extract.qos_key_extract, + &flow->qos_rule, + NET_PROT_UDP, + NH_FLD_UDP_PORT_DST, + &spec->hdr.dst_port, + &mask->hdr.dst_port, + NH_FLD_UDP_PORT_SIZE); + if (ret) { + DPAA2_PMD_ERR( + "QoS NH_FLD_UDP_PORT_DST rule data set failed"); + return -1; + } + + ret = dpaa2_flow_rule_data_set( + &priv->extract.tc_key_extract[group], + &flow->fs_rule, + NET_PROT_UDP, + NH_FLD_UDP_PORT_DST, + &spec->hdr.dst_port, + &mask->hdr.dst_port, + NH_FLD_UDP_PORT_SIZE); + if (ret) { + DPAA2_PMD_ERR( + "FS NH_FLD_UDP_PORT_DST rule data set failed"); + return -1; + } + } - flow->key_size += (2 * sizeof(uint16_t)); + (*device_configured) |= local_cfg; - return device_configured; + return 0; } static int @@ -876,130 +1808,231 @@ dpaa2_configure_flow_tcp(struct rte_flow *flow, const struct rte_flow_attr *attr, const struct rte_flow_item *pattern, const struct rte_flow_action actions[] __rte_unused, - struct rte_flow_error *error __rte_unused) + struct rte_flow_error *error __rte_unused, + int *device_configured) { - int index, j = 0; - size_t key_iova; - size_t mask_iova; - int device_configured = 0, entry_found = 0; + int index, ret; + int local_cfg = 0; uint32_t group; const struct rte_flow_item_tcp *spec, *mask; - const struct rte_flow_item_tcp *last __rte_unused; - struct dpaa2_dev_priv *priv = dev->data->dev_private; + const struct rte_flow_item_tcp *last __rte_unused; + struct dpaa2_dev_priv *priv = dev->data->dev_private; + + group = attr->group; + + /* Parse pattern list to get the matching parameters */ + spec = (const struct rte_flow_item_tcp *)pattern->spec; + last = (const struct rte_flow_item_tcp *)pattern->last; + mask = (const struct rte_flow_item_tcp *) + (pattern->mask ? 
pattern->mask : default_mask); + + /* Get traffic class index and flow id to be configured */ + flow->tc_id = group; + flow->tc_index = attr->priority; + + if (!spec || !mc_l4_port_identification) { + struct proto_discrimination proto; + + index = dpaa2_flow_extract_search( + &priv->extract.qos_key_extract.dpkg, + NET_PROT_IP, NH_FLD_IP_PROTO); + if (index < 0) { + ret = dpaa2_flow_proto_discrimination_extract( + &priv->extract.qos_key_extract, + DPAA2_FLOW_ITEM_TYPE_GENERIC_IP); + if (ret) { + DPAA2_PMD_ERR( + "QoS Extract IP protocol to discriminate TCP failed."); + + return -1; + } + local_cfg |= DPAA2_QOS_TABLE_RECONFIGURE; + } + + index = dpaa2_flow_extract_search( + &priv->extract.tc_key_extract[group].dpkg, + NET_PROT_IP, NH_FLD_IP_PROTO); + if (index < 0) { + ret = dpaa2_flow_proto_discrimination_extract( + &priv->extract.tc_key_extract[group], + DPAA2_FLOW_ITEM_TYPE_GENERIC_IP); + if (ret) { + DPAA2_PMD_ERR( + "FS Extract IP protocol to discriminate TCP failed."); + + return -1; + } + local_cfg |= DPAA2_FS_TABLE_RECONFIGURE; + } + + ret = dpaa2_flow_rule_move_ipaddr_tail(flow, priv, group); + if (ret) { + DPAA2_PMD_ERR( + "Move IP addr before TCP discrimination set failed"); + return -1; + } + + proto.type = DPAA2_FLOW_ITEM_TYPE_GENERIC_IP; + proto.ip_proto = IPPROTO_TCP; + ret = dpaa2_flow_proto_discrimination_rule(priv, flow, + proto, group); + if (ret) { + DPAA2_PMD_ERR("TCP discrimination rule set failed"); + return -1; + } - group = attr->group; + (*device_configured) |= local_cfg; - /* DPAA2 platform has a limitation that extract parameter can not be */ - /* more than DPKG_MAX_NUM_OF_EXTRACTS. Verify this limitation too.*/ - if (priv->pattern[8].item_count >= DPKG_MAX_NUM_OF_EXTRACTS) { - DPAA2_PMD_ERR("Maximum limit for different pattern type = %d\n", - DPKG_MAX_NUM_OF_EXTRACTS); - return -ENOTSUP; + if (!spec) + return 0; } - if (priv->pattern[group].item_count >= DPKG_MAX_NUM_OF_EXTRACTS) { - DPAA2_PMD_ERR("Maximum limit for different pattern type = %d\n", - DPKG_MAX_NUM_OF_EXTRACTS); - return -ENOTSUP; - } + if (mask->hdr.src_port) { + index = dpaa2_flow_extract_search( + &priv->extract.qos_key_extract.dpkg, + NET_PROT_TCP, NH_FLD_TCP_PORT_SRC); + if (index < 0) { + ret = dpaa2_flow_extract_add( + &priv->extract.qos_key_extract, + NET_PROT_TCP, + NH_FLD_TCP_PORT_SRC, + NH_FLD_TCP_PORT_SIZE); + if (ret) { + DPAA2_PMD_ERR("QoS Extract add TCP_SRC failed."); - for (j = 0; j < priv->pattern[8].item_count; j++) { - if (priv->pattern[8].pattern_type[j] != pattern->type) { - continue; - } else { - entry_found = 1; - break; + return -1; + } + local_cfg |= DPAA2_QOS_TABLE_RECONFIGURE; } - } - if (!entry_found) { - priv->pattern[8].pattern_type[j] = pattern->type; - priv->pattern[8].item_count++; - device_configured |= DPAA2_QOS_TABLE_RECONFIGURE; - } + index = dpaa2_flow_extract_search( + &priv->extract.tc_key_extract[group].dpkg, + NET_PROT_TCP, NH_FLD_TCP_PORT_SRC); + if (index < 0) { + ret = dpaa2_flow_extract_add( + &priv->extract.tc_key_extract[group], + NET_PROT_TCP, + NH_FLD_TCP_PORT_SRC, + NH_FLD_TCP_PORT_SIZE); + if (ret) { + DPAA2_PMD_ERR("FS Extract add TCP_SRC failed."); - entry_found = 0; - for (j = 0; j < priv->pattern[group].item_count; j++) { - if (priv->pattern[group].pattern_type[j] != pattern->type) { - continue; - } else { - entry_found = 1; - break; + return -1; + } + local_cfg |= DPAA2_FS_TABLE_RECONFIGURE; } - } - if (!entry_found) { - priv->pattern[group].pattern_type[j] = pattern->type; - priv->pattern[group].item_count++; - device_configured |= 
DPAA2_FS_TABLE_RECONFIGURE; - } + ret = dpaa2_flow_rule_move_ipaddr_tail(flow, priv, group); + if (ret) { + DPAA2_PMD_ERR( + "Move ipaddr before TCP_PORT_SRC set failed"); + return -1; + } - /* Get traffic class index and flow id to be configured */ - flow->tc_id = group; - flow->index = attr->priority; - - if (device_configured & DPAA2_QOS_TABLE_RECONFIGURE) { - index = priv->extract.qos_key_cfg.num_extracts; - priv->extract.qos_key_cfg.extracts[index].type = - DPKG_EXTRACT_FROM_HDR; - priv->extract.qos_key_cfg.extracts[index].extract.from_hdr.type = DPKG_FULL_FIELD; - priv->extract.qos_key_cfg.extracts[index].extract.from_hdr.prot = NET_PROT_TCP; - priv->extract.qos_key_cfg.extracts[index].extract.from_hdr.field = NH_FLD_TCP_PORT_SRC; - index++; - - priv->extract.qos_key_cfg.extracts[index].type = - DPKG_EXTRACT_FROM_HDR; - priv->extract.qos_key_cfg.extracts[index].extract.from_hdr.type = DPKG_FULL_FIELD; - priv->extract.qos_key_cfg.extracts[index].extract.from_hdr.prot = NET_PROT_TCP; - priv->extract.qos_key_cfg.extracts[index].extract.from_hdr.field = NH_FLD_TCP_PORT_DST; - index++; - - priv->extract.qos_key_cfg.num_extracts = index; - } - - if (device_configured & DPAA2_FS_TABLE_RECONFIGURE) { - index = priv->extract.fs_key_cfg[group].num_extracts; - priv->extract.fs_key_cfg[group].extracts[index].type = - DPKG_EXTRACT_FROM_HDR; - priv->extract.fs_key_cfg[group].extracts[index].extract.from_hdr.type = DPKG_FULL_FIELD; - priv->extract.fs_key_cfg[group].extracts[index].extract.from_hdr.prot = NET_PROT_TCP; - priv->extract.fs_key_cfg[group].extracts[index].extract.from_hdr.field = NH_FLD_TCP_PORT_SRC; - index++; - - priv->extract.fs_key_cfg[group].extracts[index].type = - DPKG_EXTRACT_FROM_HDR; - priv->extract.fs_key_cfg[group].extracts[index].extract.from_hdr.type = DPKG_FULL_FIELD; - priv->extract.fs_key_cfg[group].extracts[index].extract.from_hdr.prot = NET_PROT_TCP; - priv->extract.fs_key_cfg[group].extracts[index].extract.from_hdr.field = NH_FLD_TCP_PORT_DST; - index++; - - priv->extract.fs_key_cfg[group].num_extracts = index; + ret = dpaa2_flow_rule_data_set( + &priv->extract.qos_key_extract, + &flow->qos_rule, + NET_PROT_TCP, + NH_FLD_TCP_PORT_SRC, + &spec->hdr.src_port, + &mask->hdr.src_port, + NH_FLD_TCP_PORT_SIZE); + if (ret) { + DPAA2_PMD_ERR( + "QoS NH_FLD_TCP_PORT_SRC rule data set failed"); + return -1; + } + + ret = dpaa2_flow_rule_data_set( + &priv->extract.tc_key_extract[group], + &flow->fs_rule, + NET_PROT_TCP, + NH_FLD_TCP_PORT_SRC, + &spec->hdr.src_port, + &mask->hdr.src_port, + NH_FLD_TCP_PORT_SIZE); + if (ret) { + DPAA2_PMD_ERR( + "FS NH_FLD_TCP_PORT_SRC rule data set failed"); + return -1; + } } - /* Parse pattern list to get the matching parameters */ - spec = (const struct rte_flow_item_tcp *)pattern->spec; - last = (const struct rte_flow_item_tcp *)pattern->last; - mask = (const struct rte_flow_item_tcp *) - (pattern->mask ? 
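What dpaa2_flow_rule_data_set centralizes is the byte-copy the removed blocks used to do inline: writing spec and mask bytes at the field's offset within the flat key. A simplified sketch, with the offset lookup elided and illustrative names:

#include <string.h>
#include <stdint.h>

/* Simplified view of rule-data population: copy the spec bytes into
 * the key buffer and the mask bytes into the mask buffer, both at
 * the offset the extract occupies in the generated key.
 */
static void
rule_write_field(uint8_t *key_buf, uint8_t *mask_buf, int offset,
		 const void *spec, const void *mask, size_t size)
{
	memcpy(key_buf + offset, spec, size);
	memcpy(mask_buf + offset, mask, size);
}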
pattern->mask : default_mask); + if (mask->hdr.dst_port) { + index = dpaa2_flow_extract_search( + &priv->extract.qos_key_extract.dpkg, + NET_PROT_TCP, NH_FLD_TCP_PORT_DST); + if (index < 0) { + ret = dpaa2_flow_extract_add( + &priv->extract.qos_key_extract, + NET_PROT_TCP, + NH_FLD_TCP_PORT_DST, + NH_FLD_TCP_PORT_SIZE); + if (ret) { + DPAA2_PMD_ERR("QoS Extract add TCP_DST failed."); + + return -1; + } + local_cfg |= DPAA2_QOS_TABLE_RECONFIGURE; + } + + index = dpaa2_flow_extract_search( + &priv->extract.tc_key_extract[group].dpkg, + NET_PROT_TCP, NH_FLD_TCP_PORT_DST); + if (index < 0) { + ret = dpaa2_flow_extract_add( + &priv->extract.tc_key_extract[group], + NET_PROT_TCP, + NH_FLD_TCP_PORT_DST, + NH_FLD_TCP_PORT_SIZE); + if (ret) { + DPAA2_PMD_ERR("FS Extract add TCP_DST failed."); + + return -1; + } + local_cfg |= DPAA2_FS_TABLE_RECONFIGURE; + } - key_iova = flow->rule.key_iova + flow->key_size; - memcpy((void *)key_iova, (const void *)(&spec->hdr.src_port), - sizeof(uint16_t)); - key_iova += sizeof(uint16_t); - memcpy((void *)key_iova, (const void *)(&spec->hdr.dst_port), - sizeof(uint16_t)); + ret = dpaa2_flow_rule_move_ipaddr_tail(flow, priv, group); + if (ret) { + DPAA2_PMD_ERR( + "Move ipaddr before TCP_PORT_DST set failed"); + return -1; + } + + ret = dpaa2_flow_rule_data_set( + &priv->extract.qos_key_extract, + &flow->qos_rule, + NET_PROT_TCP, + NH_FLD_TCP_PORT_DST, + &spec->hdr.dst_port, + &mask->hdr.dst_port, + NH_FLD_TCP_PORT_SIZE); + if (ret) { + DPAA2_PMD_ERR( + "QoS NH_FLD_TCP_PORT_DST rule data set failed"); + return -1; + } - mask_iova = flow->rule.mask_iova + flow->key_size; - memcpy((void *)mask_iova, (const void *)(&mask->hdr.src_port), - sizeof(uint16_t)); - mask_iova += sizeof(uint16_t); - memcpy((void *)mask_iova, (const void *)(&mask->hdr.dst_port), - sizeof(uint16_t)); + ret = dpaa2_flow_rule_data_set( + &priv->extract.tc_key_extract[group], + &flow->fs_rule, + NET_PROT_TCP, + NH_FLD_TCP_PORT_DST, + &spec->hdr.dst_port, + &mask->hdr.dst_port, + NH_FLD_TCP_PORT_SIZE); + if (ret) { + DPAA2_PMD_ERR( + "FS NH_FLD_TCP_PORT_DST rule data set failed"); + return -1; + } + } - flow->key_size += 2 * sizeof(uint16_t); + (*device_configured) |= local_cfg; - return device_configured; + return 0; } static int @@ -1008,12 +2041,11 @@ dpaa2_configure_flow_sctp(struct rte_flow *flow, const struct rte_flow_attr *attr, const struct rte_flow_item *pattern, const struct rte_flow_action actions[] __rte_unused, - struct rte_flow_error *error __rte_unused) + struct rte_flow_error *error __rte_unused, + int *device_configured) { - int index, j = 0; - size_t key_iova; - size_t mask_iova; - int device_configured = 0, entry_found = 0; + int index, ret; + int local_cfg = 0; uint32_t group; const struct rte_flow_item_sctp *spec, *mask; @@ -1022,116 +2054,218 @@ dpaa2_configure_flow_sctp(struct rte_flow *flow, group = attr->group; - /* DPAA2 platform has a limitation that extract parameter can not be */ - /* more than DPKG_MAX_NUM_OF_EXTRACTS. Verify this limitation too. */ - if (priv->pattern[8].item_count >= DPKG_MAX_NUM_OF_EXTRACTS) { - DPAA2_PMD_ERR("Maximum limit for different pattern type = %d\n", - DPKG_MAX_NUM_OF_EXTRACTS); - return -ENOTSUP; - } + /* Parse pattern list to get the matching parameters */ + spec = (const struct rte_flow_item_sctp *)pattern->spec; + last = (const struct rte_flow_item_sctp *)pattern->last; + mask = (const struct rte_flow_item_sctp *) + (pattern->mask ? 
pattern->mask : default_mask); - if (priv->pattern[group].item_count >= DPKG_MAX_NUM_OF_EXTRACTS) { - DPAA2_PMD_ERR("Maximum limit for different pattern type = %d\n", - DPKG_MAX_NUM_OF_EXTRACTS); - return -ENOTSUP; - } + /* Get traffic class index and flow id to be configured */ + flow->tc_id = group; + flow->tc_index = attr->priority; + + if (!spec || !mc_l4_port_identification) { + struct proto_discrimination proto; + + index = dpaa2_flow_extract_search( + &priv->extract.qos_key_extract.dpkg, + NET_PROT_IP, NH_FLD_IP_PROTO); + if (index < 0) { + ret = dpaa2_flow_proto_discrimination_extract( + &priv->extract.qos_key_extract, + DPAA2_FLOW_ITEM_TYPE_GENERIC_IP); + if (ret) { + DPAA2_PMD_ERR( + "QoS Extract IP protocol to discriminate SCTP failed."); - for (j = 0; j < priv->pattern[8].item_count; j++) { - if (priv->pattern[8].pattern_type[j] != pattern->type) { - continue; - } else { - entry_found = 1; - break; + return -1; + } + local_cfg |= DPAA2_QOS_TABLE_RECONFIGURE; } - } - if (!entry_found) { - priv->pattern[8].pattern_type[j] = pattern->type; - priv->pattern[8].item_count++; - device_configured |= DPAA2_QOS_TABLE_RECONFIGURE; - } + index = dpaa2_flow_extract_search( + &priv->extract.tc_key_extract[group].dpkg, + NET_PROT_IP, NH_FLD_IP_PROTO); + if (index < 0) { + ret = dpaa2_flow_proto_discrimination_extract( + &priv->extract.tc_key_extract[group], + DPAA2_FLOW_ITEM_TYPE_GENERIC_IP); + if (ret) { + DPAA2_PMD_ERR( + "FS Extract IP protocol to discriminate SCTP failed."); - entry_found = 0; - for (j = 0; j < priv->pattern[group].item_count; j++) { - if (priv->pattern[group].pattern_type[j] != pattern->type) { - continue; - } else { - entry_found = 1; - break; + return -1; + } + local_cfg |= DPAA2_FS_TABLE_RECONFIGURE; } - } - if (!entry_found) { - priv->pattern[group].pattern_type[j] = pattern->type; - priv->pattern[group].item_count++; - device_configured |= DPAA2_FS_TABLE_RECONFIGURE; + ret = dpaa2_flow_rule_move_ipaddr_tail(flow, priv, group); + if (ret) { + DPAA2_PMD_ERR( + "Move ipaddr before SCTP discrimination set failed"); + return -1; + } + + proto.type = DPAA2_FLOW_ITEM_TYPE_GENERIC_IP; + proto.ip_proto = IPPROTO_SCTP; + ret = dpaa2_flow_proto_discrimination_rule(priv, flow, + proto, group); + if (ret) { + DPAA2_PMD_ERR("SCTP discrimination rule set failed"); + return -1; + } + + (*device_configured) |= local_cfg; + + if (!spec) + return 0; } - /* Get traffic class index and flow id to be configured */ - flow->tc_id = group; - flow->index = attr->priority; - - if (device_configured & DPAA2_QOS_TABLE_RECONFIGURE) { - index = priv->extract.qos_key_cfg.num_extracts; - priv->extract.qos_key_cfg.extracts[index].type = - DPKG_EXTRACT_FROM_HDR; - priv->extract.qos_key_cfg.extracts[index].extract.from_hdr.type = DPKG_FULL_FIELD; - priv->extract.qos_key_cfg.extracts[index].extract.from_hdr.prot = NET_PROT_SCTP; - priv->extract.qos_key_cfg.extracts[index].extract.from_hdr.field = NH_FLD_SCTP_PORT_SRC; - index++; - - priv->extract.qos_key_cfg.extracts[index].type = - DPKG_EXTRACT_FROM_HDR; - priv->extract.qos_key_cfg.extracts[index].extract.from_hdr.type = DPKG_FULL_FIELD; - priv->extract.qos_key_cfg.extracts[index].extract.from_hdr.prot = NET_PROT_SCTP; - priv->extract.qos_key_cfg.extracts[index].extract.from_hdr.field = NH_FLD_SCTP_PORT_DST; - index++; - - priv->extract.qos_key_cfg.num_extracts = index; - } - - if (device_configured & DPAA2_FS_TABLE_RECONFIGURE) { - index = priv->extract.fs_key_cfg[group].num_extracts; - priv->extract.fs_key_cfg[group].extracts[index].type = - 
DPKG_EXTRACT_FROM_HDR; - priv->extract.fs_key_cfg[group].extracts[index].extract.from_hdr.type = DPKG_FULL_FIELD; - priv->extract.fs_key_cfg[group].extracts[index].extract.from_hdr.prot = NET_PROT_SCTP; - priv->extract.fs_key_cfg[group].extracts[index].extract.from_hdr.field = NH_FLD_SCTP_PORT_SRC; - index++; - - priv->extract.fs_key_cfg[group].extracts[index].type = - DPKG_EXTRACT_FROM_HDR; - priv->extract.fs_key_cfg[group].extracts[index].extract.from_hdr.type = DPKG_FULL_FIELD; - priv->extract.fs_key_cfg[group].extracts[index].extract.from_hdr.prot = NET_PROT_SCTP; - priv->extract.fs_key_cfg[group].extracts[index].extract.from_hdr.field = NH_FLD_SCTP_PORT_DST; - index++; - - priv->extract.fs_key_cfg[group].num_extracts = index; + if (mask->hdr.src_port) { + index = dpaa2_flow_extract_search( + &priv->extract.qos_key_extract.dpkg, + NET_PROT_SCTP, NH_FLD_SCTP_PORT_SRC); + if (index < 0) { + ret = dpaa2_flow_extract_add( + &priv->extract.qos_key_extract, + NET_PROT_SCTP, + NH_FLD_SCTP_PORT_SRC, + NH_FLD_SCTP_PORT_SIZE); + if (ret) { + DPAA2_PMD_ERR("QoS Extract add SCTP_SRC failed."); + + return -1; + } + local_cfg |= DPAA2_QOS_TABLE_RECONFIGURE; + } + + index = dpaa2_flow_extract_search( + &priv->extract.tc_key_extract[group].dpkg, + NET_PROT_SCTP, NH_FLD_SCTP_PORT_SRC); + if (index < 0) { + ret = dpaa2_flow_extract_add( + &priv->extract.tc_key_extract[group], + NET_PROT_SCTP, + NH_FLD_SCTP_PORT_SRC, + NH_FLD_SCTP_PORT_SIZE); + if (ret) { + DPAA2_PMD_ERR("FS Extract add SCTP_SRC failed."); + + return -1; + } + local_cfg |= DPAA2_FS_TABLE_RECONFIGURE; + } + + ret = dpaa2_flow_rule_move_ipaddr_tail(flow, priv, group); + if (ret) { + DPAA2_PMD_ERR( + "Move ipaddr before SCTP_PORT_SRC set failed"); + return -1; + } + + ret = dpaa2_flow_rule_data_set( + &priv->extract.qos_key_extract, + &flow->qos_rule, + NET_PROT_SCTP, + NH_FLD_SCTP_PORT_SRC, + &spec->hdr.src_port, + &mask->hdr.src_port, + NH_FLD_SCTP_PORT_SIZE); + if (ret) { + DPAA2_PMD_ERR( + "QoS NH_FLD_SCTP_PORT_SRC rule data set failed"); + return -1; + } + + ret = dpaa2_flow_rule_data_set( + &priv->extract.tc_key_extract[group], + &flow->fs_rule, + NET_PROT_SCTP, + NH_FLD_SCTP_PORT_SRC, + &spec->hdr.src_port, + &mask->hdr.src_port, + NH_FLD_SCTP_PORT_SIZE); + if (ret) { + DPAA2_PMD_ERR( + "FS NH_FLD_SCTP_PORT_SRC rule data set failed"); + return -1; + } } - /* Parse pattern list to get the matching parameters */ - spec = (const struct rte_flow_item_sctp *)pattern->spec; - last = (const struct rte_flow_item_sctp *)pattern->last; - mask = (const struct rte_flow_item_sctp *) - (pattern->mask ? 
pattern->mask : default_mask); + if (mask->hdr.dst_port) { + index = dpaa2_flow_extract_search( + &priv->extract.qos_key_extract.dpkg, + NET_PROT_SCTP, NH_FLD_SCTP_PORT_DST); + if (index < 0) { + ret = dpaa2_flow_extract_add( + &priv->extract.qos_key_extract, + NET_PROT_SCTP, + NH_FLD_SCTP_PORT_DST, + NH_FLD_SCTP_PORT_SIZE); + if (ret) { + DPAA2_PMD_ERR("QoS Extract add SCTP_DST failed."); + + return -1; + } + local_cfg |= DPAA2_QOS_TABLE_RECONFIGURE; + } + + index = dpaa2_flow_extract_search( + &priv->extract.tc_key_extract[group].dpkg, + NET_PROT_SCTP, NH_FLD_SCTP_PORT_DST); + if (index < 0) { + ret = dpaa2_flow_extract_add( + &priv->extract.tc_key_extract[group], + NET_PROT_SCTP, + NH_FLD_SCTP_PORT_DST, + NH_FLD_SCTP_PORT_SIZE); + if (ret) { + DPAA2_PMD_ERR("FS Extract add SCTP_DST failed."); + + return -1; + } + local_cfg |= DPAA2_FS_TABLE_RECONFIGURE; + } + + ret = dpaa2_flow_rule_move_ipaddr_tail(flow, priv, group); + if (ret) { + DPAA2_PMD_ERR( + "Move ipaddr before SCTP_PORT_DST set failed"); + return -1; + } - key_iova = flow->rule.key_iova + flow->key_size; - memcpy((void *)key_iova, (const void *)(&spec->hdr.src_port), - sizeof(uint16_t)); - key_iova += sizeof(uint16_t); - memcpy((void *)key_iova, (const void *)(&spec->hdr.dst_port), - sizeof(uint16_t)); + ret = dpaa2_flow_rule_data_set( + &priv->extract.qos_key_extract, + &flow->qos_rule, + NET_PROT_SCTP, + NH_FLD_SCTP_PORT_DST, + &spec->hdr.dst_port, + &mask->hdr.dst_port, + NH_FLD_SCTP_PORT_SIZE); + if (ret) { + DPAA2_PMD_ERR( + "QoS NH_FLD_SCTP_PORT_DST rule data set failed"); + return -1; + } - mask_iova = flow->rule.mask_iova + flow->key_size; - memcpy((void *)mask_iova, (const void *)(&mask->hdr.src_port), - sizeof(uint16_t)); - mask_iova += sizeof(uint16_t); - memcpy((void *)mask_iova, (const void *)(&mask->hdr.dst_port), - sizeof(uint16_t)); + ret = dpaa2_flow_rule_data_set( + &priv->extract.tc_key_extract[group], + &flow->fs_rule, + NET_PROT_SCTP, + NH_FLD_SCTP_PORT_DST, + &spec->hdr.dst_port, + &mask->hdr.dst_port, + NH_FLD_SCTP_PORT_SIZE); + if (ret) { + DPAA2_PMD_ERR( + "FS NH_FLD_SCTP_PORT_DST rule data set failed"); + return -1; + } + } - flow->key_size += 2 * sizeof(uint16_t); + (*device_configured) |= local_cfg; - return device_configured; + return 0; } static int @@ -1140,12 +2274,11 @@ dpaa2_configure_flow_gre(struct rte_flow *flow, const struct rte_flow_attr *attr, const struct rte_flow_item *pattern, const struct rte_flow_action actions[] __rte_unused, - struct rte_flow_error *error __rte_unused) + struct rte_flow_error *error __rte_unused, + int *device_configured) { - int index, j = 0; - size_t key_iova; - size_t mask_iova; - int device_configured = 0, entry_found = 0; + int index, ret; + int local_cfg = 0; uint32_t group; const struct rte_flow_item_gre *spec, *mask; @@ -1154,96 +2287,413 @@ dpaa2_configure_flow_gre(struct rte_flow *flow, group = attr->group; - /* DPAA2 platform has a limitation that extract parameter can not be */ - /* more than DPKG_MAX_NUM_OF_EXTRACTS. Verify this limitation too. */ - if (priv->pattern[8].item_count >= DPKG_MAX_NUM_OF_EXTRACTS) { - DPAA2_PMD_ERR("Maximum limit for different pattern type = %d\n", - DPKG_MAX_NUM_OF_EXTRACTS); - return -ENOTSUP; - } + /* Parse pattern list to get the matching parameters */ + spec = (const struct rte_flow_item_gre *)pattern->spec; + last = (const struct rte_flow_item_gre *)pattern->last; + mask = (const struct rte_flow_item_gre *) + (pattern->mask ? 
pattern->mask : default_mask); + + /* Get traffic class index and flow id to be configured */ + flow->tc_id = group; + flow->tc_index = attr->priority; + + if (!spec) { + struct proto_discrimination proto; + + index = dpaa2_flow_extract_search( + &priv->extract.qos_key_extract.dpkg, + NET_PROT_IP, NH_FLD_IP_PROTO); + if (index < 0) { + ret = dpaa2_flow_proto_discrimination_extract( + &priv->extract.qos_key_extract, + DPAA2_FLOW_ITEM_TYPE_GENERIC_IP); + if (ret) { + DPAA2_PMD_ERR( + "QoS Extract IP protocol to discriminate GRE failed."); + + return -1; + } + local_cfg |= DPAA2_QOS_TABLE_RECONFIGURE; + } + + index = dpaa2_flow_extract_search( + &priv->extract.tc_key_extract[group].dpkg, + NET_PROT_IP, NH_FLD_IP_PROTO); + if (index < 0) { + ret = dpaa2_flow_proto_discrimination_extract( + &priv->extract.tc_key_extract[group], + DPAA2_FLOW_ITEM_TYPE_GENERIC_IP); + if (ret) { + DPAA2_PMD_ERR( + "FS Extract IP protocol to discriminate GRE failed."); + + return -1; + } + local_cfg |= DPAA2_FS_TABLE_RECONFIGURE; + } - if (priv->pattern[group].item_count >= DPKG_MAX_NUM_OF_EXTRACTS) { - DPAA2_PMD_ERR("Maximum limit for different pattern type = %d\n", - DPKG_MAX_NUM_OF_EXTRACTS); - return -ENOTSUP; + ret = dpaa2_flow_rule_move_ipaddr_tail(flow, priv, group); + if (ret) { + DPAA2_PMD_ERR( + "Move IP addr before GRE discrimination set failed"); + return -1; + } + + proto.type = DPAA2_FLOW_ITEM_TYPE_GENERIC_IP; + proto.ip_proto = IPPROTO_GRE; + ret = dpaa2_flow_proto_discrimination_rule(priv, flow, + proto, group); + if (ret) { + DPAA2_PMD_ERR("GRE discrimination rule set failed"); + return -1; + } + + (*device_configured) |= local_cfg; + + return 0; } - for (j = 0; j < priv->pattern[8].item_count; j++) { - if (priv->pattern[8].pattern_type[j] != pattern->type) { - continue; - } else { - entry_found = 1; - break; + if (!mask->protocol) + return 0; + + index = dpaa2_flow_extract_search( + &priv->extract.qos_key_extract.dpkg, + NET_PROT_GRE, NH_FLD_GRE_TYPE); + if (index < 0) { + ret = dpaa2_flow_extract_add( + &priv->extract.qos_key_extract, + NET_PROT_GRE, + NH_FLD_GRE_TYPE, + sizeof(rte_be16_t)); + if (ret) { + DPAA2_PMD_ERR("QoS Extract add GRE_TYPE failed."); + + return -1; + } + local_cfg |= DPAA2_QOS_TABLE_RECONFIGURE; + } + + index = dpaa2_flow_extract_search( + &priv->extract.tc_key_extract[group].dpkg, + NET_PROT_GRE, NH_FLD_GRE_TYPE); + if (index < 0) { + ret = dpaa2_flow_extract_add( + &priv->extract.tc_key_extract[group], + NET_PROT_GRE, + NH_FLD_GRE_TYPE, + sizeof(rte_be16_t)); + if (ret) { + DPAA2_PMD_ERR("FS Extract add GRE_TYPE failed."); + + return -1; } + local_cfg |= DPAA2_FS_TABLE_RECONFIGURE; } - if (!entry_found) { - priv->pattern[8].pattern_type[j] = pattern->type; - priv->pattern[8].item_count++; - device_configured |= DPAA2_QOS_TABLE_RECONFIGURE; + ret = dpaa2_flow_rule_move_ipaddr_tail(flow, priv, group); + if (ret) { + DPAA2_PMD_ERR( + "Move ipaddr before GRE_TYPE set failed"); + return -1; + } + + ret = dpaa2_flow_rule_data_set( + &priv->extract.qos_key_extract, + &flow->qos_rule, + NET_PROT_GRE, + NH_FLD_GRE_TYPE, + &spec->protocol, + &mask->protocol, + sizeof(rte_be16_t)); + if (ret) { + DPAA2_PMD_ERR( + "QoS NH_FLD_GRE_TYPE rule data set failed"); + return -1; + } + + ret = dpaa2_flow_rule_data_set( + &priv->extract.tc_key_extract[group], + &flow->fs_rule, + NET_PROT_GRE, + NH_FLD_GRE_TYPE, + &spec->protocol, + &mask->protocol, + sizeof(rte_be16_t)); + if (ret) { + DPAA2_PMD_ERR( + "FS NH_FLD_GRE_TYPE rule data set failed"); + return -1; } - entry_found = 0; - for (j = 0; 
j < priv->pattern[group].item_count; j++) { - if (priv->pattern[group].pattern_type[j] != pattern->type) { + (*device_configured) |= local_cfg; + + return 0; +} + +/* The existing QoS/FS entry with IP address(es) + * needs update after + * new extract(s) are inserted before IP + * address(es) extract(s). + */ +static int +dpaa2_flow_entry_update( + struct dpaa2_dev_priv *priv, uint8_t tc_id) +{ + struct rte_flow *curr = LIST_FIRST(&priv->flows); + struct fsl_mc_io *dpni = (struct fsl_mc_io *)priv->hw; + int ret; + int qos_ipsrc_offset = -1, qos_ipdst_offset = -1; + int fs_ipsrc_offset = -1, fs_ipdst_offset = -1; + struct dpaa2_key_extract *qos_key_extract = + &priv->extract.qos_key_extract; + struct dpaa2_key_extract *tc_key_extract = + &priv->extract.tc_key_extract[tc_id]; + char ipsrc_key[NH_FLD_IPV6_ADDR_SIZE]; + char ipdst_key[NH_FLD_IPV6_ADDR_SIZE]; + char ipsrc_mask[NH_FLD_IPV6_ADDR_SIZE]; + char ipdst_mask[NH_FLD_IPV6_ADDR_SIZE]; + int extend = -1, extend1, size; + + while (curr) { + if (curr->ipaddr_rule.ipaddr_type == + FLOW_NONE_IPADDR) { + curr = LIST_NEXT(curr, next); continue; + } + + if (curr->ipaddr_rule.ipaddr_type == + FLOW_IPV4_ADDR) { + qos_ipsrc_offset = + qos_key_extract->key_info.ipv4_src_offset; + qos_ipdst_offset = + qos_key_extract->key_info.ipv4_dst_offset; + fs_ipsrc_offset = + tc_key_extract->key_info.ipv4_src_offset; + fs_ipdst_offset = + tc_key_extract->key_info.ipv4_dst_offset; + size = NH_FLD_IPV4_ADDR_SIZE; } else { - entry_found = 1; - break; + qos_ipsrc_offset = + qos_key_extract->key_info.ipv6_src_offset; + qos_ipdst_offset = + qos_key_extract->key_info.ipv6_dst_offset; + fs_ipsrc_offset = + tc_key_extract->key_info.ipv6_src_offset; + fs_ipdst_offset = + tc_key_extract->key_info.ipv6_dst_offset; + size = NH_FLD_IPV6_ADDR_SIZE; } - } - if (!entry_found) { - priv->pattern[group].pattern_type[j] = pattern->type; - priv->pattern[group].item_count++; - device_configured |= DPAA2_FS_TABLE_RECONFIGURE; - } + ret = dpni_remove_qos_entry(dpni, CMD_PRI_LOW, + priv->token, &curr->qos_rule); + if (ret) { + DPAA2_PMD_ERR("Qos entry remove failed."); + return -1; + } - /* Get traffic class index and flow id to be configured */ - flow->tc_id = group; - flow->index = attr->priority; + extend = -1; + + if (curr->ipaddr_rule.qos_ipsrc_offset >= 0) { + RTE_ASSERT(qos_ipsrc_offset >= + curr->ipaddr_rule.qos_ipsrc_offset); + extend1 = qos_ipsrc_offset - + curr->ipaddr_rule.qos_ipsrc_offset; + if (extend >= 0) + RTE_ASSERT(extend == extend1); + else + extend = extend1; + + memcpy(ipsrc_key, + (char *)(size_t)curr->qos_rule.key_iova + + curr->ipaddr_rule.qos_ipsrc_offset, + size); + memset((char *)(size_t)curr->qos_rule.key_iova + + curr->ipaddr_rule.qos_ipsrc_offset, + 0, size); + + memcpy(ipsrc_mask, + (char *)(size_t)curr->qos_rule.mask_iova + + curr->ipaddr_rule.qos_ipsrc_offset, + size); + memset((char *)(size_t)curr->qos_rule.mask_iova + + curr->ipaddr_rule.qos_ipsrc_offset, + 0, size); + + curr->ipaddr_rule.qos_ipsrc_offset = qos_ipsrc_offset; + } - if (device_configured & DPAA2_QOS_TABLE_RECONFIGURE) { - index = priv->extract.qos_key_cfg.num_extracts; - priv->extract.qos_key_cfg.extracts[index].type = - DPKG_EXTRACT_FROM_HDR; - priv->extract.qos_key_cfg.extracts[index].extract.from_hdr.type = DPKG_FULL_FIELD; - priv->extract.qos_key_cfg.extracts[index].extract.from_hdr.prot = NET_PROT_GRE; - priv->extract.qos_key_cfg.extracts[index].extract.from_hdr.field = NH_FLD_GRE_TYPE; - index++; + if (curr->ipaddr_rule.qos_ipdst_offset >= 0) { + RTE_ASSERT(qos_ipdst_offset >= + 
curr->ipaddr_rule.qos_ipdst_offset); + extend1 = qos_ipdst_offset - + curr->ipaddr_rule.qos_ipdst_offset; + if (extend >= 0) + RTE_ASSERT(extend == extend1); + else + extend = extend1; + + memcpy(ipdst_key, + (char *)(size_t)curr->qos_rule.key_iova + + curr->ipaddr_rule.qos_ipdst_offset, + size); + memset((char *)(size_t)curr->qos_rule.key_iova + + curr->ipaddr_rule.qos_ipdst_offset, + 0, size); + + memcpy(ipdst_mask, + (char *)(size_t)curr->qos_rule.mask_iova + + curr->ipaddr_rule.qos_ipdst_offset, + size); + memset((char *)(size_t)curr->qos_rule.mask_iova + + curr->ipaddr_rule.qos_ipdst_offset, + 0, size); + + curr->ipaddr_rule.qos_ipdst_offset = qos_ipdst_offset; + } - priv->extract.qos_key_cfg.num_extracts = index; - } + if (curr->ipaddr_rule.qos_ipsrc_offset >= 0) { + memcpy((char *)(size_t)curr->qos_rule.key_iova + + curr->ipaddr_rule.qos_ipsrc_offset, + ipsrc_key, + size); + memcpy((char *)(size_t)curr->qos_rule.mask_iova + + curr->ipaddr_rule.qos_ipsrc_offset, + ipsrc_mask, + size); + } + if (curr->ipaddr_rule.qos_ipdst_offset >= 0) { + memcpy((char *)(size_t)curr->qos_rule.key_iova + + curr->ipaddr_rule.qos_ipdst_offset, + ipdst_key, + size); + memcpy((char *)(size_t)curr->qos_rule.mask_iova + + curr->ipaddr_rule.qos_ipdst_offset, + ipdst_mask, + size); + } - if (device_configured & DPAA2_FS_TABLE_RECONFIGURE) { - index = priv->extract.fs_key_cfg[group].num_extracts; - priv->extract.fs_key_cfg[group].extracts[index].type = - DPKG_EXTRACT_FROM_HDR; - priv->extract.fs_key_cfg[group].extracts[index].extract.from_hdr.type = DPKG_FULL_FIELD; - priv->extract.fs_key_cfg[group].extracts[index].extract.from_hdr.prot = NET_PROT_GRE; - priv->extract.fs_key_cfg[group].extracts[index].extract.from_hdr.field = NH_FLD_GRE_TYPE; - index++; + if (extend >= 0) + curr->qos_rule.key_size += extend; - priv->extract.fs_key_cfg[group].num_extracts = index; - } + ret = dpni_add_qos_entry(dpni, CMD_PRI_LOW, + priv->token, &curr->qos_rule, + curr->tc_id, curr->qos_index, + 0, 0); + if (ret) { + DPAA2_PMD_ERR("Qos entry update failed."); + return -1; + } - /* Parse pattern list to get the matching parameters */ - spec = (const struct rte_flow_item_gre *)pattern->spec; - last = (const struct rte_flow_item_gre *)pattern->last; - mask = (const struct rte_flow_item_gre *) - (pattern->mask ? 
pattern->mask : default_mask); + if (curr->action != RTE_FLOW_ACTION_TYPE_QUEUE) { + curr = LIST_NEXT(curr, next); + continue; + } + + extend = -1; + + ret = dpni_remove_fs_entry(dpni, CMD_PRI_LOW, + priv->token, curr->tc_id, &curr->fs_rule); + if (ret) { + DPAA2_PMD_ERR("FS entry remove failed."); + return -1; + } + + if (curr->ipaddr_rule.fs_ipsrc_offset >= 0 && + tc_id == curr->tc_id) { + RTE_ASSERT(fs_ipsrc_offset >= + curr->ipaddr_rule.fs_ipsrc_offset); + extend1 = fs_ipsrc_offset - + curr->ipaddr_rule.fs_ipsrc_offset; + if (extend >= 0) + RTE_ASSERT(extend == extend1); + else + extend = extend1; + + memcpy(ipsrc_key, + (char *)(size_t)curr->fs_rule.key_iova + + curr->ipaddr_rule.fs_ipsrc_offset, + size); + memset((char *)(size_t)curr->fs_rule.key_iova + + curr->ipaddr_rule.fs_ipsrc_offset, + 0, size); + + memcpy(ipsrc_mask, + (char *)(size_t)curr->fs_rule.mask_iova + + curr->ipaddr_rule.fs_ipsrc_offset, + size); + memset((char *)(size_t)curr->fs_rule.mask_iova + + curr->ipaddr_rule.fs_ipsrc_offset, + 0, size); + + curr->ipaddr_rule.fs_ipsrc_offset = fs_ipsrc_offset; + } + + if (curr->ipaddr_rule.fs_ipdst_offset >= 0 && + tc_id == curr->tc_id) { + RTE_ASSERT(fs_ipdst_offset >= + curr->ipaddr_rule.fs_ipdst_offset); + extend1 = fs_ipdst_offset - + curr->ipaddr_rule.fs_ipdst_offset; + if (extend >= 0) + RTE_ASSERT(extend == extend1); + else + extend = extend1; + + memcpy(ipdst_key, + (char *)(size_t)curr->fs_rule.key_iova + + curr->ipaddr_rule.fs_ipdst_offset, + size); + memset((char *)(size_t)curr->fs_rule.key_iova + + curr->ipaddr_rule.fs_ipdst_offset, + 0, size); + + memcpy(ipdst_mask, + (char *)(size_t)curr->fs_rule.mask_iova + + curr->ipaddr_rule.fs_ipdst_offset, + size); + memset((char *)(size_t)curr->fs_rule.mask_iova + + curr->ipaddr_rule.fs_ipdst_offset, + 0, size); + + curr->ipaddr_rule.fs_ipdst_offset = fs_ipdst_offset; + } + + if (curr->ipaddr_rule.fs_ipsrc_offset >= 0) { + memcpy((char *)(size_t)curr->fs_rule.key_iova + + curr->ipaddr_rule.fs_ipsrc_offset, + ipsrc_key, + size); + memcpy((char *)(size_t)curr->fs_rule.mask_iova + + curr->ipaddr_rule.fs_ipsrc_offset, + ipsrc_mask, + size); + } + if (curr->ipaddr_rule.fs_ipdst_offset >= 0) { + memcpy((char *)(size_t)curr->fs_rule.key_iova + + curr->ipaddr_rule.fs_ipdst_offset, + ipdst_key, + size); + memcpy((char *)(size_t)curr->fs_rule.mask_iova + + curr->ipaddr_rule.fs_ipdst_offset, + ipdst_mask, + size); + } - key_iova = flow->rule.key_iova + flow->key_size; - memcpy((void *)key_iova, (const void *)(&spec->protocol), - sizeof(rte_be16_t)); + if (extend >= 0) + curr->fs_rule.key_size += extend; - mask_iova = flow->rule.mask_iova + flow->key_size; - memcpy((void *)mask_iova, (const void *)(&mask->protocol), - sizeof(rte_be16_t)); + ret = dpni_add_fs_entry(dpni, CMD_PRI_LOW, + priv->token, curr->tc_id, curr->fs_index, + &curr->fs_rule, &curr->action_cfg); + if (ret) { + DPAA2_PMD_ERR("FS entry update failed."); + return -1; + } - flow->key_size += sizeof(rte_be16_t); + curr = LIST_NEXT(curr, next); + } - return device_configured; + return 0; } static int @@ -1262,7 +2712,6 @@ dpaa2_generic_flow_set(struct rte_flow *flow, struct dpni_attr nic_attr; struct dpni_rx_tc_dist_cfg tc_cfg; struct dpni_qos_tbl_cfg qos_cfg; - struct dpkg_profile_cfg key_cfg; struct dpni_fs_action_cfg action; struct dpaa2_dev_priv *priv = dev->data->dev_private; struct fsl_mc_io *dpni = (struct fsl_mc_io *)priv->hw; @@ -1273,75 +2722,77 @@ dpaa2_generic_flow_set(struct rte_flow *flow, while (!end_of_list) { switch (pattern[i].type) { case 
RTE_FLOW_ITEM_TYPE_ETH: - is_keycfg_configured = dpaa2_configure_flow_eth(flow, - dev, - attr, - &pattern[i], - actions, - error); + ret = dpaa2_configure_flow_eth(flow, + dev, attr, &pattern[i], actions, error, + &is_keycfg_configured); + if (ret) { + DPAA2_PMD_ERR("ETH flow configuration failed!"); + return ret; + } break; case RTE_FLOW_ITEM_TYPE_VLAN: - is_keycfg_configured = dpaa2_configure_flow_vlan(flow, - dev, - attr, - &pattern[i], - actions, - error); + ret = dpaa2_configure_flow_vlan(flow, + dev, attr, &pattern[i], actions, error, + &is_keycfg_configured); + if (ret) { + DPAA2_PMD_ERR("vLan flow configuration failed!"); + return ret; + } break; case RTE_FLOW_ITEM_TYPE_IPV4: - is_keycfg_configured = dpaa2_configure_flow_ipv4(flow, - dev, - attr, - &pattern[i], - actions, - error); - break; case RTE_FLOW_ITEM_TYPE_IPV6: - is_keycfg_configured = dpaa2_configure_flow_ipv6(flow, - dev, - attr, - &pattern[i], - actions, - error); + ret = dpaa2_configure_flow_generic_ip(flow, + dev, attr, &pattern[i], actions, error, + &is_keycfg_configured); + if (ret) { + DPAA2_PMD_ERR("IP flow configuration failed!"); + return ret; + } break; case RTE_FLOW_ITEM_TYPE_ICMP: - is_keycfg_configured = dpaa2_configure_flow_icmp(flow, - dev, - attr, - &pattern[i], - actions, - error); + ret = dpaa2_configure_flow_icmp(flow, + dev, attr, &pattern[i], actions, error, + &is_keycfg_configured); + if (ret) { + DPAA2_PMD_ERR("ICMP flow configuration failed!"); + return ret; + } break; case RTE_FLOW_ITEM_TYPE_UDP: - is_keycfg_configured = dpaa2_configure_flow_udp(flow, - dev, - attr, - &pattern[i], - actions, - error); + ret = dpaa2_configure_flow_udp(flow, + dev, attr, &pattern[i], actions, error, + &is_keycfg_configured); + if (ret) { + DPAA2_PMD_ERR("UDP flow configuration failed!"); + return ret; + } break; case RTE_FLOW_ITEM_TYPE_TCP: - is_keycfg_configured = dpaa2_configure_flow_tcp(flow, - dev, - attr, - &pattern[i], - actions, - error); + ret = dpaa2_configure_flow_tcp(flow, + dev, attr, &pattern[i], actions, error, + &is_keycfg_configured); + if (ret) { + DPAA2_PMD_ERR("TCP flow configuration failed!"); + return ret; + } break; case RTE_FLOW_ITEM_TYPE_SCTP: - is_keycfg_configured = dpaa2_configure_flow_sctp(flow, - dev, attr, - &pattern[i], - actions, - error); + ret = dpaa2_configure_flow_sctp(flow, + dev, attr, &pattern[i], actions, error, + &is_keycfg_configured); + if (ret) { + DPAA2_PMD_ERR("SCTP flow configuration failed!"); + return ret; + } break; case RTE_FLOW_ITEM_TYPE_GRE: - is_keycfg_configured = dpaa2_configure_flow_gre(flow, - dev, - attr, - &pattern[i], - actions, - error); + ret = dpaa2_configure_flow_gre(flow, + dev, attr, &pattern[i], actions, error, + &is_keycfg_configured); + if (ret) { + DPAA2_PMD_ERR("GRE flow configuration failed!"); + return ret; + } break; case RTE_FLOW_ITEM_TYPE_END: end_of_list = 1; @@ -1365,8 +2816,8 @@ dpaa2_generic_flow_set(struct rte_flow *flow, memset(&action, 0, sizeof(struct dpni_fs_action_cfg)); action.flow_id = flow->flow_id; if (is_keycfg_configured & DPAA2_QOS_TABLE_RECONFIGURE) { - if (dpkg_prepare_key_cfg(&priv->extract.qos_key_cfg, - (uint8_t *)(size_t)priv->extract.qos_extract_param) < 0) { + if (dpkg_prepare_key_cfg(&priv->extract.qos_key_extract.dpkg, + (uint8_t *)(size_t)priv->extract.qos_extract_param) < 0) { DPAA2_PMD_ERR( "Unable to prepare extract parameters"); return -1; @@ -1377,7 +2828,7 @@ dpaa2_generic_flow_set(struct rte_flow *flow, qos_cfg.keep_entries = true; qos_cfg.key_cfg_iova = (size_t)priv->extract.qos_extract_param; ret = 
dpni_set_qos_table(dpni, CMD_PRI_LOW, - priv->token, &qos_cfg); + priv->token, &qos_cfg); if (ret < 0) { DPAA2_PMD_ERR( "Distribution cannot be configured.(%d)" @@ -1386,8 +2837,10 @@ dpaa2_generic_flow_set(struct rte_flow *flow, } } if (is_keycfg_configured & DPAA2_FS_TABLE_RECONFIGURE) { - if (dpkg_prepare_key_cfg(&priv->extract.fs_key_cfg[flow->tc_id], - (uint8_t *)(size_t)priv->extract.fs_extract_param[flow->tc_id]) < 0) { + if (dpkg_prepare_key_cfg( + &priv->extract.tc_key_extract[flow->tc_id].dpkg, + (uint8_t *)(size_t)priv->extract + .tc_extract_param[flow->tc_id]) < 0) { DPAA2_PMD_ERR( "Unable to prepare extract parameters"); return -1; @@ -1397,7 +2850,7 @@ dpaa2_generic_flow_set(struct rte_flow *flow, tc_cfg.dist_size = priv->nb_rx_queues / priv->num_rx_tc; tc_cfg.dist_mode = DPNI_DIST_MODE_FS; tc_cfg.key_cfg_iova = - (uint64_t)priv->extract.fs_extract_param[flow->tc_id]; + (uint64_t)priv->extract.tc_extract_param[flow->tc_id]; tc_cfg.fs_cfg.miss_action = DPNI_FS_MISS_DROP; tc_cfg.fs_cfg.keep_entries = true; ret = dpni_set_rx_tc_dist(dpni, CMD_PRI_LOW, @@ -1422,27 +2875,114 @@ dpaa2_generic_flow_set(struct rte_flow *flow, } action.flow_id = action.flow_id % nic_attr.num_rx_tcs; - index = flow->index + (flow->tc_id * nic_attr.fs_entries); - flow->rule.key_size = flow->key_size; + + if (!priv->qos_index) { + priv->qos_index = rte_zmalloc(0, + nic_attr.qos_entries, 64); + } + for (index = 0; index < nic_attr.qos_entries; index++) { + if (!priv->qos_index[index]) { + priv->qos_index[index] = 1; + break; + } + } + if (index >= nic_attr.qos_entries) { + DPAA2_PMD_ERR("QoS table with %d entries full", + nic_attr.qos_entries); + return -1; + } + flow->qos_rule.key_size = priv->extract + .qos_key_extract.key_info.key_total_size; + if (flow->ipaddr_rule.ipaddr_type == FLOW_IPV4_ADDR) { + if (flow->ipaddr_rule.qos_ipdst_offset >= + flow->ipaddr_rule.qos_ipsrc_offset) { + flow->qos_rule.key_size = + flow->ipaddr_rule.qos_ipdst_offset + + NH_FLD_IPV4_ADDR_SIZE; + } else { + flow->qos_rule.key_size = + flow->ipaddr_rule.qos_ipsrc_offset + + NH_FLD_IPV4_ADDR_SIZE; + } + } else if (flow->ipaddr_rule.ipaddr_type == FLOW_IPV6_ADDR) { + if (flow->ipaddr_rule.qos_ipdst_offset >= + flow->ipaddr_rule.qos_ipsrc_offset) { + flow->qos_rule.key_size = + flow->ipaddr_rule.qos_ipdst_offset + + NH_FLD_IPV6_ADDR_SIZE; + } else { + flow->qos_rule.key_size = + flow->ipaddr_rule.qos_ipsrc_offset + + NH_FLD_IPV6_ADDR_SIZE; + } + } ret = dpni_add_qos_entry(dpni, CMD_PRI_LOW, - priv->token, &flow->rule, + priv->token, &flow->qos_rule, flow->tc_id, index, 0, 0); if (ret < 0) { DPAA2_PMD_ERR( "Error in addnig entry to QoS table(%d)", ret); + priv->qos_index[index] = 0; return ret; } + flow->qos_index = index; /* Then Configure FS table */ + if (!priv->fs_index) { + priv->fs_index = rte_zmalloc(0, + nic_attr.fs_entries, 64); + } + for (index = 0; index < nic_attr.fs_entries; index++) { + if (!priv->fs_index[index]) { + priv->fs_index[index] = 1; + break; + } + } + if (index >= nic_attr.fs_entries) { + DPAA2_PMD_ERR("FS table with %d entries full", + nic_attr.fs_entries); + return -1; + } + flow->fs_rule.key_size = priv->extract + .tc_key_extract[attr->group].key_info.key_total_size; + if (flow->ipaddr_rule.ipaddr_type == + FLOW_IPV4_ADDR) { + if (flow->ipaddr_rule.fs_ipdst_offset >= + flow->ipaddr_rule.fs_ipsrc_offset) { + flow->fs_rule.key_size = + flow->ipaddr_rule.fs_ipdst_offset + + NH_FLD_IPV4_ADDR_SIZE; + } else { + flow->fs_rule.key_size = + flow->ipaddr_rule.fs_ipsrc_offset + + NH_FLD_IPV4_ADDR_SIZE; + } + } 
else if (flow->ipaddr_rule.ipaddr_type == + FLOW_IPV6_ADDR) { + if (flow->ipaddr_rule.fs_ipdst_offset >= + flow->ipaddr_rule.fs_ipsrc_offset) { + flow->fs_rule.key_size = + flow->ipaddr_rule.fs_ipdst_offset + + NH_FLD_IPV6_ADDR_SIZE; + } else { + flow->fs_rule.key_size = + flow->ipaddr_rule.fs_ipsrc_offset + + NH_FLD_IPV6_ADDR_SIZE; + } + } ret = dpni_add_fs_entry(dpni, CMD_PRI_LOW, priv->token, - flow->tc_id, flow->index, - &flow->rule, &action); + flow->tc_id, index, + &flow->fs_rule, &action); if (ret < 0) { DPAA2_PMD_ERR( "Error in adding entry to FS table(%d)", ret); + priv->fs_index[index] = 0; return ret; } + flow->fs_index = index; + memcpy(&flow->action_cfg, &action, + sizeof(struct dpni_fs_action_cfg)); break; case RTE_FLOW_ACTION_TYPE_RSS: ret = dpni_get_attributes(dpni, CMD_PRI_LOW, @@ -1465,7 +3005,7 @@ dpaa2_generic_flow_set(struct rte_flow *flow, flow->action = RTE_FLOW_ACTION_TYPE_RSS; ret = dpaa2_distset_to_dpkg_profile_cfg(rss_conf->types, - &key_cfg); + &priv->extract.tc_key_extract[flow->tc_id].dpkg); if (ret < 0) { DPAA2_PMD_ERR( "unable to set flow distribution.please check queue config\n"); @@ -1479,7 +3019,9 @@ dpaa2_generic_flow_set(struct rte_flow *flow, return -1; } - if (dpkg_prepare_key_cfg(&key_cfg, (uint8_t *)param) < 0) { + if (dpkg_prepare_key_cfg( + &priv->extract.tc_key_extract[flow->tc_id].dpkg, + (uint8_t *)param) < 0) { DPAA2_PMD_ERR( "Unable to prepare extract parameters"); rte_free((void *)param); @@ -1503,8 +3045,9 @@ dpaa2_generic_flow_set(struct rte_flow *flow, } rte_free((void *)param); - if (is_keycfg_configured & DPAA2_FS_TABLE_RECONFIGURE) { - if (dpkg_prepare_key_cfg(&priv->extract.qos_key_cfg, + if (is_keycfg_configured & DPAA2_QOS_TABLE_RECONFIGURE) { + if (dpkg_prepare_key_cfg( + &priv->extract.qos_key_extract.dpkg, (uint8_t *)(size_t)priv->extract.qos_extract_param) < 0) { DPAA2_PMD_ERR( "Unable to prepare extract parameters"); @@ -1514,29 +3057,47 @@ dpaa2_generic_flow_set(struct rte_flow *flow, sizeof(struct dpni_qos_tbl_cfg)); qos_cfg.discard_on_miss = true; qos_cfg.keep_entries = true; - qos_cfg.key_cfg_iova = (size_t)priv->extract.qos_extract_param; + qos_cfg.key_cfg_iova = + (size_t)priv->extract.qos_extract_param; ret = dpni_set_qos_table(dpni, CMD_PRI_LOW, priv->token, &qos_cfg); if (ret < 0) { DPAA2_PMD_ERR( - "Distribution can not be configured(%d)\n", + "Distribution can't be configured %d\n", ret); return -1; } } /* Add Rule into QoS table */ - index = flow->index + (flow->tc_id * nic_attr.fs_entries); - flow->rule.key_size = flow->key_size; + if (!priv->qos_index) { + priv->qos_index = rte_zmalloc(0, + nic_attr.qos_entries, 64); + } + for (index = 0; index < nic_attr.qos_entries; index++) { + if (!priv->qos_index[index]) { + priv->qos_index[index] = 1; + break; + } + } + if (index >= nic_attr.qos_entries) { + DPAA2_PMD_ERR("QoS table with %d entries full", + nic_attr.qos_entries); + return -1; + } + flow->qos_rule.key_size = + priv->extract.qos_key_extract.key_info.key_total_size; ret = dpni_add_qos_entry(dpni, CMD_PRI_LOW, priv->token, - &flow->rule, flow->tc_id, + &flow->qos_rule, flow->tc_id, index, 0, 0); if (ret < 0) { DPAA2_PMD_ERR( "Error in entry addition in QoS table(%d)", ret); + priv->qos_index[index] = 0; return ret; } + flow->qos_index = index; break; case RTE_FLOW_ACTION_TYPE_END: end_of_list = 1; @@ -1550,6 +3111,12 @@ dpaa2_generic_flow_set(struct rte_flow *flow, } if (!ret) { + ret = dpaa2_flow_entry_update(priv, flow->tc_id); + if (ret) { + DPAA2_PMD_ERR("Flow entry update failed."); + + return -1; + } /* 
New rules are inserted. */ if (!curr) { LIST_INSERT_HEAD(&priv->flows, flow, next); @@ -1625,15 +3192,15 @@ dpaa2_dev_update_default_mask(const struct rte_flow_item *pattern) } static inline int -dpaa2_dev_verify_patterns(struct dpaa2_dev_priv *dev_priv, - const struct rte_flow_item pattern[]) +dpaa2_dev_verify_patterns(const struct rte_flow_item pattern[]) { - unsigned int i, j, k, is_found = 0; + unsigned int i, j, is_found = 0; int ret = 0; for (j = 0; pattern[j].type != RTE_FLOW_ITEM_TYPE_END; j++) { for (i = 0; i < RTE_DIM(dpaa2_supported_pattern_type); i++) { - if (dpaa2_supported_pattern_type[i] == pattern[j].type) { + if (dpaa2_supported_pattern_type[i] + == pattern[j].type) { is_found = 1; break; } @@ -1653,18 +3220,6 @@ dpaa2_dev_verify_patterns(struct dpaa2_dev_priv *dev_priv, dpaa2_dev_update_default_mask(&pattern[j]); } - /* DPAA2 platform has a limitation that extract parameter can not be */ - /* more than DPKG_MAX_NUM_OF_EXTRACTS. Verify this limitation too. */ - for (i = 0; pattern[i].type != RTE_FLOW_ITEM_TYPE_END; i++) { - for (j = 0; j < MAX_TCS + 1; j++) { - for (k = 0; k < DPKG_MAX_NUM_OF_EXTRACTS; k++) { - if (dev_priv->pattern[j].pattern_type[k] == pattern[i].type) - break; - } - if (dev_priv->pattern[j].item_count >= DPKG_MAX_NUM_OF_EXTRACTS) - ret = -ENOTSUP; - } - } return ret; } @@ -1687,7 +3242,8 @@ dpaa2_dev_verify_actions(const struct rte_flow_action actions[]) } } for (j = 0; actions[j].type != RTE_FLOW_ACTION_TYPE_END; j++) { - if ((actions[j].type != RTE_FLOW_ACTION_TYPE_DROP) && (!actions[j].conf)) + if ((actions[j].type + != RTE_FLOW_ACTION_TYPE_DROP) && (!actions[j].conf)) ret = -EINVAL; } return ret; @@ -1729,7 +3285,7 @@ int dpaa2_flow_validate(struct rte_eth_dev *dev, goto not_valid_params; } /* Verify input pattern list */ - ret = dpaa2_dev_verify_patterns(priv, pattern); + ret = dpaa2_dev_verify_patterns(pattern); if (ret < 0) { DPAA2_PMD_ERR( "Invalid pattern list is given\n"); @@ -1763,28 +3319,54 @@ struct rte_flow *dpaa2_flow_create(struct rte_eth_dev *dev, size_t key_iova = 0, mask_iova = 0; int ret; - flow = rte_malloc(NULL, sizeof(struct rte_flow), RTE_CACHE_LINE_SIZE); + flow = rte_zmalloc(NULL, sizeof(struct rte_flow), RTE_CACHE_LINE_SIZE); if (!flow) { DPAA2_PMD_ERR("Failure to allocate memory for flow"); goto mem_failure; } /* Allocate DMA'ble memory to write the rules */ - key_iova = (size_t)rte_malloc(NULL, 256, 64); + key_iova = (size_t)rte_zmalloc(NULL, 256, 64); + if (!key_iova) { + DPAA2_PMD_ERR( + "Memory allocation failure for rule configration\n"); + goto mem_failure; + } + mask_iova = (size_t)rte_zmalloc(NULL, 256, 64); + if (!mask_iova) { + DPAA2_PMD_ERR( + "Memory allocation failure for rule configration\n"); + goto mem_failure; + } + + flow->qos_rule.key_iova = key_iova; + flow->qos_rule.mask_iova = mask_iova; + + /* Allocate DMA'ble memory to write the rules */ + key_iova = (size_t)rte_zmalloc(NULL, 256, 64); if (!key_iova) { DPAA2_PMD_ERR( - "Memory allocation failure for rule configuration\n"); + "Memory allocation failure for rule configration\n"); goto mem_failure; } - mask_iova = (size_t)rte_malloc(NULL, 256, 64); + mask_iova = (size_t)rte_zmalloc(NULL, 256, 64); if (!mask_iova) { DPAA2_PMD_ERR( - "Memory allocation failure for rule configuration\n"); + "Memory allocation failure for rule configration\n"); goto mem_failure; } - flow->rule.key_iova = key_iova; - flow->rule.mask_iova = mask_iova; - flow->key_size = 0; + flow->fs_rule.key_iova = key_iova; + flow->fs_rule.mask_iova = mask_iova; + + 
flow->ipaddr_rule.ipaddr_type = FLOW_NONE_IPADDR; + flow->ipaddr_rule.qos_ipsrc_offset = + IP_ADDRESS_OFFSET_INVALID; + flow->ipaddr_rule.qos_ipdst_offset = + IP_ADDRESS_OFFSET_INVALID; + flow->ipaddr_rule.fs_ipsrc_offset = + IP_ADDRESS_OFFSET_INVALID; + flow->ipaddr_rule.fs_ipdst_offset = + IP_ADDRESS_OFFSET_INVALID; switch (dpaa2_filter_type) { case RTE_ETH_FILTER_GENERIC: @@ -1832,25 +3414,27 @@ int dpaa2_flow_destroy(struct rte_eth_dev *dev, case RTE_FLOW_ACTION_TYPE_QUEUE: /* Remove entry from QoS table first */ ret = dpni_remove_qos_entry(dpni, CMD_PRI_LOW, priv->token, - &flow->rule); + &flow->qos_rule); if (ret < 0) { DPAA2_PMD_ERR( "Error in adding entry to QoS table(%d)", ret); goto error; } + priv->qos_index[flow->qos_index] = 0; /* Then remove entry from FS table */ ret = dpni_remove_fs_entry(dpni, CMD_PRI_LOW, priv->token, - flow->tc_id, &flow->rule); + flow->tc_id, &flow->fs_rule); if (ret < 0) { DPAA2_PMD_ERR( "Error in entry addition in FS table(%d)", ret); goto error; } + priv->fs_index[flow->fs_index] = 0; break; case RTE_FLOW_ACTION_TYPE_RSS: ret = dpni_remove_qos_entry(dpni, CMD_PRI_LOW, priv->token, - &flow->rule); + &flow->qos_rule); if (ret < 0) { DPAA2_PMD_ERR( "Error in entry addition in QoS table(%d)", ret); From patchwork Tue Jul 7 09:22:32 2020 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Hemant Agrawal X-Patchwork-Id: 234969 Delivered-To: patch@linaro.org Received: by 2002:a92:d244:0:0:0:0:0 with SMTP id v4csp738031ilg; Tue, 7 Jul 2020 02:30:21 -0700 (PDT) X-Google-Smtp-Source: ABdhPJz36sIMr5j83kEuheHBpp5gikW466lfgW4DW36hxPz1/o8t+kHkC0cz13xi3MiqlAFS7UPV X-Received: by 2002:a17:906:5283:: with SMTP id c3mr44233671ejm.22.1594114221440; Tue, 07 Jul 2020 02:30:21 -0700 (PDT) ARC-Seal: i=1; a=rsa-sha256; t=1594114221; cv=none; d=google.com; s=arc-20160816; b=lt8S7t4x2QhBw1GAI7vDGOkOxM4ZQTbx77EWPrnBvGbZdDKTL31uoCx9wu5yl9C7nh 9OQElig27oPsBqwbLxuLpbvwzbhZnt7N7x2onUTeNh47423Mdt/LGVsr7QJzZhDEeLy5 qn7ALLvEOePDQaWdksTK2Yraw2Y5ISzcDwu7OTu7PCVft9OVcGx3XCMWXXwZSoWcSbXy n1LN2SDJmYwZXY97rYyeAwomp1yetlqRpBaoClcHcfzu8rqqowLg2z17MdHKEwOgrdqy DS9o8YVbh8t2BxS4zKJ1f8kWswoJuL1WnMLDMJlD5VqgtNRX+2Eo3aL2W7/W61Hyg4BR LMUA== ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=google.com; s=arc-20160816; h=sender:errors-to:list-subscribe:list-help:list-post:list-archive :list-unsubscribe:list-id:precedence:subject:references:in-reply-to :message-id:date:cc:to:from; bh=A03gTIXYz6DpcOLbwcLomMcrH3B6V2O9DtQSccj72UA=; b=xsceE8XcvKjZf91kdRhyK2V7PmyDeoyQJ0Pe38HjcCGBeThTA0nYwoeBcY/WOvzscq ja22FyNDS5sf8vm501hivX2+Tx9RIWO7Dp1yAMK1II/6WGOjQVZZPTSJKYNWlF6jV3lb 2KXkMteA/9wm68WvJeJVQhPsh+NOxO3sXxArk8KoYAh0RHe8mapVGznEiEFHrJaUGfS5 jj8Q3Rx1sBSoQju13CEOPydJL/sNtrjE+v79W0GJaxShxMmxwMvOkN8MyYkGhN6Y8QZ/ Igslca6AAa+H29JDABksP194FecHvjCF8gTOI/Vnrmk/jkRIioovkNly/yxSSV+t33rv xpUA== ARC-Authentication-Results: i=1; mx.google.com; spf=pass (google.com: domain of dev-bounces@dpdk.org designates 92.243.14.124 as permitted sender) smtp.mailfrom=dev-bounces@dpdk.org; dmarc=fail (p=NONE sp=NONE dis=NONE) header.from=nxp.com Return-Path: Received: from dpdk.org (dpdk.org. 
[92.243.14.124]) by mx.google.com with ESMTP id n10si12672670ejb.539.2020.07.07.02.30.21; Tue, 07 Jul 2020 02:30:21 -0700 (PDT) Received-SPF: pass (google.com: domain of dev-bounces@dpdk.org designates 92.243.14.124 as permitted sender) client-ip=92.243.14.124; Authentication-Results: mx.google.com; spf=pass (google.com: domain of dev-bounces@dpdk.org designates 92.243.14.124 as permitted sender) smtp.mailfrom=dev-bounces@dpdk.org; dmarc=fail (p=NONE sp=NONE dis=NONE) header.from=nxp.com Received: from [92.243.14.124] (localhost [127.0.0.1]) by dpdk.org (Postfix) with ESMTP id 9AF481DDD3; Tue, 7 Jul 2020 11:27:25 +0200 (CEST) Received: from inva020.nxp.com (inva020.nxp.com [92.121.34.13]) by dpdk.org (Postfix) with ESMTP id A86F51DD29 for ; Tue, 7 Jul 2020 11:27:10 +0200 (CEST) Received: from inva020.nxp.com (localhost [127.0.0.1]) by inva020.eu-rdc02.nxp.com (Postfix) with ESMTP id 85A2C1A0A4F; Tue, 7 Jul 2020 11:27:10 +0200 (CEST) Received: from invc005.ap-rdc01.nxp.com (invc005.ap-rdc01.nxp.com [165.114.16.14]) by inva020.eu-rdc02.nxp.com (Postfix) with ESMTP id 9DF6A1A0A30; Tue, 7 Jul 2020 11:27:08 +0200 (CEST) Received: from bf-netperf1.ap.freescale.net (bf-netperf1.ap.freescale.net [10.232.133.63]) by invc005.ap-rdc01.nxp.com (Postfix) with ESMTP id 61532402A8; Tue, 7 Jul 2020 17:27:06 +0800 (SGT) From: Hemant Agrawal To: dev@dpdk.org Cc: ferruh.yigit@intel.com, Jun Yang Date: Tue, 7 Jul 2020 14:52:32 +0530 Message-Id: <20200707092244.12791-18-hemant.agrawal@nxp.com> X-Mailer: git-send-email 2.17.1 In-Reply-To: <20200707092244.12791-1-hemant.agrawal@nxp.com> References: <20200527132326.1382-1-hemant.agrawal@nxp.com> <20200707092244.12791-1-hemant.agrawal@nxp.com> X-Virus-Scanned: ClamAV using ClamSMTP Subject: [dpdk-dev] [PATCH v2 17/29] net/dpaa2: add sanity check for flow extracts X-BeenThere: dev@dpdk.org X-Mailman-Version: 2.1.15 Precedence: list List-Id: DPDK patches and discussions List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , Errors-To: dev-bounces@dpdk.org Sender: "dev" From: Jun Yang Define extracts support for each protocol and check the fields of each pattern before building extracts of QoS/FS table. 
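For illustration, the containment test this patch builds around can be reduced to the standalone sketch below (the helper name mask_is_subset is hypothetical; the patch itself copies the supported mask, ORs the user mask into the copy and memcmp()s the result, which is byte-for-byte equivalent):

#include <stddef.h>
#include <stdint.h>

/* Return 0 when every bit set in the user-supplied mask is also set in
 * the supported mask, i.e. OR-ing the user mask into the supported one
 * changes nothing; return -1 when the user asks to match a field the
 * hardware parser cannot extract. */
static int
mask_is_subset(const uint8_t *user, const uint8_t *supported, size_t len)
{
        size_t i;

        for (i = 0; i < len; i++) {
                if ((uint8_t)(user[i] | supported[i]) != supported[i])
                        return -1;
        }
        return 0;
}

Keeping one such supported-mask template per protocol (the dpaa2_flow_item_*_mask constants in the diff below) lets each dpaa2_configure_flow_* handler reject unsupported match fields up front, before any extract is programmed into the QoS/FS tables.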
Signed-off-by: Jun Yang --- drivers/net/dpaa2/dpaa2_ethdev.c | 7 +- drivers/net/dpaa2/dpaa2_flow.c | 250 +++++++++++++++++++++++++------ 2 files changed, 204 insertions(+), 53 deletions(-) -- 2.17.1 diff --git a/drivers/net/dpaa2/dpaa2_ethdev.c b/drivers/net/dpaa2/dpaa2_ethdev.c index 492b65840..fd3097c7d 100644 --- a/drivers/net/dpaa2/dpaa2_ethdev.c +++ b/drivers/net/dpaa2/dpaa2_ethdev.c @@ -2610,11 +2610,8 @@ dpaa2_dev_uninit(struct rte_eth_dev *eth_dev) eth_dev->process_private = NULL; rte_free(dpni); - for (i = 0; i < MAX_TCS; i++) { - if (priv->extract.tc_extract_param[i]) - rte_free((void *) - (size_t)priv->extract.tc_extract_param[i]); - } + for (i = 0; i < MAX_TCS; i++) + rte_free((void *)(size_t)priv->extract.tc_extract_param[i]); if (priv->extract.qos_extract_param) rte_free((void *)(size_t)priv->extract.qos_extract_param); diff --git a/drivers/net/dpaa2/dpaa2_flow.c b/drivers/net/dpaa2/dpaa2_flow.c index 779cb64ab..507a5d0e3 100644 --- a/drivers/net/dpaa2/dpaa2_flow.c +++ b/drivers/net/dpaa2/dpaa2_flow.c @@ -87,7 +87,68 @@ enum rte_flow_action_type dpaa2_supported_action_type[] = { #define DPAA2_FLOW_ITEM_TYPE_GENERIC_IP (RTE_FLOW_ITEM_TYPE_META + 1) enum rte_filter_type dpaa2_filter_type = RTE_ETH_FILTER_NONE; -static const void *default_mask; + +#ifndef __cplusplus +static const struct rte_flow_item_eth dpaa2_flow_item_eth_mask = { + .dst.addr_bytes = "\xff\xff\xff\xff\xff\xff", + .src.addr_bytes = "\xff\xff\xff\xff\xff\xff", + .type = RTE_BE16(0xffff), +}; + +static const struct rte_flow_item_vlan dpaa2_flow_item_vlan_mask = { + .tci = RTE_BE16(0xffff), +}; + +static const struct rte_flow_item_ipv4 dpaa2_flow_item_ipv4_mask = { + .hdr.src_addr = RTE_BE32(0xffffffff), + .hdr.dst_addr = RTE_BE32(0xffffffff), + .hdr.next_proto_id = 0xff, +}; + +static const struct rte_flow_item_ipv6 dpaa2_flow_item_ipv6_mask = { + .hdr = { + .src_addr = + "\xff\xff\xff\xff\xff\xff\xff\xff" + "\xff\xff\xff\xff\xff\xff\xff\xff", + .dst_addr = + "\xff\xff\xff\xff\xff\xff\xff\xff" + "\xff\xff\xff\xff\xff\xff\xff\xff", + .proto = 0xff + }, +}; + +static const struct rte_flow_item_icmp dpaa2_flow_item_icmp_mask = { + .hdr.icmp_type = 0xff, + .hdr.icmp_code = 0xff, +}; + +static const struct rte_flow_item_udp dpaa2_flow_item_udp_mask = { + .hdr = { + .src_port = RTE_BE16(0xffff), + .dst_port = RTE_BE16(0xffff), + }, +}; + +static const struct rte_flow_item_tcp dpaa2_flow_item_tcp_mask = { + .hdr = { + .src_port = RTE_BE16(0xffff), + .dst_port = RTE_BE16(0xffff), + }, +}; + +static const struct rte_flow_item_sctp dpaa2_flow_item_sctp_mask = { + .hdr = { + .src_port = RTE_BE16(0xffff), + .dst_port = RTE_BE16(0xffff), + }, +}; + +static const struct rte_flow_item_gre dpaa2_flow_item_gre_mask = { + .protocol = RTE_BE16(0xffff), +}; + +#endif + static inline void dpaa2_flow_extract_key_set( struct dpaa2_key_info *key_info, int index, uint8_t size) @@ -555,6 +616,67 @@ dpaa2_flow_rule_move_ipaddr_tail( return 0; } +static int +dpaa2_flow_extract_support( + const uint8_t *mask_src, + enum rte_flow_item_type type) +{ + char mask[64]; + int i, size = 0; + const char *mask_support = 0; + + switch (type) { + case RTE_FLOW_ITEM_TYPE_ETH: + mask_support = (const char *)&dpaa2_flow_item_eth_mask; + size = sizeof(struct rte_flow_item_eth); + break; + case RTE_FLOW_ITEM_TYPE_VLAN: + mask_support = (const char *)&dpaa2_flow_item_vlan_mask; + size = sizeof(struct rte_flow_item_vlan); + break; + case RTE_FLOW_ITEM_TYPE_IPV4: + mask_support = (const char *)&dpaa2_flow_item_ipv4_mask; + size = sizeof(struct 
rte_flow_item_ipv4); + break; + case RTE_FLOW_ITEM_TYPE_IPV6: + mask_support = (const char *)&dpaa2_flow_item_ipv6_mask; + size = sizeof(struct rte_flow_item_ipv6); + break; + case RTE_FLOW_ITEM_TYPE_ICMP: + mask_support = (const char *)&dpaa2_flow_item_icmp_mask; + size = sizeof(struct rte_flow_item_icmp); + break; + case RTE_FLOW_ITEM_TYPE_UDP: + mask_support = (const char *)&dpaa2_flow_item_udp_mask; + size = sizeof(struct rte_flow_item_udp); + break; + case RTE_FLOW_ITEM_TYPE_TCP: + mask_support = (const char *)&dpaa2_flow_item_tcp_mask; + size = sizeof(struct rte_flow_item_tcp); + break; + case RTE_FLOW_ITEM_TYPE_SCTP: + mask_support = (const char *)&dpaa2_flow_item_sctp_mask; + size = sizeof(struct rte_flow_item_sctp); + break; + case RTE_FLOW_ITEM_TYPE_GRE: + mask_support = (const char *)&dpaa2_flow_item_gre_mask; + size = sizeof(struct rte_flow_item_gre); + break; + default: + return -1; + } + + memcpy(mask, mask_support, size); + + for (i = 0; i < size; i++) + mask[i] = (mask[i] | mask_src[i]); + + if (memcmp(mask, mask_support, size)) + return -1; + + return 0; +} + static int dpaa2_configure_flow_eth(struct rte_flow *flow, struct rte_eth_dev *dev, @@ -580,7 +702,7 @@ dpaa2_configure_flow_eth(struct rte_flow *flow, spec = (const struct rte_flow_item_eth *)pattern->spec; last = (const struct rte_flow_item_eth *)pattern->last; mask = (const struct rte_flow_item_eth *) - (pattern->mask ? pattern->mask : default_mask); + (pattern->mask ? pattern->mask : &dpaa2_flow_item_eth_mask); if (!spec) { /* Don't care any field of eth header, * only care eth protocol. @@ -593,6 +715,13 @@ dpaa2_configure_flow_eth(struct rte_flow *flow, flow->tc_id = group; flow->tc_index = attr->priority; + if (dpaa2_flow_extract_support((const uint8_t *)mask, + RTE_FLOW_ITEM_TYPE_ETH)) { + DPAA2_PMD_WARN("Extract field(s) of ethernet not support."); + + return -1; + } + if (memcmp((const char *)&mask->src, zero_cmp, RTE_ETHER_ADDR_LEN)) { index = dpaa2_flow_extract_search( &priv->extract.qos_key_extract.dpkg, @@ -819,7 +948,7 @@ dpaa2_configure_flow_vlan(struct rte_flow *flow, spec = (const struct rte_flow_item_vlan *)pattern->spec; last = (const struct rte_flow_item_vlan *)pattern->last; mask = (const struct rte_flow_item_vlan *) - (pattern->mask ? pattern->mask : default_mask); + (pattern->mask ? pattern->mask : &dpaa2_flow_item_vlan_mask); /* Get traffic class index and flow id to be configured */ flow->tc_id = group; @@ -886,6 +1015,13 @@ dpaa2_configure_flow_vlan(struct rte_flow *flow, return 0; } + if (dpaa2_flow_extract_support((const uint8_t *)mask, + RTE_FLOW_ITEM_TYPE_VLAN)) { + DPAA2_PMD_WARN("Extract field(s) of vlan not support."); + + return -1; + } + if (!mask->tci) return 0; @@ -990,11 +1126,13 @@ dpaa2_configure_flow_generic_ip( if (pattern->type == RTE_FLOW_ITEM_TYPE_IPV4) { spec_ipv4 = (const struct rte_flow_item_ipv4 *)pattern->spec; mask_ipv4 = (const struct rte_flow_item_ipv4 *) - (pattern->mask ? pattern->mask : default_mask); + (pattern->mask ? pattern->mask : + &dpaa2_flow_item_ipv4_mask); } else { spec_ipv6 = (const struct rte_flow_item_ipv6 *)pattern->spec; mask_ipv6 = (const struct rte_flow_item_ipv6 *) - (pattern->mask ? pattern->mask : default_mask); + (pattern->mask ? 
pattern->mask : + &dpaa2_flow_item_ipv6_mask); } /* Get traffic class index and flow id to be configured */ @@ -1069,6 +1207,24 @@ dpaa2_configure_flow_generic_ip( return 0; } + if (mask_ipv4) { + if (dpaa2_flow_extract_support((const uint8_t *)mask_ipv4, + RTE_FLOW_ITEM_TYPE_IPV4)) { + DPAA2_PMD_WARN("Extract field(s) of IPv4 not support."); + + return -1; + } + } + + if (mask_ipv6) { + if (dpaa2_flow_extract_support((const uint8_t *)mask_ipv6, + RTE_FLOW_ITEM_TYPE_IPV6)) { + DPAA2_PMD_WARN("Extract field(s) of IPv6 not support."); + + return -1; + } + } + if (mask_ipv4 && (mask_ipv4->hdr.src_addr || mask_ipv4->hdr.dst_addr)) { flow->ipaddr_rule.ipaddr_type = FLOW_IPV4_ADDR; @@ -1358,7 +1514,7 @@ dpaa2_configure_flow_icmp(struct rte_flow *flow, spec = (const struct rte_flow_item_icmp *)pattern->spec; last = (const struct rte_flow_item_icmp *)pattern->last; mask = (const struct rte_flow_item_icmp *) - (pattern->mask ? pattern->mask : default_mask); + (pattern->mask ? pattern->mask : &dpaa2_flow_item_icmp_mask); /* Get traffic class index and flow id to be configured */ flow->tc_id = group; @@ -1427,6 +1583,13 @@ dpaa2_configure_flow_icmp(struct rte_flow *flow, return 0; } + if (dpaa2_flow_extract_support((const uint8_t *)mask, + RTE_FLOW_ITEM_TYPE_ICMP)) { + DPAA2_PMD_WARN("Extract field(s) of ICMP not support."); + + return -1; + } + if (mask->hdr.icmp_type) { index = dpaa2_flow_extract_search( &priv->extract.qos_key_extract.dpkg, @@ -1593,7 +1756,7 @@ dpaa2_configure_flow_udp(struct rte_flow *flow, spec = (const struct rte_flow_item_udp *)pattern->spec; last = (const struct rte_flow_item_udp *)pattern->last; mask = (const struct rte_flow_item_udp *) - (pattern->mask ? pattern->mask : default_mask); + (pattern->mask ? pattern->mask : &dpaa2_flow_item_udp_mask); /* Get traffic class index and flow id to be configured */ flow->tc_id = group; @@ -1656,6 +1819,13 @@ dpaa2_configure_flow_udp(struct rte_flow *flow, return 0; } + if (dpaa2_flow_extract_support((const uint8_t *)mask, + RTE_FLOW_ITEM_TYPE_UDP)) { + DPAA2_PMD_WARN("Extract field(s) of UDP not support."); + + return -1; + } + if (mask->hdr.src_port) { index = dpaa2_flow_extract_search( &priv->extract.qos_key_extract.dpkg, @@ -1825,7 +1995,7 @@ dpaa2_configure_flow_tcp(struct rte_flow *flow, spec = (const struct rte_flow_item_tcp *)pattern->spec; last = (const struct rte_flow_item_tcp *)pattern->last; mask = (const struct rte_flow_item_tcp *) - (pattern->mask ? pattern->mask : default_mask); + (pattern->mask ? pattern->mask : &dpaa2_flow_item_tcp_mask); /* Get traffic class index and flow id to be configured */ flow->tc_id = group; @@ -1888,6 +2058,13 @@ dpaa2_configure_flow_tcp(struct rte_flow *flow, return 0; } + if (dpaa2_flow_extract_support((const uint8_t *)mask, + RTE_FLOW_ITEM_TYPE_TCP)) { + DPAA2_PMD_WARN("Extract field(s) of TCP not support."); + + return -1; + } + if (mask->hdr.src_port) { index = dpaa2_flow_extract_search( &priv->extract.qos_key_extract.dpkg, @@ -2058,7 +2235,8 @@ dpaa2_configure_flow_sctp(struct rte_flow *flow, spec = (const struct rte_flow_item_sctp *)pattern->spec; last = (const struct rte_flow_item_sctp *)pattern->last; mask = (const struct rte_flow_item_sctp *) - (pattern->mask ? pattern->mask : default_mask); + (pattern->mask ? 
pattern->mask : + &dpaa2_flow_item_sctp_mask); /* Get traffic class index and flow id to be configured */ flow->tc_id = group; @@ -2121,6 +2299,13 @@ dpaa2_configure_flow_sctp(struct rte_flow *flow, return 0; } + if (dpaa2_flow_extract_support((const uint8_t *)mask, + RTE_FLOW_ITEM_TYPE_SCTP)) { + DPAA2_PMD_WARN("Extract field(s) of SCTP not support."); + + return -1; + } + if (mask->hdr.src_port) { index = dpaa2_flow_extract_search( &priv->extract.qos_key_extract.dpkg, @@ -2291,7 +2476,7 @@ dpaa2_configure_flow_gre(struct rte_flow *flow, spec = (const struct rte_flow_item_gre *)pattern->spec; last = (const struct rte_flow_item_gre *)pattern->last; mask = (const struct rte_flow_item_gre *) - (pattern->mask ? pattern->mask : default_mask); + (pattern->mask ? pattern->mask : &dpaa2_flow_item_gre_mask); /* Get traffic class index and flow id to be configured */ flow->tc_id = group; @@ -2353,6 +2538,13 @@ dpaa2_configure_flow_gre(struct rte_flow *flow, return 0; } + if (dpaa2_flow_extract_support((const uint8_t *)mask, + RTE_FLOW_ITEM_TYPE_GRE)) { + DPAA2_PMD_WARN("Extract field(s) of GRE not support."); + + return -1; + } + if (!mask->protocol) return 0; @@ -3155,42 +3347,6 @@ dpaa2_dev_verify_attr(struct dpni_attr *dpni_attr, return ret; } -static inline void -dpaa2_dev_update_default_mask(const struct rte_flow_item *pattern) -{ - switch (pattern->type) { - case RTE_FLOW_ITEM_TYPE_ETH: - default_mask = (const void *)&rte_flow_item_eth_mask; - break; - case RTE_FLOW_ITEM_TYPE_VLAN: - default_mask = (const void *)&rte_flow_item_vlan_mask; - break; - case RTE_FLOW_ITEM_TYPE_IPV4: - default_mask = (const void *)&rte_flow_item_ipv4_mask; - break; - case RTE_FLOW_ITEM_TYPE_IPV6: - default_mask = (const void *)&rte_flow_item_ipv6_mask; - break; - case RTE_FLOW_ITEM_TYPE_ICMP: - default_mask = (const void *)&rte_flow_item_icmp_mask; - break; - case RTE_FLOW_ITEM_TYPE_UDP: - default_mask = (const void *)&rte_flow_item_udp_mask; - break; - case RTE_FLOW_ITEM_TYPE_TCP: - default_mask = (const void *)&rte_flow_item_tcp_mask; - break; - case RTE_FLOW_ITEM_TYPE_SCTP: - default_mask = (const void *)&rte_flow_item_sctp_mask; - break; - case RTE_FLOW_ITEM_TYPE_GRE: - default_mask = (const void *)&rte_flow_item_gre_mask; - break; - default: - DPAA2_PMD_ERR("Invalid pattern type"); - } -} - static inline int dpaa2_dev_verify_patterns(const struct rte_flow_item pattern[]) { @@ -3216,8 +3372,6 @@ dpaa2_dev_verify_patterns(const struct rte_flow_item pattern[]) ret = -EINVAL; break; } - if ((pattern[j].last) && (!pattern[j].mask)) - dpaa2_dev_update_default_mask(&pattern[j]); } return ret; From patchwork Tue Jul 7 09:22:33 2020 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Hemant Agrawal X-Patchwork-Id: 234971 Delivered-To: patch@linaro.org Received: by 2002:a92:d244:0:0:0:0:0 with SMTP id v4csp738436ilg; Tue, 7 Jul 2020 02:30:52 -0700 (PDT) X-Google-Smtp-Source: ABdhPJwlF5v0wO6KerxUsoUCw8dGp2UBko08xczkfJIj/iPTsK8/JIJlZ76nHUifPgIGbEA1f22E X-Received: by 2002:a25:b948:: with SMTP id s8mr83844871ybm.487.1594114252362; Tue, 07 Jul 2020 02:30:52 -0700 (PDT) ARC-Seal: i=1; a=rsa-sha256; t=1594114252; cv=none; d=google.com; s=arc-20160816; b=zUguhTubVz+A5tscno9i0dg8+9+1pkKQP2XUWvOyU0H2a7upQMhzjIl6kZG9uhgkrU ZY0msQw2zURJX8gylOdnBzzaXeYp8kPV9fzTPtsWNRN7LswJW/9NCjEOqFO6Zk+WnXFx d9oPnbirG3c0CCZb7+A/8FEVn96o2eeLyaTKezTvHXQbW+OoYpFsrzAhU82xnNSUvXMM FGmFACrUq6ZbBh0yn01lzLliPoOT/sOIX2tYJ2xmYQN/L6lcAaH6+XV8N582/iDTbe5P 
VXAeZvJor0w6CVuoYRN8QRqE58OI1bRv8D68otkFsMIRxl1ViHmwVm0OxOG+qM4dv8jQ GvJw== ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=google.com; s=arc-20160816; h=sender:errors-to:list-subscribe:list-help:list-post:list-archive :list-unsubscribe:list-id:precedence:subject:references:in-reply-to :message-id:date:cc:to:from; bh=M8OG8GAYbzoHVrkiknFokl2vkVJfoC+7fYarAZiUnx4=; b=W+q5vgTMmpocU4GXw05G/6vU5GEW/mwkyfB8iUfuYaqmckXQX+dcRrfHzhvnhB22wu doZSLByJchJC5brDd5i6drlxSuXg0T4+ETJhlytDCN1LgNq5QqVLZGMjFwoYad/V/Z55 wZymkvaBEFmAyISn2INzevKyo5JZ3hcMilEj5fprUFD4Izrr/yE7HhNz9FHr3e767MfD LgHkTxZKSmIdbwCSbtyMLESVUUXNAnwRNHbyxpfOgGjUX656otagN21uPd3VrDHLCI0x oCj6CfenBV8K5U+PqUAnmfg4umY5EObhyMqkdv9g63JrEujztHUlV/twAaSR7YPtIlz4 uvIQ== ARC-Authentication-Results: i=1; mx.google.com; spf=pass (google.com: domain of dev-bounces@dpdk.org designates 92.243.14.124 as permitted sender) smtp.mailfrom=dev-bounces@dpdk.org; dmarc=fail (p=NONE sp=NONE dis=NONE) header.from=nxp.com Return-Path: Received: from dpdk.org (dpdk.org. [92.243.14.124]) by mx.google.com with ESMTP id z6si9441135ybh.203.2020.07.07.02.30.52; Tue, 07 Jul 2020 02:30:52 -0700 (PDT) Received-SPF: pass (google.com: domain of dev-bounces@dpdk.org designates 92.243.14.124 as permitted sender) client-ip=92.243.14.124; Authentication-Results: mx.google.com; spf=pass (google.com: domain of dev-bounces@dpdk.org designates 92.243.14.124 as permitted sender) smtp.mailfrom=dev-bounces@dpdk.org; dmarc=fail (p=NONE sp=NONE dis=NONE) header.from=nxp.com Received: from [92.243.14.124] (localhost [127.0.0.1]) by dpdk.org (Postfix) with ESMTP id 2318E1DDE5; Tue, 7 Jul 2020 11:27:28 +0200 (CEST) Received: from inva020.nxp.com (inva020.nxp.com [92.121.34.13]) by dpdk.org (Postfix) with ESMTP id 46A4F1DD24 for ; Tue, 7 Jul 2020 11:27:11 +0200 (CEST) Received: from inva020.nxp.com (localhost [127.0.0.1]) by inva020.eu-rdc02.nxp.com (Postfix) with ESMTP id 117B21A0A30; Tue, 7 Jul 2020 11:27:11 +0200 (CEST) Received: from invc005.ap-rdc01.nxp.com (invc005.ap-rdc01.nxp.com [165.114.16.14]) by inva020.eu-rdc02.nxp.com (Postfix) with ESMTP id 583BD1A0A46; Tue, 7 Jul 2020 11:27:09 +0200 (CEST) Received: from bf-netperf1.ap.freescale.net (bf-netperf1.ap.freescale.net [10.232.133.63]) by invc005.ap-rdc01.nxp.com (Postfix) with ESMTP id 1417B402C8; Tue, 7 Jul 2020 17:27:06 +0800 (SGT) From: Hemant Agrawal To: dev@dpdk.org Cc: ferruh.yigit@intel.com, Jun Yang Date: Tue, 7 Jul 2020 14:52:33 +0530 Message-Id: <20200707092244.12791-19-hemant.agrawal@nxp.com> X-Mailer: git-send-email 2.17.1 In-Reply-To: <20200707092244.12791-1-hemant.agrawal@nxp.com> References: <20200527132326.1382-1-hemant.agrawal@nxp.com> <20200707092244.12791-1-hemant.agrawal@nxp.com> X-Virus-Scanned: ClamAV using ClamSMTP Subject: [dpdk-dev] [PATCH v2 18/29] net/dpaa2: free flow rule memory X-BeenThere: dev@dpdk.org X-Mailman-Version: 2.1.15 Precedence: list List-Id: DPDK patches and discussions List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , Errors-To: dev-bounces@dpdk.org Sender: "dev" From: Jun Yang Free rule memory when the flow is destroyed. 
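Condensed, the destroy path after this patch behaves like the sketch below (a sketch only: the wrapper name dpaa2_flow_free_sketch is ours and struct rte_flow is the driver-local flow object; in the patch these calls sit inline in dpaa2_flow_destroy()). The casts go through size_t because the rule key/mask areas are stored as 64-bit *_iova values even though the driver allocated them as virtual addresses:

/* Free the DMA'ble rule memory before the flow object itself. */
static void
dpaa2_flow_free_sketch(struct rte_flow *flow)
{
        LIST_REMOVE(flow, next);
        rte_free((void *)(size_t)flow->qos_rule.key_iova);
        rte_free((void *)(size_t)flow->qos_rule.mask_iova);
        rte_free((void *)(size_t)flow->fs_rule.key_iova);
        rte_free((void *)(size_t)flow->fs_rule.mask_iova);
        rte_free(flow);
}

Without these calls, every rte_flow_destroy() leaked the four 256-byte key/mask buffers allocated in dpaa2_flow_create().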
Signed-off-by: Jun Yang --- drivers/net/dpaa2/dpaa2_flow.c | 5 +++++ 1 file changed, 5 insertions(+) -- 2.17.1 diff --git a/drivers/net/dpaa2/dpaa2_flow.c b/drivers/net/dpaa2/dpaa2_flow.c index 507a5d0e3..941d62b80 100644 --- a/drivers/net/dpaa2/dpaa2_flow.c +++ b/drivers/net/dpaa2/dpaa2_flow.c @@ -3594,6 +3594,7 @@ int dpaa2_flow_destroy(struct rte_eth_dev *dev, "Error in entry addition in QoS table(%d)", ret); goto error; } + priv->qos_index[flow->qos_index] = 0; break; default: DPAA2_PMD_ERR( @@ -3603,6 +3604,10 @@ int dpaa2_flow_destroy(struct rte_eth_dev *dev, } LIST_REMOVE(flow, next); + rte_free((void *)(size_t)flow->qos_rule.key_iova); + rte_free((void *)(size_t)flow->qos_rule.mask_iova); + rte_free((void *)(size_t)flow->fs_rule.key_iova); + rte_free((void *)(size_t)flow->fs_rule.mask_iova); /* Now free the flow */ rte_free(flow); From patchwork Tue Jul 7 09:22:34 2020 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Hemant Agrawal X-Patchwork-Id: 234972 Delivered-To: patch@linaro.org Received: by 2002:a92:d244:0:0:0:0:0 with SMTP id v4csp738639ilg; Tue, 7 Jul 2020 02:31:05 -0700 (PDT) X-Google-Smtp-Source: ABdhPJy7a8jd9a7nnnQ/nUEsvbFuyTX/NZdh/JEI6XlVZompy8FUyT1XOxrAQyVZzNF3+TL2mX1o X-Received: by 2002:a05:6902:100f:: with SMTP id w15mr1272370ybt.477.1594114265146; Tue, 07 Jul 2020 02:31:05 -0700 (PDT) ARC-Seal: i=1; a=rsa-sha256; t=1594114265; cv=none; d=google.com; s=arc-20160816; b=WJVKHTHXjHXlrIUWdtlmZyftbkK5hQBD5uF+bpUm5WCNMQGPrT7bPRo8VtgLAJ3BYJ szwZzNy6XbfRnP+QFZFSJOENp4JMLmc2LTt5SD5ED05Snpf7sQ1NuFD4g+iJmuWUOEX9 UNlnpAhyI9NNNZkfolgBZhxttLsVmBbekgG1Wc1TLOZWzI0JQ0nvEd79vdiq/ki8ZRD5 54WVWH/vvVbpfI+18ULDowGdH4TwRcrfyhc4vloBXh/VwLR/pOPyjejRt9GDS12haOEV Xo8uJl0txXpaiSG4Rbnc9HfnYwm1w8O4cZAulVRVSQjvwDHzS0x5tZQdq57KYCDcTj4H udQQ== ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=google.com; s=arc-20160816; h=sender:errors-to:list-subscribe:list-help:list-post:list-archive :list-unsubscribe:list-id:precedence:subject:references:in-reply-to :message-id:date:cc:to:from; bh=in9ke8fy3EX8Y/sb3BtlIpj5/zZugL4+N1OJpdthUoQ=; b=g+FKemT0IthX9xjEEbdbd+oQvLfLqPB/iHB51KH/sh21afObgvAlo3K8AQfu83re7m /bl0K5iCo/smxSrJ1PQNGcRpohxqIM1t4Ta+d8G8a0KXCW5VfrJOBxKbn+o89yZPaEhg 8WW+nD0y2lTetlNzSec7pvnhICzL6Fl8Xf4zV/Frj30kIlQtlhkG8q+ZE1+wUoM4qUVr 7QVSyqP+1EZTHClGakmyKg8Nj/yXlHnELmcBe/G3bmThHg1wkaFf0iy/E7gsphOWecv7 e7PeGmHpv/KEqybYfH1uYLabOBzLrHzCjDO0PaxSkzV8ZGuKMsbZ6v+5VHRus5v5CWD1 egjg== ARC-Authentication-Results: i=1; mx.google.com; spf=pass (google.com: domain of dev-bounces@dpdk.org designates 92.243.14.124 as permitted sender) smtp.mailfrom=dev-bounces@dpdk.org; dmarc=fail (p=NONE sp=NONE dis=NONE) header.from=nxp.com Return-Path: Received: from dpdk.org (dpdk.org. 
[92.243.14.124]) by mx.google.com with ESMTP id n9si21416221ybm.286.2020.07.07.02.31.04; Tue, 07 Jul 2020 02:31:05 -0700 (PDT) Received-SPF: pass (google.com: domain of dev-bounces@dpdk.org designates 92.243.14.124 as permitted sender) client-ip=92.243.14.124; Authentication-Results: mx.google.com; spf=pass (google.com: domain of dev-bounces@dpdk.org designates 92.243.14.124 as permitted sender) smtp.mailfrom=dev-bounces@dpdk.org; dmarc=fail (p=NONE sp=NONE dis=NONE) header.from=nxp.com Received: from [92.243.14.124] (localhost [127.0.0.1]) by dpdk.org (Postfix) with ESMTP id 5AA2A1DDE8; Tue, 7 Jul 2020 11:27:29 +0200 (CEST) Received: from inva020.nxp.com (inva020.nxp.com [92.121.34.13]) by dpdk.org (Postfix) with ESMTP id 0DA151DD24 for ; Tue, 7 Jul 2020 11:27:12 +0200 (CEST) Received: from inva020.nxp.com (localhost [127.0.0.1]) by inva020.eu-rdc02.nxp.com (Postfix) with ESMTP id E46151A0A57; Tue, 7 Jul 2020 11:27:11 +0200 (CEST) Received: from invc005.ap-rdc01.nxp.com (invc005.ap-rdc01.nxp.com [165.114.16.14]) by inva020.eu-rdc02.nxp.com (Postfix) with ESMTP id 0741A1A0A4E; Tue, 7 Jul 2020 11:27:10 +0200 (CEST) Received: from bf-netperf1.ap.freescale.net (bf-netperf1.ap.freescale.net [10.232.133.63]) by invc005.ap-rdc01.nxp.com (Postfix) with ESMTP id C0C99402FA; Tue, 7 Jul 2020 17:27:07 +0800 (SGT) From: Hemant Agrawal To: dev@dpdk.org Cc: ferruh.yigit@intel.com, Jun Yang Date: Tue, 7 Jul 2020 14:52:34 +0530 Message-Id: <20200707092244.12791-20-hemant.agrawal@nxp.com> X-Mailer: git-send-email 2.17.1 In-Reply-To: <20200707092244.12791-1-hemant.agrawal@nxp.com> References: <20200527132326.1382-1-hemant.agrawal@nxp.com> <20200707092244.12791-1-hemant.agrawal@nxp.com> X-Virus-Scanned: ClamAV using ClamSMTP Subject: [dpdk-dev] [PATCH v2 19/29] net/dpaa2: support QoS or FS table entry indexing X-BeenThere: dev@dpdk.org X-Mailman-Version: 2.1.15 Precedence: list List-Id: DPDK patches and discussions List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , Errors-To: dev-bounces@dpdk.org Sender: "dev" From: Jun Yang Calculate QoS/FS entry index by group and priority of flow. 1)The less index of entry, the higher priority of flow. 2)Verify if the flow with same group and priority has been added before creating flow. 
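In outline, the fixed slot computation is the following (a sketch using the fields from the diff below; the helper name dpaa2_qos_slot is illustrative, the patch computes this inline):

/* A lower (group, priority) pair yields a lower table index and hence
 * higher match precedence; priv->fs_entries is read from the DPNI
 * attributes at init time. */
static inline uint16_t
dpaa2_qos_slot(const struct rte_flow *flow, const struct dpaa2_dev_priv *priv)
{
        return flow->tc_id * priv->fs_entries + flow->tc_index;
}

The FS entry simply reuses flow->tc_index within its traffic class, so the per-device qos_index/fs_index allocation arrays can be dropped, and dpaa2_flow_verify_attr() only has to walk the flow list to reject a duplicate (group, priority) pair before a new flow is created.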
Signed-off-by: Jun Yang --- drivers/net/dpaa2/dpaa2_ethdev.c | 4 + drivers/net/dpaa2/dpaa2_ethdev.h | 5 +- drivers/net/dpaa2/dpaa2_flow.c | 127 +++++++++++++------------------ 3 files changed, 59 insertions(+), 77 deletions(-) -- 2.17.1 diff --git a/drivers/net/dpaa2/dpaa2_ethdev.c b/drivers/net/dpaa2/dpaa2_ethdev.c index fd3097c7d..008e1c570 100644 --- a/drivers/net/dpaa2/dpaa2_ethdev.c +++ b/drivers/net/dpaa2/dpaa2_ethdev.c @@ -2392,6 +2392,10 @@ dpaa2_dev_init(struct rte_eth_dev *eth_dev) } priv->num_rx_tc = attr.num_rx_tcs; + priv->qos_entries = attr.qos_entries; + priv->fs_entries = attr.fs_entries; + priv->dist_queues = attr.num_queues; + /* only if the custom CG is enabled */ if (attr.options & DPNI_OPT_CUSTOM_CG) priv->max_cgs = attr.num_cgs; diff --git a/drivers/net/dpaa2/dpaa2_ethdev.h b/drivers/net/dpaa2/dpaa2_ethdev.h index 030c625e3..b49b88a2d 100644 --- a/drivers/net/dpaa2/dpaa2_ethdev.h +++ b/drivers/net/dpaa2/dpaa2_ethdev.h @@ -145,6 +145,9 @@ struct dpaa2_dev_priv { uint8_t max_mac_filters; uint8_t max_vlan_filters; uint8_t num_rx_tc; + uint16_t qos_entries; + uint16_t fs_entries; + uint8_t dist_queues; uint8_t flags; /*dpaa2 config flags */ uint8_t en_ordered; uint8_t en_loose_ordered; @@ -152,8 +155,6 @@ struct dpaa2_dev_priv { uint8_t cgid_in_use[MAX_RX_QUEUES]; struct extract_s extract; - uint8_t *qos_index; - uint8_t *fs_index; uint16_t ss_offset; uint64_t ss_iova; diff --git a/drivers/net/dpaa2/dpaa2_flow.c b/drivers/net/dpaa2/dpaa2_flow.c index 941d62b80..760a8a793 100644 --- a/drivers/net/dpaa2/dpaa2_flow.c +++ b/drivers/net/dpaa2/dpaa2_flow.c @@ -47,11 +47,8 @@ struct rte_flow { LIST_ENTRY(rte_flow) next; /**< Pointer to the next flow structure. */ struct dpni_rule_cfg qos_rule; struct dpni_rule_cfg fs_rule; - uint16_t qos_index; - uint16_t fs_index; uint8_t key_size; uint8_t tc_id; /** Traffic Class ID. */ - uint8_t flow_type; uint8_t tc_index; /** index within this Traffic Class. 
*/ enum rte_flow_action_type action; uint16_t flow_id; @@ -2645,6 +2642,7 @@ dpaa2_flow_entry_update( char ipsrc_mask[NH_FLD_IPV6_ADDR_SIZE]; char ipdst_mask[NH_FLD_IPV6_ADDR_SIZE]; int extend = -1, extend1, size; + uint16_t qos_index; while (curr) { if (curr->ipaddr_rule.ipaddr_type == @@ -2676,6 +2674,9 @@ dpaa2_flow_entry_update( size = NH_FLD_IPV6_ADDR_SIZE; } + qos_index = curr->tc_id * priv->fs_entries + + curr->tc_index; + ret = dpni_remove_qos_entry(dpni, CMD_PRI_LOW, priv->token, &curr->qos_rule); if (ret) { @@ -2769,7 +2770,7 @@ dpaa2_flow_entry_update( ret = dpni_add_qos_entry(dpni, CMD_PRI_LOW, priv->token, &curr->qos_rule, - curr->tc_id, curr->qos_index, + curr->tc_id, qos_index, 0, 0); if (ret) { DPAA2_PMD_ERR("Qos entry update failed."); @@ -2875,7 +2876,7 @@ dpaa2_flow_entry_update( curr->fs_rule.key_size += extend; ret = dpni_add_fs_entry(dpni, CMD_PRI_LOW, - priv->token, curr->tc_id, curr->fs_index, + priv->token, curr->tc_id, curr->tc_index, &curr->fs_rule, &curr->action_cfg); if (ret) { DPAA2_PMD_ERR("FS entry update failed."); @@ -2888,6 +2889,28 @@ dpaa2_flow_entry_update( return 0; } +static inline int +dpaa2_flow_verify_attr( + struct dpaa2_dev_priv *priv, + const struct rte_flow_attr *attr) +{ + struct rte_flow *curr = LIST_FIRST(&priv->flows); + + while (curr) { + if (curr->tc_id == attr->group && + curr->tc_index == attr->priority) { + DPAA2_PMD_ERR( + "Flow with group %d and priority %d already exists.", + attr->group, attr->priority); + + return -1; + } + curr = LIST_NEXT(curr, next); + } + + return 0; +} + static int dpaa2_generic_flow_set(struct rte_flow *flow, struct rte_eth_dev *dev, @@ -2898,10 +2921,8 @@ dpaa2_generic_flow_set(struct rte_flow *flow, { const struct rte_flow_action_queue *dest_queue; const struct rte_flow_action_rss *rss_conf; - uint16_t index; int is_keycfg_configured = 0, end_of_list = 0; int ret = 0, i = 0, j = 0; - struct dpni_attr nic_attr; struct dpni_rx_tc_dist_cfg tc_cfg; struct dpni_qos_tbl_cfg qos_cfg; struct dpni_fs_action_cfg action; @@ -2909,6 +2930,11 @@ dpaa2_generic_flow_set(struct rte_flow *flow, struct fsl_mc_io *dpni = (struct fsl_mc_io *)priv->hw; size_t param; struct rte_flow *curr = LIST_FIRST(&priv->flows); + uint16_t qos_index; + + ret = dpaa2_flow_verify_attr(priv, attr); + if (ret) + return ret; /* Parse pattern list to get the matching parameters */ while (!end_of_list) { @@ -3056,31 +3082,15 @@ dpaa2_generic_flow_set(struct rte_flow *flow, } } /* Configure QoS table first */ - memset(&nic_attr, 0, sizeof(struct dpni_attr)); - ret = dpni_get_attributes(dpni, CMD_PRI_LOW, - priv->token, &nic_attr); - if (ret < 0) { - DPAA2_PMD_ERR( - "Failure to get attribute. 
dpni@%p err code(%d)\n", - dpni, ret); - return ret; - } - action.flow_id = action.flow_id % nic_attr.num_rx_tcs; + action.flow_id = action.flow_id % priv->num_rx_tc; - if (!priv->qos_index) { - priv->qos_index = rte_zmalloc(0, - nic_attr.qos_entries, 64); - } - for (index = 0; index < nic_attr.qos_entries; index++) { - if (!priv->qos_index[index]) { - priv->qos_index[index] = 1; - break; - } - } - if (index >= nic_attr.qos_entries) { + qos_index = flow->tc_id * priv->fs_entries + + flow->tc_index; + + if (qos_index >= priv->qos_entries) { DPAA2_PMD_ERR("QoS table with %d entries full", - nic_attr.qos_entries); + priv->qos_entries); return -1; } flow->qos_rule.key_size = priv->extract @@ -3110,30 +3120,18 @@ dpaa2_generic_flow_set(struct rte_flow *flow, } ret = dpni_add_qos_entry(dpni, CMD_PRI_LOW, priv->token, &flow->qos_rule, - flow->tc_id, index, + flow->tc_id, qos_index, 0, 0); if (ret < 0) { DPAA2_PMD_ERR( "Error in addnig entry to QoS table(%d)", ret); - priv->qos_index[index] = 0; return ret; } - flow->qos_index = index; /* Then Configure FS table */ - if (!priv->fs_index) { - priv->fs_index = rte_zmalloc(0, - nic_attr.fs_entries, 64); - } - for (index = 0; index < nic_attr.fs_entries; index++) { - if (!priv->fs_index[index]) { - priv->fs_index[index] = 1; - break; - } - } - if (index >= nic_attr.fs_entries) { + if (flow->tc_index >= priv->fs_entries) { DPAA2_PMD_ERR("FS table with %d entries full", - nic_attr.fs_entries); + priv->fs_entries); return -1; } flow->fs_rule.key_size = priv->extract @@ -3164,31 +3162,23 @@ dpaa2_generic_flow_set(struct rte_flow *flow, } } ret = dpni_add_fs_entry(dpni, CMD_PRI_LOW, priv->token, - flow->tc_id, index, + flow->tc_id, flow->tc_index, &flow->fs_rule, &action); if (ret < 0) { DPAA2_PMD_ERR( "Error in adding entry to FS table(%d)", ret); - priv->fs_index[index] = 0; return ret; } - flow->fs_index = index; memcpy(&flow->action_cfg, &action, sizeof(struct dpni_fs_action_cfg)); break; case RTE_FLOW_ACTION_TYPE_RSS: - ret = dpni_get_attributes(dpni, CMD_PRI_LOW, - priv->token, &nic_attr); - if (ret < 0) { - DPAA2_PMD_ERR( - "Failure to get attribute. 
dpni@%p err code(%d)\n", - dpni, ret); - return ret; - } rss_conf = (const struct rte_flow_action_rss *)(actions[j].conf); for (i = 0; i < (int)rss_conf->queue_num; i++) { - if (rss_conf->queue[i] < (attr->group * nic_attr.num_queues) || - rss_conf->queue[i] >= ((attr->group + 1) * nic_attr.num_queues)) { + if (rss_conf->queue[i] < + (attr->group * priv->dist_queues) || + rss_conf->queue[i] >= + ((attr->group + 1) * priv->dist_queues)) { DPAA2_PMD_ERR( "Queue/Group combination are not supported\n"); return -ENOTSUP; @@ -3262,34 +3252,24 @@ dpaa2_generic_flow_set(struct rte_flow *flow, } /* Add Rule into QoS table */ - if (!priv->qos_index) { - priv->qos_index = rte_zmalloc(0, - nic_attr.qos_entries, 64); - } - for (index = 0; index < nic_attr.qos_entries; index++) { - if (!priv->qos_index[index]) { - priv->qos_index[index] = 1; - break; - } - } - if (index >= nic_attr.qos_entries) { + qos_index = flow->tc_id * priv->fs_entries + + flow->tc_index; + if (qos_index >= priv->qos_entries) { DPAA2_PMD_ERR("QoS table with %d entries full", - nic_attr.qos_entries); + priv->qos_entries); return -1; } flow->qos_rule.key_size = priv->extract.qos_key_extract.key_info.key_total_size; ret = dpni_add_qos_entry(dpni, CMD_PRI_LOW, priv->token, &flow->qos_rule, flow->tc_id, - index, 0, 0); + qos_index, 0, 0); if (ret < 0) { DPAA2_PMD_ERR( "Error in entry addition in QoS table(%d)", ret); - priv->qos_index[index] = 0; return ret; } - flow->qos_index = index; break; case RTE_FLOW_ACTION_TYPE_END: end_of_list = 1; @@ -3574,7 +3554,6 @@ int dpaa2_flow_destroy(struct rte_eth_dev *dev, "Error in adding entry to QoS table(%d)", ret); goto error; } - priv->qos_index[flow->qos_index] = 0; /* Then remove entry from FS table */ ret = dpni_remove_fs_entry(dpni, CMD_PRI_LOW, priv->token, @@ -3584,7 +3563,6 @@ int dpaa2_flow_destroy(struct rte_eth_dev *dev, "Error in entry addition in FS table(%d)", ret); goto error; } - priv->fs_index[flow->fs_index] = 0; break; case RTE_FLOW_ACTION_TYPE_RSS: ret = dpni_remove_qos_entry(dpni, CMD_PRI_LOW, priv->token, @@ -3594,7 +3572,6 @@ int dpaa2_flow_destroy(struct rte_eth_dev *dev, "Error in entry addition in QoS table(%d)", ret); goto error; } - priv->qos_index[flow->qos_index] = 0; break; default: DPAA2_PMD_ERR( From patchwork Tue Jul 7 09:22:35 2020 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Hemant Agrawal X-Patchwork-Id: 234973 Delivered-To: patch@linaro.org Received: by 2002:a92:d244:0:0:0:0:0 with SMTP id v4csp738775ilg; Tue, 7 Jul 2020 02:31:17 -0700 (PDT) X-Google-Smtp-Source: ABdhPJzTK1yKofbZJ9oQA+t9sbgYCNWwPfL9uD1gNYr6IhKziXZW0QChSOpTOxi8pm9a5aJL6Hxv X-Received: by 2002:a50:cbcd:: with SMTP id l13mr43301350edi.384.1594114277348; Tue, 07 Jul 2020 02:31:17 -0700 (PDT) ARC-Seal: i=1; a=rsa-sha256; t=1594114277; cv=none; d=google.com; s=arc-20160816; b=vRD0qjARivmQNgkL7mQhk3NUZ51VsROrXYkX8BlJiHsKRzyUBlhfGLa2uvhPbH6HIx oG39neSyN+wCVzFDL61asvThl4y9eF6lD2xDAwnB1iNJcmdC62AGbmuZZQgGpCMiOrSl epOK77aDuR0NnNJRDii6eUGCWWFMFICcdqB4jLJ9HX7X91Fk78iT5TwSeeA7HE7pwDpl pjPgmatZR85oA5EvnqOtNYETcm5t7WE6tb1l/D7+G+Oy25ZgL+2XBEnpyif8iaWvhzur FitiOX2sTphxeVq73wfpZz+IuGaMXmVz1c0Abe4HAcyqkegoehxEafYe5rhjySWJ6jNY U7CQ== ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=google.com; s=arc-20160816; h=sender:errors-to:list-subscribe:list-help:list-post:list-archive :list-unsubscribe:list-id:precedence:subject:references:in-reply-to :message-id:date:cc:to:from; bh=W5c/TVUnXEvGvn8qQrfeokLRBXiDWWcrXhKVynGlvds=; 
From: Hemant Agrawal To: dev@dpdk.org Cc: ferruh.yigit@intel.com, Jun Yang Date: Tue, 7 Jul 2020 14:52:35 +0530 Message-Id: <20200707092244.12791-21-hemant.agrawal@nxp.com> In-Reply-To: <20200707092244.12791-1-hemant.agrawal@nxp.com> References: <20200527132326.1382-1-hemant.agrawal@nxp.com> <20200707092244.12791-1-hemant.agrawal@nxp.com> Subject: [dpdk-dev] [PATCH v2 20/29] net/dpaa2: define the size of table entry From: Jun Yang If the entry size is not larger than 27 bytes, MC allocates one TCAM entry; otherwise it allocates two TCAM entries. The size of the extracts performed by hardware must not exceed the TCAM entry size (27 or 54 bytes), so define the flow entry size as 54. Signed-off-by: Jun Yang --- drivers/net/dpaa2/dpaa2_flow.c | 90 ++++++++++++++++++++++------------ 1 file changed, 60 insertions(+), 30 deletions(-) -- 2.17.1 diff --git a/drivers/net/dpaa2/dpaa2_flow.c b/drivers/net/dpaa2/dpaa2_flow.c index 760a8a793..bcbd5977a 100644 --- a/drivers/net/dpaa2/dpaa2_flow.c +++ b/drivers/net/dpaa2/dpaa2_flow.c @@ -29,6 +29,8 @@ */ int mc_l4_port_identification; +#define FIXED_ENTRY_SIZE 54 + enum flow_rule_ipaddr_type { FLOW_NONE_IPADDR, FLOW_IPV4_ADDR, @@ -47,7 +49,8 @@ struct rte_flow { LIST_ENTRY(rte_flow) next; /**< Pointer to the next flow structure. 
*/ struct dpni_rule_cfg qos_rule; struct dpni_rule_cfg fs_rule; - uint8_t key_size; + uint8_t qos_real_key_size; + uint8_t fs_real_key_size; uint8_t tc_id; /** Traffic Class ID. */ uint8_t tc_index; /** index within this Traffic Class. */ enum rte_flow_action_type action; @@ -478,6 +481,7 @@ dpaa2_flow_rule_data_set( prot, field); return -1; } + memcpy((void *)(size_t)(rule->key_iova + offset), key, size); memcpy((void *)(size_t)(rule->mask_iova + offset), mask, size); @@ -523,9 +527,11 @@ _dpaa2_flow_rule_move_ipaddr_tail( len = NH_FLD_IPV6_ADDR_SIZE; memcpy(tmp, (char *)key_src, len); + memset((char *)key_src, 0, len); memcpy((char *)key_dst, tmp, len); memcpy(tmp, (char *)mask_src, len); + memset((char *)mask_src, 0, len); memcpy((char *)mask_dst, tmp, len); return 0; @@ -1251,8 +1257,7 @@ dpaa2_configure_flow_generic_ip( return -1; } - local_cfg |= (DPAA2_QOS_TABLE_RECONFIGURE | - DPAA2_QOS_TABLE_IPADDR_EXTRACT); + local_cfg |= DPAA2_QOS_TABLE_RECONFIGURE; } index = dpaa2_flow_extract_search( @@ -1269,8 +1274,7 @@ dpaa2_configure_flow_generic_ip( return -1; } - local_cfg |= (DPAA2_FS_TABLE_RECONFIGURE | - DPAA2_FS_TABLE_IPADDR_EXTRACT); + local_cfg |= DPAA2_FS_TABLE_RECONFIGURE; } if (spec_ipv4) @@ -1339,8 +1343,7 @@ dpaa2_configure_flow_generic_ip( return -1; } - local_cfg |= (DPAA2_QOS_TABLE_RECONFIGURE | - DPAA2_QOS_TABLE_IPADDR_EXTRACT); + local_cfg |= DPAA2_QOS_TABLE_RECONFIGURE; } index = dpaa2_flow_extract_search( @@ -1361,8 +1364,7 @@ dpaa2_configure_flow_generic_ip( return -1; } - local_cfg |= (DPAA2_FS_TABLE_RECONFIGURE | - DPAA2_FS_TABLE_IPADDR_EXTRACT); + local_cfg |= DPAA2_FS_TABLE_RECONFIGURE; } if (spec_ipv4) @@ -2641,7 +2643,7 @@ dpaa2_flow_entry_update( char ipdst_key[NH_FLD_IPV6_ADDR_SIZE]; char ipsrc_mask[NH_FLD_IPV6_ADDR_SIZE]; char ipdst_mask[NH_FLD_IPV6_ADDR_SIZE]; - int extend = -1, extend1, size; + int extend = -1, extend1, size = -1; uint16_t qos_index; while (curr) { @@ -2696,6 +2698,9 @@ dpaa2_flow_entry_update( else extend = extend1; + RTE_ASSERT((size == NH_FLD_IPV4_ADDR_SIZE) || + (size == NH_FLD_IPV6_ADDR_SIZE)); + memcpy(ipsrc_key, (char *)(size_t)curr->qos_rule.key_iova + curr->ipaddr_rule.qos_ipsrc_offset, @@ -2725,6 +2730,9 @@ dpaa2_flow_entry_update( else extend = extend1; + RTE_ASSERT((size == NH_FLD_IPV4_ADDR_SIZE) || + (size == NH_FLD_IPV6_ADDR_SIZE)); + memcpy(ipdst_key, (char *)(size_t)curr->qos_rule.key_iova + curr->ipaddr_rule.qos_ipdst_offset, @@ -2745,6 +2753,8 @@ dpaa2_flow_entry_update( } if (curr->ipaddr_rule.qos_ipsrc_offset >= 0) { + RTE_ASSERT((size == NH_FLD_IPV4_ADDR_SIZE) || + (size == NH_FLD_IPV6_ADDR_SIZE)); memcpy((char *)(size_t)curr->qos_rule.key_iova + curr->ipaddr_rule.qos_ipsrc_offset, ipsrc_key, @@ -2755,6 +2765,8 @@ dpaa2_flow_entry_update( size); } if (curr->ipaddr_rule.qos_ipdst_offset >= 0) { + RTE_ASSERT((size == NH_FLD_IPV4_ADDR_SIZE) || + (size == NH_FLD_IPV6_ADDR_SIZE)); memcpy((char *)(size_t)curr->qos_rule.key_iova + curr->ipaddr_rule.qos_ipdst_offset, ipdst_key, @@ -2766,7 +2778,9 @@ dpaa2_flow_entry_update( } if (extend >= 0) - curr->qos_rule.key_size += extend; + curr->qos_real_key_size += extend; + + curr->qos_rule.key_size = FIXED_ENTRY_SIZE; ret = dpni_add_qos_entry(dpni, CMD_PRI_LOW, priv->token, &curr->qos_rule, @@ -2873,7 +2887,8 @@ dpaa2_flow_entry_update( } if (extend >= 0) - curr->fs_rule.key_size += extend; + curr->fs_real_key_size += extend; + curr->fs_rule.key_size = FIXED_ENTRY_SIZE; ret = dpni_add_fs_entry(dpni, CMD_PRI_LOW, priv->token, curr->tc_id, curr->tc_index, @@ -3093,31 +3108,34 @@ 
dpaa2_generic_flow_set(struct rte_flow *flow, priv->qos_entries); return -1; } - flow->qos_rule.key_size = priv->extract - .qos_key_extract.key_info.key_total_size; + flow->qos_rule.key_size = FIXED_ENTRY_SIZE; if (flow->ipaddr_rule.ipaddr_type == FLOW_IPV4_ADDR) { if (flow->ipaddr_rule.qos_ipdst_offset >= flow->ipaddr_rule.qos_ipsrc_offset) { - flow->qos_rule.key_size = + flow->qos_real_key_size = flow->ipaddr_rule.qos_ipdst_offset + NH_FLD_IPV4_ADDR_SIZE; } else { - flow->qos_rule.key_size = + flow->qos_real_key_size = flow->ipaddr_rule.qos_ipsrc_offset + NH_FLD_IPV4_ADDR_SIZE; } - } else if (flow->ipaddr_rule.ipaddr_type == FLOW_IPV6_ADDR) { + } else if (flow->ipaddr_rule.ipaddr_type == + FLOW_IPV6_ADDR) { if (flow->ipaddr_rule.qos_ipdst_offset >= flow->ipaddr_rule.qos_ipsrc_offset) { - flow->qos_rule.key_size = + flow->qos_real_key_size = flow->ipaddr_rule.qos_ipdst_offset + NH_FLD_IPV6_ADDR_SIZE; } else { - flow->qos_rule.key_size = + flow->qos_real_key_size = flow->ipaddr_rule.qos_ipsrc_offset + NH_FLD_IPV6_ADDR_SIZE; } } + + flow->qos_rule.key_size = FIXED_ENTRY_SIZE; + ret = dpni_add_qos_entry(dpni, CMD_PRI_LOW, priv->token, &flow->qos_rule, flow->tc_id, qos_index, @@ -3134,17 +3152,20 @@ dpaa2_generic_flow_set(struct rte_flow *flow, priv->fs_entries); return -1; } - flow->fs_rule.key_size = priv->extract - .tc_key_extract[attr->group].key_info.key_total_size; + + flow->fs_real_key_size = + priv->extract.tc_key_extract[flow->tc_id] + .key_info.key_total_size; + if (flow->ipaddr_rule.ipaddr_type == FLOW_IPV4_ADDR) { if (flow->ipaddr_rule.fs_ipdst_offset >= flow->ipaddr_rule.fs_ipsrc_offset) { - flow->fs_rule.key_size = + flow->fs_real_key_size = flow->ipaddr_rule.fs_ipdst_offset + NH_FLD_IPV4_ADDR_SIZE; } else { - flow->fs_rule.key_size = + flow->fs_real_key_size = flow->ipaddr_rule.fs_ipsrc_offset + NH_FLD_IPV4_ADDR_SIZE; } @@ -3152,15 +3173,18 @@ dpaa2_generic_flow_set(struct rte_flow *flow, FLOW_IPV6_ADDR) { if (flow->ipaddr_rule.fs_ipdst_offset >= flow->ipaddr_rule.fs_ipsrc_offset) { - flow->fs_rule.key_size = + flow->fs_real_key_size = flow->ipaddr_rule.fs_ipdst_offset + NH_FLD_IPV6_ADDR_SIZE; } else { - flow->fs_rule.key_size = + flow->fs_real_key_size = flow->ipaddr_rule.fs_ipsrc_offset + NH_FLD_IPV6_ADDR_SIZE; } } + + flow->fs_rule.key_size = FIXED_ENTRY_SIZE; + ret = dpni_add_fs_entry(dpni, CMD_PRI_LOW, priv->token, flow->tc_id, flow->tc_index, &flow->fs_rule, &action); @@ -3259,8 +3283,10 @@ dpaa2_generic_flow_set(struct rte_flow *flow, priv->qos_entries); return -1; } - flow->qos_rule.key_size = + + flow->qos_real_key_size = priv->extract.qos_key_extract.key_info.key_total_size; + flow->qos_rule.key_size = FIXED_ENTRY_SIZE; ret = dpni_add_qos_entry(dpni, CMD_PRI_LOW, priv->token, &flow->qos_rule, flow->tc_id, qos_index, 0, 0); @@ -3283,11 +3309,15 @@ dpaa2_generic_flow_set(struct rte_flow *flow, } if (!ret) { - ret = dpaa2_flow_entry_update(priv, flow->tc_id); - if (ret) { - DPAA2_PMD_ERR("Flow entry update failed."); + if (is_keycfg_configured & + (DPAA2_QOS_TABLE_RECONFIGURE | + DPAA2_FS_TABLE_RECONFIGURE)) { + ret = dpaa2_flow_entry_update(priv, flow->tc_id); + if (ret) { + DPAA2_PMD_ERR("Flow entry update failed."); - return -1; + return -1; + } } /* New rules are inserted. 
*/ if (!curr) { From patchwork Tue Jul 7 09:22:36 2020 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Hemant Agrawal X-Patchwork-Id: 234974
From: Hemant Agrawal To: dev@dpdk.org Cc: ferruh.yigit@intel.com, Jun Yang Date: Tue, 7 Jul 2020 14:52:36 +0530 Message-Id: <20200707092244.12791-22-hemant.agrawal@nxp.com> In-Reply-To: <20200707092244.12791-1-hemant.agrawal@nxp.com> References: <20200527132326.1382-1-hemant.agrawal@nxp.com> <20200707092244.12791-1-hemant.agrawal@nxp.com> Subject: [dpdk-dev] [PATCH v2 21/29] net/dpaa2: add logging of flow extracts and rules From: Jun Yang This patch adds support for logging the flow extracts and rules.
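The logging below is gated on an environment variable rather than a compile-time flag: dpaa2_flow_control_log is read with getenv() at flow-create time, and every dump helper returns early when it is unset. A condensed sketch of that pattern, assuming only the standard C library (names other than DPAA2_FLOW_CONTROL_LOG are illustrative):

    #include <stdio.h>
    #include <stdlib.h>

    static char *flow_log_on; /* non-NULL once the variable is set */

    static void
    dump_key(const uint8_t *key, int len)
    {
        int i;

        if (!flow_log_on) /* logging disabled: stay silent */
            return;
        for (i = 0; i < len; i++)
            printf("%02x ", key[i]);
        printf("\r\n");
    }

    /* At flow-create time, mirroring the patch:
     *     flow_log_on = getenv("DPAA2_FLOW_CONTROL_LOG");
     * so running "DPAA2_FLOW_CONTROL_LOG=1 ./app" enables the dumps.
     */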
Signed-off-by: Jun Yang --- drivers/net/dpaa2/dpaa2_flow.c | 213 ++++++++++++++++++++++++++++++++- 1 file changed, 209 insertions(+), 4 deletions(-) -- 2.17.1 diff --git a/drivers/net/dpaa2/dpaa2_flow.c b/drivers/net/dpaa2/dpaa2_flow.c index bcbd5977a..95756bf7b 100644 --- a/drivers/net/dpaa2/dpaa2_flow.c +++ b/drivers/net/dpaa2/dpaa2_flow.c @@ -29,6 +29,8 @@ */ int mc_l4_port_identification; +static char *dpaa2_flow_control_log; + #define FIXED_ENTRY_SIZE 54 enum flow_rule_ipaddr_type { @@ -149,6 +151,189 @@ static const struct rte_flow_item_gre dpaa2_flow_item_gre_mask = { #endif +static inline void dpaa2_prot_field_string( + enum net_prot prot, uint32_t field, + char *string) +{ + if (!dpaa2_flow_control_log) + return; + + if (prot == NET_PROT_ETH) { + strcpy(string, "eth"); + if (field == NH_FLD_ETH_DA) + strcat(string, ".dst"); + else if (field == NH_FLD_ETH_SA) + strcat(string, ".src"); + else if (field == NH_FLD_ETH_TYPE) + strcat(string, ".type"); + else + strcat(string, ".unknown field"); + } else if (prot == NET_PROT_VLAN) { + strcpy(string, "vlan"); + if (field == NH_FLD_VLAN_TCI) + strcat(string, ".tci"); + else + strcat(string, ".unknown field"); + } else if (prot == NET_PROT_IP) { + strcpy(string, "ip"); + if (field == NH_FLD_IP_SRC) + strcat(string, ".src"); + else if (field == NH_FLD_IP_DST) + strcat(string, ".dst"); + else if (field == NH_FLD_IP_PROTO) + strcat(string, ".proto"); + else + strcat(string, ".unknown field"); + } else if (prot == NET_PROT_TCP) { + strcpy(string, "tcp"); + if (field == NH_FLD_TCP_PORT_SRC) + strcat(string, ".src"); + else if (field == NH_FLD_TCP_PORT_DST) + strcat(string, ".dst"); + else + strcat(string, ".unknown field"); + } else if (prot == NET_PROT_UDP) { + strcpy(string, "udp"); + if (field == NH_FLD_UDP_PORT_SRC) + strcat(string, ".src"); + else if (field == NH_FLD_UDP_PORT_DST) + strcat(string, ".dst"); + else + strcat(string, ".unknown field"); + } else if (prot == NET_PROT_ICMP) { + strcpy(string, "icmp"); + if (field == NH_FLD_ICMP_TYPE) + strcat(string, ".type"); + else if (field == NH_FLD_ICMP_CODE) + strcat(string, ".code"); + else + strcat(string, ".unknown field"); + } else if (prot == NET_PROT_SCTP) { + strcpy(string, "sctp"); + if (field == NH_FLD_SCTP_PORT_SRC) + strcat(string, ".src"); + else if (field == NH_FLD_SCTP_PORT_DST) + strcat(string, ".dst"); + else + strcat(string, ".unknown field"); + } else if (prot == NET_PROT_GRE) { + strcpy(string, "gre"); + if (field == NH_FLD_GRE_TYPE) + strcat(string, ".type"); + else + strcat(string, ".unknown field"); + } else { + strcpy(string, "unknown protocol"); + } +} + +static inline void dpaa2_flow_qos_table_extracts_log( + const struct dpaa2_dev_priv *priv) +{ + int idx; + char string[32]; + + if (!dpaa2_flow_control_log) + return; + + printf("Setup QoS table: number of extracts: %d\r\n", + priv->extract.qos_key_extract.dpkg.num_extracts); + for (idx = 0; idx < priv->extract.qos_key_extract.dpkg.num_extracts; + idx++) { + dpaa2_prot_field_string(priv->extract.qos_key_extract.dpkg + .extracts[idx].extract.from_hdr.prot, + priv->extract.qos_key_extract.dpkg.extracts[idx] + .extract.from_hdr.field, + string); + printf("%s", string); + if ((idx + 1) < priv->extract.qos_key_extract.dpkg.num_extracts) + printf(" / "); + } + printf("\r\n"); +} + +static inline void dpaa2_flow_fs_table_extracts_log( + const struct dpaa2_dev_priv *priv, int tc_id) +{ + int idx; + char string[32]; + + if (!dpaa2_flow_control_log) + return; + + printf("Setup FS table: number of extracts of TC[%d]: %d\r\n", + 
tc_id, priv->extract.tc_key_extract[tc_id] + .dpkg.num_extracts); + for (idx = 0; idx < priv->extract.tc_key_extract[tc_id] + .dpkg.num_extracts; idx++) { + dpaa2_prot_field_string(priv->extract.tc_key_extract[tc_id] + .dpkg.extracts[idx].extract.from_hdr.prot, + priv->extract.tc_key_extract[tc_id].dpkg.extracts[idx] + .extract.from_hdr.field, + string); + printf("%s", string); + if ((idx + 1) < priv->extract.tc_key_extract[tc_id] + .dpkg.num_extracts) + printf(" / "); + } + printf("\r\n"); +} + +static inline void dpaa2_flow_qos_entry_log( + const char *log_info, const struct rte_flow *flow, int qos_index) +{ + int idx; + uint8_t *key, *mask; + + if (!dpaa2_flow_control_log) + return; + + printf("\r\n%s QoS entry[%d] for TC[%d], extracts size is %d\r\n", + log_info, qos_index, flow->tc_id, flow->qos_real_key_size); + + key = (uint8_t *)(size_t)flow->qos_rule.key_iova; + mask = (uint8_t *)(size_t)flow->qos_rule.mask_iova; + + printf("key:\r\n"); + for (idx = 0; idx < flow->qos_real_key_size; idx++) + printf("%02x ", key[idx]); + + printf("\r\nmask:\r\n"); + for (idx = 0; idx < flow->qos_real_key_size; idx++) + printf("%02x ", mask[idx]); + + printf("\r\n%s QoS ipsrc: %d, ipdst: %d\r\n", log_info, + flow->ipaddr_rule.qos_ipsrc_offset, + flow->ipaddr_rule.qos_ipdst_offset); +} + +static inline void dpaa2_flow_fs_entry_log( + const char *log_info, const struct rte_flow *flow) +{ + int idx; + uint8_t *key, *mask; + + if (!dpaa2_flow_control_log) + return; + + printf("\r\n%s FS/TC entry[%d] of TC[%d], extracts size is %d\r\n", + log_info, flow->tc_index, flow->tc_id, flow->fs_real_key_size); + + key = (uint8_t *)(size_t)flow->fs_rule.key_iova; + mask = (uint8_t *)(size_t)flow->fs_rule.mask_iova; + + printf("key:\r\n"); + for (idx = 0; idx < flow->fs_real_key_size; idx++) + printf("%02x ", key[idx]); + + printf("\r\nmask:\r\n"); + for (idx = 0; idx < flow->fs_real_key_size; idx++) + printf("%02x ", mask[idx]); + + printf("\r\n%s FS ipsrc: %d, ipdst: %d\r\n", log_info, + flow->ipaddr_rule.fs_ipsrc_offset, + flow->ipaddr_rule.fs_ipdst_offset); +} static inline void dpaa2_flow_extract_key_set( struct dpaa2_key_info *key_info, int index, uint8_t size) @@ -2679,6 +2864,8 @@ dpaa2_flow_entry_update( qos_index = curr->tc_id * priv->fs_entries + curr->tc_index; + dpaa2_flow_qos_entry_log("Before update", curr, qos_index); + ret = dpni_remove_qos_entry(dpni, CMD_PRI_LOW, priv->token, &curr->qos_rule); if (ret) { @@ -2782,6 +2969,8 @@ dpaa2_flow_entry_update( curr->qos_rule.key_size = FIXED_ENTRY_SIZE; + dpaa2_flow_qos_entry_log("Start update", curr, qos_index); + ret = dpni_add_qos_entry(dpni, CMD_PRI_LOW, priv->token, &curr->qos_rule, curr->tc_id, qos_index, @@ -2796,6 +2985,7 @@ dpaa2_flow_entry_update( continue; } + dpaa2_flow_fs_entry_log("Before update", curr); extend = -1; ret = dpni_remove_fs_entry(dpni, CMD_PRI_LOW, @@ -2890,6 +3080,8 @@ dpaa2_flow_entry_update( curr->fs_real_key_size += extend; curr->fs_rule.key_size = FIXED_ENTRY_SIZE; + dpaa2_flow_fs_entry_log("Start update", curr); + ret = dpni_add_fs_entry(dpni, CMD_PRI_LOW, priv->token, curr->tc_id, curr->tc_index, &curr->fs_rule, &curr->action_cfg); @@ -3043,14 +3235,18 @@ dpaa2_generic_flow_set(struct rte_flow *flow, while (!end_of_list) { switch (actions[j].type) { case RTE_FLOW_ACTION_TYPE_QUEUE: - dest_queue = (const struct rte_flow_action_queue *)(actions[j].conf); + dest_queue = + (const struct rte_flow_action_queue *)(actions[j].conf); flow->flow_id = dest_queue->index; flow->action = RTE_FLOW_ACTION_TYPE_QUEUE; memset(&action, 0, 
sizeof(struct dpni_fs_action_cfg)); action.flow_id = flow->flow_id; if (is_keycfg_configured & DPAA2_QOS_TABLE_RECONFIGURE) { - if (dpkg_prepare_key_cfg(&priv->extract.qos_key_extract.dpkg, - (uint8_t *)(size_t)priv->extract.qos_extract_param) < 0) { + dpaa2_flow_qos_table_extracts_log(priv); + if (dpkg_prepare_key_cfg( + &priv->extract.qos_key_extract.dpkg, + (uint8_t *)(size_t)priv->extract.qos_extract_param) + < 0) { DPAA2_PMD_ERR( "Unable to prepare extract parameters"); return -1; @@ -3059,7 +3255,8 @@ dpaa2_generic_flow_set(struct rte_flow *flow, memset(&qos_cfg, 0, sizeof(struct dpni_qos_tbl_cfg)); qos_cfg.discard_on_miss = true; qos_cfg.keep_entries = true; - qos_cfg.key_cfg_iova = (size_t)priv->extract.qos_extract_param; + qos_cfg.key_cfg_iova = + (size_t)priv->extract.qos_extract_param; ret = dpni_set_qos_table(dpni, CMD_PRI_LOW, priv->token, &qos_cfg); if (ret < 0) { @@ -3070,6 +3267,7 @@ dpaa2_generic_flow_set(struct rte_flow *flow, } } if (is_keycfg_configured & DPAA2_FS_TABLE_RECONFIGURE) { + dpaa2_flow_fs_table_extracts_log(priv, flow->tc_id); if (dpkg_prepare_key_cfg( &priv->extract.tc_key_extract[flow->tc_id].dpkg, (uint8_t *)(size_t)priv->extract @@ -3136,6 +3334,8 @@ dpaa2_generic_flow_set(struct rte_flow *flow, flow->qos_rule.key_size = FIXED_ENTRY_SIZE; + dpaa2_flow_qos_entry_log("Start add", flow, qos_index); + ret = dpni_add_qos_entry(dpni, CMD_PRI_LOW, priv->token, &flow->qos_rule, flow->tc_id, qos_index, @@ -3185,6 +3385,8 @@ dpaa2_generic_flow_set(struct rte_flow *flow, flow->fs_rule.key_size = FIXED_ENTRY_SIZE; + dpaa2_flow_fs_entry_log("Start add", flow); + ret = dpni_add_fs_entry(dpni, CMD_PRI_LOW, priv->token, flow->tc_id, flow->tc_index, &flow->fs_rule, &action); @@ -3483,6 +3685,9 @@ struct rte_flow *dpaa2_flow_create(struct rte_eth_dev *dev, size_t key_iova = 0, mask_iova = 0; int ret; + dpaa2_flow_control_log = + getenv("DPAA2_FLOW_CONTROL_LOG"); + flow = rte_zmalloc(NULL, sizeof(struct rte_flow), RTE_CACHE_LINE_SIZE); if (!flow) { DPAA2_PMD_ERR("Failure to allocate memory for flow"); From patchwork Tue Jul 7 09:22:37 2020 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Hemant Agrawal X-Patchwork-Id: 234975 Delivered-To: patch@linaro.org Received: by 2002:a92:d244:0:0:0:0:0 with SMTP id v4csp739129ilg; Tue, 7 Jul 2020 02:31:43 -0700 (PDT) X-Google-Smtp-Source: ABdhPJwD4EcAXsYxk7Ep85UPAoFKdDLlvUnK7F5WMtX8Y3mIwDmY14y0R2mQgm7/v3nExM9KxClw X-Received: by 2002:a25:50cc:: with SMTP id e195mr89008812ybb.452.1594114303603; Tue, 07 Jul 2020 02:31:43 -0700 (PDT) ARC-Seal: i=1; a=rsa-sha256; t=1594114303; cv=none; d=google.com; s=arc-20160816; b=PPNeAyTNGYfr9q+l99qxPJ5fy86elpJ7meGgHpfnq1ztCuwh9AKWQXwiqOactGp/J9 kC3NuxyxQ/TsynuJV+WFVI6Qpq9jV37qv+BCVQM5eQgxSl8fBZvy6WFxPUU4tnTpEmw6 +oOn95Z5FYC5jWgrrCzXQOofA9tesTTJj3JRggT9bHNIbcywUWb+blRaZiAI81jbImfH Klh/HT44QxeM5ldBwos9WE9dpq4+Z8xWep0bx794Z2kyP8G2BO4KdBAao5cKtj8figci MryUuqpb/8V1JndumftS0m8aIksojHg2i6u8BNJBPtW3n+RXXYWdFj1t7VmiHDbTpX+c FuMA== ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=google.com; s=arc-20160816; h=sender:errors-to:list-subscribe:list-help:list-post:list-archive :list-unsubscribe:list-id:precedence:subject:references:in-reply-to :message-id:date:cc:to:from; bh=zl40onH5z+ipC0kb26u4cN42RLtd4TpgznjRvs9O2nQ=; b=Ek/BB2z1DupOr0BHgFSYDKWPBxz/fjGJj/DuRQBMudndqIQEV7033IAWjdiDqYcXqN MC0P6Q4u8gD3pHMtSD3NVKyHlxU3tPh2x/y0QgHUekNLSnJNfWc0zOfzMA9EAer9NP25 ahfBo27J09+vPr0/EWa9/K7NJDZxi78iIAhoq7EJLKDr9h5AlwdhivRyesZ/KLzKuaqP 
From: Hemant Agrawal To: dev@dpdk.org Cc: ferruh.yigit@intel.com, Jun Yang Date: Tue, 7 Jul 2020 14:52:37 +0530 Message-Id: <20200707092244.12791-23-hemant.agrawal@nxp.com> In-Reply-To: <20200707092244.12791-1-hemant.agrawal@nxp.com> References: <20200527132326.1382-1-hemant.agrawal@nxp.com> <20200707092244.12791-1-hemant.agrawal@nxp.com> Subject: [dpdk-dev] [PATCH v2 22/29] net/dpaa2: support discrimination between IPv4 and IPv6 From: Jun Yang Discriminate between IPv4 and IPv6 in the generic IP flow setup.
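The discrimination relies on the EtherType rather than on IP header fields: when a pattern names ipv4 or ipv6 without giving any header spec, the rule still has to separate the two families, which the helper factored out below does by matching eth.type against the corresponding EtherType. A small sketch of that selection, assuming DPDK's rte_ether/rte_byteorder definitions (the function name is illustrative):

    #include <rte_byteorder.h>
    #include <rte_ether.h>
    #include <rte_flow.h>

    /* Pick the big-endian EtherType that identifies the IP family
     * requested by the pattern item (0x0800 for IPv4, 0x86DD for IPv6).
     */
    static rte_be16_t
    ip_family_ethtype(enum rte_flow_item_type item_type)
    {
        if (item_type == RTE_FLOW_ITEM_TYPE_IPV4)
            return rte_cpu_to_be_16(RTE_ETHER_TYPE_IPV4);
        return rte_cpu_to_be_16(RTE_ETHER_TYPE_IPV6);
    }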
Signed-off-by: Jun Yang --- drivers/net/dpaa2/dpaa2_flow.c | 153 +++++++++++++++++---------------- 1 file changed, 80 insertions(+), 73 deletions(-) -- 2.17.1 diff --git a/drivers/net/dpaa2/dpaa2_flow.c b/drivers/net/dpaa2/dpaa2_flow.c index 95756bf7b..6f3139f86 100644 --- a/drivers/net/dpaa2/dpaa2_flow.c +++ b/drivers/net/dpaa2/dpaa2_flow.c @@ -1284,6 +1284,70 @@ dpaa2_configure_flow_vlan(struct rte_flow *flow, return 0; } +static int +dpaa2_configure_flow_ip_discrimation( + struct dpaa2_dev_priv *priv, struct rte_flow *flow, + const struct rte_flow_item *pattern, + int *local_cfg, int *device_configured, + uint32_t group) +{ + int index, ret; + struct proto_discrimination proto; + + index = dpaa2_flow_extract_search( + &priv->extract.qos_key_extract.dpkg, + NET_PROT_ETH, NH_FLD_ETH_TYPE); + if (index < 0) { + ret = dpaa2_flow_proto_discrimination_extract( + &priv->extract.qos_key_extract, + RTE_FLOW_ITEM_TYPE_ETH); + if (ret) { + DPAA2_PMD_ERR( + "QoS Extract ETH_TYPE to discriminate IP failed."); + return -1; + } + (*local_cfg) |= DPAA2_QOS_TABLE_RECONFIGURE; + } + + index = dpaa2_flow_extract_search( + &priv->extract.tc_key_extract[group].dpkg, + NET_PROT_ETH, NH_FLD_ETH_TYPE); + if (index < 0) { + ret = dpaa2_flow_proto_discrimination_extract( + &priv->extract.tc_key_extract[group], + RTE_FLOW_ITEM_TYPE_ETH); + if (ret) { + DPAA2_PMD_ERR( + "FS Extract ETH_TYPE to discriminate IP failed."); + return -1; + } + (*local_cfg) |= DPAA2_FS_TABLE_RECONFIGURE; + } + + ret = dpaa2_flow_rule_move_ipaddr_tail(flow, priv, group); + if (ret) { + DPAA2_PMD_ERR( + "Move ipaddr before IP discrimination set failed"); + return -1; + } + + proto.type = RTE_FLOW_ITEM_TYPE_ETH; + if (pattern->type == RTE_FLOW_ITEM_TYPE_IPV4) + proto.eth_type = rte_cpu_to_be_16(RTE_ETHER_TYPE_IPV4); + else + proto.eth_type = rte_cpu_to_be_16(RTE_ETHER_TYPE_IPV6); + ret = dpaa2_flow_proto_discrimination_rule(priv, flow, proto, group); + if (ret) { + DPAA2_PMD_ERR("IP discrimination rule set failed"); + return -1; + } + + (*device_configured) |= (*local_cfg); + + return 0; +} + + static int dpaa2_configure_flow_generic_ip( struct rte_flow *flow, @@ -1327,73 +1391,16 @@ dpaa2_configure_flow_generic_ip( flow->tc_id = group; flow->tc_index = attr->priority; - if (!spec_ipv4 && !spec_ipv6) { - /* Don't care any field of IP header, - * only care IP protocol. - * Example: flow create 0 ingress pattern ipv6 / - */ - /* Eth type is actually used for IP identification. - */ - /* TODO: Current design only supports Eth + IP, - * Eth + vLan + IP needs to add. 
- */ - struct proto_discrimination proto; - - index = dpaa2_flow_extract_search( - &priv->extract.qos_key_extract.dpkg, - NET_PROT_ETH, NH_FLD_ETH_TYPE); - if (index < 0) { - ret = dpaa2_flow_proto_discrimination_extract( - &priv->extract.qos_key_extract, - RTE_FLOW_ITEM_TYPE_ETH); - if (ret) { - DPAA2_PMD_ERR( - "QoS Ext ETH_TYPE to discriminate IP failed."); - - return -1; - } - local_cfg |= DPAA2_QOS_TABLE_RECONFIGURE; - } - - index = dpaa2_flow_extract_search( - &priv->extract.tc_key_extract[group].dpkg, - NET_PROT_ETH, NH_FLD_ETH_TYPE); - if (index < 0) { - ret = dpaa2_flow_proto_discrimination_extract( - &priv->extract.tc_key_extract[group], - RTE_FLOW_ITEM_TYPE_ETH); - if (ret) { - DPAA2_PMD_ERR( - "FS Ext ETH_TYPE to discriminate IP failed"); - - return -1; - } - local_cfg |= DPAA2_FS_TABLE_RECONFIGURE; - } - - ret = dpaa2_flow_rule_move_ipaddr_tail(flow, priv, group); - if (ret) { - DPAA2_PMD_ERR( - "Move ipaddr before IP discrimination set failed"); - return -1; - } - - proto.type = RTE_FLOW_ITEM_TYPE_ETH; - if (pattern->type == RTE_FLOW_ITEM_TYPE_IPV4) - proto.eth_type = rte_cpu_to_be_16(RTE_ETHER_TYPE_IPV4); - else - proto.eth_type = rte_cpu_to_be_16(RTE_ETHER_TYPE_IPV6); - ret = dpaa2_flow_proto_discrimination_rule(priv, flow, - proto, group); - if (ret) { - DPAA2_PMD_ERR("IP discrimination rule set failed"); - return -1; - } - - (*device_configured) |= local_cfg; + ret = dpaa2_configure_flow_ip_discrimation(priv, + flow, pattern, &local_cfg, + device_configured, group); + if (ret) { + DPAA2_PMD_ERR("IP discrimation failed!"); + return -1; + } + if (!spec_ipv4 && !spec_ipv6) return 0; - } if (mask_ipv4) { if (dpaa2_flow_extract_support((const uint8_t *)mask_ipv4, @@ -1433,10 +1440,10 @@ dpaa2_configure_flow_generic_ip( NET_PROT_IP, NH_FLD_IP_SRC); if (index < 0) { ret = dpaa2_flow_extract_add( - &priv->extract.qos_key_extract, - NET_PROT_IP, - NH_FLD_IP_SRC, - 0); + &priv->extract.qos_key_extract, + NET_PROT_IP, + NH_FLD_IP_SRC, + 0); if (ret) { DPAA2_PMD_ERR("QoS Extract add IP_SRC failed."); @@ -1519,10 +1526,10 @@ dpaa2_configure_flow_generic_ip( else size = NH_FLD_IPV6_ADDR_SIZE; ret = dpaa2_flow_extract_add( - &priv->extract.qos_key_extract, - NET_PROT_IP, - NH_FLD_IP_DST, - size); + &priv->extract.qos_key_extract, + NET_PROT_IP, + NH_FLD_IP_DST, + size); if (ret) { DPAA2_PMD_ERR("QoS Extract add IP_DST failed."); From patchwork Tue Jul 7 09:22:38 2020 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Hemant Agrawal X-Patchwork-Id: 234976 Delivered-To: patch@linaro.org Received: by 2002:a92:d244:0:0:0:0:0 with SMTP id v4csp739312ilg; Tue, 7 Jul 2020 02:31:56 -0700 (PDT) X-Google-Smtp-Source: ABdhPJzKx7dGUAgfrU0SGGQhfiETAvZEKmMqOMPkXyTuT/MSm5863jzKySwcy+MmxnUtZ60XntIs X-Received: by 2002:a25:b21e:: with SMTP id i30mr83356855ybj.35.1594114316785; Tue, 07 Jul 2020 02:31:56 -0700 (PDT) ARC-Seal: i=1; a=rsa-sha256; t=1594114316; cv=none; d=google.com; s=arc-20160816; b=MonNWHGdA1q3pefWXaZWKuUmWANyaLsfTiU4rTvivt3WxfT0nV2MfokhDh9spLTt3T QQyvMsAqNFG2N6UMhUol1IAGI5Kvgod16xb4vyTQAtwYGY4dpKRaWm7Er9OGMTZSN7ji wqqjch4wdlGzgIrdYlMhHQMP/x3h+4DssRnbhIbKRu+//vv5krLJuv7d0xqieOxC5yxC MvR2F6y8138f8RrfBqNpLBPTZ/sLpfGIXrAHtCTsngW/ZLe/2AmOzS/QSwgAvcum30jy whXvAnPzCsEAfd46aRiydp3Tcssvul8aEgJfITsvEys3M9/zdRSDMfIqVsG/Ye2B/+Yy yW/Q== ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=google.com; s=arc-20160816; h=sender:errors-to:list-subscribe:list-help:list-post:list-archive 
From: Hemant Agrawal To: dev@dpdk.org Cc: ferruh.yigit@intel.com, Jun Yang Date: Tue, 7 Jul 2020 14:52:38 +0530 Message-Id: <20200707092244.12791-24-hemant.agrawal@nxp.com> In-Reply-To: <20200707092244.12791-1-hemant.agrawal@nxp.com> References: <20200527132326.1382-1-hemant.agrawal@nxp.com> <20200707092244.12791-1-hemant.agrawal@nxp.com> Subject: [dpdk-dev] [PATCH v2 23/29] net/dpaa2: support distribution size set on multiple TCs From: Jun Yang The default distribution size of a TC is 1, a limit imposed by MC. Set the distribution size of each TC to support multiple RXQs per TC.
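Concretely, the change replaces the single dpaa2_setup_flow_dist() call, which only ever configured TC 0, with a loop over all RX traffic classes, each configured with dist_size = priv->dist_queues. A condensed sketch of that loop, using the new dpaa2_setup_flow_dist() signature from the diff below (the wrapper name is illustrative; detailed error logging trimmed):

    /* Configure hash distribution on every RX TC instead of TC 0 only.
     * rss_hf is the requested RSS hash-field set; num_rx_tc comes from
     * the DPNI attributes read at init time.
     */
    static int
    setup_dist_all_tcs(struct rte_eth_dev *dev, uint64_t rss_hf,
                       int num_rx_tc)
    {
        int tc_index, ret;

        for (tc_index = 0; tc_index < num_rx_tc; tc_index++) {
            ret = dpaa2_setup_flow_dist(dev, rss_hf, tc_index);
            if (ret)
                return ret; /* caller reports the failing TC */
        }
        return 0;
    }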
Signed-off-by: Jun Yang --- drivers/net/dpaa2/base/dpaa2_hw_dpni.c | 6 +-- drivers/net/dpaa2/dpaa2_ethdev.c | 51 ++++++++++++++++---------- drivers/net/dpaa2/dpaa2_ethdev.h | 2 +- 3 files changed, 36 insertions(+), 23 deletions(-) -- 2.17.1 diff --git a/drivers/net/dpaa2/base/dpaa2_hw_dpni.c b/drivers/net/dpaa2/base/dpaa2_hw_dpni.c index 34de0d1f7..9f0dad6e7 100644 --- a/drivers/net/dpaa2/base/dpaa2_hw_dpni.c +++ b/drivers/net/dpaa2/base/dpaa2_hw_dpni.c @@ -81,14 +81,14 @@ rte_pmd_dpaa2_set_custom_hash(uint16_t port_id, int dpaa2_setup_flow_dist(struct rte_eth_dev *eth_dev, - uint64_t req_dist_set) + uint64_t req_dist_set, int tc_index) { struct dpaa2_dev_priv *priv = eth_dev->data->dev_private; struct fsl_mc_io *dpni = priv->hw; struct dpni_rx_tc_dist_cfg tc_cfg; struct dpkg_profile_cfg kg_cfg; void *p_params; - int ret, tc_index = 0; + int ret; p_params = rte_malloc( NULL, DIST_PARAM_IOVA_SIZE, RTE_CACHE_LINE_SIZE); @@ -107,7 +107,7 @@ dpaa2_setup_flow_dist(struct rte_eth_dev *eth_dev, return ret; } tc_cfg.key_cfg_iova = (uint64_t)(DPAA2_VADDR_TO_IOVA(p_params)); - tc_cfg.dist_size = eth_dev->data->nb_rx_queues; + tc_cfg.dist_size = priv->dist_queues; tc_cfg.dist_mode = DPNI_DIST_MODE_HASH; ret = dpkg_prepare_key_cfg(&kg_cfg, p_params); diff --git a/drivers/net/dpaa2/dpaa2_ethdev.c b/drivers/net/dpaa2/dpaa2_ethdev.c index 008e1c570..020af4b03 100644 --- a/drivers/net/dpaa2/dpaa2_ethdev.c +++ b/drivers/net/dpaa2/dpaa2_ethdev.c @@ -453,7 +453,7 @@ dpaa2_eth_dev_configure(struct rte_eth_dev *dev) int rx_l4_csum_offload = false; int tx_l3_csum_offload = false; int tx_l4_csum_offload = false; - int ret; + int ret, tc_index; PMD_INIT_FUNC_TRACE(); @@ -493,12 +493,16 @@ dpaa2_eth_dev_configure(struct rte_eth_dev *dev) } if (eth_conf->rxmode.mq_mode == ETH_MQ_RX_RSS) { - ret = dpaa2_setup_flow_dist(dev, - eth_conf->rx_adv_conf.rss_conf.rss_hf); - if (ret) { - DPAA2_PMD_ERR("Unable to set flow distribution." - "Check queue config"); - return ret; + for (tc_index = 0; tc_index < priv->num_rx_tc; tc_index++) { + ret = dpaa2_setup_flow_dist(dev, + eth_conf->rx_adv_conf.rss_conf.rss_hf, + tc_index); + if (ret) { + DPAA2_PMD_ERR( + "Unable to set flow distribution on tc%d." 
+ "Check queue config", tc_index); + return ret; + } } } @@ -755,11 +759,11 @@ dpaa2_dev_tx_queue_setup(struct rte_eth_dev *dev, flow_id = 0; ret = dpni_set_queue(dpni, CMD_PRI_LOW, priv->token, DPNI_QUEUE_TX, - tc_id, flow_id, options, &tx_flow_cfg); + tc_id, flow_id, options, &tx_flow_cfg); if (ret) { DPAA2_PMD_ERR("Error in setting the tx flow: " - "tc_id=%d, flow=%d err=%d", - tc_id, flow_id, ret); + "tc_id=%d, flow=%d err=%d", + tc_id, flow_id, ret); return -1; } @@ -1984,22 +1988,31 @@ dpaa2_dev_rss_hash_update(struct rte_eth_dev *dev, struct rte_eth_rss_conf *rss_conf) { struct rte_eth_dev_data *data = dev->data; + struct dpaa2_dev_priv *priv = data->dev_private; struct rte_eth_conf *eth_conf = &data->dev_conf; - int ret; + int ret, tc_index; PMD_INIT_FUNC_TRACE(); if (rss_conf->rss_hf) { - ret = dpaa2_setup_flow_dist(dev, rss_conf->rss_hf); - if (ret) { - DPAA2_PMD_ERR("Unable to set flow dist"); - return ret; + for (tc_index = 0; tc_index < priv->num_rx_tc; tc_index++) { + ret = dpaa2_setup_flow_dist(dev, rss_conf->rss_hf, + tc_index); + if (ret) { + DPAA2_PMD_ERR("Unable to set flow dist on tc%d", + tc_index); + return ret; + } } } else { - ret = dpaa2_remove_flow_dist(dev, 0); - if (ret) { - DPAA2_PMD_ERR("Unable to remove flow dist"); - return ret; + for (tc_index = 0; tc_index < priv->num_rx_tc; tc_index++) { + ret = dpaa2_remove_flow_dist(dev, tc_index); + if (ret) { + DPAA2_PMD_ERR( + "Unable to remove flow dist on tc%d", + tc_index); + return ret; + } } } eth_conf->rx_adv_conf.rss_conf.rss_hf = rss_conf->rss_hf; diff --git a/drivers/net/dpaa2/dpaa2_ethdev.h b/drivers/net/dpaa2/dpaa2_ethdev.h index b49b88a2d..52faeeefe 100644 --- a/drivers/net/dpaa2/dpaa2_ethdev.h +++ b/drivers/net/dpaa2/dpaa2_ethdev.h @@ -179,7 +179,7 @@ int dpaa2_distset_to_dpkg_profile_cfg(uint64_t req_dist_set, struct dpkg_profile_cfg *kg_cfg); int dpaa2_setup_flow_dist(struct rte_eth_dev *eth_dev, - uint64_t req_dist_set); + uint64_t req_dist_set, int tc_index); int dpaa2_remove_flow_dist(struct rte_eth_dev *eth_dev, uint8_t tc_index); From patchwork Tue Jul 7 09:22:39 2020 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Hemant Agrawal X-Patchwork-Id: 234977 Delivered-To: patch@linaro.org Received: by 2002:a92:d244:0:0:0:0:0 with SMTP id v4csp739410ilg; Tue, 7 Jul 2020 02:32:08 -0700 (PDT) X-Google-Smtp-Source: ABdhPJy170czb9TPFnAu8cZfbfU5LnYO9gVKgHSx2Dtg8oXa5RU2SokZpRtPkjjU5vq1D0xWZkAV X-Received: by 2002:a25:56:: with SMTP id 83mr94165353yba.149.1594114328204; Tue, 07 Jul 2020 02:32:08 -0700 (PDT) ARC-Seal: i=1; a=rsa-sha256; t=1594114328; cv=none; d=google.com; s=arc-20160816; b=yLmxJ50U0PsTP1xohOl6aP6wnA3XmqorHx90F8ZaZjbnHDO6LXjS8lfTM4XxVwIjxv SCDmkYY+FMoClcuLKsP2ec2Ucf3CvAiT6PvlNTq3ceKo84BEv0cdbAxrZwJD7ZN5SF+V 6SF5tZoxh6dAOw5BIVYxm6vYktzOxOVNEOwPwRbjPIgM6AYmorGHenoJSBDFU8DhOo/K 3icjN7yV6Mr4B9zBx0Og3yl1FO0UR74W49kCLAtt47cjdQ0rZjpEneyPMI5AURBmc1FE x1C9c4pbBa3bq2m1VRxb0dctRDirFIqhZkz1hdJAJRUbDoARYmkm/1BjjEk+lQ99/Dzv NFWw== ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=google.com; s=arc-20160816; h=sender:errors-to:list-subscribe:list-help:list-post:list-archive :list-unsubscribe:list-id:precedence:subject:references:in-reply-to :message-id:date:cc:to:from; bh=ySOi1njfj1DfXxqOCZI6QRmqQQkTfnv7GX24s55LKL8=; b=KUaBIoEZClqPqq5fCicj97ntgWXt1yBWmFtgZiRRhozzNeOlZO0rai1cXTg5OuT+A7 c/vYknvHCdQ06egHMoCjQWcx+1IZzRxKNPWLrXvbDQMYQgDN770S2km8bs5wAESm5HYr 5kb6F4K0V/oM4IV6Jln0Dj4L0Rpsfm7pFQokv8Mf/xR5pG4Mp9Ae4N+9wdQPBF5Zqhji 
From: Hemant Agrawal To: dev@dpdk.org Cc: ferruh.yigit@intel.com, Jun Yang Date: Tue, 7 Jul 2020 14:52:39 +0530 Message-Id: <20200707092244.12791-25-hemant.agrawal@nxp.com> In-Reply-To: <20200707092244.12791-1-hemant.agrawal@nxp.com> References: <20200527132326.1382-1-hemant.agrawal@nxp.com> <20200707092244.12791-1-hemant.agrawal@nxp.com> Subject: [dpdk-dev] [PATCH v2 24/29] net/dpaa2: support index of queue action for flow From: Jun Yang It makes more sense to use the RXQ index for queue distribution instead of the flow ID. Signed-off-by: Jun Yang --- drivers/net/dpaa2/dpaa2_flow.c | 27 +++++++++++++++++---------- 1 file changed, 17 insertions(+), 10 deletions(-) -- 2.17.1 diff --git a/drivers/net/dpaa2/dpaa2_flow.c b/drivers/net/dpaa2/dpaa2_flow.c index 6f3139f86..76f68b903 100644 --- a/drivers/net/dpaa2/dpaa2_flow.c +++ b/drivers/net/dpaa2/dpaa2_flow.c @@ -56,7 +56,6 @@ struct rte_flow { uint8_t tc_id; /** Traffic Class ID. */ uint8_t tc_index; /** index within this Traffic Class. */ enum rte_flow_action_type action; - uint16_t flow_id; /* Special for IP address to specify the offset * in key/mask. 
*/ @@ -3141,6 +3140,7 @@ dpaa2_generic_flow_set(struct rte_flow *flow, struct dpni_qos_tbl_cfg qos_cfg; struct dpni_fs_action_cfg action; struct dpaa2_dev_priv *priv = dev->data->dev_private; + struct dpaa2_queue *rxq; struct fsl_mc_io *dpni = (struct fsl_mc_io *)priv->hw; size_t param; struct rte_flow *curr = LIST_FIRST(&priv->flows); @@ -3244,10 +3244,10 @@ dpaa2_generic_flow_set(struct rte_flow *flow, case RTE_FLOW_ACTION_TYPE_QUEUE: dest_queue = (const struct rte_flow_action_queue *)(actions[j].conf); - flow->flow_id = dest_queue->index; + rxq = priv->rx_vq[dest_queue->index]; flow->action = RTE_FLOW_ACTION_TYPE_QUEUE; memset(&action, 0, sizeof(struct dpni_fs_action_cfg)); - action.flow_id = flow->flow_id; + action.flow_id = rxq->flow_id; if (is_keycfg_configured & DPAA2_QOS_TABLE_RECONFIGURE) { dpaa2_flow_qos_table_extracts_log(priv); if (dpkg_prepare_key_cfg( @@ -3303,8 +3303,6 @@ dpaa2_generic_flow_set(struct rte_flow *flow, } /* Configure QoS table first */ - action.flow_id = action.flow_id % priv->num_rx_tc; - qos_index = flow->tc_id * priv->fs_entries + flow->tc_index; @@ -3407,13 +3405,22 @@ dpaa2_generic_flow_set(struct rte_flow *flow, break; case RTE_FLOW_ACTION_TYPE_RSS: rss_conf = (const struct rte_flow_action_rss *)(actions[j].conf); + if (rss_conf->queue_num > priv->dist_queues) { + DPAA2_PMD_ERR( + "RSS number exceeds the distrbution size"); + return -ENOTSUP; + } + for (i = 0; i < (int)rss_conf->queue_num; i++) { - if (rss_conf->queue[i] < - (attr->group * priv->dist_queues) || - rss_conf->queue[i] >= - ((attr->group + 1) * priv->dist_queues)) { + if (rss_conf->queue[i] >= priv->nb_rx_queues) { + DPAA2_PMD_ERR( + "RSS RXQ number exceeds the total number"); + return -ENOTSUP; + } + rxq = priv->rx_vq[rss_conf->queue[i]]; + if (rxq->tc_index != attr->group) { DPAA2_PMD_ERR( - "Queue/Group combination are not supported\n"); + "RSS RXQ distributed is not in current group"); return -ENOTSUP; } } From patchwork Tue Jul 7 09:22:40 2020 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Hemant Agrawal X-Patchwork-Id: 234978 Delivered-To: patch@linaro.org Received: by 2002:a92:d244:0:0:0:0:0 with SMTP id v4csp739686ilg; Tue, 7 Jul 2020 02:32:29 -0700 (PDT) X-Google-Smtp-Source: ABdhPJzB/0E2v9XqtOE6ZZRbPFvBXUipWxy0D5K0j4SQXN8ohZu4VrLuT6g+h+hqRBe0j23UipaT X-Received: by 2002:a25:cc12:: with SMTP id l18mr18696122ybf.480.1594114349426; Tue, 07 Jul 2020 02:32:29 -0700 (PDT) ARC-Seal: i=1; a=rsa-sha256; t=1594114349; cv=none; d=google.com; s=arc-20160816; b=iY0fhS+aimGJdSu1acEYaPC3YPil47yVgWjfCROZL+vYIWvYI4SX0ZhOJcpEoyKv7K qgdrIQDpvtD4Y6TSP6xDYGhZRMws4HuPpIwG8C6fXHmAMavR9VVcXxDEl2zcGZvsl9yq depxwbTmcLt64brWc+MmrBv41tlg1jlu1u6QVRnOYHPvrKA1AizhW74vKmRI6vKqGnN8 YPN3vapOLNvaGsqX2pbbzmMFWYLKqtTLhRNaCnT2jYEVd6lKiUeL2cofGxJzaZfTVVlu 8XZuFGqMMGqueTn5+O0JZm8y5YImMFJbBktoXDKJ/EiQnJHqVb7VTRpuHYANKNySnQ1G YjbQ== ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=google.com; s=arc-20160816; h=sender:errors-to:list-subscribe:list-help:list-post:list-archive :list-unsubscribe:list-id:precedence:subject:references:in-reply-to :message-id:date:cc:to:from; bh=fKGyYC4C5sAlWuN5ugDMjBcZswrsytswd5yTg1kQoKg=; b=NkbNbJZgL9YDfSuWc2Esr4R2Zlyfq/dnlH3q3woXU8Zl+VqsRE0szrq53HAEKTBcZ9 WUeM02FLrUmyZH/XAy0WyKlqrEVyKE+NiciZQtwq6hnEWNzO8IkukLQ5RTE0iRQn5fnU QV7kfywSnyOJEw+sYrL7GZZ716LpmgnB3FkY9/OPibqasDXCPrmoLOquVGkPnt9QZ+F0 I4p7E8xluZ/qRdjuuOHYCW5IVbEMi/nCfkJYh+JoMoPOibG1v/tnW++IgUp9SGi9SmDF 
From patchwork Tue Jul 7 09:22:40 2020
X-Patchwork-Submitter: Hemant Agrawal
X-Patchwork-Id: 234978
From: Hemant Agrawal
To: dev@dpdk.org
Cc: ferruh.yigit@intel.com, Jun Yang
Date: Tue, 7 Jul 2020 14:52:40 +0530
Message-Id: <20200707092244.12791-26-hemant.agrawal@nxp.com>
Subject: [dpdk-dev] [PATCH v2 25/29] net/dpaa2: add flow data sanity check

From: Jun Yang

Check the flow attributes and actions before creating the flow.
Otherwise, the QoS table and the FS table would need to be rebuilt
whenever such a check fails after they have already been updated.
Signed-off-by: Jun Yang
---
 drivers/net/dpaa2/dpaa2_flow.c | 84 ++++++++++++++++++++++++++--------
 1 file changed, 65 insertions(+), 19 deletions(-)

-- 
2.17.1

diff --git a/drivers/net/dpaa2/dpaa2_flow.c b/drivers/net/dpaa2/dpaa2_flow.c
index 76f68b903..3601829c9 100644
--- a/drivers/net/dpaa2/dpaa2_flow.c
+++ b/drivers/net/dpaa2/dpaa2_flow.c
@@ -3124,6 +3124,67 @@ dpaa2_flow_verify_attr(
 	return 0;
 }
 
+static inline int
+dpaa2_flow_verify_action(
+	struct dpaa2_dev_priv *priv,
+	const struct rte_flow_attr *attr,
+	const struct rte_flow_action actions[])
+{
+	int end_of_list = 0, i, j = 0;
+	const struct rte_flow_action_queue *dest_queue;
+	const struct rte_flow_action_rss *rss_conf;
+	struct dpaa2_queue *rxq;
+
+	while (!end_of_list) {
+		switch (actions[j].type) {
+		case RTE_FLOW_ACTION_TYPE_QUEUE:
+			dest_queue = (const struct rte_flow_action_queue *)
+					(actions[j].conf);
+			rxq = priv->rx_vq[dest_queue->index];
+			if (attr->group != rxq->tc_index) {
+				DPAA2_PMD_ERR(
+					"RXQ[%d] does not belong to the group %d",
+					dest_queue->index, attr->group);
+
+				return -1;
+			}
+			break;
+		case RTE_FLOW_ACTION_TYPE_RSS:
+			rss_conf = (const struct rte_flow_action_rss *)
+					(actions[j].conf);
+			if (rss_conf->queue_num > priv->dist_queues) {
+				DPAA2_PMD_ERR(
+					"RSS number exceeds the distrbution size");
+				return -ENOTSUP;
+			}
+			for (i = 0; i < (int)rss_conf->queue_num; i++) {
+				if (rss_conf->queue[i] >= priv->nb_rx_queues) {
+					DPAA2_PMD_ERR(
+						"RSS queue index exceeds the number of RXQs");
+					return -ENOTSUP;
+				}
+				rxq = priv->rx_vq[rss_conf->queue[i]];
+				if (rxq->tc_index != attr->group) {
+					DPAA2_PMD_ERR(
+						"Queue/Group combination are not supported\n");
+					return -ENOTSUP;
+				}
+			}
+
+			break;
+		case RTE_FLOW_ACTION_TYPE_END:
+			end_of_list = 1;
+			break;
+		default:
+			DPAA2_PMD_ERR("Invalid action type");
+			return -ENOTSUP;
+		}
+		j++;
+	}
+
+	return 0;
+}
+
 static int
 dpaa2_generic_flow_set(struct rte_flow *flow,
 		       struct rte_eth_dev *dev,
@@ -3150,6 +3211,10 @@ dpaa2_generic_flow_set(struct rte_flow *flow,
 	if (ret)
 		return ret;
 
+	ret = dpaa2_flow_verify_action(priv, attr, actions);
+	if (ret)
+		return ret;
+
 	/* Parse pattern list to get the matching parameters */
 	while (!end_of_list) {
 		switch (pattern[i].type) {
@@ -3405,25 +3470,6 @@ dpaa2_generic_flow_set(struct rte_flow *flow,
 			break;
 		case RTE_FLOW_ACTION_TYPE_RSS:
 			rss_conf = (const struct rte_flow_action_rss *)(actions[j].conf);
-			if (rss_conf->queue_num > priv->dist_queues) {
-				DPAA2_PMD_ERR(
-					"RSS number exceeds the distrbution size");
-				return -ENOTSUP;
-			}
-
-			for (i = 0; i < (int)rss_conf->queue_num; i++) {
-				if (rss_conf->queue[i] >= priv->nb_rx_queues) {
-					DPAA2_PMD_ERR(
-						"RSS RXQ number exceeds the total number");
-					return -ENOTSUP;
-				}
-				rxq = priv->rx_vq[rss_conf->queue[i]];
-				if (rxq->tc_index != attr->group) {
-					DPAA2_PMD_ERR(
-						"RSS RXQ distributed is not in current group");
-					return -ENOTSUP;
-				}
-			}
 
 			flow->action = RTE_FLOW_ACTION_TYPE_RSS;
 			ret = dpaa2_distset_to_dpkg_profile_cfg(rss_conf->types,
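A natural application-side counterpart to this driver-side check is to
validate a rule before creating it. A hedged sketch, where port, pattern
and actions are placeholders supplied by the caller:

	#include <stdio.h>
	#include <rte_flow.h>

	/* Usage sketch (not from the patch): rte_flow_validate() lets the
	 * PMD run these sanity checks up front, before any table is touched.
	 */
	static struct rte_flow *
	create_flow_checked(uint16_t port_id,
			    const struct rte_flow_attr *attr,
			    const struct rte_flow_item pattern[],
			    const struct rte_flow_action actions[])
	{
		struct rte_flow_error error;

		/* Fails fast on unsupported attributes/actions, so the QoS
		 * and FS tables are never left half-configured.
		 */
		if (rte_flow_validate(port_id, attr, pattern, actions, &error)) {
			printf("flow rejected: %s\n",
			       error.message ? error.message : "(no detail)");
			return NULL;
		}
		return rte_flow_create(port_id, attr, pattern, actions, &error);
	}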
From patchwork Tue Jul 7 09:22:41 2020
X-Patchwork-Submitter: Hemant Agrawal
X-Patchwork-Id: 234979
From: Hemant Agrawal
To: dev@dpdk.org
Cc: ferruh.yigit@intel.com, Jun Yang
Date: Tue, 7 Jul 2020 14:52:41 +0530
Message-Id: <20200707092244.12791-27-hemant.agrawal@nxp.com>
Subject: [dpdk-dev] [PATCH v2 26/29] net/dpaa2: modify flow API QoS setup to follow FS setup

From: Jun Yang

In the HW/MC logic, QoS setup should follow FS setup. In addition,
skip the QoS setup entirely if the maximum TC number of the DPNI is
set to 1.
Signed-off-by: Jun Yang
---
 drivers/net/dpaa2/dpaa2_flow.c | 151 ++++++++++++++++++---------------
 1 file changed, 84 insertions(+), 67 deletions(-)

-- 
2.17.1

diff --git a/drivers/net/dpaa2/dpaa2_flow.c b/drivers/net/dpaa2/dpaa2_flow.c
index 3601829c9..9239fa459 100644
--- a/drivers/net/dpaa2/dpaa2_flow.c
+++ b/drivers/net/dpaa2/dpaa2_flow.c
@@ -2872,11 +2872,13 @@ dpaa2_flow_entry_update(
 	dpaa2_flow_qos_entry_log("Before update", curr, qos_index);
 
-	ret = dpni_remove_qos_entry(dpni, CMD_PRI_LOW,
-			priv->token, &curr->qos_rule);
-	if (ret) {
-		DPAA2_PMD_ERR("Qos entry remove failed.");
-		return -1;
+	if (priv->num_rx_tc > 1) {
+		ret = dpni_remove_qos_entry(dpni, CMD_PRI_LOW,
+				priv->token, &curr->qos_rule);
+		if (ret) {
+			DPAA2_PMD_ERR("Qos entry remove failed.");
+			return -1;
+		}
 	}
 
 	extend = -1;
@@ -2977,13 +2979,15 @@ dpaa2_flow_entry_update(
 	dpaa2_flow_qos_entry_log("Start update", curr, qos_index);
 
-	ret = dpni_add_qos_entry(dpni, CMD_PRI_LOW,
-			priv->token, &curr->qos_rule,
-			curr->tc_id, qos_index,
-			0, 0);
-	if (ret) {
-		DPAA2_PMD_ERR("Qos entry update failed.");
-		return -1;
+	if (priv->num_rx_tc > 1) {
+		ret = dpni_add_qos_entry(dpni, CMD_PRI_LOW,
+				priv->token, &curr->qos_rule,
+				curr->tc_id, qos_index,
+				0, 0);
+		if (ret) {
+			DPAA2_PMD_ERR("Qos entry update failed.");
+			return -1;
+		}
 	}
 
 	if (curr->action != RTE_FLOW_ACTION_TYPE_QUEUE) {
@@ -3313,31 +3317,8 @@ dpaa2_generic_flow_set(struct rte_flow *flow,
 			flow->action = RTE_FLOW_ACTION_TYPE_QUEUE;
 			memset(&action, 0, sizeof(struct dpni_fs_action_cfg));
 			action.flow_id = rxq->flow_id;
-			if (is_keycfg_configured & DPAA2_QOS_TABLE_RECONFIGURE) {
-				dpaa2_flow_qos_table_extracts_log(priv);
-				if (dpkg_prepare_key_cfg(
-					&priv->extract.qos_key_extract.dpkg,
-					(uint8_t *)(size_t)priv->extract.qos_extract_param)
-					< 0) {
-					DPAA2_PMD_ERR(
-						"Unable to prepare extract parameters");
-					return -1;
-				}
-				memset(&qos_cfg, 0, sizeof(struct dpni_qos_tbl_cfg));
-				qos_cfg.discard_on_miss = true;
-				qos_cfg.keep_entries = true;
-				qos_cfg.key_cfg_iova =
-					(size_t)priv->extract.qos_extract_param;
-				ret = dpni_set_qos_table(dpni, CMD_PRI_LOW,
-						priv->token, &qos_cfg);
-				if (ret < 0) {
-					DPAA2_PMD_ERR(
-						"Distribution cannot be configured.(%d)"
-						, ret);
-					return -1;
-				}
-			}
+
+			/* Configure FS table first*/
 			if (is_keycfg_configured & DPAA2_FS_TABLE_RECONFIGURE) {
 				dpaa2_flow_fs_table_extracts_log(priv, flow->tc_id);
 				if (dpkg_prepare_key_cfg(
@@ -3366,17 +3347,39 @@ dpaa2_generic_flow_set(struct rte_flow *flow,
 					return -1;
 				}
 			}
-			/* Configure QoS table first */
-
-			qos_index = flow->tc_id * priv->fs_entries +
-				flow->tc_index;
+			/* Configure QoS table then.*/
+			if (is_keycfg_configured & DPAA2_QOS_TABLE_RECONFIGURE) {
+				dpaa2_flow_qos_table_extracts_log(priv);
+				if (dpkg_prepare_key_cfg(
+					&priv->extract.qos_key_extract.dpkg,
+					(uint8_t *)(size_t)priv->extract.qos_extract_param) < 0) {
+					DPAA2_PMD_ERR(
+						"Unable to prepare extract parameters");
+					return -1;
+				}
 
-			if (qos_index >= priv->qos_entries) {
-				DPAA2_PMD_ERR("QoS table with %d entries full",
-					priv->qos_entries);
-				return -1;
+				memset(&qos_cfg, 0, sizeof(struct dpni_qos_tbl_cfg));
+				qos_cfg.discard_on_miss = false;
+				qos_cfg.default_tc = 0;
+				qos_cfg.keep_entries = true;
+				qos_cfg.key_cfg_iova =
+					(size_t)priv->extract.qos_extract_param;
+				/* QoS table is effecitive for multiple TCs.*/
+				if (priv->num_rx_tc > 1) {
+					ret = dpni_set_qos_table(dpni, CMD_PRI_LOW,
+							priv->token, &qos_cfg);
+					if (ret < 0) {
+						DPAA2_PMD_ERR(
+							"RSS QoS table can not be configured(%d)\n",
+							ret);
+						return -1;
+					}
+				}
 			}
-			flow->qos_rule.key_size = FIXED_ENTRY_SIZE;
+
+			flow->qos_real_key_size = priv->extract
+				.qos_key_extract.key_info.key_total_size;
 			if (flow->ipaddr_rule.ipaddr_type == FLOW_IPV4_ADDR) {
 				if (flow->ipaddr_rule.qos_ipdst_offset >=
 					flow->ipaddr_rule.qos_ipsrc_offset) {
@@ -3402,21 +3405,30 @@ dpaa2_generic_flow_set(struct rte_flow *flow,
 				}
 			}
 
-			flow->qos_rule.key_size = FIXED_ENTRY_SIZE;
+			/* QoS entry added is only effective for multiple TCs.*/
+			if (priv->num_rx_tc > 1) {
+				qos_index = flow->tc_id * priv->fs_entries +
+					flow->tc_index;
+				if (qos_index >= priv->qos_entries) {
+					DPAA2_PMD_ERR("QoS table with %d entries full",
+						priv->qos_entries);
+					return -1;
+				}
+				flow->qos_rule.key_size = FIXED_ENTRY_SIZE;
 
-			dpaa2_flow_qos_entry_log("Start add", flow, qos_index);
+				dpaa2_flow_qos_entry_log("Start add", flow, qos_index);
 
-			ret = dpni_add_qos_entry(dpni, CMD_PRI_LOW,
+				ret = dpni_add_qos_entry(dpni, CMD_PRI_LOW,
 					priv->token, &flow->qos_rule,
 					flow->tc_id, qos_index,
 					0, 0);
-			if (ret < 0) {
-				DPAA2_PMD_ERR(
-					"Error in addnig entry to QoS table(%d)", ret);
-				return ret;
+				if (ret < 0) {
+					DPAA2_PMD_ERR(
+						"Error in addnig entry to QoS table(%d)", ret);
+					return ret;
+				}
 			}
 
-			/* Then Configure FS table */
 			if (flow->tc_index >= priv->fs_entries) {
 				DPAA2_PMD_ERR("FS table with %d entries full",
 					priv->fs_entries);
@@ -3507,7 +3519,8 @@ dpaa2_generic_flow_set(struct rte_flow *flow,
 					&tc_cfg);
 			if (ret < 0) {
 				DPAA2_PMD_ERR(
-					"Distribution cannot be configured: %d\n", ret);
+					"RSS FS table cannot be configured: %d\n",
+					ret);
 				rte_free((void *)param);
 				return -1;
 			}
@@ -3841,13 +3854,15 @@ int dpaa2_flow_destroy(struct rte_eth_dev *dev,
 	switch (flow->action) {
 	case RTE_FLOW_ACTION_TYPE_QUEUE:
-		/* Remove entry from QoS table first */
-		ret = dpni_remove_qos_entry(dpni, CMD_PRI_LOW, priv->token,
-				&flow->qos_rule);
-		if (ret < 0) {
-			DPAA2_PMD_ERR(
-				"Error in adding entry to QoS table(%d)", ret);
-			goto error;
+		if (priv->num_rx_tc > 1) {
+			/* Remove entry from QoS table first */
+			ret = dpni_remove_qos_entry(dpni, CMD_PRI_LOW, priv->token,
+					&flow->qos_rule);
+			if (ret < 0) {
+				DPAA2_PMD_ERR(
+					"Error in removing entry from QoS table(%d)", ret);
+				goto error;
+			}
 		}
 
 		/* Then remove entry from FS table */
@@ -3855,17 +3870,19 @@ int dpaa2_flow_destroy(struct rte_eth_dev *dev,
 			flow->tc_id, &flow->fs_rule);
 		if (ret < 0) {
 			DPAA2_PMD_ERR(
-				"Error in entry addition in FS table(%d)", ret);
+				"Error in removing entry from FS table(%d)", ret);
 			goto error;
 		}
 		break;
 	case RTE_FLOW_ACTION_TYPE_RSS:
-		ret = dpni_remove_qos_entry(dpni, CMD_PRI_LOW, priv->token,
-				&flow->qos_rule);
-		if (ret < 0) {
-			DPAA2_PMD_ERR(
-				"Error in entry addition in QoS table(%d)", ret);
-			goto error;
+		if (priv->num_rx_tc > 1) {
+			ret = dpni_remove_qos_entry(dpni, CMD_PRI_LOW, priv->token,
+					&flow->qos_rule);
+			if (ret < 0) {
+				DPAA2_PMD_ERR(
+					"Error in entry addition in QoS table(%d)", ret);
+				goto error;
+			}
 		}
 		break;
 	default:
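The ordering rule the patch enforces can be summarized in a small sketch.
The two configure_*() helpers below are hypothetical stand-ins for the
dpni_* call sequences shown in the diff, and num_rx_tc mirrors
priv->num_rx_tc; this is an illustration, not driver code.

	/* Sketch of the setup order this patch enforces. */
	int configure_fs_table(void *priv);
	int configure_qos_table(void *priv);

	static int
	setup_flow_tables(void *priv, int num_rx_tc)
	{
		int ret;

		/* FS (flow steering) setup must come first ... */
		ret = configure_fs_table(priv);
		if (ret)
			return ret;

		/* ... QoS setup follows, and only when more than one traffic
		 * class exists: with a single TC all frames land in TC 0, so
		 * QoS classification would add nothing.
		 */
		if (num_rx_tc > 1)
			ret = configure_qos_table(priv);

		return ret;
	}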
From patchwork Tue Jul 7 09:22:42 2020
X-Patchwork-Submitter: Hemant Agrawal
X-Patchwork-Id: 234980
From: Hemant Agrawal
To: dev@dpdk.org
Cc: ferruh.yigit@intel.com, Jun Yang
Date: Tue, 7 Jul 2020 14:52:42 +0530
Message-Id: <20200707092244.12791-28-hemant.agrawal@nxp.com>
Subject: [dpdk-dev] [PATCH v2 27/29] net/dpaa2: support flow API FS miss action configuration

From: Jun Yang

1) dpni_set_rx_hash_dist and dpni_set_rx_fs_dist are used for TC
   configuration instead of dpni_set_rx_tc_dist. Otherwise,
   re-configuration of the default TC of QoS fails.
2) The default miss action is to drop.
   "export DPAA2_FLOW_CONTROL_MISS_FLOW=flow_id" can be used to receive
   the missed packets on the flow with the specified flow ID.

Signed-off-by: Jun Yang
---
 drivers/net/dpaa2/base/dpaa2_hw_dpni.c | 30 +++++++------
 drivers/net/dpaa2/dpaa2_flow.c         | 62 ++++++++++++++++++--------
 2 files changed, 60 insertions(+), 32 deletions(-)

-- 
2.17.1

diff --git a/drivers/net/dpaa2/base/dpaa2_hw_dpni.c b/drivers/net/dpaa2/base/dpaa2_hw_dpni.c
index 9f0dad6e7..d69156bcc 100644
--- a/drivers/net/dpaa2/base/dpaa2_hw_dpni.c
+++ b/drivers/net/dpaa2/base/dpaa2_hw_dpni.c
@@ -85,7 +85,7 @@ dpaa2_setup_flow_dist(struct rte_eth_dev *eth_dev,
 {
 	struct dpaa2_dev_priv *priv = eth_dev->data->dev_private;
 	struct fsl_mc_io *dpni = priv->hw;
-	struct dpni_rx_tc_dist_cfg tc_cfg;
+	struct dpni_rx_dist_cfg tc_cfg;
 	struct dpkg_profile_cfg kg_cfg;
 	void *p_params;
 	int ret;
@@ -96,8 +96,9 @@ dpaa2_setup_flow_dist(struct rte_eth_dev *eth_dev,
 		DPAA2_PMD_ERR("Unable to allocate flow-dist parameters");
 		return -ENOMEM;
 	}
+
 	memset(p_params, 0, DIST_PARAM_IOVA_SIZE);
-	memset(&tc_cfg, 0, sizeof(struct dpni_rx_tc_dist_cfg));
+	memset(&tc_cfg, 0, sizeof(struct dpni_rx_dist_cfg));
 
 	ret = dpaa2_distset_to_dpkg_profile_cfg(req_dist_set, &kg_cfg);
 	if (ret) {
@@ -106,9 +107,11 @@ dpaa2_setup_flow_dist(struct rte_eth_dev *eth_dev,
 		rte_free(p_params);
 		return ret;
 	}
+
 	tc_cfg.key_cfg_iova = (uint64_t)(DPAA2_VADDR_TO_IOVA(p_params));
 	tc_cfg.dist_size = priv->dist_queues;
-	tc_cfg.dist_mode = DPNI_DIST_MODE_HASH;
+	tc_cfg.enable = true;
+	tc_cfg.tc = tc_index;
 
 	ret = dpkg_prepare_key_cfg(&kg_cfg, p_params);
 	if (ret) {
@@ -117,8 +120,7 @@ dpaa2_setup_flow_dist(struct rte_eth_dev *eth_dev,
 		return ret;
 	}
 
-	ret = dpni_set_rx_tc_dist(dpni, CMD_PRI_LOW, priv->token, tc_index,
-			&tc_cfg);
+	ret = dpni_set_rx_hash_dist(dpni, CMD_PRI_LOW, priv->token, &tc_cfg);
 	rte_free(p_params);
 	if (ret) {
 		DPAA2_PMD_ERR(
@@ -136,7 +138,7 @@ int dpaa2_remove_flow_dist(
 {
 	struct dpaa2_dev_priv *priv = eth_dev->data->dev_private;
 	struct fsl_mc_io *dpni = priv->hw;
-	struct dpni_rx_tc_dist_cfg tc_cfg;
+	struct dpni_rx_dist_cfg tc_cfg;
 	struct dpkg_profile_cfg kg_cfg;
 	void *p_params;
 	int ret;
@@ -147,13 +149,15 @@ int dpaa2_remove_flow_dist(
 		DPAA2_PMD_ERR("Unable to allocate flow-dist parameters");
 		return -ENOMEM;
 	}
-	memset(p_params, 0, DIST_PARAM_IOVA_SIZE);
-	memset(&tc_cfg, 0, sizeof(struct dpni_rx_tc_dist_cfg));
-	kg_cfg.num_extracts = 0;
-	tc_cfg.key_cfg_iova = (uint64_t)(DPAA2_VADDR_TO_IOVA(p_params));
+
+	memset(&tc_cfg, 0, sizeof(struct dpni_rx_dist_cfg));
 	tc_cfg.dist_size = 0;
-	tc_cfg.dist_mode = DPNI_DIST_MODE_NONE;
+	tc_cfg.key_cfg_iova = (uint64_t)(DPAA2_VADDR_TO_IOVA(p_params));
+	tc_cfg.enable = true;
+	tc_cfg.tc = tc_index;
 
+	memset(p_params, 0, DIST_PARAM_IOVA_SIZE);
+	kg_cfg.num_extracts = 0;
 	ret = dpkg_prepare_key_cfg(&kg_cfg, p_params);
 	if (ret) {
 		DPAA2_PMD_ERR("Unable to prepare extract parameters");
@@ -161,8 +165,8 @@ int dpaa2_remove_flow_dist(
 		return ret;
 	}
 
-	ret = dpni_set_rx_tc_dist(dpni, CMD_PRI_LOW, priv->token, tc_index,
-			&tc_cfg);
+	ret = dpni_set_rx_hash_dist(dpni, CMD_PRI_LOW, priv->token,
+			&tc_cfg);
 	rte_free(p_params);
 	if (ret)
 		DPAA2_PMD_ERR(
diff --git a/drivers/net/dpaa2/dpaa2_flow.c b/drivers/net/dpaa2/dpaa2_flow.c
index 9239fa459..cc789346a 100644
--- a/drivers/net/dpaa2/dpaa2_flow.c
+++ b/drivers/net/dpaa2/dpaa2_flow.c
@@ -30,6 +30,8 @@ int mc_l4_port_identification;
 
 static char *dpaa2_flow_control_log;
+static int dpaa2_flow_miss_flow_id =
+	DPNI_FS_MISS_DROP;
 
 #define FIXED_ENTRY_SIZE 54
 
@@ -3201,7 +3203,7 @@ dpaa2_generic_flow_set(struct rte_flow *flow,
 	const struct rte_flow_action_rss *rss_conf;
 	int is_keycfg_configured = 0, end_of_list = 0;
 	int ret = 0, i = 0, j = 0;
-	struct dpni_rx_tc_dist_cfg tc_cfg;
+	struct dpni_rx_dist_cfg tc_cfg;
 	struct dpni_qos_tbl_cfg qos_cfg;
 	struct dpni_fs_action_cfg action;
 	struct dpaa2_dev_priv *priv = dev->data->dev_private;
@@ -3330,20 +3332,30 @@ dpaa2_generic_flow_set(struct rte_flow *flow,
 					return -1;
 				}
 
-				memset(&tc_cfg, 0, sizeof(struct dpni_rx_tc_dist_cfg));
+				memset(&tc_cfg, 0,
+					sizeof(struct dpni_rx_dist_cfg));
 				tc_cfg.dist_size = priv->nb_rx_queues / priv->num_rx_tc;
-				tc_cfg.dist_mode = DPNI_DIST_MODE_FS;
 				tc_cfg.key_cfg_iova =
 					(uint64_t)priv->extract.tc_extract_param[flow->tc_id];
-				tc_cfg.fs_cfg.miss_action = DPNI_FS_MISS_DROP;
-				tc_cfg.fs_cfg.keep_entries = true;
-				ret = dpni_set_rx_tc_dist(dpni, CMD_PRI_LOW,
-						priv->token,
-						flow->tc_id, &tc_cfg);
+				tc_cfg.tc = flow->tc_id;
+				tc_cfg.enable = false;
+				ret = dpni_set_rx_hash_dist(dpni, CMD_PRI_LOW,
+						priv->token, &tc_cfg);
 				if (ret < 0) {
 					DPAA2_PMD_ERR(
-						"Distribution cannot be configured.(%d)"
-						, ret);
+						"TC hash cannot be disabled.(%d)",
+						ret);
+					return -1;
+				}
+				tc_cfg.enable = true;
+				tc_cfg.fs_miss_flow_id =
+					dpaa2_flow_miss_flow_id;
+				ret = dpni_set_rx_fs_dist(dpni, CMD_PRI_LOW,
+						priv->token, &tc_cfg);
+				if (ret < 0) {
+					DPAA2_PMD_ERR(
+						"TC distribution cannot be configured.(%d)",
+						ret);
 					return -1;
 				}
 			}
@@ -3508,18 +3520,16 @@ dpaa2_generic_flow_set(struct rte_flow *flow,
 				return -1;
 			}
 
-			memset(&tc_cfg, 0, sizeof(struct dpni_rx_tc_dist_cfg));
+			memset(&tc_cfg, 0, sizeof(struct dpni_rx_dist_cfg));
 			tc_cfg.dist_size = rss_conf->queue_num;
-			tc_cfg.dist_mode = DPNI_DIST_MODE_HASH;
 			tc_cfg.key_cfg_iova = (size_t)param;
-			tc_cfg.fs_cfg.miss_action = DPNI_FS_MISS_DROP;
-
-			ret = dpni_set_rx_tc_dist(dpni, CMD_PRI_LOW,
-					priv->token, flow->tc_id,
-					&tc_cfg);
+			tc_cfg.enable = true;
+			tc_cfg.tc = flow->tc_id;
+			ret = dpni_set_rx_hash_dist(dpni, CMD_PRI_LOW,
+					priv->token, &tc_cfg);
 			if (ret < 0) {
 				DPAA2_PMD_ERR(
-					"RSS FS table cannot be configured: %d\n",
+					"RSS TC table cannot be configured: %d\n",
 					ret);
 				rte_free((void *)param);
 				return -1;
@@ -3544,7 +3554,7 @@ dpaa2_generic_flow_set(struct rte_flow *flow,
 					priv->token, &qos_cfg);
 			if (ret < 0) {
 				DPAA2_PMD_ERR(
-					"Distribution can't be configured %d\n",
+					"RSS QoS dist can't be configured-%d\n",
 					ret);
 				return -1;
 			}
@@ -3761,6 +3771,20 @@ struct rte_flow *dpaa2_flow_create(struct rte_eth_dev *dev,
 
 	dpaa2_flow_control_log = getenv("DPAA2_FLOW_CONTROL_LOG");
 
+	if (getenv("DPAA2_FLOW_CONTROL_MISS_FLOW")) {
+		struct dpaa2_dev_priv *priv = dev->data->dev_private;
+
+		dpaa2_flow_miss_flow_id =
+			atoi(getenv("DPAA2_FLOW_CONTROL_MISS_FLOW"));
+		if (dpaa2_flow_miss_flow_id >= priv->dist_queues) {
+			DPAA2_PMD_ERR(
+				"The missed flow ID %d exceeds the max flow ID %d",
+				dpaa2_flow_miss_flow_id,
+				priv->dist_queues - 1);
+			return NULL;
+		}
+	}
+
 	flow = rte_zmalloc(NULL, sizeof(struct rte_flow), RTE_CACHE_LINE_SIZE);
 	if (!flow) {
 		DPAA2_PMD_ERR("Failure to allocate memory for flow");
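A hedged usage sketch of the new knob: the PMD reads the variable in
dpaa2_flow_create(), so it must be set before the first flow is created,
and the value must be below priv->dist_queues. The flow ID 0 below is
illustrative; setting the variable from the shell
("export DPAA2_FLOW_CONTROL_MISS_FLOW=0") is equivalent.

	#include <stdlib.h>

	int main(int argc, char **argv)
	{
		/* Steer FS-miss packets to flow ID 0 instead of dropping them. */
		setenv("DPAA2_FLOW_CONTROL_MISS_FLOW", "0", 1);
		/* ... rte_eal_init(argc, argv), port/queue setup, then
		 * rte_flow_create() as usual ...
		 */
		(void)argc;
		(void)argv;
		return 0;
	}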
From patchwork Tue Jul 7 09:22:43 2020
X-Patchwork-Submitter: Hemant Agrawal
X-Patchwork-Id: 234981
From: Hemant Agrawal
To: dev@dpdk.org
Cc: ferruh.yigit@intel.com, Jun Yang
Date: Tue, 7 Jul 2020 14:52:43 +0530
Message-Id: <20200707092244.12791-29-hemant.agrawal@nxp.com>
Subject: [dpdk-dev] [PATCH v2 28/29] net/dpaa2: configure per class distribution size

From: Jun Yang

The TC distribution size is set to dist_queues or, for the last
populated TC, to nb_rx_queues % dist_queues, in order of TC priority
index.
Signed-off-by: Jun Yang
---
 drivers/net/dpaa2/base/dpaa2_hw_dpni.c | 18 ++++++++++++++++--
 1 file changed, 16 insertions(+), 2 deletions(-)

-- 
2.17.1

diff --git a/drivers/net/dpaa2/base/dpaa2_hw_dpni.c b/drivers/net/dpaa2/base/dpaa2_hw_dpni.c
index d69156bcc..25b1d2bb6 100644
--- a/drivers/net/dpaa2/base/dpaa2_hw_dpni.c
+++ b/drivers/net/dpaa2/base/dpaa2_hw_dpni.c
@@ -88,7 +88,21 @@ dpaa2_setup_flow_dist(struct rte_eth_dev *eth_dev,
 	struct dpni_rx_dist_cfg tc_cfg;
 	struct dpkg_profile_cfg kg_cfg;
 	void *p_params;
-	int ret;
+	int ret, tc_dist_queues;
+
+	/*TC distribution size is set with dist_queues or
+	 * nb_rx_queues % dist_queues in order of TC priority index.
+	 * Calculating dist size for this tc_index:-
+	 */
+	tc_dist_queues = eth_dev->data->nb_rx_queues -
+		tc_index * priv->dist_queues;
+	if (tc_dist_queues <= 0) {
+		DPAA2_PMD_INFO("No distribution on TC%d", tc_index);
+		return 0;
+	}
+
+	if (tc_dist_queues > priv->dist_queues)
+		tc_dist_queues = priv->dist_queues;
 
 	p_params = rte_malloc(
 		NULL, DIST_PARAM_IOVA_SIZE, RTE_CACHE_LINE_SIZE);
@@ -109,7 +123,7 @@ dpaa2_setup_flow_dist(struct rte_eth_dev *eth_dev,
 	}
 
 	tc_cfg.key_cfg_iova = (uint64_t)(DPAA2_VADDR_TO_IOVA(p_params));
-	tc_cfg.dist_size = priv->dist_queues;
+	tc_cfg.dist_size = tc_dist_queues;
 	tc_cfg.enable = true;
 	tc_cfg.tc = tc_index;
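To make the sizing rule concrete, here is the patch's per-TC computation
applied to an assumed configuration of nb_rx_queues = 10 and
dist_queues = 4 (values chosen for illustration only):

	/* Worked example of the per-TC sizing:
	 *   TC0: 10 - 0*4 = 10 -> clamped to 4
	 *   TC1: 10 - 1*4 =  6 -> clamped to 4
	 *   TC2: 10 - 2*4 =  2 -> 2 (i.e. nb_rx_queues % dist_queues)
	 *   TC3: 10 - 3*4 = -2 -> no distribution on this TC
	 */
	static int
	tc_dist_size(int nb_rx_queues, int dist_queues, int tc_index)
	{
		int n = nb_rx_queues - tc_index * dist_queues;

		if (n <= 0)
			return 0;	/* this TC gets no distribution */
		return n > dist_queues ? dist_queues : n;
	}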
From patchwork Tue Jul 7 09:22:44 2020
X-Patchwork-Submitter: Hemant Agrawal
X-Patchwork-Id: 234982
From: Hemant Agrawal
To: dev@dpdk.org
Cc: ferruh.yigit@intel.com, Nipun Gupta
Date: Tue, 7 Jul 2020 14:52:44 +0530
Message-Id: <20200707092244.12791-30-hemant.agrawal@nxp.com>
Subject: [dpdk-dev] [PATCH v2 29/29] net/dpaa2: support raw flow classification

From: Nipun Gupta

Add support for raw flows, which can be used to match any protocol
rules.

Signed-off-by: Nipun Gupta
---
 drivers/net/dpaa2/dpaa2_ethdev.h |   3 +-
 drivers/net/dpaa2/dpaa2_flow.c   | 135 +++++++++++++++++++++++++++++++
 2 files changed, 137 insertions(+), 1 deletion(-)

-- 
2.17.1

diff --git a/drivers/net/dpaa2/dpaa2_ethdev.h b/drivers/net/dpaa2/dpaa2_ethdev.h
index 52faeeefe..2bc0f3f5a 100644
--- a/drivers/net/dpaa2/dpaa2_ethdev.h
+++ b/drivers/net/dpaa2/dpaa2_ethdev.h
@@ -1,7 +1,7 @@
 /* SPDX-License-Identifier: BSD-3-Clause
  *
  * Copyright (c) 2015-2016 Freescale Semiconductor, Inc. All rights reserved.
- * Copyright 2016-2019 NXP
+ * Copyright 2016-2020 NXP
  *
  */
@@ -99,6 +99,7 @@ extern enum pmd_dpaa2_ts dpaa2_enable_ts;
 
 #define DPAA2_QOS_TABLE_IPADDR_EXTRACT 4
 #define DPAA2_FS_TABLE_IPADDR_EXTRACT 8
+#define DPAA2_FLOW_MAX_KEY_SIZE 16
 
 /*Externaly defined*/
 extern const struct rte_flow_ops dpaa2_flow_ops;
diff --git a/drivers/net/dpaa2/dpaa2_flow.c b/drivers/net/dpaa2/dpaa2_flow.c
index cc789346a..136bdd5fa 100644
--- a/drivers/net/dpaa2/dpaa2_flow.c
+++ b/drivers/net/dpaa2/dpaa2_flow.c
@@ -493,6 +493,42 @@ static int dpaa2_flow_extract_add(
 	return 0;
 }
 
+static int dpaa2_flow_extract_add_raw(struct dpaa2_key_extract *key_extract,
+				      int size)
+{
+	struct dpkg_profile_cfg *dpkg = &key_extract->dpkg;
+	struct dpaa2_key_info *key_info = &key_extract->key_info;
+	int last_extract_size, index;
+
+	if (dpkg->num_extracts != 0 && dpkg->extracts[0].type !=
+	    DPKG_EXTRACT_FROM_DATA) {
+		DPAA2_PMD_WARN("RAW extract cannot be combined with others");
+		return -1;
+	}
+
+	last_extract_size = (size % DPAA2_FLOW_MAX_KEY_SIZE);
+	dpkg->num_extracts = (size / DPAA2_FLOW_MAX_KEY_SIZE);
+	if (last_extract_size)
+		dpkg->num_extracts++;
+	else
+		last_extract_size = DPAA2_FLOW_MAX_KEY_SIZE;
+
+	for (index = 0; index < dpkg->num_extracts; index++) {
+		dpkg->extracts[index].type = DPKG_EXTRACT_FROM_DATA;
+		if (index == dpkg->num_extracts - 1)
+			dpkg->extracts[index].extract.from_data.size =
+				last_extract_size;
+		else
+			dpkg->extracts[index].extract.from_data.size =
+				DPAA2_FLOW_MAX_KEY_SIZE;
+		dpkg->extracts[index].extract.from_data.offset =
+			DPAA2_FLOW_MAX_KEY_SIZE * index;
+	}
+
+	key_info->key_total_size = size;
+	return 0;
+}
+
 /* Protocol discrimination.
  * Discriminate IPv4/IPv6/vLan by Eth type.
  * Discriminate UDP/TCP/ICMP by next proto of IP.
@@ -674,6 +710,18 @@ dpaa2_flow_rule_data_set(
 	return 0;
 }
 
+static inline int
+dpaa2_flow_rule_data_set_raw(struct dpni_rule_cfg *rule,
+	const void *key, const void *mask, int size)
+{
+	int offset = 0;
+
+	memcpy((void *)(size_t)(rule->key_iova + offset), key, size);
+	memcpy((void *)(size_t)(rule->mask_iova + offset), mask, size);
+
+	return 0;
+}
+
 static inline int
 _dpaa2_flow_rule_move_ipaddr_tail(
 	struct dpaa2_key_extract *key_extract,
@@ -2814,6 +2862,83 @@ dpaa2_configure_flow_gre(struct rte_flow *flow,
 	return 0;
 }
 
+static int
+dpaa2_configure_flow_raw(struct rte_flow *flow,
+			 struct rte_eth_dev *dev,
+			 const struct rte_flow_attr *attr,
+			 const struct rte_flow_item *pattern,
+			 const struct rte_flow_action actions[] __rte_unused,
+			 struct rte_flow_error *error __rte_unused,
+			 int *device_configured)
+{
+	struct dpaa2_dev_priv *priv = dev->data->dev_private;
+	const struct rte_flow_item_raw *spec = pattern->spec;
+	const struct rte_flow_item_raw *mask = pattern->mask;
+	int prev_key_size =
+		priv->extract.qos_key_extract.key_info.key_total_size;
+	int local_cfg = 0, ret;
+	uint32_t group;
+
+	/* Need both spec and mask */
+	if (!spec || !mask) {
+		DPAA2_PMD_ERR("spec or mask not present.");
+		return -EINVAL;
+	}
+	/* Only supports non-relative with offset 0 */
+	if (spec->relative || spec->offset != 0 ||
+	    spec->search || spec->limit) {
+		DPAA2_PMD_ERR("relative and non zero offset not supported.");
+		return -EINVAL;
+	}
+	/* Spec len and mask len should be same */
+	if (spec->length != mask->length) {
+		DPAA2_PMD_ERR("Spec len and mask len mismatch.");
+		return -EINVAL;
+	}
+
+	/* Get traffic class index and flow id to be configured */
+	group = attr->group;
+	flow->tc_id = group;
+	flow->tc_index = attr->priority;
+
+	if (prev_key_size < spec->length) {
+		ret = dpaa2_flow_extract_add_raw(&priv->extract.qos_key_extract,
+						 spec->length);
+		if (ret) {
+			DPAA2_PMD_ERR("QoS Extract RAW add failed.");
+			return -1;
+		}
+		local_cfg |= DPAA2_QOS_TABLE_RECONFIGURE;
+
+		ret = dpaa2_flow_extract_add_raw(
+					&priv->extract.tc_key_extract[group],
+					spec->length);
+		if (ret) {
+			DPAA2_PMD_ERR("FS Extract RAW add failed.");
+			return -1;
+		}
+		local_cfg |= DPAA2_FS_TABLE_RECONFIGURE;
+	}
+
+	ret = dpaa2_flow_rule_data_set_raw(&flow->qos_rule, spec->pattern,
+					   mask->pattern, spec->length);
+	if (ret) {
+		DPAA2_PMD_ERR("QoS RAW rule data set failed");
+		return -1;
+	}
+
+	ret = dpaa2_flow_rule_data_set_raw(&flow->fs_rule, spec->pattern,
+					   mask->pattern, spec->length);
+	if (ret) {
+		DPAA2_PMD_ERR("FS RAW rule data set failed");
+		return -1;
+	}
+
+	(*device_configured) |= local_cfg;
+
+	return 0;
+}
+
 /* The existing QoS/FS entry with IP address(es)
  * needs update after
  * new extract(s) are inserted before IP
@@ -3297,6 +3422,16 @@ dpaa2_generic_flow_set(struct rte_flow *flow,
 				return ret;
 			}
 			break;
+		case RTE_FLOW_ITEM_TYPE_RAW:
+			ret = dpaa2_configure_flow_raw(flow,
+						       dev, attr, &pattern[i],
+						       actions, error,
+						       &is_keycfg_configured);
+			if (ret) {
+				DPAA2_PMD_ERR("RAW flow configuration failed!");
+				return ret;
+			}
+			break;
 		case RTE_FLOW_ITEM_TYPE_END:
 			end_of_list = 1;
 			break; /*End of List*/
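As a closing illustration, dpaa2_flow_extract_add_raw() carves the raw
key into DPKG extracts of at most DPAA2_FLOW_MAX_KEY_SIZE (16) bytes
each. A standalone sketch of the split, for an assumed key length of 40
bytes (yielding three extracts at offsets 0, 16 and 32 with sizes 16,
16 and 8):

	#include <stdio.h>

	#define DPAA2_FLOW_MAX_KEY_SIZE 16

	int main(void)
	{
		int size = 40;	/* assumed raw key/mask length */
		int last = size % DPAA2_FLOW_MAX_KEY_SIZE;
		int num = size / DPAA2_FLOW_MAX_KEY_SIZE + (last ? 1 : 0);
		int i;

		if (!last)
			last = DPAA2_FLOW_MAX_KEY_SIZE;

		for (i = 0; i < num; i++)
			printf("extract %d: offset %d, size %d\n", i,
			       i * DPAA2_FLOW_MAX_KEY_SIZE,
			       i == num - 1 ? last : DPAA2_FLOW_MAX_KEY_SIZE);
		return 0;
	}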