From patchwork Tue Jul 7 09:22:42 2020
Content-Type: text/plain; charset="utf-8"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
X-Patchwork-Submitter: Hemant Agrawal
X-Patchwork-Id: 234980
From: Hemant Agrawal
To: dev@dpdk.org
Cc: ferruh.yigit@intel.com, Jun Yang
Date: Tue, 7 Jul 2020 14:52:42 +0530
Message-Id: <20200707092244.12791-28-hemant.agrawal@nxp.com>
X-Mailer: git-send-email 2.17.1
In-Reply-To: <20200707092244.12791-1-hemant.agrawal@nxp.com>
References: <20200527132326.1382-1-hemant.agrawal@nxp.com>
 <20200707092244.12791-1-hemant.agrawal@nxp.com>
Subject: [dpdk-dev] [PATCH v2 27/29] net/dpaa2: support flow API FS miss action configuration

From: Jun Yang

1) dpni_set_rx_hash_dist() and dpni_set_rx_fs_dist() are used for TC
   configuration instead of dpni_set_rx_tc_dist(). Otherwise,
   re-configuration of the default QoS TC fails.

2) The default miss action is to drop. "export
   DPAA2_FLOW_CONTROL_MISS_FLOW=flow_id" is used to receive the missed
   packets on the flow with the specified flow ID.

Signed-off-by: Jun Yang
---
 drivers/net/dpaa2/base/dpaa2_hw_dpni.c | 30 +++++++------
 drivers/net/dpaa2/dpaa2_flow.c         | 62 ++++++++++++++++++--------
 2 files changed, 60 insertions(+), 32 deletions(-)

-- 
2.17.1

diff --git a/drivers/net/dpaa2/base/dpaa2_hw_dpni.c b/drivers/net/dpaa2/base/dpaa2_hw_dpni.c
index 9f0dad6e7..d69156bcc 100644
--- a/drivers/net/dpaa2/base/dpaa2_hw_dpni.c
+++ b/drivers/net/dpaa2/base/dpaa2_hw_dpni.c
@@ -85,7 +85,7 @@ dpaa2_setup_flow_dist(struct rte_eth_dev *eth_dev,
 {
 	struct dpaa2_dev_priv *priv = eth_dev->data->dev_private;
 	struct fsl_mc_io *dpni = priv->hw;
-	struct dpni_rx_tc_dist_cfg tc_cfg;
+	struct dpni_rx_dist_cfg tc_cfg;
 	struct dpkg_profile_cfg kg_cfg;
 	void *p_params;
 	int ret;
@@ -96,8 +96,9 @@ dpaa2_setup_flow_dist(struct rte_eth_dev *eth_dev,
 		DPAA2_PMD_ERR("Unable to allocate flow-dist parameters");
 		return -ENOMEM;
 	}
+
 	memset(p_params, 0, DIST_PARAM_IOVA_SIZE);
-	memset(&tc_cfg, 0, sizeof(struct dpni_rx_tc_dist_cfg));
+	memset(&tc_cfg, 0, sizeof(struct dpni_rx_dist_cfg));
 
 	ret = dpaa2_distset_to_dpkg_profile_cfg(req_dist_set, &kg_cfg);
 	if (ret) {
@@ -106,9 +107,11 @@ dpaa2_setup_flow_dist(struct rte_eth_dev *eth_dev,
 		rte_free(p_params);
 		return ret;
 	}
+
 	tc_cfg.key_cfg_iova = (uint64_t)(DPAA2_VADDR_TO_IOVA(p_params));
 	tc_cfg.dist_size = priv->dist_queues;
-	tc_cfg.dist_mode = DPNI_DIST_MODE_HASH;
+	tc_cfg.enable = true;
+	tc_cfg.tc = tc_index;
 
 	ret = dpkg_prepare_key_cfg(&kg_cfg, p_params);
 	if (ret) {
@@ -117,8 +120,7 @@ dpaa2_setup_flow_dist(struct rte_eth_dev *eth_dev,
 		return ret;
 	}
 
-	ret = dpni_set_rx_tc_dist(dpni, CMD_PRI_LOW, priv->token, tc_index,
-			&tc_cfg);
+	ret = dpni_set_rx_hash_dist(dpni, CMD_PRI_LOW, priv->token, &tc_cfg);
 	rte_free(p_params);
 	if (ret) {
 		DPAA2_PMD_ERR(
@@ -136,7 +138,7 @@ int dpaa2_remove_flow_dist(
 {
 	struct dpaa2_dev_priv *priv = eth_dev->data->dev_private;
 	struct fsl_mc_io *dpni = priv->hw;
-	struct dpni_rx_tc_dist_cfg tc_cfg;
+	struct dpni_rx_dist_cfg tc_cfg;
 	struct dpkg_profile_cfg kg_cfg;
 	void *p_params;
 	int ret;
@@ -147,13 +149,15 @@ int dpaa2_remove_flow_dist(
 		DPAA2_PMD_ERR("Unable to allocate flow-dist parameters");
 		return -ENOMEM;
 	}
-	memset(p_params, 0, DIST_PARAM_IOVA_SIZE);
-	memset(&tc_cfg, 0, sizeof(struct dpni_rx_tc_dist_cfg));
-	kg_cfg.num_extracts = 0;
-	tc_cfg.key_cfg_iova = (uint64_t)(DPAA2_VADDR_TO_IOVA(p_params));
+
+	memset(&tc_cfg, 0, sizeof(struct dpni_rx_dist_cfg));
 	tc_cfg.dist_size = 0;
-	tc_cfg.dist_mode = DPNI_DIST_MODE_NONE;
+	tc_cfg.key_cfg_iova = (uint64_t)(DPAA2_VADDR_TO_IOVA(p_params));
+	tc_cfg.enable = true;
+	tc_cfg.tc = tc_index;
 
+	memset(p_params, 0, DIST_PARAM_IOVA_SIZE);
+	kg_cfg.num_extracts = 0;
 	ret = dpkg_prepare_key_cfg(&kg_cfg, p_params);
 	if (ret) {
 		DPAA2_PMD_ERR("Unable to prepare extract parameters");
@@ -161,8 +165,8 @@ int dpaa2_remove_flow_dist(
 		return ret;
 	}
 
-	ret = dpni_set_rx_tc_dist(dpni, CMD_PRI_LOW, priv->token, tc_index,
-			&tc_cfg);
+	ret = dpni_set_rx_hash_dist(dpni, CMD_PRI_LOW, priv->token,
+			&tc_cfg);
 	rte_free(p_params);
 	if (ret)
 		DPAA2_PMD_ERR(
diff --git a/drivers/net/dpaa2/dpaa2_flow.c b/drivers/net/dpaa2/dpaa2_flow.c
index 9239fa459..cc789346a 100644
--- a/drivers/net/dpaa2/dpaa2_flow.c
+++ b/drivers/net/dpaa2/dpaa2_flow.c
@@ -30,6 +30,8 @@
 int mc_l4_port_identification;
 
 static char *dpaa2_flow_control_log;
+static int dpaa2_flow_miss_flow_id =
+	DPNI_FS_MISS_DROP;
 
 #define FIXED_ENTRY_SIZE 54
 
@@ -3201,7 +3203,7 @@ dpaa2_generic_flow_set(struct rte_flow *flow,
 	const struct rte_flow_action_rss *rss_conf;
 	int is_keycfg_configured = 0, end_of_list = 0;
 	int ret = 0, i = 0, j = 0;
-	struct dpni_rx_tc_dist_cfg tc_cfg;
+	struct dpni_rx_dist_cfg tc_cfg;
 	struct dpni_qos_tbl_cfg qos_cfg;
 	struct dpni_fs_action_cfg action;
 	struct dpaa2_dev_priv *priv = dev->data->dev_private;
@@ -3330,20 +3332,30 @@ dpaa2_generic_flow_set(struct rte_flow *flow,
 			return -1;
 		}
 
-		memset(&tc_cfg, 0, sizeof(struct dpni_rx_tc_dist_cfg));
+		memset(&tc_cfg, 0,
+			sizeof(struct dpni_rx_dist_cfg));
 		tc_cfg.dist_size = priv->nb_rx_queues / priv->num_rx_tc;
-		tc_cfg.dist_mode = DPNI_DIST_MODE_FS;
 		tc_cfg.key_cfg_iova =
 			(uint64_t)priv->extract.tc_extract_param[flow->tc_id];
-		tc_cfg.fs_cfg.miss_action = DPNI_FS_MISS_DROP;
-		tc_cfg.fs_cfg.keep_entries = true;
-		ret = dpni_set_rx_tc_dist(dpni, CMD_PRI_LOW,
-				priv->token,
-				flow->tc_id, &tc_cfg);
+		tc_cfg.tc = flow->tc_id;
+		tc_cfg.enable = false;
+		ret = dpni_set_rx_hash_dist(dpni, CMD_PRI_LOW,
+				priv->token, &tc_cfg);
 		if (ret < 0) {
 			DPAA2_PMD_ERR(
-				"Distribution cannot be configured.(%d)"
-				, ret);
+				"TC hash cannot be disabled.(%d)",
+				ret);
+			return -1;
+		}
+		tc_cfg.enable = true;
+		tc_cfg.fs_miss_flow_id =
+			dpaa2_flow_miss_flow_id;
+		ret = dpni_set_rx_fs_dist(dpni, CMD_PRI_LOW,
+				priv->token, &tc_cfg);
+		if (ret < 0) {
+			DPAA2_PMD_ERR(
+				"TC distribution cannot be configured.(%d)",
+				ret);
 			return -1;
 		}
 	}
@@ -3508,18 +3520,16 @@ dpaa2_generic_flow_set(struct rte_flow *flow,
 			return -1;
 		}
 
-		memset(&tc_cfg, 0, sizeof(struct dpni_rx_tc_dist_cfg));
+		memset(&tc_cfg, 0, sizeof(struct dpni_rx_dist_cfg));
 		tc_cfg.dist_size = rss_conf->queue_num;
-		tc_cfg.dist_mode = DPNI_DIST_MODE_HASH;
 		tc_cfg.key_cfg_iova = (size_t)param;
-		tc_cfg.fs_cfg.miss_action = DPNI_FS_MISS_DROP;
-
-		ret = dpni_set_rx_tc_dist(dpni, CMD_PRI_LOW,
-					priv->token, flow->tc_id,
-					&tc_cfg);
+		tc_cfg.enable = true;
+		tc_cfg.tc = flow->tc_id;
+		ret = dpni_set_rx_hash_dist(dpni, CMD_PRI_LOW,
+					priv->token, &tc_cfg);
 		if (ret < 0) {
 			DPAA2_PMD_ERR(
-				"RSS FS table cannot be configured: %d\n",
+				"RSS TC table cannot be configured: %d\n",
 				ret);
 			rte_free((void *)param);
 			return -1;
@@ -3544,7 +3554,7 @@ dpaa2_generic_flow_set(struct rte_flow *flow,
 				priv->token, &qos_cfg);
 		if (ret < 0) {
 			DPAA2_PMD_ERR(
-				"Distribution can't be configured %d\n",
+				"RSS QoS dist can't be configured-%d\n",
 				ret);
 			return -1;
 		}
@@ -3761,6 +3771,20 @@ struct rte_flow *dpaa2_flow_create(struct rte_eth_dev *dev,
 
 	dpaa2_flow_control_log = getenv("DPAA2_FLOW_CONTROL_LOG");
 
+	if (getenv("DPAA2_FLOW_CONTROL_MISS_FLOW")) {
+		struct dpaa2_dev_priv *priv = dev->data->dev_private;
+
+		dpaa2_flow_miss_flow_id =
+			atoi(getenv("DPAA2_FLOW_CONTROL_MISS_FLOW"));
+		if (dpaa2_flow_miss_flow_id >= priv->dist_queues) {
+			DPAA2_PMD_ERR(
+				"The missed flow ID %d exceeds the max flow ID %d",
+				dpaa2_flow_miss_flow_id,
+				priv->dist_queues - 1);
+			return NULL;
+		}
+	}
+
 	flow = rte_zmalloc(NULL, sizeof(struct rte_flow), RTE_CACHE_LINE_SIZE);
 	if (!flow) {
 		DPAA2_PMD_ERR("Failure to allocate memory for flow");
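[Editor's usage sketch, not part of the patch: the miss-flow behaviour added above is driven entirely by environment variables read via getenv() at flow creation time. The flow ID value below is illustrative; the variable names come from the patch itself.]

```shell
# Steer FS-miss packets to flow ID 3 instead of the default drop action
# (DPNI_FS_MISS_DROP). The ID must be smaller than the number of
# distribution queues, or dpaa2_flow_create() rejects the flow.
export DPAA2_FLOW_CONTROL_MISS_FLOW=3

# Optional: enable flow-control logging, also read via getenv().
export DPAA2_FLOW_CONTROL_LOG=1

# Then launch the DPDK application on a DPAA2 platform, e.g.:
# ./dpdk-testpmd -- -i
```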