From patchwork Mon Sep 30 08:32:10 2019
Content-Type: text/plain; charset="utf-8"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
X-Patchwork-Submitter: Hemant Agrawal
X-Patchwork-Id: 174722
From: Hemant Agrawal
To: dev@dpdk.org
Cc: jerinj@marvell.com, stable@dpdk.org
Date: Mon, 30 Sep 2019 14:02:10 +0530
Message-Id: <20190930083215.8241-2-hemant.agrawal@nxp.com>
X-Mailer: git-send-email 2.17.1
In-Reply-To: <20190930083215.8241-1-hemant.agrawal@nxp.com>
References: <20190927075841.21841-1-hemant.agrawal@nxp.com>
 <20190930083215.8241-1-hemant.agrawal@nxp.com>
Subject: [dpdk-dev] [PATCH v4 1/6] event/dpaa2: fix def queue conf
List-Id: DPDK patches and discussions
The test vector expects only one type of scheduling as the default.
The old code was returning a combination of scheduling types instead
of a single default.

Fixes: 13370a3877a5 ("eventdev: fix inconsistency in queue config")
Cc: stable@dpdk.org

Signed-off-by: Hemant Agrawal
---
 drivers/event/dpaa2/dpaa2_eventdev.c | 7 ++-----
 1 file changed, 2 insertions(+), 5 deletions(-)

-- 
2.17.1

diff --git a/drivers/event/dpaa2/dpaa2_eventdev.c b/drivers/event/dpaa2/dpaa2_eventdev.c
index 926b7edd8..b8cb437a0 100644
--- a/drivers/event/dpaa2/dpaa2_eventdev.c
+++ b/drivers/event/dpaa2/dpaa2_eventdev.c
@@ -1,7 +1,5 @@
 /* SPDX-License-Identifier: BSD-3-Clause
- *
- * Copyright 2017 NXP
- *
+ * Copyright 2017,2019 NXP
  */

 #include
@@ -470,8 +468,7 @@ dpaa2_eventdev_queue_def_conf(struct rte_eventdev *dev, uint8_t queue_id,
 	RTE_SET_USED(queue_conf);

 	queue_conf->nb_atomic_flows = DPAA2_EVENT_QUEUE_ATOMIC_FLOWS;
-	queue_conf->schedule_type = RTE_SCHED_TYPE_ATOMIC |
-				    RTE_SCHED_TYPE_PARALLEL;
+	queue_conf->schedule_type = RTE_SCHED_TYPE_PARALLEL;
 	queue_conf->priority = RTE_EVENT_DEV_PRIORITY_NORMAL;
 }

From patchwork Mon Sep 30 08:32:11 2019
Content-Type: text/plain; charset="utf-8"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
X-Patchwork-Submitter: Hemant Agrawal
X-Patchwork-Id: 174721
From: Hemant Agrawal
To: dev@dpdk.org
Cc: jerinj@marvell.com
Date: Mon, 30 Sep 2019 14:02:11 +0530
Message-Id: <20190930083215.8241-3-hemant.agrawal@nxp.com>
X-Mailer: git-send-email 2.17.1
In-Reply-To: <20190930083215.8241-1-hemant.agrawal@nxp.com>
References: <20190927075841.21841-1-hemant.agrawal@nxp.com>
 <20190930083215.8241-1-hemant.agrawal@nxp.com>
Subject: [dpdk-dev] [PATCH v4 2/6] event/dpaa2: remove conditional compilation
List-Id: DPDK patches and discussions
This patch removes the conditional compilation of the cryptodev event
support, which was guarded by the RTE_LIBRTE_SECURITY flag.

Signed-off-by: Hemant Agrawal
---
 drivers/event/dpaa2/Makefile         | 2 --
 drivers/event/dpaa2/dpaa2_eventdev.c | 6 ------
 2 files changed, 8 deletions(-)

-- 
2.17.1

diff --git a/drivers/event/dpaa2/Makefile b/drivers/event/dpaa2/Makefile
index 470157f25..e0bb527b1 100644
--- a/drivers/event/dpaa2/Makefile
+++ b/drivers/event/dpaa2/Makefile
@@ -24,10 +24,8 @@ LDLIBS += -lrte_common_dpaax
 CFLAGS += -I$(RTE_SDK)/drivers/net/dpaa2
 CFLAGS += -I$(RTE_SDK)/drivers/net/dpaa2/mc

-ifeq ($(CONFIG_RTE_LIBRTE_SECURITY),y)
 LDLIBS += -lrte_pmd_dpaa2_sec
 CFLAGS += -I$(RTE_SDK)/drivers/crypto/dpaa2_sec
-endif

 # versioning export map
 EXPORT_MAP := rte_pmd_dpaa2_event_version.map

diff --git a/drivers/event/dpaa2/dpaa2_eventdev.c b/drivers/event/dpaa2/dpaa2_eventdev.c
index b8cb437a0..98b487603 100644
--- a/drivers/event/dpaa2/dpaa2_eventdev.c
+++ b/drivers/event/dpaa2/dpaa2_eventdev.c
@@ -33,9 +33,7 @@
 #include
 #include
 #include
-#ifdef RTE_LIBRTE_SECURITY
 #include
-#endif
 #include "dpaa2_eventdev.h"
 #include "dpaa2_eventdev_logs.h"
 #include
@@ -794,7 +792,6 @@ dpaa2_eventdev_eth_stop(const struct rte_eventdev *dev,
 	return 0;
 }

-#ifdef RTE_LIBRTE_SECURITY
 static int
 dpaa2_eventdev_crypto_caps_get(const struct rte_eventdev *dev,
 			       const struct rte_cryptodev *cdev,
@@ -937,7 +934,6 @@ dpaa2_eventdev_crypto_stop(const struct rte_eventdev *dev,
 	return 0;
 }
-#endif

 static struct rte_eventdev_ops dpaa2_eventdev_ops = {
 	.dev_infos_get = dpaa2_eventdev_info_get,
@@ -960,13 +956,11 @@ static struct rte_eventdev_ops dpaa2_eventdev_ops = {
 	.eth_rx_adapter_queue_del = dpaa2_eventdev_eth_queue_del,
 	.eth_rx_adapter_start = dpaa2_eventdev_eth_start,
 	.eth_rx_adapter_stop = dpaa2_eventdev_eth_stop,
-#ifdef RTE_LIBRTE_SECURITY
 	.crypto_adapter_caps_get = dpaa2_eventdev_crypto_caps_get,
 	.crypto_adapter_queue_pair_add = dpaa2_eventdev_crypto_queue_add,
 	.crypto_adapter_queue_pair_del = dpaa2_eventdev_crypto_queue_del,
 	.crypto_adapter_start = dpaa2_eventdev_crypto_start,
 	.crypto_adapter_stop = dpaa2_eventdev_crypto_stop,
-#endif
 };

 static int

From patchwork Mon Sep 30 08:32:12 2019
Content-Type: text/plain; charset="utf-8"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
X-Patchwork-Submitter: Hemant Agrawal
X-Patchwork-Id: 174723
From: Hemant Agrawal
To: dev@dpdk.org
Cc: jerinj@marvell.com
Date: Mon, 30 Sep 2019 14:02:12 +0530
Message-Id: <20190930083215.8241-4-hemant.agrawal@nxp.com>
X-Mailer: git-send-email 2.17.1
In-Reply-To: <20190930083215.8241-1-hemant.agrawal@nxp.com>
References: <20190927075841.21841-1-hemant.agrawal@nxp.com>
 <20190930083215.8241-1-hemant.agrawal@nxp.com>
Subject: [dpdk-dev] [PATCH v4 3/6] event/dpaa2: add destroy support
List-Id: DPDK patches and discussions

This patch adds support to destroy the event device.

Signed-off-by: Hemant Agrawal
---
 drivers/event/dpaa2/dpaa2_eventdev.c | 35 ++++++++++++++++++++++++++++
 1 file changed, 35 insertions(+)

-- 
2.17.1

diff --git a/drivers/event/dpaa2/dpaa2_eventdev.c b/drivers/event/dpaa2/dpaa2_eventdev.c
index 98b487603..9255de16f 100644
--- a/drivers/event/dpaa2/dpaa2_eventdev.c
+++ b/drivers/event/dpaa2/dpaa2_eventdev.c
@@ -1059,6 +1059,39 @@ dpaa2_eventdev_create(const char *name)
 	return -EFAULT;
 }

+static int
+dpaa2_eventdev_destroy(const char *name)
+{
+	struct rte_eventdev *eventdev;
+	struct dpaa2_eventdev *priv;
+	int i;
+
+	eventdev = rte_event_pmd_get_named_dev(name);
+	if (eventdev == NULL) {
+		RTE_EDEV_LOG_ERR("eventdev with name %s not allocated", name);
+		return -1;
+	}
+
+	/* For secondary processes, the primary has done all the work */
+	if (rte_eal_process_type() != RTE_PROC_PRIMARY)
+		return 0;
+
+	priv = eventdev->data->dev_private;
+	for (i = 0; i < priv->max_event_queues; i++) {
+		if (priv->evq_info[i].dpcon)
+			rte_dpaa2_free_dpcon_dev(priv->evq_info[i].dpcon);
+
+		if (priv->evq_info[i].dpci)
+			rte_dpaa2_free_dpci_dev(priv->evq_info[i].dpci);
+
+	}
+	priv->max_event_queues = 0;
+
+	RTE_LOG(INFO, PMD, "%s eventdev cleaned\n", name);
+	return 0;
+}
+
+
 static int
 dpaa2_eventdev_probe(struct rte_vdev_device *vdev)
 {
@@ -1077,6 +1110,8 @@ dpaa2_eventdev_remove(struct rte_vdev_device *vdev)
 	name = rte_vdev_device_name(vdev);
 	DPAA2_EVENTDEV_INFO("Closing %s", name);

+	dpaa2_eventdev_destroy(name);
+
 	return rte_event_pmd_vdev_uninit(name);
 }

From patchwork Mon Sep 30 08:32:13 2019
Content-Type: text/plain; charset="utf-8"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
X-Patchwork-Submitter: Hemant Agrawal
X-Patchwork-Id: 174724
From: Hemant Agrawal
To: dev@dpdk.org
Cc: jerinj@marvell.com, Nipun Gupta
Date: Mon, 30 Sep 2019 14:02:13 +0530
Message-Id: <20190930083215.8241-5-hemant.agrawal@nxp.com>
X-Mailer: git-send-email 2.17.1
In-Reply-To: <20190930083215.8241-1-hemant.agrawal@nxp.com>
References: <20190927075841.21841-1-hemant.agrawal@nxp.com>
 <20190930083215.8241-1-hemant.agrawal@nxp.com>
Subject: [dpdk-dev] [PATCH v4 4/6] event/dpaa2: add retry break in packet enqueue
List-Id: DPDK patches and discussions
From: Nipun Gupta

The patch adds a break in the TX function when it repeatedly fails to
send the packets out. Previously the system would retry infinitely to
send a packet out.

Signed-off-by: Nipun Gupta
---
 drivers/event/dpaa2/dpaa2_eventdev.c | 21 +++++++++++++++++----
 1 file changed, 17 insertions(+), 4 deletions(-)

-- 
2.17.1

diff --git a/drivers/event/dpaa2/dpaa2_eventdev.c b/drivers/event/dpaa2/dpaa2_eventdev.c
index 9255de16f..834d3cba1 100644
--- a/drivers/event/dpaa2/dpaa2_eventdev.c
+++ b/drivers/event/dpaa2/dpaa2_eventdev.c
@@ -49,6 +49,7 @@
 /* Dynamic logging identified for mempool */
 int dpaa2_logtype_event;
+#define DPAA2_EV_TX_RETRY_COUNT 10000

 static uint16_t
 dpaa2_eventdev_enqueue_burst(void *port, const struct rte_event ev[],
@@ -59,7 +60,7 @@ dpaa2_eventdev_enqueue_burst(void *port, const struct rte_event ev[],
 	struct dpaa2_dpio_dev *dpio_dev;
 	uint32_t queue_id = ev[0].queue_id;
 	struct dpaa2_eventq *evq_info;
-	uint32_t fqid;
+	uint32_t fqid, retry_count;
 	struct qbman_swp *swp;
 	struct qbman_fd fd_arr[MAX_TX_RING_SLOTS];
 	uint32_t loop, frames_to_send;
@@ -162,13 +163,25 @@ dpaa2_eventdev_enqueue_burst(void *port, const struct rte_event ev[],
 		}
 	send_partial:
 		loop = 0;
+		retry_count = 0;
 		while (loop < frames_to_send) {
-			loop += qbman_swp_enqueue_multiple_desc(swp,
+			ret = qbman_swp_enqueue_multiple_desc(swp,
 					&eqdesc[loop], &fd_arr[loop],
 					frames_to_send - loop);
+			if (unlikely(ret < 0)) {
+				retry_count++;
+				if (retry_count > DPAA2_EV_TX_RETRY_COUNT) {
+					num_tx += loop;
+					nb_events -= loop;
+					return num_tx + loop;
+				}
+			} else {
+				loop += ret;
+				retry_count = 0;
+			}
 		}
-		num_tx += frames_to_send;
-		nb_events -= frames_to_send;
+		num_tx += loop;
+		nb_events -= loop;
 	}

 	return num_tx;

From patchwork Mon Sep 30 08:32:14 2019
Content-Type: text/plain; charset="utf-8"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
X-Patchwork-Submitter: Hemant Agrawal
X-Patchwork-Id: 174725
From: Hemant Agrawal
To: dev@dpdk.org
Cc: jerinj@marvell.com
Date: Mon, 30 Sep 2019 14:02:14 +0530
Message-Id: <20190930083215.8241-6-hemant.agrawal@nxp.com>
X-Mailer: git-send-email 2.17.1
In-Reply-To: <20190930083215.8241-1-hemant.agrawal@nxp.com>
References: <20190927075841.21841-1-hemant.agrawal@nxp.com>
 <20190930083215.8241-1-hemant.agrawal@nxp.com>
Subject: [dpdk-dev] [PATCH v4 5/6] event/dpaa2: add selftest cases
List-Id: DPDK patches and discussions
This patch adds a dpaa2 eventdev self test, covering basic sanity for
parallel and atomic queues.

Signed-off-by: Hemant Agrawal
---
 drivers/event/dpaa2/Makefile                  |   5 +-
 drivers/event/dpaa2/dpaa2_eventdev.c          |   1 +
 drivers/event/dpaa2/dpaa2_eventdev.h          |   2 +
 drivers/event/dpaa2/dpaa2_eventdev_logs.h     |   8 +-
 drivers/event/dpaa2/dpaa2_eventdev_selftest.c | 833 ++++++++++++++++++
 drivers/event/dpaa2/meson.build               |   3 +-
 6 files changed, 848 insertions(+), 4 deletions(-)
 create mode 100644 drivers/event/dpaa2/dpaa2_eventdev_selftest.c

-- 
2.17.1

diff --git a/drivers/event/dpaa2/Makefile b/drivers/event/dpaa2/Makefile
index e0bb527b1..c6ab326da 100644
--- a/drivers/event/dpaa2/Makefile
+++ b/drivers/event/dpaa2/Makefile
@@ -18,9 +18,9 @@ CFLAGS += -I$(RTE_SDK)/drivers/bus/fslmc/portal
 CFLAGS += -I$(RTE_SDK)/drivers/mempool/dpaa2
 CFLAGS += -I$(RTE_SDK)/drivers/event/dpaa2
 LDLIBS += -lrte_eal -lrte_eventdev
-LDLIBS += -lrte_bus_fslmc -lrte_mempool_dpaa2 -lrte_pmd_dpaa2
-LDLIBS += -lrte_bus_vdev
 LDLIBS += -lrte_common_dpaax
+LDLIBS += -lrte_bus_fslmc -lrte_mempool_dpaa2 -lrte_pmd_dpaa2
+LDLIBS += -lrte_bus_vdev -lrte_mempool -lrte_mbuf
 CFLAGS += -I$(RTE_SDK)/drivers/net/dpaa2
 CFLAGS += -I$(RTE_SDK)/drivers/net/dpaa2/mc
@@ -40,5 +40,6 @@ CFLAGS += -DALLOW_EXPERIMENTAL_API
 #
 SRCS-$(CONFIG_RTE_LIBRTE_PMD_DPAA2_EVENTDEV) += dpaa2_hw_dpcon.c
 SRCS-$(CONFIG_RTE_LIBRTE_PMD_DPAA2_EVENTDEV) += dpaa2_eventdev.c
+SRCS-$(CONFIG_RTE_LIBRTE_PMD_DPAA2_EVENTDEV) += dpaa2_eventdev_selftest.c

 include $(RTE_SDK)/mk/rte.lib.mk

diff --git a/drivers/event/dpaa2/dpaa2_eventdev.c b/drivers/event/dpaa2/dpaa2_eventdev.c
index 834d3cba1..5249d2fe4 100644
--- a/drivers/event/dpaa2/dpaa2_eventdev.c
+++ b/drivers/event/dpaa2/dpaa2_eventdev.c
@@ -964,6 +964,7 @@ static struct rte_eventdev_ops dpaa2_eventdev_ops = {
 	.port_unlink = dpaa2_eventdev_port_unlink,
 	.timeout_ticks = dpaa2_eventdev_timeout_ticks,
 	.dump = dpaa2_eventdev_dump,
+	.dev_selftest = test_eventdev_dpaa2,
 	.eth_rx_adapter_caps_get = dpaa2_eventdev_eth_caps_get,
 	.eth_rx_adapter_queue_add = dpaa2_eventdev_eth_queue_add,
 	.eth_rx_adapter_queue_del = dpaa2_eventdev_eth_queue_del,

diff --git a/drivers/event/dpaa2/dpaa2_eventdev.h b/drivers/event/dpaa2/dpaa2_eventdev.h
index bdac1aa56..abc038e49 100644
--- a/drivers/event/dpaa2/dpaa2_eventdev.h
+++ b/drivers/event/dpaa2/dpaa2_eventdev.h
@@ -98,4 +98,6 @@ struct dpaa2_eventdev {
 struct dpaa2_dpcon_dev *rte_dpaa2_alloc_dpcon_dev(void);
 void rte_dpaa2_free_dpcon_dev(struct dpaa2_dpcon_dev *dpcon);

+int test_eventdev_dpaa2(void);
+
 #endif /* __DPAA2_EVENTDEV_H__ */

diff --git a/drivers/event/dpaa2/dpaa2_eventdev_logs.h b/drivers/event/dpaa2/dpaa2_eventdev_logs.h
index 86f2e5393..5da85c60f 100644
--- a/drivers/event/dpaa2/dpaa2_eventdev_logs.h
+++ b/drivers/event/dpaa2/dpaa2_eventdev_logs.h
@@ -1,5 +1,5 @@
 /* SPDX-License-Identifier: BSD-3-Clause
- * Copyright 2018 NXP
+ * Copyright 2018-2019 NXP
  */

 #ifndef _DPAA2_EVENTDEV_LOGS_H_
@@ -35,4 +35,10 @@ extern int dpaa2_logtype_event;
 #define DPAA2_EVENTDEV_DP_WARN(fmt, args...) \
 	DPAA2_EVENTDEV_DP_LOG(WARNING, fmt, ## args)

+#define dpaa2_evdev_info(fmt, ...) DPAA2_EVENTDEV_LOG(INFO, fmt, ##__VA_ARGS__)
+#define dpaa2_evdev_dbg(fmt, ...) DPAA2_EVENTDEV_LOG(DEBUG, fmt, ##__VA_ARGS__)
+#define dpaa2_evdev_err(fmt, ...) DPAA2_EVENTDEV_LOG(ERR, fmt, ##__VA_ARGS__)
+#define dpaa2_evdev__func_trace dpaa2_evdev_dbg
+#define dpaa2_evdev_selftest dpaa2_evdev_info
+
 #endif /* _DPAA2_EVENTDEV_LOGS_H_ */

diff --git a/drivers/event/dpaa2/dpaa2_eventdev_selftest.c b/drivers/event/dpaa2/dpaa2_eventdev_selftest.c
new file mode 100644
index 000000000..ba4f4bd23
--- /dev/null
+++ b/drivers/event/dpaa2/dpaa2_eventdev_selftest.c
@@ -0,0 +1,833 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright 2018-2019 NXP
+ */
+
+#include
+#include
+#include
+#include
+#include
+#include
+#include
+#include
+#include
+#include
+#include
+#include
+#include
+#include
+#include
+#include
+#include
+
+#include "dpaa2_eventdev.h"
+#include "dpaa2_eventdev_logs.h"
+
+#define MAX_PORTS 4
+#define NUM_PACKETS (1 << 18)
+#define MAX_EVENTS 8
+#define DPAA2_TEST_RUN(setup, teardown, test) \
+	dpaa2_test_run(setup, teardown, test, #test)
+
+static int total;
+static int passed;
+static int failed;
+static int unsupported;
+
+static int evdev;
+static struct rte_mempool *eventdev_test_mempool;
+
+struct event_attr {
+	uint32_t flow_id;
+	uint8_t event_type;
+	uint8_t sub_event_type;
+	uint8_t sched_type;
+	uint8_t queue;
+	uint8_t port;
+	uint8_t seq;
+};
+
+static uint32_t seqn_list_index;
+static int seqn_list[NUM_PACKETS];
+
+static void
+seqn_list_init(void)
+{
+	RTE_BUILD_BUG_ON(NUM_PACKETS < MAX_EVENTS);
+	memset(seqn_list, 0, sizeof(seqn_list));
+	seqn_list_index = 0;
+}
+
+struct test_core_param {
+	rte_atomic32_t *total_events;
+	uint64_t dequeue_tmo_ticks;
+	uint8_t port;
+	uint8_t sched_type;
+};
+
+static int
+testsuite_setup(void)
+{
+	const char *eventdev_name = "event_dpaa2";
+
+	evdev = rte_event_dev_get_dev_id(eventdev_name);
+	if (evdev < 0) {
+		dpaa2_evdev_dbg("%d: Eventdev %s not found - creating.",
+				__LINE__, eventdev_name);
+		if (rte_vdev_init(eventdev_name, NULL) < 0) {
+			dpaa2_evdev_err("Error creating eventdev %s",
+					eventdev_name);
+			return -1;
+		}
+		evdev = rte_event_dev_get_dev_id(eventdev_name);
+		if (evdev < 0) {
+			dpaa2_evdev_err("Error finding newly created eventdev");
+			return -1;
+		}
+	}
+
+	return 0;
+}
+
+static void
+testsuite_teardown(void)
+{
+	rte_event_dev_close(evdev);
+}
+
+static void
+devconf_set_default_sane_values(struct rte_event_dev_config *dev_conf,
+			struct rte_event_dev_info *info)
+{
+	memset(dev_conf, 0, sizeof(struct rte_event_dev_config));
+	dev_conf->dequeue_timeout_ns = info->min_dequeue_timeout_ns;
+	dev_conf->nb_event_ports = info->max_event_ports;
+	dev_conf->nb_event_queues = info->max_event_queues;
+	dev_conf->nb_event_queue_flows = info->max_event_queue_flows;
+	dev_conf->nb_event_port_dequeue_depth =
+		info->max_event_port_dequeue_depth;
+	dev_conf->nb_event_port_enqueue_depth =
+		info->max_event_port_enqueue_depth;
+	dev_conf->nb_event_port_enqueue_depth =
+		info->max_event_port_enqueue_depth;
+	dev_conf->nb_events_limit =
+		info->max_num_events;
+}
+
+enum {
+	TEST_EVENTDEV_SETUP_DEFAULT,
+	TEST_EVENTDEV_SETUP_PRIORITY,
+	TEST_EVENTDEV_SETUP_DEQUEUE_TIMEOUT,
+};
+
+static int
+_eventdev_setup(int mode)
+{
+	int i, ret;
+	struct rte_event_dev_config dev_conf;
+	struct rte_event_dev_info info;
+	const char *pool_name = "evdev_dpaa2_test_pool";
+
+	/* Create and destroy pool for each test case to make it standalone */
+	eventdev_test_mempool = rte_pktmbuf_pool_create(pool_name,
+					MAX_EVENTS,
+					0 /*MBUF_CACHE_SIZE*/,
+					0,
+					512, /* Use very small mbufs */
+					rte_socket_id());
+	if (!eventdev_test_mempool) {
+		dpaa2_evdev_err("ERROR creating mempool");
+		return -1;
+	}
+
+	ret = rte_event_dev_info_get(evdev, &info);
+	RTE_TEST_ASSERT_SUCCESS(ret, "Failed to get event dev info");
+	RTE_TEST_ASSERT(info.max_num_events >= (int32_t)MAX_EVENTS,
+			"ERROR max_num_events=%d < max_events=%d",
+			info.max_num_events, MAX_EVENTS);
+
+	devconf_set_default_sane_values(&dev_conf, &info);
+	if (mode == TEST_EVENTDEV_SETUP_DEQUEUE_TIMEOUT)
+		dev_conf.event_dev_cfg |= RTE_EVENT_DEV_CFG_PER_DEQUEUE_TIMEOUT;
+
+	ret = rte_event_dev_configure(evdev, &dev_conf);
+	RTE_TEST_ASSERT_SUCCESS(ret, "Failed to configure eventdev");
+
+	uint32_t queue_count;
+	RTE_TEST_ASSERT_SUCCESS(rte_event_dev_attr_get(evdev,
+			    RTE_EVENT_DEV_ATTR_QUEUE_COUNT,
+			    &queue_count), "Queue count get failed");
+
+	if (mode == TEST_EVENTDEV_SETUP_PRIORITY) {
+		if (queue_count > 8) {
+			dpaa2_evdev_err(
+				"test expects the unique priority per queue");
+			return -ENOTSUP;
+		}
+
+		/* Configure event queues(0 to n) with
+		 * RTE_EVENT_DEV_PRIORITY_HIGHEST to
+		 * RTE_EVENT_DEV_PRIORITY_LOWEST
+		 */
+		uint8_t step = (RTE_EVENT_DEV_PRIORITY_LOWEST + 1) /
+				queue_count;
+		for (i = 0; i < (int)queue_count; i++) {
+			struct rte_event_queue_conf queue_conf;
+
+			ret = rte_event_queue_default_conf_get(evdev, i,
+					&queue_conf);
+			RTE_TEST_ASSERT_SUCCESS(ret, "Failed to get def_conf%d",
+					i);
+			queue_conf.priority = i * step;
+			ret = rte_event_queue_setup(evdev, i, &queue_conf);
+			RTE_TEST_ASSERT_SUCCESS(ret, "Failed to setup queue=%d",
+					i);
+		}
+
+	} else {
+		/* Configure event queues with default priority */
+		for (i = 0; i < (int)queue_count; i++) {
+			ret = rte_event_queue_setup(evdev, i, NULL);
+			RTE_TEST_ASSERT_SUCCESS(ret, "Failed to setup queue=%d",
+					i);
+		}
+	}
+	/* Configure event ports */
+	uint32_t port_count;
+	RTE_TEST_ASSERT_SUCCESS(rte_event_dev_attr_get(evdev,
+				RTE_EVENT_DEV_ATTR_PORT_COUNT,
+				&port_count), "Port count get failed");
+	for (i = 0; i < (int)port_count; i++) {
+		ret = rte_event_port_setup(evdev, i, NULL);
+		RTE_TEST_ASSERT_SUCCESS(ret, "Failed to setup port=%d", i);
+		ret = rte_event_port_link(evdev, i, NULL, NULL, 0);
+		RTE_TEST_ASSERT(ret >= 0, "Failed to link all queues port=%d",
+				i);
+	}
+
+	ret = rte_event_dev_start(evdev);
+	RTE_TEST_ASSERT_SUCCESS(ret, "Failed to start device");
+
+	return 0;
+}
+
+static int
+eventdev_setup(void)
+{
+	return _eventdev_setup(TEST_EVENTDEV_SETUP_DEFAULT);
+}
+
+static void
+eventdev_teardown(void)
+{
+	rte_event_dev_stop(evdev);
+	rte_mempool_free(eventdev_test_mempool);
+}
+
+static void
+update_event_and_validation_attr(struct rte_mbuf *m, struct rte_event *ev,
+			uint32_t flow_id, uint8_t event_type,
+			uint8_t sub_event_type, uint8_t sched_type,
+			uint8_t queue, uint8_t port, uint8_t seq)
+{
+	struct event_attr *attr;
+
+	/* Store the event attributes in mbuf for future reference */
+	attr = rte_pktmbuf_mtod(m, struct event_attr *);
+	attr->flow_id = flow_id;
+	attr->event_type = event_type;
+	attr->sub_event_type = sub_event_type;
+	attr->sched_type = sched_type;
+	attr->queue = queue;
+	attr->port = port;
+	attr->seq = seq;
+
+	ev->flow_id = flow_id;
+	ev->sub_event_type = sub_event_type;
+	ev->event_type = event_type;
+	/* Inject the new event */
+	ev->op = RTE_EVENT_OP_NEW;
+	ev->sched_type = sched_type;
+	ev->queue_id = queue;
+	ev->mbuf = m;
+}
+
+static int
+inject_events(uint32_t flow_id, uint8_t event_type, uint8_t sub_event_type,
+		uint8_t sched_type, uint8_t queue, uint8_t port,
+		unsigned int events)
+{
+	struct rte_mbuf *m;
+	unsigned int i;
+
+	for (i = 0; i < events; i++) {
+		struct rte_event ev = {.event = 0, .u64 = 0};
+
+		m = rte_pktmbuf_alloc(eventdev_test_mempool);
+		RTE_TEST_ASSERT_NOT_NULL(m, "mempool alloc failed");
+
+		update_event_and_validation_attr(m, &ev, flow_id, event_type,
+			sub_event_type, sched_type, queue, port, i);
+		rte_event_enqueue_burst(evdev, port, &ev, 1);
+	}
+	return 0;
+}
+
+static int
+check_excess_events(uint8_t port)
+{
+	int i;
+	uint16_t valid_event;
+	struct rte_event ev;
+
+	/* Check for excess events, try for a few times and exit */
+	for (i = 0; i < 32; i++) {
+		valid_event = rte_event_dequeue_burst(evdev, port, &ev, 1, 0);
+
+		RTE_TEST_ASSERT_SUCCESS(valid_event,
+			"Unexpected valid event=%d", ev.mbuf->seqn);
+	}
+	return 0;
+}
+
+static int
+generate_random_events(const unsigned int total_events)
+{
+	struct rte_event_dev_info info;
+	unsigned int i;
+	int ret;
+
+	uint32_t queue_count;
RTE_TEST_ASSERT_SUCCESS(rte_event_dev_attr_get(evdev, + RTE_EVENT_DEV_ATTR_QUEUE_COUNT, + &queue_count), "Queue count get failed"); + + ret = rte_event_dev_info_get(evdev, &info); + RTE_TEST_ASSERT_SUCCESS(ret, "Failed to get event dev info"); + for (i = 0; i < total_events; i++) { + ret = inject_events( + rte_rand() % info.max_event_queue_flows /*flow_id */, + RTE_EVENT_TYPE_CPU /* event_type */, + rte_rand() % 256 /* sub_event_type */, + rte_rand() % (RTE_SCHED_TYPE_PARALLEL + 1), + rte_rand() % queue_count /* queue */, + 0 /* port */, + 1 /* events */); + if (ret) + return -1; + } + return ret; +} + + +static int +validate_event(struct rte_event *ev) +{ + struct event_attr *attr; + + attr = rte_pktmbuf_mtod(ev->mbuf, struct event_attr *); + RTE_TEST_ASSERT_EQUAL(attr->flow_id, ev->flow_id, + "flow_id mismatch enq=%d deq =%d", + attr->flow_id, ev->flow_id); + RTE_TEST_ASSERT_EQUAL(attr->event_type, ev->event_type, + "event_type mismatch enq=%d deq =%d", + attr->event_type, ev->event_type); + RTE_TEST_ASSERT_EQUAL(attr->sub_event_type, ev->sub_event_type, + "sub_event_type mismatch enq=%d deq =%d", + attr->sub_event_type, ev->sub_event_type); + RTE_TEST_ASSERT_EQUAL(attr->sched_type, ev->sched_type, + "sched_type mismatch enq=%d deq =%d", + attr->sched_type, ev->sched_type); + RTE_TEST_ASSERT_EQUAL(attr->queue, ev->queue_id, + "queue mismatch enq=%d deq =%d", + attr->queue, ev->queue_id); + return 0; +} + +typedef int (*validate_event_cb)(uint32_t index, uint8_t port, + struct rte_event *ev); + +static int +consume_events(uint8_t port, const uint32_t total_events, validate_event_cb fn) +{ + int ret; + uint16_t valid_event; + uint32_t events = 0, forward_progress_cnt = 0, index = 0; + struct rte_event ev; + + while (1) { + if (++forward_progress_cnt > UINT16_MAX) { + dpaa2_evdev_err("Detected deadlock"); + return -1; + } + + valid_event = rte_event_dequeue_burst(evdev, port, &ev, 1, 0); + if (!valid_event) + continue; + + forward_progress_cnt = 0; + ret = 
validate_event(&ev); + if (ret) + return -1; + + if (fn != NULL) { + ret = fn(index, port, &ev); + RTE_TEST_ASSERT_SUCCESS(ret, + "Failed to validate test specific event"); + } + + ++index; + + rte_pktmbuf_free(ev.mbuf); + if (++events >= total_events) + break; + } + + return check_excess_events(port); +} + +static int +validate_simple_enqdeq(uint32_t index, uint8_t port, struct rte_event *ev) +{ + struct event_attr *attr; + + attr = rte_pktmbuf_mtod(ev->mbuf, struct event_attr *); + + RTE_SET_USED(port); + RTE_TEST_ASSERT_EQUAL(index, attr->seq, + "index=%d != seqn=%d", index, attr->seq); + return 0; +} + +static int +test_simple_enqdeq(uint8_t sched_type) +{ + int ret; + + ret = inject_events(0 /*flow_id */, + RTE_EVENT_TYPE_CPU /* event_type */, + 0 /* sub_event_type */, + sched_type, + 0 /* queue */, + 0 /* port */, + MAX_EVENTS); + if (ret) + return -1; + + return consume_events(0 /* port */, MAX_EVENTS, validate_simple_enqdeq); +} + +static int +test_simple_enqdeq_atomic(void) +{ + return test_simple_enqdeq(RTE_SCHED_TYPE_ATOMIC); +} + +static int +test_simple_enqdeq_parallel(void) +{ + return test_simple_enqdeq(RTE_SCHED_TYPE_PARALLEL); +} + +/* + * Generate a prescribed number of events and spread them across available + * queues. 
On dequeue, using single event port(port 0) verify the enqueued + * event attributes + */ +static int +test_multi_queue_enq_single_port_deq(void) +{ + int ret; + + ret = generate_random_events(MAX_EVENTS); + if (ret) + return -1; + + return consume_events(0 /* port */, MAX_EVENTS, NULL); +} + +static int +worker_multi_port_fn(void *arg) +{ + struct test_core_param *param = arg; + struct rte_event ev; + uint16_t valid_event; + uint8_t port = param->port; + rte_atomic32_t *total_events = param->total_events; + int ret; + + while (rte_atomic32_read(total_events) > 0) { + valid_event = rte_event_dequeue_burst(evdev, port, &ev, 1, 0); + if (!valid_event) + continue; + + ret = validate_event(&ev); + RTE_TEST_ASSERT_SUCCESS(ret, "Failed to validate event"); + rte_pktmbuf_free(ev.mbuf); + rte_atomic32_sub(total_events, 1); + } + return 0; +} + +static int +wait_workers_to_join(int lcore, const rte_atomic32_t *count) +{ + uint64_t cycles, print_cycles; + + RTE_SET_USED(count); + + print_cycles = cycles = rte_get_timer_cycles(); + while (rte_eal_get_lcore_state(lcore) != FINISHED) { + uint64_t new_cycles = rte_get_timer_cycles(); + + if (new_cycles - print_cycles > rte_get_timer_hz()) { + dpaa2_evdev_dbg("\r%s: events %d", __func__, + rte_atomic32_read(count)); + print_cycles = new_cycles; + } + if (new_cycles - cycles > rte_get_timer_hz() * 10) { + dpaa2_evdev_info( + "%s: No schedules for seconds, deadlock (%d)", + __func__, + rte_atomic32_read(count)); + rte_event_dev_dump(evdev, stdout); + cycles = new_cycles; + return -1; + } + } + rte_eal_mp_wait_lcore(); + return 0; +} + + +static int +launch_workers_and_wait(int (*master_worker)(void *), + int (*slave_workers)(void *), uint32_t total_events, + uint8_t nb_workers, uint8_t sched_type) +{ + uint8_t port = 0; + int w_lcore; + int ret; + struct test_core_param *param; + rte_atomic32_t atomic_total_events; + uint64_t dequeue_tmo_ticks; + + if (!nb_workers) + return 0; + + rte_atomic32_set(&atomic_total_events, 
total_events); + seqn_list_init(); + + param = malloc(sizeof(struct test_core_param) * nb_workers); + if (!param) + return -1; + + ret = rte_event_dequeue_timeout_ticks(evdev, + rte_rand() % 10000000/* 10ms */, &dequeue_tmo_ticks); + if (ret) { + free(param); + return -1; + } + + param[0].total_events = &atomic_total_events; + param[0].sched_type = sched_type; + param[0].port = 0; + param[0].dequeue_tmo_ticks = dequeue_tmo_ticks; + rte_smp_wmb(); + + w_lcore = rte_get_next_lcore( + /* start core */ -1, + /* skip master */ 1, + /* wrap */ 0); + rte_eal_remote_launch(master_worker, ¶m[0], w_lcore); + + for (port = 1; port < nb_workers; port++) { + param[port].total_events = &atomic_total_events; + param[port].sched_type = sched_type; + param[port].port = port; + param[port].dequeue_tmo_ticks = dequeue_tmo_ticks; + rte_smp_wmb(); + w_lcore = rte_get_next_lcore(w_lcore, 1, 0); + rte_eal_remote_launch(slave_workers, ¶m[port], w_lcore); + } + + ret = wait_workers_to_join(w_lcore, &atomic_total_events); + free(param); + return ret; +} + +/* + * Generate a prescribed number of events and spread them across available + * queues. 
Dequeue the events through multiple ports and verify the enqueued + * event attributes + */ +static int +test_multi_queue_enq_multi_port_deq(void) +{ + const unsigned int total_events = MAX_EVENTS; + uint32_t nr_ports; + int ret; + + ret = generate_random_events(total_events); + if (ret) + return -1; + + RTE_TEST_ASSERT_SUCCESS(rte_event_dev_attr_get(evdev, + RTE_EVENT_DEV_ATTR_PORT_COUNT, + &nr_ports), "Port count get failed"); + nr_ports = RTE_MIN(nr_ports, rte_lcore_count() - 1); + + if (!nr_ports) { + dpaa2_evdev_err("%s: Not enough ports=%d or workers=%d", + __func__, nr_ports, rte_lcore_count() - 1); + return 0; + } + + return launch_workers_and_wait(worker_multi_port_fn, + worker_multi_port_fn, total_events, + nr_ports, 0xff /* invalid */); +} + +static +void flush(uint8_t dev_id, struct rte_event event, void *arg) +{ + unsigned int *count = arg; + + RTE_SET_USED(dev_id); + if (event.event_type == RTE_EVENT_TYPE_CPU) + *count = *count + 1; + +} + +static int +test_dev_stop_flush(void) +{ + unsigned int total_events = MAX_EVENTS, count = 0; + int ret; + + ret = generate_random_events(total_events); + if (ret) + return -1; + + ret = rte_event_dev_stop_flush_callback_register(evdev, flush, &count); + if (ret) + return -2; + rte_event_dev_stop(evdev); + ret = rte_event_dev_stop_flush_callback_register(evdev, NULL, NULL); + if (ret) + return -3; + RTE_TEST_ASSERT_EQUAL(total_events, count, + "count mismatch total_events=%d count=%d", + total_events, count); + return 0; +} + +static int +validate_queue_to_port_single_link(uint32_t index, uint8_t port, + struct rte_event *ev) +{ + RTE_SET_USED(index); + RTE_TEST_ASSERT_EQUAL(port, ev->queue_id, + "queue mismatch enq=%d deq =%d", + port, ev->queue_id); + return 0; +} + +/* + * Link queue x to port x and check correctness of link by checking + * queue_id == x on dequeue on the specific port x + */ +static int +test_queue_to_port_single_link(void) +{ + int i, nr_links, ret; + + uint32_t port_count; + + 
RTE_TEST_ASSERT_SUCCESS(rte_event_dev_attr_get(evdev, + RTE_EVENT_DEV_ATTR_PORT_COUNT, + &port_count), "Port count get failed"); + + /* Unlink all connections that were created in eventdev_setup */ + for (i = 0; i < (int)port_count; i++) { + ret = rte_event_port_unlink(evdev, i, NULL, 0); + RTE_TEST_ASSERT(ret >= 0, + "Failed to unlink all queues port=%d", i); + } + + uint32_t queue_count; + + RTE_TEST_ASSERT_SUCCESS(rte_event_dev_attr_get(evdev, + RTE_EVENT_DEV_ATTR_QUEUE_COUNT, + &queue_count), "Queue count get failed"); + + nr_links = RTE_MIN(port_count, queue_count); + const unsigned int total_events = MAX_EVENTS / nr_links; + + /* Link queue x to port x and inject events to queue x through port x */ + for (i = 0; i < nr_links; i++) { + uint8_t queue = (uint8_t)i; + + ret = rte_event_port_link(evdev, i, &queue, NULL, 1); + RTE_TEST_ASSERT(ret == 1, "Failed to link queue to port %d", i); + + ret = inject_events( + 0x100 /*flow_id */, + RTE_EVENT_TYPE_CPU /* event_type */, + rte_rand() % 256 /* sub_event_type */, + rte_rand() % (RTE_SCHED_TYPE_PARALLEL + 1), + queue /* queue */, + i /* port */, + total_events /* events */); + if (ret) + return -1; + } + + /* Verify that the events are generated from the correct queue */ + for (i = 0; i < nr_links; i++) { + ret = consume_events(i /* port */, total_events, + validate_queue_to_port_single_link); + if (ret) + return -1; + } + + return 0; +} + +static int +validate_queue_to_port_multi_link(uint32_t index, uint8_t port, + struct rte_event *ev) +{ + RTE_SET_USED(index); + RTE_TEST_ASSERT_EQUAL(port, (ev->queue_id & 0x1), + "queue mismatch enq=%d deq =%d", + port, ev->queue_id); + return 0; +} + +/* + * Link all even-numbered queues to port 0 and all odd-numbered queues to + * port 1 and verify the link connection on dequeue + */ +static int +test_queue_to_port_multi_link(void) +{ + int ret, port0_events = 0, port1_events = 0; + uint8_t queue, port; + uint32_t nr_queues = 0; + uint32_t nr_ports = 0; +
RTE_TEST_ASSERT_SUCCESS(rte_event_dev_attr_get(evdev, + RTE_EVENT_DEV_ATTR_QUEUE_COUNT, + &nr_queues), "Queue count get failed"); + RTE_TEST_ASSERT_SUCCESS(rte_event_dev_attr_get(evdev, + RTE_EVENT_DEV_ATTR_PORT_COUNT, + &nr_ports), "Port count get failed"); + + if (nr_ports < 2) { + dpaa2_evdev_err("%s: Not enough ports to test ports=%d", + __func__, nr_ports); + return 0; + } + + /* Unlink all connections that were created in eventdev_setup */ + for (port = 0; port < nr_ports; port++) { + ret = rte_event_port_unlink(evdev, port, NULL, 0); + RTE_TEST_ASSERT(ret >= 0, "Failed to unlink all queues port=%d", + port); + } + + const unsigned int total_events = MAX_EVENTS / nr_queues; + + /* Link all even-numbered queues to port 0 and odd-numbered queues to port 1 */ + for (queue = 0; queue < nr_queues; queue++) { + port = queue & 0x1; + ret = rte_event_port_link(evdev, port, &queue, NULL, 1); + RTE_TEST_ASSERT(ret == 1, "Failed to link queue=%d to port=%d", + queue, port); + + ret = inject_events( + 0x100 /*flow_id */, + RTE_EVENT_TYPE_CPU /* event_type */, + rte_rand() % 256 /* sub_event_type */, + rte_rand() % (RTE_SCHED_TYPE_PARALLEL + 1), + queue /* queue */, + port /* port */, + total_events /* events */); + if (ret) + return -1; + + if (port == 0) + port0_events += total_events; + else + port1_events += total_events; + } + + ret = consume_events(0 /* port */, port0_events, + validate_queue_to_port_multi_link); + if (ret) + return -1; + ret = consume_events(1 /* port */, port1_events, + validate_queue_to_port_multi_link); + if (ret) + return -1; + + return 0; +} + +static void dpaa2_test_run(int (*setup)(void), void (*tdown)(void), + int (*test)(void), const char *name) +{ + if (setup() < 0) { + RTE_LOG(INFO, PMD, "Error setting up test %s\n", name); + unsupported++; + } else { + if (test() < 0) { + failed++; + RTE_LOG(INFO, PMD, "%s Failed\n",
name); + } else { + passed++; + RTE_LOG(INFO, PMD, "%s Passed\n", name); + } + } + + total++; + tdown(); +} + +int +test_eventdev_dpaa2(void) +{ + testsuite_setup(); + + DPAA2_TEST_RUN(eventdev_setup, eventdev_teardown, + test_simple_enqdeq_atomic); + DPAA2_TEST_RUN(eventdev_setup, eventdev_teardown, + test_simple_enqdeq_parallel); + DPAA2_TEST_RUN(eventdev_setup, eventdev_teardown, + test_multi_queue_enq_single_port_deq); + DPAA2_TEST_RUN(eventdev_setup, eventdev_teardown, + test_dev_stop_flush); + DPAA2_TEST_RUN(eventdev_setup, eventdev_teardown, + test_multi_queue_enq_multi_port_deq); + DPAA2_TEST_RUN(eventdev_setup, eventdev_teardown, + test_queue_to_port_single_link); + DPAA2_TEST_RUN(eventdev_setup, eventdev_teardown, + test_queue_to_port_multi_link); + + DPAA2_EVENTDEV_INFO("Total tests : %d", total); + DPAA2_EVENTDEV_INFO("Passed : %d", passed); + DPAA2_EVENTDEV_INFO("Failed : %d", failed); + DPAA2_EVENTDEV_INFO("Not supported : %d", unsupported); + + testsuite_teardown(); + + if (failed) + return -1; + + return 0; +} diff --git a/drivers/event/dpaa2/meson.build b/drivers/event/dpaa2/meson.build index f7da7fad5..72f97d4c1 100644 --- a/drivers/event/dpaa2/meson.build +++ b/drivers/event/dpaa2/meson.build @@ -9,7 +9,8 @@ if not is_linux endif deps += ['bus_vdev', 'pmd_dpaa2', 'pmd_dpaa2_sec'] sources = files('dpaa2_hw_dpcon.c', - 'dpaa2_eventdev.c') + 'dpaa2_eventdev.c', + 'dpaa2_eventdev_selftest.c') allow_experimental_apis = true includes += include_directories('../../crypto/dpaa2_sec/') From patchwork Mon Sep 30 08:32:15 2019 From: Hemant Agrawal To: dev@dpdk.org Cc: jerinj@marvell.com Date: Mon, 30 Sep 2019 14:02:15 +0530 Message-Id: <20190930083215.8241-7-hemant.agrawal@nxp.com> In-Reply-To: <20190930083215.8241-1-hemant.agrawal@nxp.com> References: <20190927075841.21841-1-hemant.agrawal@nxp.com> <20190930083215.8241-1-hemant.agrawal@nxp.com> Subject: [dpdk-dev] [PATCH v4 6/6] test/event: enable dpaa2 self test
This patch adds support for running the dpaa2 event self test from the test framework. Signed-off-by: Hemant Agrawal --- app/test/test_eventdev.c | 7 +++++++ 1 file changed, 7 insertions(+) -- 2.17.1 diff --git a/app/test/test_eventdev.c b/app/test/test_eventdev.c index 783140dfe..427dbbf77 100644 --- a/app/test/test_eventdev.c +++ b/app/test/test_eventdev.c @@ -1020,9 +1020,16 @@ test_eventdev_selftest_octeontx2(void) return test_eventdev_selftest_impl("otx2_eventdev", ""); } +static int +test_eventdev_selftest_dpaa2(void) +{ + return test_eventdev_selftest_impl("event_dpaa2", ""); +} + REGISTER_TEST_COMMAND(eventdev_common_autotest, test_eventdev_common); REGISTER_TEST_COMMAND(eventdev_selftest_sw, test_eventdev_selftest_sw); REGISTER_TEST_COMMAND(eventdev_selftest_octeontx, test_eventdev_selftest_octeontx); REGISTER_TEST_COMMAND(eventdev_selftest_octeontx2, test_eventdev_selftest_octeontx2); +REGISTER_TEST_COMMAND(eventdev_selftest_dpaa2, test_eventdev_selftest_dpaa2);