From patchwork Thu Apr 4 11:50:28 2019
X-Patchwork-Submitter: Hemant Agrawal
X-Patchwork-Id: 161793
From: Hemant Agrawal <hemant.agrawal@nxp.com>
To: "dev@dpdk.org"
CC: "thomas@monjalon.net", Shreyansh Jain
Date: Thu, 4 Apr 2019 11:50:28 +0000
Message-ID: <20190404114818.21286-7-hemant.agrawal@nxp.com>
References: <20190404110215.14410-1-hemant.agrawal@nxp.com>
 <20190404114818.21286-1-hemant.agrawal@nxp.com>
In-Reply-To: <20190404114818.21286-1-hemant.agrawal@nxp.com>
X-Mailer: git-send-email 2.17.1
Subject: [dpdk-dev] [PATCH v3 7/7] raw/dpaa2_qdma: add support for non prefetch mode
List-Id: DPDK patches and discussions

This patch adds support for non-prefetch mode in the Rx functions.
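
The two Rx paths are selected once, at DPDMAI device init time, through a
function pointer keyed off the "no_prefetch" device argument (see
dpaa2_dpdmai_dev_init() in the diff below). A minimal sketch of that dispatch
pattern follows; the handler bodies are stubs and the parameter list is
shortened for illustration (the real typedef carries the dpdmai device, queue
id, vq id array and job array):

    /* Sketch only: mirrors the typedef + function-pointer binding in the patch. */
    typedef int (dequeue_multijob_t)(void);

    static int dequeue_prefetch(void)    { return 0; } /* stub */
    static int dequeue_no_prefetch(void) { return 0; } /* stub */

    static dequeue_multijob_t *dequeue_multijob; /* bound once at init */

    static void
    select_rx_mode(int no_prefetch_requested)
    {
            dequeue_multijob = no_prefetch_requested ?
                            dequeue_no_prefetch : dequeue_prefetch;
    }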
Signed-off-by: Hemant Agrawal <hemant.agrawal@nxp.com>
---
 drivers/raw/dpaa2_qdma/Makefile     |   1 +
 drivers/raw/dpaa2_qdma/dpaa2_qdma.c | 215 +++++++++++++++++++++++++++-
 drivers/raw/dpaa2_qdma/meson.build  |   2 +-
 3 files changed, 212 insertions(+), 6 deletions(-)

--
2.17.1

diff --git a/drivers/raw/dpaa2_qdma/Makefile b/drivers/raw/dpaa2_qdma/Makefile
index ee95662f1..450c76e76 100644
--- a/drivers/raw/dpaa2_qdma/Makefile
+++ b/drivers/raw/dpaa2_qdma/Makefile
@@ -21,6 +21,7 @@ LDLIBS += -lrte_eal
 LDLIBS += -lrte_mempool
 LDLIBS += -lrte_mempool_dpaa2
 LDLIBS += -lrte_rawdev
+LDLIBS += -lrte_kvargs
 LDLIBS += -lrte_ring
 LDLIBS += -lrte_common_dpaax

diff --git a/drivers/raw/dpaa2_qdma/dpaa2_qdma.c b/drivers/raw/dpaa2_qdma/dpaa2_qdma.c
index 38f329a50..a41c1e385 100644
--- a/drivers/raw/dpaa2_qdma/dpaa2_qdma.c
+++ b/drivers/raw/dpaa2_qdma/dpaa2_qdma.c
@@ -14,6 +14,7 @@
 #include
 #include
 #include
+#include <rte_kvargs.h>
 #include
 #include
@@ -23,6 +24,8 @@
 #include "dpaa2_qdma.h"
 #include "dpaa2_qdma_logs.h"

+#define DPAA2_QDMA_NO_PREFETCH "no_prefetch"
+
 /* Dynamic log type identifier */
 int dpaa2_qdma_logtype;
@@ -43,6 +46,14 @@ static struct qdma_virt_queue *qdma_vqs;
 /* QDMA per core data */
 static struct qdma_per_core_info qdma_core_info[RTE_MAX_LCORE];

+typedef int (dpdmai_dev_dequeue_multijob_t)(struct dpaa2_dpdmai_dev *dpdmai_dev,
+                                            uint16_t rxq_id,
+                                            uint16_t *vq_id,
+                                            struct rte_qdma_job **job,
+                                            uint16_t nb_jobs);
+
+dpdmai_dev_dequeue_multijob_t *dpdmai_dev_dequeue_multijob;
+
 static struct qdma_hw_queue *
 alloc_hw_queue(uint32_t lcore_id)
 {
@@ -608,12 +619,156 @@ static inline uint16_t dpdmai_dev_get_job(const struct qbman_fd *fd,
         return vqid;
 }

+/* Function to receive a QDMA job for a given device and queue*/
 static int
-dpdmai_dev_dequeue_multijob(struct dpaa2_dpdmai_dev *dpdmai_dev,
-                            uint16_t rxq_id,
-                            uint16_t *vq_id,
-                            struct rte_qdma_job **job,
-                            uint16_t nb_jobs)
+dpdmai_dev_dequeue_multijob_prefetch(
+                struct dpaa2_dpdmai_dev *dpdmai_dev,
+                uint16_t rxq_id,
+                uint16_t *vq_id,
+                struct rte_qdma_job **job,
+                uint16_t nb_jobs)
+{
+        struct dpaa2_queue *rxq;
+        struct qbman_result *dq_storage, *dq_storage1 = NULL;
+        struct qbman_pull_desc pulldesc;
+        struct qbman_swp *swp;
+        struct queue_storage_info_t *q_storage;
+        uint32_t fqid;
+        uint8_t status, pending;
+        uint8_t num_rx = 0;
+        const struct qbman_fd *fd;
+        uint16_t vqid;
+        int ret, pull_size;
+
+        if (unlikely(!DPAA2_PER_LCORE_DPIO)) {
+                ret = dpaa2_affine_qbman_swp();
+                if (ret) {
+                        DPAA2_QDMA_ERR("Failure in affining portal");
+                        return 0;
+                }
+        }
+        swp = DPAA2_PER_LCORE_PORTAL;
+
+        pull_size = (nb_jobs > dpaa2_dqrr_size) ?
+                        dpaa2_dqrr_size : nb_jobs;
+        rxq = &(dpdmai_dev->rx_queue[rxq_id]);
+        fqid = rxq->fqid;
+        q_storage = rxq->q_storage;
+
+        if (unlikely(!q_storage->active_dqs)) {
+                q_storage->toggle = 0;
+                dq_storage = q_storage->dq_storage[q_storage->toggle];
+                q_storage->last_num_pkts = pull_size;
+                qbman_pull_desc_clear(&pulldesc);
+                qbman_pull_desc_set_numframes(&pulldesc,
+                                              q_storage->last_num_pkts);
+                qbman_pull_desc_set_fq(&pulldesc, fqid);
+                qbman_pull_desc_set_storage(&pulldesc, dq_storage,
+                        (size_t)(DPAA2_VADDR_TO_IOVA(dq_storage)), 1);
+                if (check_swp_active_dqs(DPAA2_PER_LCORE_DPIO->index)) {
+                        while (!qbman_check_command_complete(
+                               get_swp_active_dqs(
+                               DPAA2_PER_LCORE_DPIO->index)))
+                                ;
+                        clear_swp_active_dqs(DPAA2_PER_LCORE_DPIO->index);
+                }
+                while (1) {
+                        if (qbman_swp_pull(swp, &pulldesc)) {
+                                DPAA2_QDMA_DP_WARN(
+                                        "VDQ command not issued.QBMAN busy\n");
+                                /* Portal was busy, try again */
+                                continue;
+                        }
+                        break;
+                }
+                q_storage->active_dqs = dq_storage;
+                q_storage->active_dpio_id = DPAA2_PER_LCORE_DPIO->index;
+                set_swp_active_dqs(DPAA2_PER_LCORE_DPIO->index,
+                                   dq_storage);
+        }
+
+        dq_storage = q_storage->active_dqs;
+        rte_prefetch0((void *)(size_t)(dq_storage));
+        rte_prefetch0((void *)(size_t)(dq_storage + 1));
+
+        /* Prepare next pull descriptor. This will give space for the
+         * prefething done on DQRR entries
+         */
+        q_storage->toggle ^= 1;
+        dq_storage1 = q_storage->dq_storage[q_storage->toggle];
+        qbman_pull_desc_clear(&pulldesc);
+        qbman_pull_desc_set_numframes(&pulldesc, pull_size);
+        qbman_pull_desc_set_fq(&pulldesc, fqid);
+        qbman_pull_desc_set_storage(&pulldesc, dq_storage1,
+                (size_t)(DPAA2_VADDR_TO_IOVA(dq_storage1)), 1);
+
+        /* Check if the previous issued command is completed.
+         * Also seems like the SWP is shared between the Ethernet Driver
+         * and the SEC driver.
+         */
+        while (!qbman_check_command_complete(dq_storage))
+                ;
+        if (dq_storage == get_swp_active_dqs(q_storage->active_dpio_id))
+                clear_swp_active_dqs(q_storage->active_dpio_id);
+
+        pending = 1;
+
+        do {
+                /* Loop until the dq_storage is updated with
+                 * new token by QBMAN
+                 */
+                while (!qbman_check_new_result(dq_storage))
+                        ;
+                rte_prefetch0((void *)((size_t)(dq_storage + 2)));
+                /* Check whether Last Pull command is Expired and
+                 * setting Condition for Loop termination
+                 */
+                if (qbman_result_DQ_is_pull_complete(dq_storage)) {
+                        pending = 0;
+                        /* Check for valid frame. */
+                        status = qbman_result_DQ_flags(dq_storage);
+                        if (unlikely((status & QBMAN_DQ_STAT_VALIDFRAME) == 0))
+                                continue;
+                }
+                fd = qbman_result_DQ_fd(dq_storage);
+
+                vqid = dpdmai_dev_get_job(fd, &job[num_rx]);
+                if (vq_id)
+                        vq_id[num_rx] = vqid;
+
+                dq_storage++;
+                num_rx++;
+        } while (pending);
+
+        if (check_swp_active_dqs(DPAA2_PER_LCORE_DPIO->index)) {
+                while (!qbman_check_command_complete(
+                       get_swp_active_dqs(DPAA2_PER_LCORE_DPIO->index)))
+                        ;
+                clear_swp_active_dqs(DPAA2_PER_LCORE_DPIO->index);
+        }
+        /* issue a volatile dequeue command for next pull */
+        while (1) {
+                if (qbman_swp_pull(swp, &pulldesc)) {
+                        DPAA2_QDMA_DP_WARN("VDQ command is not issued."
+                                           "QBMAN is busy (2)\n");
+                        continue;
+                }
+                break;
+        }
+
+        q_storage->active_dqs = dq_storage1;
+        q_storage->active_dpio_id = DPAA2_PER_LCORE_DPIO->index;
+        set_swp_active_dqs(DPAA2_PER_LCORE_DPIO->index, dq_storage1);
+
+        return num_rx;
+}
+
+static int
+dpdmai_dev_dequeue_multijob_no_prefetch(
+                struct dpaa2_dpdmai_dev *dpdmai_dev,
+                uint16_t rxq_id,
+                uint16_t *vq_id,
+                struct rte_qdma_job **job,
+                uint16_t nb_jobs)
 {
         struct dpaa2_queue *rxq;
         struct qbman_result *dq_storage;
@@ -958,6 +1113,43 @@ dpaa2_dpdmai_dev_uninit(struct rte_rawdev *rawdev)
         return 0;
 }

+static int
+check_devargs_handler(__rte_unused const char *key, const char *value,
+                      __rte_unused void *opaque)
+{
+        if (strcmp(value, "1"))
+                return -1;
+
+        return 0;
+}
+
+static int
+dpaa2_get_devargs(struct rte_devargs *devargs, const char *key)
+{
+        struct rte_kvargs *kvlist;
+
+        if (!devargs)
+                return 0;
+
+        kvlist = rte_kvargs_parse(devargs->args, NULL);
+        if (!kvlist)
+                return 0;
+
+        if (!rte_kvargs_count(kvlist, key)) {
+                rte_kvargs_free(kvlist);
+                return 0;
+        }
+
+        if (rte_kvargs_process(kvlist, key,
+                               check_devargs_handler, NULL) < 0) {
+                rte_kvargs_free(kvlist);
+                return 0;
+        }
+        rte_kvargs_free(kvlist);
+
+        return 1;
+}
+
 static int
 dpaa2_dpdmai_dev_init(struct rte_rawdev *rawdev, int dpdmai_id)
 {
@@ -1060,6 +1252,17 @@ dpaa2_dpdmai_dev_init(struct rte_rawdev *rawdev, int dpdmai_id)
                 goto init_err;
         }

+        if (dpaa2_get_devargs(rawdev->device->devargs,
+                DPAA2_QDMA_NO_PREFETCH)) {
+                /* If no prefetch is configured. */
+                dpdmai_dev_dequeue_multijob =
+                                dpdmai_dev_dequeue_multijob_no_prefetch;
+                DPAA2_QDMA_INFO("No Prefetch RX Mode enabled");
+        } else {
+                dpdmai_dev_dequeue_multijob =
+                        dpdmai_dev_dequeue_multijob_prefetch;
+        }
+
         if (!dpaa2_coherent_no_alloc_cache) {
                 if (dpaa2_svr_family == SVR_LX2160A) {
                         dpaa2_coherent_no_alloc_cache =
@@ -1139,6 +1342,8 @@ static struct rte_dpaa2_driver rte_dpaa2_qdma_pmd = {
 };

 RTE_PMD_REGISTER_DPAA2(dpaa2_qdma, rte_dpaa2_qdma_pmd);
+RTE_PMD_REGISTER_PARAM_STRING(dpaa2_qdma,
+        "no_prefetch=<int> ");

 RTE_INIT(dpaa2_qdma_init_log)
 {
diff --git a/drivers/raw/dpaa2_qdma/meson.build b/drivers/raw/dpaa2_qdma/meson.build
index 2a4b69c16..1577946fa 100644
--- a/drivers/raw/dpaa2_qdma/meson.build
+++ b/drivers/raw/dpaa2_qdma/meson.build
@@ -4,7 +4,7 @@
 version = 2

 build = dpdk_conf.has('RTE_LIBRTE_DPAA2_MEMPOOL')
-deps += ['rawdev', 'mempool_dpaa2', 'ring']
+deps += ['rawdev', 'mempool_dpaa2', 'ring', 'kvargs']
 sources = files('dpaa2_qdma.c')

 allow_experimental_apis = true