From patchwork Mon Jan 13 17:25:13 2020
X-Patchwork-Submitter: Honnappa Nagarahalli
X-Patchwork-Id: 182815
From: Honnappa Nagarahalli
To: olivier.matz@6wind.com, sthemmin@microsoft.com, jerinj@marvell.com,
 bruce.richardson@intel.com, david.marchand@redhat.com,
 pbhagavatula@marvell.com, konstantin.ananyev@intel.com,
 honnappa.nagarahalli@arm.com
Cc: dev@dpdk.org, dharmik.thakkar@arm.com, ruifeng.wang@arm.com,
 gavin.hu@arm.com, nd@arm.com
Date: Mon, 13 Jan 2020 11:25:13 -0600
Message-Id: <20200113172518.37815-2-honnappa.nagarahalli@arm.com>
In-Reply-To: <20200113172518.37815-1-honnappa.nagarahalli@arm.com>
References: <20190906190510.11146-1-honnappa.nagarahalli@arm.com>
 <20200113172518.37815-1-honnappa.nagarahalli@arm.com>
Subject: [dpdk-dev] [PATCH v8 1/6] test/ring: use division for cycle count
 calculation

Use division instead of the shift operation to calculate a more
accurate cycle count.
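To see the effect on made-up numbers: iterations equals 1 << iter_shift, so
the old right shift already computed the integer average, but it truncated
the fractional cycles that the new double division keeps. A standalone
sketch (hypothetical cycle values, not part of the patch):

#include <inttypes.h>
#include <stdint.h>
#include <stdio.h>

int main(void)
{
	/* Hypothetical: 23 cycles measured over 1 << 3 = 8 iterations. */
	const unsigned int iter_shift = 3;
	const unsigned int iterations = 1u << iter_shift;
	const uint64_t start = 1000, end = 1023;

	/* Old style: the integer shift truncates to 2 cycles/op. */
	printf("shift:    %" PRIu64 "\n", (end - start) >> iter_shift);
	/* New style: division in double reports 2.88 cycles/op. */
	printf("division: %.2f\n", (double)(end - start) / iterations);
	return 0;
}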
Signed-off-by: Honnappa Nagarahalli
Acked-by: Olivier Matz
---
 app/test/test_ring_perf.c | 22 ++++++++++++----------
 1 file changed, 12 insertions(+), 10 deletions(-)

-- 
2.17.1

diff --git a/app/test/test_ring_perf.c b/app/test/test_ring_perf.c
index 70ee46ffe..6c2aca483 100644
--- a/app/test/test_ring_perf.c
+++ b/app/test/test_ring_perf.c
@@ -357,10 +357,10 @@ test_single_enqueue_dequeue(struct rte_ring *r)
 	}
 	const uint64_t mc_end = rte_rdtsc();
 
-	printf("SP/SC single enq/dequeue: %"PRIu64"\n",
-			(sc_end-sc_start) >> iter_shift);
-	printf("MP/MC single enq/dequeue: %"PRIu64"\n",
-			(mc_end-mc_start) >> iter_shift);
+	printf("SP/SC single enq/dequeue: %.2F\n",
+			((double)(sc_end-sc_start)) / iterations);
+	printf("MP/MC single enq/dequeue: %.2F\n",
+			((double)(mc_end-mc_start)) / iterations);
 }
 
 /*
@@ -395,13 +395,15 @@ test_burst_enqueue_dequeue(struct rte_ring *r)
 	}
 	const uint64_t mc_end = rte_rdtsc();
 
-	uint64_t mc_avg = ((mc_end-mc_start) >> iter_shift) / bulk_sizes[sz];
-	uint64_t sc_avg = ((sc_end-sc_start) >> iter_shift) / bulk_sizes[sz];
+	double mc_avg = ((double)(mc_end-mc_start) / iterations) /
+			bulk_sizes[sz];
+	double sc_avg = ((double)(sc_end-sc_start) / iterations) /
+			bulk_sizes[sz];
 
-	printf("SP/SC burst enq/dequeue (size: %u): %"PRIu64"\n", bulk_sizes[sz],
-			sc_avg);
-	printf("MP/MC burst enq/dequeue (size: %u): %"PRIu64"\n", bulk_sizes[sz],
-			mc_avg);
+	printf("SP/SC burst enq/dequeue (size: %u): %.2F\n",
+			bulk_sizes[sz], sc_avg);
+	printf("MP/MC burst enq/dequeue (size: %u): %.2F\n",
+			bulk_sizes[sz], mc_avg);
 	}
 }

From patchwork Mon Jan 13 17:25:14 2020
X-Patchwork-Submitter: Honnappa Nagarahalli
X-Patchwork-Id: 182808
From: Honnappa Nagarahalli
To: olivier.matz@6wind.com, sthemmin@microsoft.com, jerinj@marvell.com,
 bruce.richardson@intel.com, david.marchand@redhat.com,
 pbhagavatula@marvell.com, konstantin.ananyev@intel.com,
 honnappa.nagarahalli@arm.com
Cc: dev@dpdk.org, dharmik.thakkar@arm.com, ruifeng.wang@arm.com,
 gavin.hu@arm.com, nd@arm.com
Date: Mon, 13 Jan 2020 11:25:14 -0600
Message-Id: <20200113172518.37815-3-honnappa.nagarahalli@arm.com>
In-Reply-To: <20200113172518.37815-1-honnappa.nagarahalli@arm.com>
References: <20190906190510.11146-1-honnappa.nagarahalli@arm.com>
 <20200113172518.37815-1-honnappa.nagarahalli@arm.com>
Subject: [dpdk-dev] [PATCH v8 2/6] lib/ring: apis to support configurable
 element size

Current APIs assume ring elements to be pointers. However, in many
use cases, the element size can be different. Add new APIs to support
configurable ring element sizes.
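For orientation, a minimal usage sketch of the APIs this patch adds. The
ring name, element type and counts below are illustrative, and the build is
assumed to allow experimental APIs (see the ALLOW_EXPERIMENTAL_API flag in
the Makefile change that follows):

#include <rte_ring_elem.h>

/* A 16B element; esize must be a multiple of 4 and must be the same
 * value on create and on every enqueue/dequeue call. */
struct event {
	uint32_t id;
	uint32_t payload[3];
};

static int
demo_elem_ring(void)
{
	struct event in = { .id = 1, .payload = {0, 0, 0} }, out;
	struct rte_ring *r;

	r = rte_ring_create_elem("demo_ring", sizeof(struct event),
			1024, SOCKET_ID_ANY, 0);
	if (r == NULL)
		return -1;

	if (rte_ring_enqueue_elem(r, &in, sizeof(in)) != 0 ||
			rte_ring_dequeue_elem(r, &out, sizeof(out)) != 0) {
		rte_ring_free(r);
		return -1;
	}

	rte_ring_free(r);
	return 0;
}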
Signed-off-by: Honnappa Nagarahalli Reviewed-by: Dharmik Thakkar Reviewed-by: Gavin Hu Reviewed-by: Ruifeng Wang --- lib/librte_ring/Makefile | 3 +- lib/librte_ring/meson.build | 4 + lib/librte_ring/rte_ring.c | 41 +- lib/librte_ring/rte_ring.h | 1 + lib/librte_ring/rte_ring_elem.h | 1003 ++++++++++++++++++++++++++ lib/librte_ring/rte_ring_version.map | 2 + 6 files changed, 1045 insertions(+), 9 deletions(-) create mode 100644 lib/librte_ring/rte_ring_elem.h -- 2.17.1 diff --git a/lib/librte_ring/Makefile b/lib/librte_ring/Makefile index 22454b084..917c560ad 100644 --- a/lib/librte_ring/Makefile +++ b/lib/librte_ring/Makefile @@ -6,7 +6,7 @@ include $(RTE_SDK)/mk/rte.vars.mk # library name LIB = librte_ring.a -CFLAGS += $(WERROR_FLAGS) -I$(SRCDIR) -O3 +CFLAGS += $(WERROR_FLAGS) -I$(SRCDIR) -O3 -DALLOW_EXPERIMENTAL_API LDLIBS += -lrte_eal EXPORT_MAP := rte_ring_version.map @@ -16,6 +16,7 @@ SRCS-$(CONFIG_RTE_LIBRTE_RING) := rte_ring.c # install includes SYMLINK-$(CONFIG_RTE_LIBRTE_RING)-include := rte_ring.h \ + rte_ring_elem.h \ rte_ring_generic.h \ rte_ring_c11_mem.h diff --git a/lib/librte_ring/meson.build b/lib/librte_ring/meson.build index ca8a435e9..f2f3ccc88 100644 --- a/lib/librte_ring/meson.build +++ b/lib/librte_ring/meson.build @@ -3,5 +3,9 @@ sources = files('rte_ring.c') headers = files('rte_ring.h', + 'rte_ring_elem.h', 'rte_ring_c11_mem.h', 'rte_ring_generic.h') + +# rte_ring_create_elem and rte_ring_get_memsize_elem are experimental +allow_experimental_apis = true diff --git a/lib/librte_ring/rte_ring.c b/lib/librte_ring/rte_ring.c index d9b308036..3e15dc398 100644 --- a/lib/librte_ring/rte_ring.c +++ b/lib/librte_ring/rte_ring.c @@ -33,6 +33,7 @@ #include #include "rte_ring.h" +#include "rte_ring_elem.h" TAILQ_HEAD(rte_ring_list, rte_tailq_entry); @@ -46,23 +47,38 @@ EAL_REGISTER_TAILQ(rte_ring_tailq) /* return the size of memory occupied by a ring */ ssize_t -rte_ring_get_memsize(unsigned count) +rte_ring_get_memsize_elem(unsigned int esize, unsigned int count) { ssize_t sz; + /* Check if element size is a multiple of 4B */ + if (esize % 4 != 0) { + RTE_LOG(ERR, RING, "element size is not a multiple of 4\n"); + + return -EINVAL; + } + /* count must be a power of 2 */ if ((!POWEROF2(count)) || (count > RTE_RING_SZ_MASK )) { RTE_LOG(ERR, RING, - "Requested size is invalid, must be power of 2, and " - "do not exceed the size limit %u\n", RTE_RING_SZ_MASK); + "Requested number of elements is invalid, must be power of 2, and not exceed %u\n", + RTE_RING_SZ_MASK); + return -EINVAL; } - sz = sizeof(struct rte_ring) + count * sizeof(void *); + sz = sizeof(struct rte_ring) + count * esize; sz = RTE_ALIGN(sz, RTE_CACHE_LINE_SIZE); return sz; } +/* return the size of memory occupied by a ring */ +ssize_t +rte_ring_get_memsize(unsigned count) +{ + return rte_ring_get_memsize_elem(sizeof(void *), count); +} + void rte_ring_reset(struct rte_ring *r) { @@ -114,10 +130,10 @@ rte_ring_init(struct rte_ring *r, const char *name, unsigned count, return 0; } -/* create the ring */ +/* create the ring for a given element size */ struct rte_ring * -rte_ring_create(const char *name, unsigned count, int socket_id, - unsigned flags) +rte_ring_create_elem(const char *name, unsigned int esize, unsigned int count, + int socket_id, unsigned int flags) { char mz_name[RTE_MEMZONE_NAMESIZE]; struct rte_ring *r; @@ -135,7 +151,7 @@ rte_ring_create(const char *name, unsigned count, int socket_id, if (flags & RING_F_EXACT_SZ) count = rte_align32pow2(count + 1); - ring_size = rte_ring_get_memsize(count); + 
ring_size = rte_ring_get_memsize_elem(esize, count); if (ring_size < 0) { rte_errno = ring_size; return NULL; @@ -182,6 +198,15 @@ rte_ring_create(const char *name, unsigned count, int socket_id, return r; } +/* create the ring */ +struct rte_ring * +rte_ring_create(const char *name, unsigned count, int socket_id, + unsigned flags) +{ + return rte_ring_create_elem(name, sizeof(void *), count, socket_id, + flags); +} + /* free the ring */ void rte_ring_free(struct rte_ring *r) diff --git a/lib/librte_ring/rte_ring.h b/lib/librte_ring/rte_ring.h index 2a9f768a1..18fc5d845 100644 --- a/lib/librte_ring/rte_ring.h +++ b/lib/librte_ring/rte_ring.h @@ -216,6 +216,7 @@ int rte_ring_init(struct rte_ring *r, const char *name, unsigned count, */ struct rte_ring *rte_ring_create(const char *name, unsigned count, int socket_id, unsigned flags); + /** * De-allocate all memory used by the ring. * diff --git a/lib/librte_ring/rte_ring_elem.h b/lib/librte_ring/rte_ring_elem.h new file mode 100644 index 000000000..15d79bf2a --- /dev/null +++ b/lib/librte_ring/rte_ring_elem.h @@ -0,0 +1,1003 @@ +/* SPDX-License-Identifier: BSD-3-Clause + * + * Copyright (c) 2019 Arm Limited + * Copyright (c) 2010-2017 Intel Corporation + * Copyright (c) 2007-2009 Kip Macy kmacy@freebsd.org + * All rights reserved. + * Derived from FreeBSD's bufring.h + * Used as BSD-3 Licensed with permission from Kip Macy. + */ + +#ifndef _RTE_RING_ELEM_H_ +#define _RTE_RING_ELEM_H_ + +/** + * @file + * RTE Ring with user defined element size + */ + +#ifdef __cplusplus +extern "C" { +#endif + +#include +#include +#include +#include +#include +#include +#include +#include +#include +#include +#include +#include +#include + +#include "rte_ring.h" + +/** + * @warning + * @b EXPERIMENTAL: this API may change without prior notice + * + * Calculate the memory size needed for a ring with given element size + * + * This function returns the number of bytes needed for a ring, given + * the number of elements in it and the size of the element. This value + * is the sum of the size of the structure rte_ring and the size of the + * memory needed for storing the elements. The value is aligned to a cache + * line size. + * + * @param esize + * The size of ring element, in bytes. It must be a multiple of 4. + * @param count + * The number of elements in the ring (must be a power of 2). + * @return + * - The memory size needed for the ring on success. + * - -EINVAL - esize is not a multiple of 4 or count provided is not a + * power of 2. + */ +__rte_experimental +ssize_t rte_ring_get_memsize_elem(unsigned int esize, unsigned int count); + +/** + * @warning + * @b EXPERIMENTAL: this API may change without prior notice + * + * Create a new ring named *name* that stores elements with given size. + * + * This function uses ``memzone_reserve()`` to allocate memory. Then it + * calls rte_ring_init() to initialize an empty ring. + * + * The new ring size is set to *count*, which must be a power of + * two. Water marking is disabled by default. The real usable ring size + * is *count-1* instead of *count* to differentiate a free ring from an + * empty ring. + * + * The ring is added in RTE_TAILQ_RING list. + * + * @param name + * The name of the ring. + * @param esize + * The size of ring element, in bytes. It must be a multiple of 4. + * @param count + * The number of elements in the ring (must be a power of 2). + * @param socket_id + * The *socket_id* argument is the socket identifier in case of + * NUMA. 
The value can be *SOCKET_ID_ANY* if there is no NUMA + * constraint for the reserved zone. + * @param flags + * An OR of the following: + * - RING_F_SP_ENQ: If this flag is set, the default behavior when + * using ``rte_ring_enqueue()`` or ``rte_ring_enqueue_bulk()`` + * is "single-producer". Otherwise, it is "multi-producers". + * - RING_F_SC_DEQ: If this flag is set, the default behavior when + * using ``rte_ring_dequeue()`` or ``rte_ring_dequeue_bulk()`` + * is "single-consumer". Otherwise, it is "multi-consumers". + * @return + * On success, the pointer to the new allocated ring. NULL on error with + * rte_errno set appropriately. Possible errno values include: + * - E_RTE_NO_CONFIG - function could not get pointer to rte_config structure + * - E_RTE_SECONDARY - function was called from a secondary process instance + * - EINVAL - esize is not a multiple of 4 or count provided is not a + * power of 2. + * - ENOSPC - the maximum number of memzones has already been allocated + * - EEXIST - a memzone with the same name already exists + * - ENOMEM - no appropriate memory area found in which to create memzone + */ +__rte_experimental +struct rte_ring *rte_ring_create_elem(const char *name, unsigned int esize, + unsigned int count, int socket_id, unsigned int flags); + +static __rte_always_inline void +enqueue_elems_32(struct rte_ring *r, const uint32_t size, uint32_t idx, + const void *obj_table, uint32_t n) +{ + unsigned int i; + uint32_t *ring = (uint32_t *)&r[1]; + const uint32_t *obj = (const uint32_t *)obj_table; + if (likely(idx + n < size)) { + for (i = 0; i < (n & ~0x7); i += 8, idx += 8) { + ring[idx] = obj[i]; + ring[idx + 1] = obj[i + 1]; + ring[idx + 2] = obj[i + 2]; + ring[idx + 3] = obj[i + 3]; + ring[idx + 4] = obj[i + 4]; + ring[idx + 5] = obj[i + 5]; + ring[idx + 6] = obj[i + 6]; + ring[idx + 7] = obj[i + 7]; + } + switch (n & 0x7) { + case 7: + ring[idx++] = obj[i++]; /* fallthrough */ + case 6: + ring[idx++] = obj[i++]; /* fallthrough */ + case 5: + ring[idx++] = obj[i++]; /* fallthrough */ + case 4: + ring[idx++] = obj[i++]; /* fallthrough */ + case 3: + ring[idx++] = obj[i++]; /* fallthrough */ + case 2: + ring[idx++] = obj[i++]; /* fallthrough */ + case 1: + ring[idx++] = obj[i++]; /* fallthrough */ + } + } else { + for (i = 0; idx < size; i++, idx++) + ring[idx] = obj[i]; + /* Start at the beginning */ + for (idx = 0; i < n; i++, idx++) + ring[idx] = obj[i]; + } +} + +static __rte_always_inline void +enqueue_elems_64(struct rte_ring *r, uint32_t prod_head, + const void *obj_table, uint32_t n) +{ + unsigned int i; + const uint32_t size = r->size; + uint32_t idx = prod_head & r->mask; + uint64_t *ring = (uint64_t *)&r[1]; + const uint64_t *obj = (const uint64_t *)obj_table; + if (likely(idx + n < size)) { + for (i = 0; i < (n & ~0x3); i += 4, idx += 4) { + ring[idx] = obj[i]; + ring[idx + 1] = obj[i + 1]; + ring[idx + 2] = obj[i + 2]; + ring[idx + 3] = obj[i + 3]; + } + switch (n & 0x3) { + case 3: + ring[idx++] = obj[i++]; /* fallthrough */ + case 2: + ring[idx++] = obj[i++]; /* fallthrough */ + case 1: + ring[idx++] = obj[i++]; + } + } else { + for (i = 0; idx < size; i++, idx++) + ring[idx] = obj[i]; + /* Start at the beginning */ + for (idx = 0; i < n; i++, idx++) + ring[idx] = obj[i]; + } +} + +static __rte_always_inline void +enqueue_elems_128(struct rte_ring *r, uint32_t prod_head, + const void *obj_table, uint32_t n) +{ + unsigned int i; + const uint32_t size = r->size; + uint32_t idx = prod_head & r->mask; + rte_int128_t *ring = (rte_int128_t *)&r[1]; + const 
rte_int128_t *obj = (const rte_int128_t *)obj_table; + if (likely(idx + n < size)) { + for (i = 0; i < (n & ~0x1); i += 2, idx += 2) + memcpy((void *)(ring + idx), + (const void *)(obj + i), 32); + switch (n & 0x1) { + case 1: + memcpy((void *)(ring + idx), + (const void *)(obj + i), 16); + } + } else { + for (i = 0; idx < size; i++, idx++) + memcpy((void *)(ring + idx), + (const void *)(obj + i), 16); + /* Start at the beginning */ + for (idx = 0; i < n; i++, idx++) + memcpy((void *)(ring + idx), + (const void *)(obj + i), 16); + } +} + +/* the actual enqueue of elements on the ring. + * Placed here since identical code needed in both + * single and multi producer enqueue functions. + */ +static __rte_always_inline void +enqueue_elems(struct rte_ring *r, uint32_t prod_head, const void *obj_table, + uint32_t esize, uint32_t num) +{ + /* 8B and 16B copies implemented individually to retain + * the current performance. + */ + if (esize == 8) + enqueue_elems_64(r, prod_head, obj_table, num); + else if (esize == 16) + enqueue_elems_128(r, prod_head, obj_table, num); + else { + uint32_t idx, scale, nr_idx, nr_num, nr_size; + + /* Normalize to uint32_t */ + scale = esize / sizeof(uint32_t); + nr_num = num * scale; + idx = prod_head & r->mask; + nr_idx = idx * scale; + nr_size = r->size * scale; + enqueue_elems_32(r, nr_size, nr_idx, obj_table, nr_num); + } +} + +static __rte_always_inline void +dequeue_elems_32(struct rte_ring *r, const uint32_t size, uint32_t idx, + void *obj_table, uint32_t n) +{ + unsigned int i; + uint32_t *ring = (uint32_t *)&r[1]; + uint32_t *obj = (uint32_t *)obj_table; + if (likely(idx + n < size)) { + for (i = 0; i < (n & ~0x7); i += 8, idx += 8) { + obj[i] = ring[idx]; + obj[i + 1] = ring[idx + 1]; + obj[i + 2] = ring[idx + 2]; + obj[i + 3] = ring[idx + 3]; + obj[i + 4] = ring[idx + 4]; + obj[i + 5] = ring[idx + 5]; + obj[i + 6] = ring[idx + 6]; + obj[i + 7] = ring[idx + 7]; + } + switch (n & 0x7) { + case 7: + obj[i++] = ring[idx++]; /* fallthrough */ + case 6: + obj[i++] = ring[idx++]; /* fallthrough */ + case 5: + obj[i++] = ring[idx++]; /* fallthrough */ + case 4: + obj[i++] = ring[idx++]; /* fallthrough */ + case 3: + obj[i++] = ring[idx++]; /* fallthrough */ + case 2: + obj[i++] = ring[idx++]; /* fallthrough */ + case 1: + obj[i++] = ring[idx++]; /* fallthrough */ + } + } else { + for (i = 0; idx < size; i++, idx++) + obj[i] = ring[idx]; + /* Start at the beginning */ + for (idx = 0; i < n; i++, idx++) + obj[i] = ring[idx]; + } +} + +static __rte_always_inline void +dequeue_elems_64(struct rte_ring *r, uint32_t prod_head, + void *obj_table, uint32_t n) +{ + unsigned int i; + const uint32_t size = r->size; + uint32_t idx = prod_head & r->mask; + uint64_t *ring = (uint64_t *)&r[1]; + uint64_t *obj = (uint64_t *)obj_table; + if (likely(idx + n < size)) { + for (i = 0; i < (n & ~0x3); i += 4, idx += 4) { + obj[i] = ring[idx]; + obj[i + 1] = ring[idx + 1]; + obj[i + 2] = ring[idx + 2]; + obj[i + 3] = ring[idx + 3]; + } + switch (n & 0x3) { + case 3: + obj[i++] = ring[idx++]; /* fallthrough */ + case 2: + obj[i++] = ring[idx++]; /* fallthrough */ + case 1: + obj[i++] = ring[idx++]; /* fallthrough */ + } + } else { + for (i = 0; idx < size; i++, idx++) + obj[i] = ring[idx]; + /* Start at the beginning */ + for (idx = 0; i < n; i++, idx++) + obj[i] = ring[idx]; + } +} + +static __rte_always_inline void +dequeue_elems_128(struct rte_ring *r, uint32_t prod_head, + void *obj_table, uint32_t n) +{ + unsigned int i; + const uint32_t size = r->size; + uint32_t idx = prod_head 
& r->mask; + rte_int128_t *ring = (rte_int128_t *)&r[1]; + rte_int128_t *obj = (rte_int128_t *)obj_table; + if (likely(idx + n < size)) { + for (i = 0; i < (n & ~0x1); i += 2, idx += 2) + memcpy((void *)(obj + i), (void *)(ring + idx), 32); + switch (n & 0x1) { + case 1: + memcpy((void *)(obj + i), (void *)(ring + idx), 16); + } + } else { + for (i = 0; idx < size; i++, idx++) + memcpy((void *)(obj + i), (void *)(ring + idx), 16); + /* Start at the beginning */ + for (idx = 0; i < n; i++, idx++) + memcpy((void *)(obj + i), (void *)(ring + idx), 16); + } +} + +/* the actual dequeue of elements from the ring. + * Placed here since identical code needed in both + * single and multi producer enqueue functions. + */ +static __rte_always_inline void +dequeue_elems(struct rte_ring *r, uint32_t cons_head, void *obj_table, + uint32_t esize, uint32_t num) +{ + /* 8B and 16B copies implemented individually to retain + * the current performance. + */ + if (esize == 8) + dequeue_elems_64(r, cons_head, obj_table, num); + else if (esize == 16) + dequeue_elems_128(r, cons_head, obj_table, num); + else { + uint32_t idx, scale, nr_idx, nr_num, nr_size; + + /* Normalize to uint32_t */ + scale = esize / sizeof(uint32_t); + nr_num = num * scale; + idx = cons_head & r->mask; + nr_idx = idx * scale; + nr_size = r->size * scale; + dequeue_elems_32(r, nr_size, nr_idx, obj_table, nr_num); + } +} + +/* Between load and load. there might be cpu reorder in weak model + * (powerpc/arm). + * There are 2 choices for the users + * 1.use rmb() memory barrier + * 2.use one-direction load_acquire/store_release barrier,defined by + * CONFIG_RTE_USE_C11_MEM_MODEL=y + * It depends on performance test results. + * By default, move common functions to rte_ring_generic.h + */ +#ifdef RTE_USE_C11_MEM_MODEL +#include "rte_ring_c11_mem.h" +#else +#include "rte_ring_generic.h" +#endif + +/** + * @internal Enqueue several objects on the ring + * + * @param r + * A pointer to the ring structure. + * @param obj_table + * A pointer to a table of void * pointers (objects). + * @param esize + * The size of ring element, in bytes. It must be a multiple of 4. + * This must be the same value used while creating the ring. Otherwise + * the results are undefined. + * @param n + * The number of objects to add in the ring from the obj_table. + * @param behavior + * RTE_RING_QUEUE_FIXED: Enqueue a fixed number of items from a ring + * RTE_RING_QUEUE_VARIABLE: Enqueue as many items as possible from ring + * @param is_sp + * Indicates whether to use single producer or multi-producer head update + * @param free_space + * returns the amount of space after the enqueue operation has finished + * @return + * Actual number of objects enqueued. + * If behavior == RTE_RING_QUEUE_FIXED, this will be 0 or n only. + */ +static __rte_always_inline unsigned int +__rte_ring_do_enqueue_elem(struct rte_ring *r, const void *obj_table, + unsigned int esize, unsigned int n, + enum rte_ring_queue_behavior behavior, unsigned int is_sp, + unsigned int *free_space) +{ + uint32_t prod_head, prod_next; + uint32_t free_entries; + + n = __rte_ring_move_prod_head(r, is_sp, n, behavior, + &prod_head, &prod_next, &free_entries); + if (n == 0) + goto end; + + enqueue_elems(r, prod_head, obj_table, esize, n); + + update_tail(&r->prod, prod_head, prod_next, is_sp, 1); +end: + if (free_space != NULL) + *free_space = free_entries - n; + return n; +} + +/** + * @internal Dequeue several objects from the ring + * + * @param r + * A pointer to the ring structure. 
+ * @param obj_table + * A pointer to a table of void * pointers (objects). + * @param esize + * The size of ring element, in bytes. It must be a multiple of 4. + * This must be the same value used while creating the ring. Otherwise + * the results are undefined. + * @param n + * The number of objects to pull from the ring. + * @param behavior + * RTE_RING_QUEUE_FIXED: Dequeue a fixed number of items from a ring + * RTE_RING_QUEUE_VARIABLE: Dequeue as many items as possible from ring + * @param is_sc + * Indicates whether to use single consumer or multi-consumer head update + * @param available + * returns the number of remaining ring entries after the dequeue has finished + * @return + * - Actual number of objects dequeued. + * If behavior == RTE_RING_QUEUE_FIXED, this will be 0 or n only. + */ +static __rte_always_inline unsigned int +__rte_ring_do_dequeue_elem(struct rte_ring *r, void *obj_table, + unsigned int esize, unsigned int n, + enum rte_ring_queue_behavior behavior, unsigned int is_sc, + unsigned int *available) +{ + uint32_t cons_head, cons_next; + uint32_t entries; + + n = __rte_ring_move_cons_head(r, (int)is_sc, n, behavior, + &cons_head, &cons_next, &entries); + if (n == 0) + goto end; + + dequeue_elems(r, cons_head, obj_table, esize, n); + + update_tail(&r->cons, cons_head, cons_next, is_sc, 0); + +end: + if (available != NULL) + *available = entries - n; + return n; +} + +/** + * Enqueue several objects on the ring (multi-producers safe). + * + * This function uses a "compare and set" instruction to move the + * producer index atomically. + * + * @param r + * A pointer to the ring structure. + * @param obj_table + * A pointer to a table of void * pointers (objects). + * @param esize + * The size of ring element, in bytes. It must be a multiple of 4. + * This must be the same value used while creating the ring. Otherwise + * the results are undefined. + * @param n + * The number of objects to add in the ring from the obj_table. + * @param free_space + * if non-NULL, returns the amount of space in the ring after the + * enqueue operation has finished. + * @return + * The number of objects enqueued, either 0 or n + */ +static __rte_always_inline unsigned int +rte_ring_mp_enqueue_bulk_elem(struct rte_ring *r, const void *obj_table, + unsigned int esize, unsigned int n, unsigned int *free_space) +{ + return __rte_ring_do_enqueue_elem(r, obj_table, esize, n, + RTE_RING_QUEUE_FIXED, __IS_MP, free_space); +} + +/** + * Enqueue several objects on a ring + * + * @warning This API is NOT multi-producers safe + * + * @param r + * A pointer to the ring structure. + * @param obj_table + * A pointer to a table of void * pointers (objects). + * @param esize + * The size of ring element, in bytes. It must be a multiple of 4. + * This must be the same value used while creating the ring. Otherwise + * the results are undefined. + * @param n + * The number of objects to add in the ring from the obj_table. + * @param free_space + * if non-NULL, returns the amount of space in the ring after the + * enqueue operation has finished. + * @return + * The number of objects enqueued, either 0 or n + */ +static __rte_always_inline unsigned int +rte_ring_sp_enqueue_bulk_elem(struct rte_ring *r, const void *obj_table, + unsigned int esize, unsigned int n, unsigned int *free_space) +{ + return __rte_ring_do_enqueue_elem(r, obj_table, esize, n, + RTE_RING_QUEUE_FIXED, __IS_SP, free_space); +} + +/** + * Enqueue several objects on a ring. 
+ * + * This function calls the multi-producer or the single-producer + * version depending on the default behavior that was specified at + * ring creation time (see flags). + * + * @param r + * A pointer to the ring structure. + * @param obj_table + * A pointer to a table of void * pointers (objects). + * @param esize + * The size of ring element, in bytes. It must be a multiple of 4. + * This must be the same value used while creating the ring. Otherwise + * the results are undefined. + * @param n + * The number of objects to add in the ring from the obj_table. + * @param free_space + * if non-NULL, returns the amount of space in the ring after the + * enqueue operation has finished. + * @return + * The number of objects enqueued, either 0 or n + */ +static __rte_always_inline unsigned int +rte_ring_enqueue_bulk_elem(struct rte_ring *r, const void *obj_table, + unsigned int esize, unsigned int n, unsigned int *free_space) +{ + return __rte_ring_do_enqueue_elem(r, obj_table, esize, n, + RTE_RING_QUEUE_FIXED, r->prod.single, free_space); +} + +/** + * Enqueue one object on a ring (multi-producers safe). + * + * This function uses a "compare and set" instruction to move the + * producer index atomically. + * + * @param r + * A pointer to the ring structure. + * @param obj + * A pointer to the object to be added. + * @param esize + * The size of ring element, in bytes. It must be a multiple of 4. + * This must be the same value used while creating the ring. Otherwise + * the results are undefined. + * @return + * - 0: Success; objects enqueued. + * - -ENOBUFS: Not enough room in the ring to enqueue; no object is enqueued. + */ +static __rte_always_inline int +rte_ring_mp_enqueue_elem(struct rte_ring *r, void *obj, unsigned int esize) +{ + return rte_ring_mp_enqueue_bulk_elem(r, obj, esize, 1, NULL) ? 0 : + -ENOBUFS; +} + +/** + * Enqueue one object on a ring + * + * @warning This API is NOT multi-producers safe + * + * @param r + * A pointer to the ring structure. + * @param obj + * A pointer to the object to be added. + * @param esize + * The size of ring element, in bytes. It must be a multiple of 4. + * This must be the same value used while creating the ring. Otherwise + * the results are undefined. + * @return + * - 0: Success; objects enqueued. + * - -ENOBUFS: Not enough room in the ring to enqueue; no object is enqueued. + */ +static __rte_always_inline int +rte_ring_sp_enqueue_elem(struct rte_ring *r, void *obj, unsigned int esize) +{ + return rte_ring_sp_enqueue_bulk_elem(r, obj, esize, 1, NULL) ? 0 : + -ENOBUFS; +} + +/** + * Enqueue one object on a ring. + * + * This function calls the multi-producer or the single-producer + * version, depending on the default behaviour that was specified at + * ring creation time (see flags). + * + * @param r + * A pointer to the ring structure. + * @param obj + * A pointer to the object to be added. + * @param esize + * The size of ring element, in bytes. It must be a multiple of 4. + * This must be the same value used while creating the ring. Otherwise + * the results are undefined. + * @return + * - 0: Success; objects enqueued. + * - -ENOBUFS: Not enough room in the ring to enqueue; no object is enqueued. + */ +static __rte_always_inline int +rte_ring_enqueue_elem(struct rte_ring *r, void *obj, unsigned int esize) +{ + return rte_ring_enqueue_bulk_elem(r, obj, esize, 1, NULL) ? 0 : + -ENOBUFS; +} + +/** + * Dequeue several objects from a ring (multi-consumers safe). 
+ * + * This function uses a "compare and set" instruction to move the + * consumer index atomically. + * + * @param r + * A pointer to the ring structure. + * @param obj_table + * A pointer to a table of void * pointers (objects) that will be filled. + * @param esize + * The size of ring element, in bytes. It must be a multiple of 4. + * This must be the same value used while creating the ring. Otherwise + * the results are undefined. + * @param n + * The number of objects to dequeue from the ring to the obj_table. + * @param available + * If non-NULL, returns the number of remaining ring entries after the + * dequeue has finished. + * @return + * The number of objects dequeued, either 0 or n + */ +static __rte_always_inline unsigned int +rte_ring_mc_dequeue_bulk_elem(struct rte_ring *r, void *obj_table, + unsigned int esize, unsigned int n, unsigned int *available) +{ + return __rte_ring_do_dequeue_elem(r, obj_table, esize, n, + RTE_RING_QUEUE_FIXED, __IS_MC, available); +} + +/** + * Dequeue several objects from a ring (NOT multi-consumers safe). + * + * @param r + * A pointer to the ring structure. + * @param obj_table + * A pointer to a table of void * pointers (objects) that will be filled. + * @param esize + * The size of ring element, in bytes. It must be a multiple of 4. + * This must be the same value used while creating the ring. Otherwise + * the results are undefined. + * @param n + * The number of objects to dequeue from the ring to the obj_table, + * must be strictly positive. + * @param available + * If non-NULL, returns the number of remaining ring entries after the + * dequeue has finished. + * @return + * The number of objects dequeued, either 0 or n + */ +static __rte_always_inline unsigned int +rte_ring_sc_dequeue_bulk_elem(struct rte_ring *r, void *obj_table, + unsigned int esize, unsigned int n, unsigned int *available) +{ + return __rte_ring_do_dequeue_elem(r, obj_table, esize, n, + RTE_RING_QUEUE_FIXED, __IS_SC, available); +} + +/** + * Dequeue several objects from a ring. + * + * This function calls the multi-consumers or the single-consumer + * version, depending on the default behaviour that was specified at + * ring creation time (see flags). + * + * @param r + * A pointer to the ring structure. + * @param obj_table + * A pointer to a table of void * pointers (objects) that will be filled. + * @param esize + * The size of ring element, in bytes. It must be a multiple of 4. + * This must be the same value used while creating the ring. Otherwise + * the results are undefined. + * @param n + * The number of objects to dequeue from the ring to the obj_table. + * @param available + * If non-NULL, returns the number of remaining ring entries after the + * dequeue has finished. + * @return + * The number of objects dequeued, either 0 or n + */ +static __rte_always_inline unsigned int +rte_ring_dequeue_bulk_elem(struct rte_ring *r, void *obj_table, + unsigned int esize, unsigned int n, unsigned int *available) +{ + return __rte_ring_do_dequeue_elem(r, obj_table, esize, n, + RTE_RING_QUEUE_FIXED, r->cons.single, available); +} + +/** + * Dequeue one object from a ring (multi-consumers safe). + * + * This function uses a "compare and set" instruction to move the + * consumer index atomically. + * + * @param r + * A pointer to the ring structure. + * @param obj_p + * A pointer to a void * pointer (object) that will be filled. + * @param esize + * The size of ring element, in bytes. It must be a multiple of 4. + * This must be the same value used while creating the ring. 
Otherwise + * the results are undefined. + * @return + * - 0: Success; objects dequeued. + * - -ENOENT: Not enough entries in the ring to dequeue; no object is + * dequeued. + */ +static __rte_always_inline int +rte_ring_mc_dequeue_elem(struct rte_ring *r, void *obj_p, + unsigned int esize) +{ + return rte_ring_mc_dequeue_bulk_elem(r, obj_p, esize, 1, NULL) ? 0 : + -ENOENT; +} + +/** + * Dequeue one object from a ring (NOT multi-consumers safe). + * + * @param r + * A pointer to the ring structure. + * @param obj_p + * A pointer to a void * pointer (object) that will be filled. + * @param esize + * The size of ring element, in bytes. It must be a multiple of 4. + * This must be the same value used while creating the ring. Otherwise + * the results are undefined. + * @return + * - 0: Success; objects dequeued. + * - -ENOENT: Not enough entries in the ring to dequeue, no object is + * dequeued. + */ +static __rte_always_inline int +rte_ring_sc_dequeue_elem(struct rte_ring *r, void *obj_p, + unsigned int esize) +{ + return rte_ring_sc_dequeue_bulk_elem(r, obj_p, esize, 1, NULL) ? 0 : + -ENOENT; +} + +/** + * Dequeue one object from a ring. + * + * This function calls the multi-consumers or the single-consumer + * version depending on the default behaviour that was specified at + * ring creation time (see flags). + * + * @param r + * A pointer to the ring structure. + * @param obj_p + * A pointer to a void * pointer (object) that will be filled. + * @param esize + * The size of ring element, in bytes. It must be a multiple of 4. + * This must be the same value used while creating the ring. Otherwise + * the results are undefined. + * @return + * - 0: Success, objects dequeued. + * - -ENOENT: Not enough entries in the ring to dequeue, no object is + * dequeued. + */ +static __rte_always_inline int +rte_ring_dequeue_elem(struct rte_ring *r, void *obj_p, unsigned int esize) +{ + return rte_ring_dequeue_bulk_elem(r, obj_p, esize, 1, NULL) ? 0 : + -ENOENT; +} + +/** + * Enqueue several objects on the ring (multi-producers safe). + * + * This function uses a "compare and set" instruction to move the + * producer index atomically. + * + * @param r + * A pointer to the ring structure. + * @param obj_table + * A pointer to a table of void * pointers (objects). + * @param esize + * The size of ring element, in bytes. It must be a multiple of 4. + * This must be the same value used while creating the ring. Otherwise + * the results are undefined. + * @param n + * The number of objects to add in the ring from the obj_table. + * @param free_space + * if non-NULL, returns the amount of space in the ring after the + * enqueue operation has finished. + * @return + * - n: Actual number of objects enqueued. + */ +static __rte_always_inline unsigned +rte_ring_mp_enqueue_burst_elem(struct rte_ring *r, const void *obj_table, + unsigned int esize, unsigned int n, unsigned int *free_space) +{ + return __rte_ring_do_enqueue_elem(r, obj_table, esize, n, + RTE_RING_QUEUE_VARIABLE, __IS_MP, free_space); +} + +/** + * Enqueue several objects on a ring + * + * @warning This API is NOT multi-producers safe + * + * @param r + * A pointer to the ring structure. + * @param obj_table + * A pointer to a table of void * pointers (objects). + * @param esize + * The size of ring element, in bytes. It must be a multiple of 4. + * This must be the same value used while creating the ring. Otherwise + * the results are undefined. + * @param n + * The number of objects to add in the ring from the obj_table. 
+ * @param free_space + * if non-NULL, returns the amount of space in the ring after the + * enqueue operation has finished. + * @return + * - n: Actual number of objects enqueued. + */ +static __rte_always_inline unsigned +rte_ring_sp_enqueue_burst_elem(struct rte_ring *r, const void *obj_table, + unsigned int esize, unsigned int n, unsigned int *free_space) +{ + return __rte_ring_do_enqueue_elem(r, obj_table, esize, n, + RTE_RING_QUEUE_VARIABLE, __IS_SP, free_space); +} + +/** + * Enqueue several objects on a ring. + * + * This function calls the multi-producer or the single-producer + * version depending on the default behavior that was specified at + * ring creation time (see flags). + * + * @param r + * A pointer to the ring structure. + * @param obj_table + * A pointer to a table of void * pointers (objects). + * @param esize + * The size of ring element, in bytes. It must be a multiple of 4. + * This must be the same value used while creating the ring. Otherwise + * the results are undefined. + * @param n + * The number of objects to add in the ring from the obj_table. + * @param free_space + * if non-NULL, returns the amount of space in the ring after the + * enqueue operation has finished. + * @return + * - n: Actual number of objects enqueued. + */ +static __rte_always_inline unsigned +rte_ring_enqueue_burst_elem(struct rte_ring *r, const void *obj_table, + unsigned int esize, unsigned int n, unsigned int *free_space) +{ + return __rte_ring_do_enqueue_elem(r, obj_table, esize, n, + RTE_RING_QUEUE_VARIABLE, r->prod.single, free_space); +} + +/** + * Dequeue several objects from a ring (multi-consumers safe). When the request + * objects are more than the available objects, only dequeue the actual number + * of objects + * + * This function uses a "compare and set" instruction to move the + * consumer index atomically. + * + * @param r + * A pointer to the ring structure. + * @param obj_table + * A pointer to a table of void * pointers (objects) that will be filled. + * @param esize + * The size of ring element, in bytes. It must be a multiple of 4. + * This must be the same value used while creating the ring. Otherwise + * the results are undefined. + * @param n + * The number of objects to dequeue from the ring to the obj_table. + * @param available + * If non-NULL, returns the number of remaining ring entries after the + * dequeue has finished. + * @return + * - n: Actual number of objects dequeued, 0 if ring is empty + */ +static __rte_always_inline unsigned +rte_ring_mc_dequeue_burst_elem(struct rte_ring *r, void *obj_table, + unsigned int esize, unsigned int n, unsigned int *available) +{ + return __rte_ring_do_dequeue_elem(r, obj_table, esize, n, + RTE_RING_QUEUE_VARIABLE, __IS_MC, available); +} + +/** + * Dequeue several objects from a ring (NOT multi-consumers safe).When the + * request objects are more than the available objects, only dequeue the + * actual number of objects + * + * @param r + * A pointer to the ring structure. + * @param obj_table + * A pointer to a table of void * pointers (objects) that will be filled. + * @param esize + * The size of ring element, in bytes. It must be a multiple of 4. + * This must be the same value used while creating the ring. Otherwise + * the results are undefined. + * @param n + * The number of objects to dequeue from the ring to the obj_table. + * @param available + * If non-NULL, returns the number of remaining ring entries after the + * dequeue has finished. 
+ * @return
+ *   - n: Actual number of objects dequeued, 0 if ring is empty
+ */
+static __rte_always_inline unsigned
+rte_ring_sc_dequeue_burst_elem(struct rte_ring *r, void *obj_table,
+		unsigned int esize, unsigned int n, unsigned int *available)
+{
+	return __rte_ring_do_dequeue_elem(r, obj_table, esize, n,
+			RTE_RING_QUEUE_VARIABLE, __IS_SC, available);
+}
+
+/**
+ * Dequeue multiple objects from a ring up to a maximum number.
+ *
+ * This function calls the multi-consumers or the single-consumer
+ * version, depending on the default behaviour that was specified at
+ * ring creation time (see flags).
+ *
+ * @param r
+ *   A pointer to the ring structure.
+ * @param obj_table
+ *   A pointer to a table of void * pointers (objects) that will be filled.
+ * @param esize
+ *   The size of ring element, in bytes. It must be a multiple of 4.
+ *   This must be the same value used while creating the ring. Otherwise
+ *   the results are undefined.
+ * @param n
+ *   The number of objects to dequeue from the ring to the obj_table.
+ * @param available
+ *   If non-NULL, returns the number of remaining ring entries after the
+ *   dequeue has finished.
+ * @return
+ *   - Number of objects dequeued
+ */
+static __rte_always_inline unsigned int
+rte_ring_dequeue_burst_elem(struct rte_ring *r, void *obj_table,
+		unsigned int esize, unsigned int n, unsigned int *available)
+{
+	return __rte_ring_do_dequeue_elem(r, obj_table, esize, n,
+			RTE_RING_QUEUE_VARIABLE,
+			r->cons.single, available);
+}
+
+#ifdef __cplusplus
+}
+#endif
+
+#endif /* _RTE_RING_ELEM_H_ */
diff --git a/lib/librte_ring/rte_ring_version.map b/lib/librte_ring/rte_ring_version.map
index 89d84bcf4..7a5328dd5 100644
--- a/lib/librte_ring/rte_ring_version.map
+++ b/lib/librte_ring/rte_ring_version.map
@@ -15,6 +15,8 @@ DPDK_20.0 {
 EXPERIMENTAL {
 	global:
 
+	rte_ring_create_elem;
+	rte_ring_get_memsize_elem;
 	rte_ring_reset;
 
 };
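Note that rte_ring_create_elem and rte_ring_get_memsize_elem are exported
in the EXPERIMENTAL section above, so callers must opt in to experimental
APIs; this patch does that for the library itself via
-DALLOW_EXPERIMENTAL_API in the Makefile. A minimal caller-side sketch,
assuming the standard opt-in macro and illustrative sizes:

#define ALLOW_EXPERIMENTAL_API	/* opt in before including the header */
#include <rte_ring_elem.h>

/* Illustrative: bytes needed for a ring of 2048 64-byte elements. */
static ssize_t
demo_ring_memsize(void)
{
	return rte_ring_get_memsize_elem(64, 2048);
}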
From patchwork Mon Jan 13 17:25:15 2020
X-Patchwork-Submitter: Honnappa Nagarahalli
X-Patchwork-Id: 182809
From: Honnappa Nagarahalli
To: olivier.matz@6wind.com, sthemmin@microsoft.com, jerinj@marvell.com,
 bruce.richardson@intel.com, david.marchand@redhat.com,
 pbhagavatula@marvell.com, konstantin.ananyev@intel.com,
 honnappa.nagarahalli@arm.com
Cc: dev@dpdk.org, dharmik.thakkar@arm.com, ruifeng.wang@arm.com,
 gavin.hu@arm.com, nd@arm.com
Date: Mon, 13 Jan 2020 11:25:15 -0600
Message-Id: <20200113172518.37815-4-honnappa.nagarahalli@arm.com>
In-Reply-To: <20200113172518.37815-1-honnappa.nagarahalli@arm.com>
References: <20190906190510.11146-1-honnappa.nagarahalli@arm.com>
 <20200113172518.37815-1-honnappa.nagarahalli@arm.com>
Subject: [dpdk-dev] [PATCH v8 3/6] test/ring: add functional tests for
 rte_ring_xxx_elem APIs

Add basic infrastructure to test rte_ring_xxx_elem APIs. Adjust the
existing test cases to test for various ring element sizes.
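The tests are driven over an array of element sizes, with -1 standing for
the legacy pointer-based APIs (see the esize[] array and the
test_ring_create()/test_ring_enqueue() wrappers in the diff below). A
condensed sketch of that driver pattern, with a hypothetical helper name
and the per-ring checks elided:

#include <rte_ring.h>
#include "test_ring.h"

/* -1 selects the legacy pointer APIs; the rest use the _elem variants. */
static const int demo_esizes[] = {-1, 4, 8, 16, 20};

static int
run_over_element_sizes(void)
{
	unsigned int i;

	for (i = 0; i < RTE_DIM(demo_esizes); i++) {
		struct rte_ring *r = test_ring_create("esize_demo",
				demo_esizes[i], 4096 /* RING_SIZE */,
				SOCKET_ID_ANY, 0);
		if (r == NULL)
			return -1;
		/* ... exercise test_ring_enqueue()/test_ring_dequeue()
		 * with demo_esizes[i] and the various api_type flags ... */
		rte_ring_free(r);
	}
	return 0;
}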
Signed-off-by: Honnappa Nagarahalli Reviewed-by: Gavin Hu --- app/test/test_ring.c | 1244 ++++++++++++++++++------------------------ app/test/test_ring.h | 187 +++++++ 2 files changed, 722 insertions(+), 709 deletions(-) create mode 100644 app/test/test_ring.h -- 2.17.1 diff --git a/app/test/test_ring.c b/app/test/test_ring.c index aaf1e70ad..649f65d38 100644 --- a/app/test/test_ring.c +++ b/app/test/test_ring.c @@ -23,11 +23,13 @@ #include #include #include +#include #include #include #include #include "test.h" +#include "test_ring.h" /* * Ring @@ -55,8 +57,6 @@ #define RING_SIZE 4096 #define MAX_BULK 32 -static rte_atomic32_t synchro; - #define TEST_RING_VERIFY(exp) \ if (!(exp)) { \ printf("error at %s:%d\tcondition " #exp " failed\n", \ @@ -67,795 +67,624 @@ static rte_atomic32_t synchro; #define TEST_RING_FULL_EMTPY_ITER 8 -/* - * helper routine for test_ring_basic - */ -static int -test_ring_basic_full_empty(struct rte_ring *r, void * const src[], void *dst[]) -{ - unsigned i, rand; - const unsigned rsz = RING_SIZE - 1; - - printf("Basic full/empty test\n"); - - for (i = 0; TEST_RING_FULL_EMTPY_ITER != i; i++) { - - /* random shift in the ring */ - rand = RTE_MAX(rte_rand() % RING_SIZE, 1UL); - printf("%s: iteration %u, random shift: %u;\n", - __func__, i, rand); - TEST_RING_VERIFY(rte_ring_enqueue_bulk(r, src, rand, - NULL) != 0); - TEST_RING_VERIFY(rte_ring_dequeue_bulk(r, dst, rand, - NULL) == rand); - - /* fill the ring */ - TEST_RING_VERIFY(rte_ring_enqueue_bulk(r, src, rsz, NULL) != 0); - TEST_RING_VERIFY(0 == rte_ring_free_count(r)); - TEST_RING_VERIFY(rsz == rte_ring_count(r)); - TEST_RING_VERIFY(rte_ring_full(r)); - TEST_RING_VERIFY(0 == rte_ring_empty(r)); - - /* empty the ring */ - TEST_RING_VERIFY(rte_ring_dequeue_bulk(r, dst, rsz, - NULL) == rsz); - TEST_RING_VERIFY(rsz == rte_ring_free_count(r)); - TEST_RING_VERIFY(0 == rte_ring_count(r)); - TEST_RING_VERIFY(0 == rte_ring_full(r)); - TEST_RING_VERIFY(rte_ring_empty(r)); +static int esize[] = {-1, 4, 8, 16, 20}; - /* check data */ - TEST_RING_VERIFY(0 == memcmp(src, dst, rsz)); - rte_ring_dump(stdout, r); - } - return 0; +static void** +test_ring_inc_ptr(void **obj, int esize, unsigned int n) +{ + /* Legacy queue APIs? 
*/ + if ((esize) == -1) + return ((void **)obj) + n; + else + return (void **)(((uint32_t *)obj) + + (n * esize / sizeof(uint32_t))); } -static int -test_ring_basic(struct rte_ring *r) +static void +test_ring_mem_init(void *obj, unsigned int count, int esize) { - void **src = NULL, **cur_src = NULL, **dst = NULL, **cur_dst = NULL; - int ret; - unsigned i, num_elems; - - /* alloc dummy object pointers */ - src = malloc(RING_SIZE*2*sizeof(void *)); - if (src == NULL) - goto fail; - - for (i = 0; i < RING_SIZE*2 ; i++) { - src[i] = (void *)(unsigned long)i; - } - cur_src = src; - - /* alloc some room for copied objects */ - dst = malloc(RING_SIZE*2*sizeof(void *)); - if (dst == NULL) - goto fail; - - memset(dst, 0, RING_SIZE*2*sizeof(void *)); - cur_dst = dst; - - printf("enqueue 1 obj\n"); - ret = rte_ring_sp_enqueue_bulk(r, cur_src, 1, NULL); - cur_src += 1; - if (ret == 0) - goto fail; - - printf("enqueue 2 objs\n"); - ret = rte_ring_sp_enqueue_bulk(r, cur_src, 2, NULL); - cur_src += 2; - if (ret == 0) - goto fail; - - printf("enqueue MAX_BULK objs\n"); - ret = rte_ring_sp_enqueue_bulk(r, cur_src, MAX_BULK, NULL); - cur_src += MAX_BULK; - if (ret == 0) - goto fail; - - printf("dequeue 1 obj\n"); - ret = rte_ring_sc_dequeue_bulk(r, cur_dst, 1, NULL); - cur_dst += 1; - if (ret == 0) - goto fail; - - printf("dequeue 2 objs\n"); - ret = rte_ring_sc_dequeue_bulk(r, cur_dst, 2, NULL); - cur_dst += 2; - if (ret == 0) - goto fail; - - printf("dequeue MAX_BULK objs\n"); - ret = rte_ring_sc_dequeue_bulk(r, cur_dst, MAX_BULK, NULL); - cur_dst += MAX_BULK; - if (ret == 0) - goto fail; - - /* check data */ - if (memcmp(src, dst, cur_dst - dst)) { - rte_hexdump(stdout, "src", src, cur_src - src); - rte_hexdump(stdout, "dst", dst, cur_dst - dst); - printf("data after dequeue is not the same\n"); - goto fail; - } - cur_src = src; - cur_dst = dst; - - printf("enqueue 1 obj\n"); - ret = rte_ring_mp_enqueue_bulk(r, cur_src, 1, NULL); - cur_src += 1; - if (ret == 0) - goto fail; - - printf("enqueue 2 objs\n"); - ret = rte_ring_mp_enqueue_bulk(r, cur_src, 2, NULL); - cur_src += 2; - if (ret == 0) - goto fail; - - printf("enqueue MAX_BULK objs\n"); - ret = rte_ring_mp_enqueue_bulk(r, cur_src, MAX_BULK, NULL); - cur_src += MAX_BULK; - if (ret == 0) - goto fail; - - printf("dequeue 1 obj\n"); - ret = rte_ring_mc_dequeue_bulk(r, cur_dst, 1, NULL); - cur_dst += 1; - if (ret == 0) - goto fail; - - printf("dequeue 2 objs\n"); - ret = rte_ring_mc_dequeue_bulk(r, cur_dst, 2, NULL); - cur_dst += 2; - if (ret == 0) - goto fail; - - printf("dequeue MAX_BULK objs\n"); - ret = rte_ring_mc_dequeue_bulk(r, cur_dst, MAX_BULK, NULL); - cur_dst += MAX_BULK; - if (ret == 0) - goto fail; - - /* check data */ - if (memcmp(src, dst, cur_dst - dst)) { - rte_hexdump(stdout, "src", src, cur_src - src); - rte_hexdump(stdout, "dst", dst, cur_dst - dst); - printf("data after dequeue is not the same\n"); - goto fail; - } - cur_src = src; - cur_dst = dst; - - printf("fill and empty the ring\n"); - for (i = 0; i= rte_ring_get_size(exact_sz_r)) { + printf("%s: error, std ring (size: %u) is not smaller than exact size one (size %u)\n", + __func__, + rte_ring_get_size(std_r), + rte_ring_get_size(exact_sz_r)); + goto test_fail; + } + /* + * check that the exact_sz_ring can hold one more element + * than the standard ring. 
(16 vs 15 elements) + */ + for (j = 0; j < ring_sz - 1; j++) { + test_ring_enqueue(std_r, obj, esize[i], 1, + TEST_RING_THREAD_DEF | TEST_RING_ELEM_SINGLE); + test_ring_enqueue(exact_sz_r, obj, esize[i], 1, + TEST_RING_THREAD_DEF | TEST_RING_ELEM_SINGLE); + } + ret = test_ring_enqueue(std_r, obj, esize[i], 1, + TEST_RING_THREAD_DEF | TEST_RING_ELEM_SINGLE); + if (ret != -ENOBUFS) { + printf("%s: error, unexpected successful enqueue\n", + __func__); + goto test_fail; + } + ret = test_ring_enqueue(exact_sz_r, obj, esize[i], 1, + TEST_RING_THREAD_DEF | TEST_RING_ELEM_SINGLE); + if (ret == -ENOBUFS) { + printf("%s: error, enqueue failed\n", __func__); + goto test_fail; + } + + /* check that dequeue returns the expected number of elements */ + ret = test_ring_dequeue(exact_sz_r, obj, esize[i], ring_sz, + TEST_RING_THREAD_DEF | TEST_RING_ELEM_BURST); + if (ret != (int)ring_sz) { + printf("%s: error, failed to dequeue expected nb of elements\n", + __func__); + goto test_fail; + } - /* - * Check that the exact size ring is bigger than the standard ring - */ - if (rte_ring_get_size(std_ring) >= rte_ring_get_size(exact_sz_ring)) { - printf("%s: error, std ring (size: %u) is not smaller than exact size one (size %u)\n", - __func__, - rte_ring_get_size(std_ring), - rte_ring_get_size(exact_sz_ring)); - goto end; - } - /* - * check that the exact_sz_ring can hold one more element than the - * standard ring. (16 vs 15 elements) - */ - for (i = 0; i < ring_sz - 1; i++) { - rte_ring_enqueue(std_ring, NULL); - rte_ring_enqueue(exact_sz_ring, NULL); - } - if (rte_ring_enqueue(std_ring, NULL) != -ENOBUFS) { - printf("%s: error, unexpected successful enqueue\n", __func__); - goto end; - } - if (rte_ring_enqueue(exact_sz_ring, NULL) == -ENOBUFS) { - printf("%s: error, enqueue failed\n", __func__); - goto end; - } + /* check that the capacity function returns expected value */ + if (rte_ring_get_capacity(exact_sz_r) != ring_sz) { + printf("%s: error, incorrect ring capacity reported\n", + __func__); + goto test_fail; + } - /* check that dequeue returns the expected number of elements */ - if (rte_ring_dequeue_burst(exact_sz_ring, ptr_array, - RTE_DIM(ptr_array), NULL) != ring_sz) { - printf("%s: error, failed to dequeue expected nb of elements\n", - __func__); - goto end; + rte_free(obj); + rte_ring_free(std_r); + rte_ring_free(exact_sz_r); } - /* check that the capacity function returns expected value */ - if (rte_ring_get_capacity(exact_sz_ring) != ring_sz) { - printf("%s: error, incorrect ring capacity reported\n", - __func__); - goto end; - } + return 0; - ret = 0; /* all ok if we get here */ -end: - rte_ring_free(std_ring); - rte_ring_free(exact_sz_ring); - return ret; +test_fail: + rte_free(obj); + rte_ring_free(std_r); + rte_ring_free(exact_sz_r); + return -1; } static int test_ring(void) { - struct rte_ring *r = NULL; + unsigned int i, j; - /* some more basic operations */ - if (test_ring_basic_ex() < 0) - goto test_fail; - - rte_atomic32_init(&synchro); - - r = rte_ring_create("test", RING_SIZE, SOCKET_ID_ANY, 0); - if (r == NULL) - goto test_fail; - - /* retrieve the ring from its name */ - if (rte_ring_lookup("test") != r) { - printf("Cannot lookup ring from its name\n"); - goto test_fail; - } - - /* burst operations */ - if (test_ring_burst_basic(r) < 0) - goto test_fail; - - /* basic operations */ - if (test_ring_basic(r) < 0) - goto test_fail; - - /* basic operations */ - if ( test_create_count_odd() < 0){ - printf("Test failed to detect odd count\n"); + /* Negative test cases */ + if 
(test_ring_negative_tests() < 0) goto test_fail; - } else - printf("Test detected odd count\n"); - if ( test_lookup_null() < 0){ - printf("Test failed to detect NULL ring lookup\n"); - goto test_fail; - } else - printf("Test detected NULL ring lookup\n"); - - /* test of creating ring with wrong size */ - if (test_ring_creation_with_wrong_size() < 0) + /* some more basic operations */ + if (test_ring_basic_ex() < 0) goto test_fail; - /* test of creation ring with an used name */ - if (test_ring_creation_with_an_used_name() < 0) - goto test_fail; + /* Burst and bulk operations with sp/sc, mp/mc and default */ + for (j = TEST_RING_ELEM_BULK; j <= TEST_RING_ELEM_BURST; j <<= 1) + for (i = TEST_RING_THREAD_DEF; + i <= TEST_RING_THREAD_MPMC; i <<= 1) + if (test_ring_burst_bulk_tests(i | j) < 0) + goto test_fail; if (test_ring_with_exact_size() < 0) goto test_fail; @@ -863,12 +692,9 @@ test_ring(void) /* dump the ring status */ rte_ring_list_dump(stdout); - rte_ring_free(r); - return 0; test_fail: - rte_ring_free(r); return -1; } diff --git a/app/test/test_ring.h b/app/test/test_ring.h new file mode 100644 index 000000000..26716e4f8 --- /dev/null +++ b/app/test/test_ring.h @@ -0,0 +1,187 @@ +/* SPDX-License-Identifier: BSD-3-Clause + * Copyright(c) 2019 Arm Limited + */ + +#include <rte_malloc.h> +#include <rte_ring.h> +#include <rte_ring_elem.h> + +/* API type to call + * rte_ring_<sp/mp or sc/mc>_enqueue_<bulk/burst> + * TEST_RING_THREAD_DEF - Uses configured SPSC/MPMC calls + * TEST_RING_THREAD_SPSC - Calls SP or SC API + * TEST_RING_THREAD_MPMC - Calls MP or MC API + */ +#define TEST_RING_THREAD_DEF 1 +#define TEST_RING_THREAD_SPSC 2 +#define TEST_RING_THREAD_MPMC 4 + +/* API type to call + * SL - Calls single element APIs + * BL - Calls bulk APIs + * BR - Calls burst APIs + */ +#define TEST_RING_ELEM_SINGLE 8 +#define TEST_RING_ELEM_BULK 16 +#define TEST_RING_ELEM_BURST 32 + +#define TEST_RING_IGNORE_API_TYPE ~0U + +/* This function is placed here as it is required for both + * performance and functional tests. + */ +static inline struct rte_ring* +test_ring_create(const char *name, int esize, unsigned int count, + int socket_id, unsigned int flags) +{ + /* Legacy queue APIs? */ + if ((esize) == -1) + return rte_ring_create((name), (count), (socket_id), (flags)); + else + return rte_ring_create_elem((name), (esize), (count), + (socket_id), (flags)); +} + +static __rte_always_inline unsigned int +test_ring_enqueue(struct rte_ring *r, void **obj, int esize, unsigned int n, + unsigned int api_type) +{ + /* Legacy queue APIs?
*/ + if ((esize) == -1) + switch (api_type) { + case (TEST_RING_THREAD_DEF | TEST_RING_ELEM_SINGLE): + return rte_ring_enqueue(r, obj); + case (TEST_RING_THREAD_SPSC | TEST_RING_ELEM_SINGLE): + return rte_ring_sp_enqueue(r, obj); + case (TEST_RING_THREAD_MPMC | TEST_RING_ELEM_SINGLE): + return rte_ring_mp_enqueue(r, obj); + case (TEST_RING_THREAD_DEF | TEST_RING_ELEM_BULK): + return rte_ring_enqueue_bulk(r, obj, n, NULL); + case (TEST_RING_THREAD_SPSC | TEST_RING_ELEM_BULK): + return rte_ring_sp_enqueue_bulk(r, obj, n, NULL); + case (TEST_RING_THREAD_MPMC | TEST_RING_ELEM_BULK): + return rte_ring_mp_enqueue_bulk(r, obj, n, NULL); + case (TEST_RING_THREAD_DEF | TEST_RING_ELEM_BURST): + return rte_ring_enqueue_burst(r, obj, n, NULL); + case (TEST_RING_THREAD_SPSC | TEST_RING_ELEM_BURST): + return rte_ring_sp_enqueue_burst(r, obj, n, NULL); + case (TEST_RING_THREAD_MPMC | TEST_RING_ELEM_BURST): + return rte_ring_mp_enqueue_burst(r, obj, n, NULL); + default: + printf("Invalid API type\n"); + return 0; + } + else + switch (api_type) { + case (TEST_RING_THREAD_DEF | TEST_RING_ELEM_SINGLE): + return rte_ring_enqueue_elem(r, obj, esize); + case (TEST_RING_THREAD_SPSC | TEST_RING_ELEM_SINGLE): + return rte_ring_sp_enqueue_elem(r, obj, esize); + case (TEST_RING_THREAD_MPMC | TEST_RING_ELEM_SINGLE): + return rte_ring_mp_enqueue_elem(r, obj, esize); + case (TEST_RING_THREAD_DEF | TEST_RING_ELEM_BULK): + return rte_ring_enqueue_bulk_elem(r, obj, esize, n, + NULL); + case (TEST_RING_THREAD_SPSC | TEST_RING_ELEM_BULK): + return rte_ring_sp_enqueue_bulk_elem(r, obj, esize, n, + NULL); + case (TEST_RING_THREAD_MPMC | TEST_RING_ELEM_BULK): + return rte_ring_mp_enqueue_bulk_elem(r, obj, esize, n, + NULL); + case (TEST_RING_THREAD_DEF | TEST_RING_ELEM_BURST): + return rte_ring_enqueue_burst_elem(r, obj, esize, n, + NULL); + case (TEST_RING_THREAD_SPSC | TEST_RING_ELEM_BURST): + return rte_ring_sp_enqueue_burst_elem(r, obj, esize, n, + NULL); + case (TEST_RING_THREAD_MPMC | TEST_RING_ELEM_BURST): + return rte_ring_mp_enqueue_burst_elem(r, obj, esize, n, + NULL); + default: + printf("Invalid API type\n"); + return 0; + } +} + +static __rte_always_inline unsigned int +test_ring_dequeue(struct rte_ring *r, void **obj, int esize, unsigned int n, + unsigned int api_type) +{ + /* Legacy queue APIs? 
*/ + if ((esize) == -1) + switch (api_type) { + case (TEST_RING_THREAD_DEF | TEST_RING_ELEM_SINGLE): + return rte_ring_dequeue(r, obj); + case (TEST_RING_THREAD_SPSC | TEST_RING_ELEM_SINGLE): + return rte_ring_sc_dequeue(r, obj); + case (TEST_RING_THREAD_MPMC | TEST_RING_ELEM_SINGLE): + return rte_ring_mc_dequeue(r, obj); + case (TEST_RING_THREAD_DEF | TEST_RING_ELEM_BULK): + return rte_ring_dequeue_bulk(r, obj, n, NULL); + case (TEST_RING_THREAD_SPSC | TEST_RING_ELEM_BULK): + return rte_ring_sc_dequeue_bulk(r, obj, n, NULL); + case (TEST_RING_THREAD_MPMC | TEST_RING_ELEM_BULK): + return rte_ring_mc_dequeue_bulk(r, obj, n, NULL); + case (TEST_RING_THREAD_DEF | TEST_RING_ELEM_BURST): + return rte_ring_dequeue_burst(r, obj, n, NULL); + case (TEST_RING_THREAD_SPSC | TEST_RING_ELEM_BURST): + return rte_ring_sc_dequeue_burst(r, obj, n, NULL); + case (TEST_RING_THREAD_MPMC | TEST_RING_ELEM_BURST): + return rte_ring_mc_dequeue_burst(r, obj, n, NULL); + default: + printf("Invalid API type\n"); + return 0; + } + else + switch (api_type) { + case (TEST_RING_THREAD_DEF | TEST_RING_ELEM_SINGLE): + return rte_ring_dequeue_elem(r, obj, esize); + case (TEST_RING_THREAD_SPSC | TEST_RING_ELEM_SINGLE): + return rte_ring_sc_dequeue_elem(r, obj, esize); + case (TEST_RING_THREAD_MPMC | TEST_RING_ELEM_SINGLE): + return rte_ring_mc_dequeue_elem(r, obj, esize); + case (TEST_RING_THREAD_DEF | TEST_RING_ELEM_BULK): + return rte_ring_dequeue_bulk_elem(r, obj, esize, + n, NULL); + case (TEST_RING_THREAD_SPSC | TEST_RING_ELEM_BULK): + return rte_ring_sc_dequeue_bulk_elem(r, obj, esize, + n, NULL); + case (TEST_RING_THREAD_MPMC | TEST_RING_ELEM_BULK): + return rte_ring_mc_dequeue_bulk_elem(r, obj, esize, + n, NULL); + case (TEST_RING_THREAD_DEF | TEST_RING_ELEM_BURST): + return rte_ring_dequeue_burst_elem(r, obj, esize, + n, NULL); + case (TEST_RING_THREAD_SPSC | TEST_RING_ELEM_BURST): + return rte_ring_sc_dequeue_burst_elem(r, obj, esize, + n, NULL); + case (TEST_RING_THREAD_MPMC | TEST_RING_ELEM_BURST): + return rte_ring_mc_dequeue_burst_elem(r, obj, esize, + n, NULL); + default: + printf("Invalid API type\n"); + return 0; + } +} + +/* This function is placed here as it is required for both + * performance and functional tests. + */ +static __rte_always_inline void * +test_ring_calloc(unsigned int rsize, int esize) +{ + unsigned int sz; + void *p; + + /* Legacy queue APIs? 
*/ + if (esize == -1) + sz = sizeof(void *); + else + sz = esize; + + p = rte_zmalloc(NULL, rsize * sz, RTE_CACHE_LINE_SIZE); + if (p == NULL) + printf("Failed to allocate memory\n"); + + return p; +}
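For reference, this is roughly how a caller is expected to combine the helpers above. The function name, ring name and sizes below are invented for illustration; only the wrapper APIs and flags come from the header:

static int
example_use_test_ring_wrappers(void)
{
	struct rte_ring *r;
	void *burst;
	int ret;

	/* esize != -1 routes everything to the rte_ring_xxx_elem APIs */
	r = test_ring_create("ex_ring", 16, 1024, SOCKET_ID_ANY, 0);
	if (r == NULL)
		return -1;

	/* scratch buffer sized for 8 elements of 16B each */
	burst = test_ring_calloc(8, 16);
	if (burst == NULL) {
		rte_ring_free(r);
		return -1;
	}

	/* bulk enqueue of 8 elements returns 8 on success, 0 on failure */
	ret = test_ring_enqueue(r, burst, 16, 8,
			TEST_RING_THREAD_SPSC | TEST_RING_ELEM_BULK);
	if (ret == 8)
		ret = test_ring_dequeue(r, burst, 16, 8,
			TEST_RING_THREAD_SPSC | TEST_RING_ELEM_BULK);

	rte_free(burst);
	rte_ring_free(r);
	return (ret == 8) ? 0 : -1;
}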
From patchwork Mon Jan 13 17:25:16 2020
X-Patchwork-Submitter: Honnappa Nagarahalli
X-Patchwork-Id: 182812
From: Honnappa Nagarahalli
To: olivier.matz@6wind.com, sthemmin@microsoft.com, jerinj@marvell.com, bruce.richardson@intel.com, david.marchand@redhat.com, pbhagavatula@marvell.com, konstantin.ananyev@intel.com, honnappa.nagarahalli@arm.com
Cc: dev@dpdk.org, dharmik.thakkar@arm.com, ruifeng.wang@arm.com, gavin.hu@arm.com, nd@arm.com
Date: Mon, 13 Jan 2020 11:25:16 -0600
Message-Id: <20200113172518.37815-5-honnappa.nagarahalli@arm.com>
In-Reply-To: <20200113172518.37815-1-honnappa.nagarahalli@arm.com>
Subject: [dpdk-dev] [PATCH v8 4/6] test/ring: modify perf test cases to use rte_ring_xxx_elem APIs

Adjust the performance test cases to test rte_ring_xxx_elem APIs.
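The measurement pattern is the same throughout the file: snapshot the TSC, drive a fixed number of iterations through the wrappers, and report cycles per element. A condensed sketch of that pattern with invented names follows (the real helpers additionally synchronize the two lcores and time SP/SC and MP/MC separately, using shifts of 23 to 26 for the iteration count):

static double
example_cycles_per_element(struct rte_ring *r, void *burst,
		int esize, unsigned int bsize)
{
	const unsigned int iterations = 1 << 20;
	unsigned int i;

	const uint64_t start = rte_rdtsc();
	for (i = 0; i < iterations; i++) {
		/* bulk enqueue immediately followed by bulk dequeue */
		test_ring_enqueue(r, burst, esize, bsize,
				TEST_RING_THREAD_SPSC | TEST_RING_ELEM_BULK);
		test_ring_dequeue(r, burst, esize, bsize,
				TEST_RING_THREAD_SPSC | TEST_RING_ELEM_BULK);
	}
	const uint64_t end = rte_rdtsc();

	/* total cycles divided by elements moved per iteration */
	return (double)(end - start) / ((double)iterations * bsize);
}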
Signed-off-by: Honnappa Nagarahalli Reviewed-by: Gavin Hu --- app/test/test_ring_perf.c | 454 +++++++++++++++++++++++--------------- 1 file changed, 273 insertions(+), 181 deletions(-) -- 2.17.1 diff --git a/app/test/test_ring_perf.c b/app/test/test_ring_perf.c index 6c2aca483..8d1217951 100644 --- a/app/test/test_ring_perf.c +++ b/app/test/test_ring_perf.c @@ -13,6 +13,7 @@ #include #include "test.h" +#include "test_ring.h" /* * Ring @@ -41,6 +42,35 @@ struct lcore_pair { static volatile unsigned lcore_count = 0; +static void +test_ring_print_test_string(unsigned int api_type, int esize, + unsigned int bsz, double value) +{ + if (esize == -1) + printf("legacy APIs"); + else + printf("elem APIs: element size %dB", esize); + + if (api_type == TEST_RING_IGNORE_API_TYPE) + return; + + if ((api_type & TEST_RING_THREAD_DEF) == TEST_RING_THREAD_DEF) + printf(": default enqueue/dequeue: "); + else if ((api_type & TEST_RING_THREAD_SPSC) == TEST_RING_THREAD_SPSC) + printf(": SP/SC: "); + else if ((api_type & TEST_RING_THREAD_MPMC) == TEST_RING_THREAD_MPMC) + printf(": MP/MC: "); + + if ((api_type & TEST_RING_ELEM_SINGLE) == TEST_RING_ELEM_SINGLE) + printf("single: "); + else if ((api_type & TEST_RING_ELEM_BULK) == TEST_RING_ELEM_BULK) + printf("bulk (size: %u): ", bsz); + else if ((api_type & TEST_RING_ELEM_BURST) == TEST_RING_ELEM_BURST) + printf("burst (size: %u): ", bsz); + + printf("%.2F\n", value); +} + /**** Functions to analyse our core mask to get cores for different tests ***/ static int @@ -117,27 +147,21 @@ get_two_sockets(struct lcore_pair *lcp) /* Get cycle counts for dequeuing from an empty ring. Should be 2 or 3 cycles */ static void -test_empty_dequeue(struct rte_ring *r) +test_empty_dequeue(struct rte_ring *r, const int esize, + const unsigned int api_type) { - const unsigned iter_shift = 26; - const unsigned iterations = 1 << iter_shift; [...] + * flag == 0 -> enqueue + * flag == 1 -> dequeue */ -static int -enqueue_bulk(void *p) +static __rte_always_inline int +enqueue_dequeue_bulk_helper(const unsigned int flag, const int esize, + struct thread_params *p) { - const unsigned iter_shift = 23; - const unsigned iterations = 1 << iter_shift; - struct thread_params *params = p; - struct rte_ring *r = params->r; - const unsigned size = params->size; - unsigned i; - void *burst[MAX_BURST] = {0}; + int ret; + const unsigned int iter_shift = 23; + const unsigned int iterations = 1 << iter_shift; + struct rte_ring *r = p->r; + unsigned int bsize = p->size; + unsigned int i; + void *burst = NULL; #ifdef RTE_USE_C11_MEM_MODEL if (__atomic_add_fetch(&lcore_count, 1, __ATOMIC_RELAXED) != 2) @@ -173,23 +199,67 @@ enqueue_bulk(void *p) while(lcore_count != 2) rte_pause(); + burst = test_ring_calloc(MAX_BURST, esize); + if (burst == NULL) + return -1; + const uint64_t sp_start = rte_rdtsc(); for (i = 0; i < iterations; i++) - while (rte_ring_sp_enqueue_bulk(r, burst, size, NULL) == 0) - rte_pause(); + do { + if (flag == 0) + ret = test_ring_enqueue(r, burst, esize, bsize, + TEST_RING_THREAD_SPSC | + TEST_RING_ELEM_BULK); + else if (flag == 1) + ret = test_ring_dequeue(r, burst, esize, bsize, + TEST_RING_THREAD_SPSC | + TEST_RING_ELEM_BULK); + if (ret == 0) + rte_pause(); + } while (!ret); const uint64_t sp_end = rte_rdtsc(); const uint64_t mp_start = rte_rdtsc(); for (i = 0; i < iterations; i++) - while (rte_ring_mp_enqueue_bulk(r, burst, size, NULL) == 0) - rte_pause(); + do { + if (flag == 0) + ret = test_ring_enqueue(r, burst, esize, bsize, + TEST_RING_THREAD_MPMC | + TEST_RING_ELEM_BULK); + else if (flag == 1) + ret = test_ring_dequeue(r, burst, esize, bsize, + TEST_RING_THREAD_MPMC | + TEST_RING_ELEM_BULK);
+ if (ret == 0) + rte_pause(); + } while (!ret); const uint64_t mp_end = rte_rdtsc(); - params->spsc = ((double)(sp_end - sp_start))/(iterations*size); - params->mpmc = ((double)(mp_end - mp_start))/(iterations*size); + p->spsc = ((double)(sp_end - sp_start))/(iterations * bsize); + p->mpmc = ((double)(mp_end - mp_start))/(iterations * bsize); return 0; } +/* + * Function that uses rdtsc to measure timing for ring enqueue. Needs pair + * thread running dequeue_bulk function + */ +static int +enqueue_bulk(void *p) +{ + struct thread_params *params = p; + + return enqueue_dequeue_bulk_helper(0, -1, params); +} + +static int +enqueue_bulk_16B(void *p) +{ + struct thread_params *params = p; + + return enqueue_dequeue_bulk_helper(0, 16, params); +} + /* * Function that uses rdtsc to measure timing for ring dequeue. Needs pair * thread running enqueue_bulk function @@ -197,49 +267,38 @@ enqueue_bulk(void *p) static int dequeue_bulk(void *p) { - const unsigned iter_shift = 23; - const unsigned iterations = 1 << iter_shift; struct thread_params *params = p; - struct rte_ring *r = params->r; - const unsigned size = params->size; - unsigned i; - void *burst[MAX_BURST] = {0}; - -#ifdef RTE_USE_C11_MEM_MODEL - if (__atomic_add_fetch(&lcore_count, 1, __ATOMIC_RELAXED) != 2) -#else - if (__sync_add_and_fetch(&lcore_count, 1) != 2) -#endif - while(lcore_count != 2) - rte_pause(); - const uint64_t sc_start = rte_rdtsc(); - for (i = 0; i < iterations; i++) - while (rte_ring_sc_dequeue_bulk(r, burst, size, NULL) == 0) - rte_pause(); - const uint64_t sc_end = rte_rdtsc(); + return enqueue_dequeue_bulk_helper(1, -1, params); } - const uint64_t mc_start = rte_rdtsc(); - for (i = 0; i < iterations; i++) - while (rte_ring_mc_dequeue_bulk(r, burst, size, NULL) == 0) - rte_pause(); - const uint64_t mc_end = rte_rdtsc(); +static int +dequeue_bulk_16B(void *p) +{ + struct thread_params *params = p; - params->spsc = ((double)(sc_end - sc_start))/(iterations*size); - params->mpmc = ((double)(mc_end - mc_start))/(iterations*size); - return 0; + return enqueue_dequeue_bulk_helper(1, 16, params); } /* * Function that calls the enqueue and dequeue bulk functions on pairs of cores. * used to measure ring perf between hyperthreads, cores and sockets.
*/ -static void -run_on_core_pair(struct lcore_pair *cores, struct rte_ring *r, - lcore_function_t f1, lcore_function_t f2) +static int +run_on_core_pair(struct lcore_pair *cores, struct rte_ring *r, const int esize) { + lcore_function_t *f1, *f2; struct thread_params param1 = {0}, param2 = {0}; unsigned i; + + if (esize == -1) { + f1 = enqueue_bulk; + f2 = dequeue_bulk; + } else { + f1 = enqueue_bulk_16B; + f2 = dequeue_bulk_16B; + } + for (i = 0; i < sizeof(bulk_sizes)/sizeof(bulk_sizes[0]); i++) { lcore_count = 0; param1.size = param2.size = bulk_sizes[i]; @@ -251,14 +310,20 @@ run_on_core_pair(struct lcore_pair *cores, struct rte_ring *r, } else { rte_eal_remote_launch(f1, ¶m1, cores->c1); rte_eal_remote_launch(f2, ¶m2, cores->c2); - rte_eal_wait_lcore(cores->c1); - rte_eal_wait_lcore(cores->c2); + if (rte_eal_wait_lcore(cores->c1) < 0) + return -1; + if (rte_eal_wait_lcore(cores->c2) < 0) + return -1; } - printf("SP/SC bulk enq/dequeue (size: %u): %.2F\n", bulk_sizes[i], - param1.spsc + param2.spsc); - printf("MP/MC bulk enq/dequeue (size: %u): %.2F\n", bulk_sizes[i], - param1.mpmc + param2.mpmc); + test_ring_print_test_string( + TEST_RING_THREAD_SPSC | TEST_RING_ELEM_BULK, + esize, bulk_sizes[i], param1.spsc + param2.spsc); + test_ring_print_test_string( + TEST_RING_THREAD_MPMC | TEST_RING_ELEM_BULK, + esize, bulk_sizes[i], param1.mpmc + param2.mpmc); } + + return 0; } static rte_atomic32_t synchro; @@ -267,7 +332,7 @@ static uint64_t queue_count[RTE_MAX_LCORE]; #define TIME_MS 100 static int -load_loop_fn(void *p) +load_loop_fn_helper(struct thread_params *p, const int esize) { uint64_t time_diff = 0; uint64_t begin = 0; @@ -275,7 +340,11 @@ load_loop_fn(void *p) uint64_t lcount = 0; const unsigned int lcore = rte_lcore_id(); struct thread_params *params = p; - void *burst[MAX_BURST] = {0}; + void *burst = NULL; + + burst = test_ring_calloc(MAX_BURST, esize); + if (burst == NULL) + return -1; /* wait synchro for slaves */ if (lcore != rte_get_master_lcore()) @@ -284,22 +353,49 @@ load_loop_fn(void *p) begin = rte_get_timer_cycles(); while (time_diff < hz * TIME_MS / 1000) { - rte_ring_mp_enqueue_bulk(params->r, burst, params->size, NULL); - rte_ring_mc_dequeue_bulk(params->r, burst, params->size, NULL); + test_ring_enqueue(params->r, burst, esize, params->size, + TEST_RING_THREAD_MPMC | TEST_RING_ELEM_BULK); + test_ring_dequeue(params->r, burst, esize, params->size, + TEST_RING_THREAD_MPMC | TEST_RING_ELEM_BULK); lcount++; time_diff = rte_get_timer_cycles() - begin; } queue_count[lcore] = lcount; + + rte_free(burst); + return 0; } static int -run_on_all_cores(struct rte_ring *r) +load_loop_fn(void *p) +{ + struct thread_params *params = p; + + return load_loop_fn_helper(params, -1); +} + +static int +load_loop_fn_16B(void *p) +{ + struct thread_params *params = p; + + return load_loop_fn_helper(params, 16); +} + +static int +run_on_all_cores(struct rte_ring *r, const int esize) { uint64_t total = 0; struct thread_params param; + lcore_function_t *lcore_f; unsigned int i, c; + if (esize == -1) + lcore_f = load_loop_fn; + else + lcore_f = load_loop_fn_16B; + memset(¶m, 0, sizeof(struct thread_params)); for (i = 0; i < RTE_DIM(bulk_sizes); i++) { printf("\nBulk enq/dequeue count on size %u\n", bulk_sizes[i]); @@ -308,13 +404,12 @@ run_on_all_cores(struct rte_ring *r) /* clear synchro and start slaves */ rte_atomic32_set(&synchro, 0); - if (rte_eal_mp_remote_launch(load_loop_fn, ¶m, - SKIP_MASTER) < 0) + if (rte_eal_mp_remote_launch(lcore_f, ¶m, SKIP_MASTER) < 0) return -1; /* start 
synchro and launch test on master */ rte_atomic32_set(&synchro, 1); - load_loop_fn(&param); + lcore_f(&param); rte_eal_mp_wait_lcore(); @@ -335,155 +430,152 @@ * Test function that determines how long an enqueue + dequeue of a single item * takes on a single lcore. Result is for comparison with the bulk enq+deq. */ -static void -test_single_enqueue_dequeue(struct rte_ring *r) +static int +test_single_enqueue_dequeue(struct rte_ring *r, const int esize, + const unsigned int api_type) { - const unsigned iter_shift = 24; - const unsigned iterations = 1 << iter_shift; [...]

From patchwork Mon Jan 13 17:25:17 2020
X-Patchwork-Submitter: Honnappa Nagarahalli
X-Patchwork-Id: 182807
From: Honnappa Nagarahalli
To: olivier.matz@6wind.com, sthemmin@microsoft.com, jerinj@marvell.com, bruce.richardson@intel.com, david.marchand@redhat.com, pbhagavatula@marvell.com, konstantin.ananyev@intel.com, honnappa.nagarahalli@arm.com
Cc: dev@dpdk.org, dharmik.thakkar@arm.com, ruifeng.wang@arm.com, gavin.hu@arm.com, nd@arm.com
Date: Mon, 13 Jan 2020 11:25:17 -0600
Message-Id: <20200113172518.37815-6-honnappa.nagarahalli@arm.com>
In-Reply-To: <20200113172518.37815-1-honnappa.nagarahalli@arm.com>
Subject: [dpdk-dev] [PATCH v8 5/6] lib/hash: use ring with 32b element size to save memory

The freelist and external bucket indices are 32b. Using rings that use 32b element sizes will save memory.
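For context, the pattern the hash library switches to looks roughly like the sketch below; on a 64-bit target each free-slot index now occupies 4 bytes of ring space instead of the 8 bytes a void * slot would cost. The function and ring names are invented for illustration; the APIs are the ones the patch uses:

static int
example_u32_freelist(unsigned int num_slots)
{
	struct rte_ring *r;
	uint32_t i, idx;

	/* ring slots are sizeof(uint32_t), not sizeof(void *) */
	r = rte_ring_create_elem("ex_freelist", sizeof(uint32_t),
			rte_align32pow2(num_slots), SOCKET_ID_ANY, 0);
	if (r == NULL)
		return -1;

	/* producer: publish free indices, as rte_hash_create() does;
	 * entry zero stays reserved for key misses
	 */
	for (i = 1; i < num_slots; i++)
		rte_ring_sp_enqueue_elem(r, &i, sizeof(uint32_t));

	/* consumer: pop one 32-bit index; returns 0 on success */
	if (rte_ring_sc_dequeue_elem(r, &idx, sizeof(uint32_t)) != 0) {
		rte_ring_free(r);
		return -1;
	}

	rte_ring_free(r);
	return (int)idx;
}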
Signed-off-by: Honnappa Nagarahalli Reviewed-by: Gavin Hu Reviewed-by: Ola Liljedahl --- lib/librte_hash/rte_cuckoo_hash.c | 97 ++++++++++++++++--------------- lib/librte_hash/rte_cuckoo_hash.h | 2 +- 2 files changed, 51 insertions(+), 48 deletions(-) -- 2.17.1 diff --git a/lib/librte_hash/rte_cuckoo_hash.c b/lib/librte_hash/rte_cuckoo_hash.c index 87a4c01f2..734bec2ac 100644 --- a/lib/librte_hash/rte_cuckoo_hash.c +++ b/lib/librte_hash/rte_cuckoo_hash.c @@ -24,7 +24,7 @@ #include #include #include -#include +#include #include #include #include @@ -136,7 +136,6 @@ rte_hash_create(const struct rte_hash_parameters *params) char ring_name[RTE_RING_NAMESIZE]; char ext_ring_name[RTE_RING_NAMESIZE]; unsigned num_key_slots; - unsigned i; unsigned int hw_trans_mem_support = 0, use_local_cache = 0; unsigned int ext_table_support = 0; unsigned int readwrite_concur_support = 0; @@ -213,8 +212,8 @@ rte_hash_create(const struct rte_hash_parameters *params) snprintf(ring_name, sizeof(ring_name), "HT_%s", params->name); /* Create ring (Dummy slot index is not enqueued) */ - r = rte_ring_create(ring_name, rte_align32pow2(num_key_slots), - params->socket_id, 0); + r = rte_ring_create_elem(ring_name, sizeof(uint32_t), + rte_align32pow2(num_key_slots), params->socket_id, 0); if (r == NULL) { RTE_LOG(ERR, HASH, "memory allocation failed\n"); goto err; @@ -227,7 +226,7 @@ rte_hash_create(const struct rte_hash_parameters *params) if (ext_table_support) { snprintf(ext_ring_name, sizeof(ext_ring_name), "HT_EXT_%s", params->name); - r_ext = rte_ring_create(ext_ring_name, + r_ext = rte_ring_create_elem(ext_ring_name, sizeof(uint32_t), rte_align32pow2(num_buckets + 1), params->socket_id, 0); @@ -294,8 +293,8 @@ rte_hash_create(const struct rte_hash_parameters *params) * use bucket index for the linked list and 0 means NULL * for next bucket */ - for (i = 1; i <= num_buckets; i++) - rte_ring_sp_enqueue(r_ext, (void *)((uintptr_t) i)); + for (uint32_t i = 1; i <= num_buckets; i++) + rte_ring_sp_enqueue_elem(r_ext, &i, sizeof(uint32_t)); if (readwrite_concur_lf_support) { ext_bkt_to_free = rte_zmalloc(NULL, sizeof(uint32_t) * @@ -433,8 +432,8 @@ rte_hash_create(const struct rte_hash_parameters *params) } /* Populate free slots ring. Entry zero is reserved for key misses. */ - for (i = 1; i < num_key_slots; i++) - rte_ring_sp_enqueue(r, (void *)((uintptr_t) i)); + for (uint32_t i = 1; i < num_key_slots; i++) + rte_ring_sp_enqueue_elem(r, &i, sizeof(uint32_t)); te->data = (void *) h; TAILQ_INSERT_TAIL(hash_list, te, next); @@ -598,13 +597,13 @@ rte_hash_reset(struct rte_hash *h) tot_ring_cnt = h->entries; for (i = 1; i < tot_ring_cnt + 1; i++) - rte_ring_sp_enqueue(h->free_slots, (void *)((uintptr_t) i)); + rte_ring_sp_enqueue_elem(h->free_slots, &i, sizeof(uint32_t)); /* Repopulate the free ext bkt ring. */ if (h->ext_table_support) { for (i = 1; i <= h->num_buckets; i++) - rte_ring_sp_enqueue(h->free_ext_bkts, - (void *)((uintptr_t) i)); + rte_ring_sp_enqueue_elem(h->free_ext_bkts, &i, + sizeof(uint32_t)); } if (h->use_local_cache) { @@ -623,13 +622,14 @@ rte_hash_reset(struct rte_hash *h) static inline void enqueue_slot_back(const struct rte_hash *h, struct lcore_cache *cached_free_slots, - void *slot_id) + uint32_t slot_id) { if (h->use_local_cache) { cached_free_slots->objs[cached_free_slots->len] = slot_id; cached_free_slots->len++; } else - rte_ring_sp_enqueue(h->free_slots, slot_id); + rte_ring_sp_enqueue_elem(h->free_slots, &slot_id, + sizeof(uint32_t)); } /* Search a key from bucket and update its data. 
@@ -923,9 +923,8 @@ __rte_hash_add_key_with_hash(const struct rte_hash *h, const void *key, uint32_t prim_bucket_idx, sec_bucket_idx; struct rte_hash_bucket *prim_bkt, *sec_bkt, *cur_bkt; struct rte_hash_key *new_k, *keys = h->key_store; - void *slot_id = NULL; - void *ext_bkt_id = NULL; - uint32_t new_idx, bkt_id; + uint32_t slot_id; + uint32_t ext_bkt_id; int ret; unsigned n_slots; unsigned lcore_id; @@ -968,8 +967,9 @@ __rte_hash_add_key_with_hash(const struct rte_hash *h, const void *key, /* Try to get a free slot from the local cache */ if (cached_free_slots->len == 0) { /* Need to get another burst of free slots from global ring */ - n_slots = rte_ring_mc_dequeue_burst(h->free_slots, + n_slots = rte_ring_mc_dequeue_burst_elem(h->free_slots, cached_free_slots->objs, + sizeof(uint32_t), LCORE_CACHE_SIZE, NULL); if (n_slots == 0) { return -ENOSPC; @@ -982,13 +982,13 @@ __rte_hash_add_key_with_hash(const struct rte_hash *h, const void *key, cached_free_slots->len--; slot_id = cached_free_slots->objs[cached_free_slots->len]; } else { - if (rte_ring_sc_dequeue(h->free_slots, &slot_id) != 0) { + if (rte_ring_sc_dequeue_elem(h->free_slots, &slot_id, + sizeof(uint32_t)) != 0) { return -ENOSPC; } } - new_k = RTE_PTR_ADD(keys, (uintptr_t)slot_id * h->key_entry_size); - new_idx = (uint32_t)((uintptr_t) slot_id); + new_k = RTE_PTR_ADD(keys, slot_id * h->key_entry_size); /* The store to application data (by the application) at *data should * not leak after the store of pdata in the key store. i.e. pdata is * the guard variable. Release the application data to the readers. @@ -1001,9 +1001,9 @@ __rte_hash_add_key_with_hash(const struct rte_hash *h, const void *key, /* Find an empty slot and insert */ ret = rte_hash_cuckoo_insert_mw(h, prim_bkt, sec_bkt, key, data, - short_sig, new_idx, &ret_val); + short_sig, slot_id, &ret_val); if (ret == 0) - return new_idx - 1; + return slot_id - 1; else if (ret == 1) { enqueue_slot_back(h, cached_free_slots, slot_id); return ret_val; @@ -1011,9 +1011,9 @@ __rte_hash_add_key_with_hash(const struct rte_hash *h, const void *key, /* Primary bucket full, need to make space for new entry */ ret = rte_hash_cuckoo_make_space_mw(h, prim_bkt, sec_bkt, key, data, - short_sig, prim_bucket_idx, new_idx, &ret_val); + short_sig, prim_bucket_idx, slot_id, &ret_val); if (ret == 0) - return new_idx - 1; + return slot_id - 1; else if (ret == 1) { enqueue_slot_back(h, cached_free_slots, slot_id); return ret_val; @@ -1021,10 +1021,10 @@ __rte_hash_add_key_with_hash(const struct rte_hash *h, const void *key, /* Also search secondary bucket to get better occupancy */ ret = rte_hash_cuckoo_make_space_mw(h, sec_bkt, prim_bkt, key, data, - short_sig, sec_bucket_idx, new_idx, &ret_val); + short_sig, sec_bucket_idx, slot_id, &ret_val); if (ret == 0) - return new_idx - 1; + return slot_id - 1; else if (ret == 1) { enqueue_slot_back(h, cached_free_slots, slot_id); return ret_val; @@ -1067,10 +1067,10 @@ __rte_hash_add_key_with_hash(const struct rte_hash *h, const void *key, * and key. */ __atomic_store_n(&cur_bkt->key_idx[i], - new_idx, + slot_id, __ATOMIC_RELEASE); __hash_rw_writer_unlock(h); - return new_idx - 1; + return slot_id - 1; } } } @@ -1078,26 +1078,26 @@ __rte_hash_add_key_with_hash(const struct rte_hash *h, const void *key, /* Failed to get an empty entry from extendable buckets. Link a new * extendable bucket. We first get a free bucket from ring. 
*/ - if (rte_ring_sc_dequeue(h->free_ext_bkts, &ext_bkt_id) != 0) { + if (rte_ring_sc_dequeue_elem(h->free_ext_bkts, &ext_bkt_id, + sizeof(uint32_t)) != 0) { ret = -ENOSPC; goto failure; } - bkt_id = (uint32_t)((uintptr_t)ext_bkt_id) - 1; /* Use the first location of the new bucket */ - (h->buckets_ext[bkt_id]).sig_current[0] = short_sig; + (h->buckets_ext[ext_bkt_id - 1]).sig_current[0] = short_sig; /* Store to signature and key should not leak after * the store to key_idx. i.e. key_idx is the guard variable * for signature and key. */ - __atomic_store_n(&(h->buckets_ext[bkt_id]).key_idx[0], - new_idx, + __atomic_store_n(&(h->buckets_ext[ext_bkt_id - 1]).key_idx[0], + slot_id, __ATOMIC_RELEASE); /* Link the new bucket to sec bucket linked list */ last = rte_hash_get_last_bkt(sec_bkt); - last->next = &h->buckets_ext[bkt_id]; + last->next = &h->buckets_ext[ext_bkt_id - 1]; __hash_rw_writer_unlock(h); - return new_idx - 1; + return slot_id - 1; failure: __hash_rw_writer_unlock(h); @@ -1373,8 +1373,9 @@ remove_entry(const struct rte_hash *h, struct rte_hash_bucket *bkt, unsigned i) /* Cache full, need to free it. */ if (cached_free_slots->len == LCORE_CACHE_SIZE) { /* Need to enqueue the free slots in global ring. */ - n_slots = rte_ring_mp_enqueue_burst(h->free_slots, + n_slots = rte_ring_mp_enqueue_burst_elem(h->free_slots, cached_free_slots->objs, + sizeof(uint32_t), LCORE_CACHE_SIZE, NULL); ERR_IF_TRUE((n_slots == 0), "%s: could not enqueue free slots in global ring\n", @@ -1383,11 +1384,11 @@ remove_entry(const struct rte_hash *h, struct rte_hash_bucket *bkt, unsigned i) } /* Put index of new free slot in cache. */ cached_free_slots->objs[cached_free_slots->len] = - (void *)((uintptr_t)bkt->key_idx[i]); + bkt->key_idx[i]; cached_free_slots->len++; } else { - rte_ring_sp_enqueue(h->free_slots, - (void *)((uintptr_t)bkt->key_idx[i])); + rte_ring_sp_enqueue_elem(h->free_slots, + &bkt->key_idx[i], sizeof(uint32_t)); } } @@ -1551,7 +1552,8 @@ __rte_hash_del_key_with_hash(const struct rte_hash *h, const void *key, */ h->ext_bkt_to_free[ret] = index; else - rte_ring_sp_enqueue(h->free_ext_bkts, (void *)(uintptr_t)index); + rte_ring_sp_enqueue_elem(h->free_ext_bkts, &index, + sizeof(uint32_t)); } __hash_rw_writer_unlock(h); return ret; @@ -1614,7 +1616,8 @@ rte_hash_free_key_with_position(const struct rte_hash *h, uint32_t index = h->ext_bkt_to_free[position]; if (index) { /* Recycle empty ext bkt to free list. */ - rte_ring_sp_enqueue(h->free_ext_bkts, (void *)(uintptr_t)index); + rte_ring_sp_enqueue_elem(h->free_ext_bkts, &index, + sizeof(uint32_t)); h->ext_bkt_to_free[position] = 0; } } @@ -1625,19 +1628,19 @@ rte_hash_free_key_with_position(const struct rte_hash *h, /* Cache full, need to free it. */ if (cached_free_slots->len == LCORE_CACHE_SIZE) { /* Need to enqueue the free slots in global ring. */ - n_slots = rte_ring_mp_enqueue_burst(h->free_slots, + n_slots = rte_ring_mp_enqueue_burst_elem(h->free_slots, cached_free_slots->objs, + sizeof(uint32_t), LCORE_CACHE_SIZE, NULL); RETURN_IF_TRUE((n_slots == 0), -EFAULT); cached_free_slots->len -= n_slots; } /* Put index of new free slot in cache. 
*/ - cached_free_slots->objs[cached_free_slots->len] = - (void *)((uintptr_t)key_idx); + cached_free_slots->objs[cached_free_slots->len] = key_idx; cached_free_slots->len++; } else { - rte_ring_sp_enqueue(h->free_slots, - (void *)((uintptr_t)key_idx)); + rte_ring_sp_enqueue_elem(h->free_slots, &key_idx, + sizeof(uint32_t)); } return 0; diff --git a/lib/librte_hash/rte_cuckoo_hash.h b/lib/librte_hash/rte_cuckoo_hash.h index fb19bb27d..345de6bf9 100644 --- a/lib/librte_hash/rte_cuckoo_hash.h +++ b/lib/librte_hash/rte_cuckoo_hash.h @@ -124,7 +124,7 @@ const rte_hash_cmp_eq_t cmp_jump_table[NUM_KEY_CMP_CASES] = { struct lcore_cache { unsigned len; /**< Cache len */ - void *objs[LCORE_CACHE_SIZE]; /**< Cache objects */ + uint32_t objs[LCORE_CACHE_SIZE]; /**< Cache objects */ } __rte_cache_aligned; /* Structure that stores key-value pair */

From patchwork Mon Jan 13 17:25:18 2020
X-Patchwork-Submitter: Honnappa Nagarahalli
X-Patchwork-Id: 182813
From: Honnappa Nagarahalli
To: olivier.matz@6wind.com, sthemmin@microsoft.com, jerinj@marvell.com, bruce.richardson@intel.com, david.marchand@redhat.com, pbhagavatula@marvell.com, konstantin.ananyev@intel.com, honnappa.nagarahalli@arm.com
Cc: dev@dpdk.org, dharmik.thakkar@arm.com, ruifeng.wang@arm.com, gavin.hu@arm.com, nd@arm.com
Date: Mon, 13 Jan 2020 11:25:18 -0600
Message-Id: <20200113172518.37815-7-honnappa.nagarahalli@arm.com>
In-Reply-To: <20200113172518.37815-1-honnappa.nagarahalli@arm.com>
Subject: [dpdk-dev] [PATCH v8 6/6] lib/eventdev: use custom element size ring for event rings

Use custom element size ring APIs to replace event ring implementation. This avoids code duplication.
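In other words, an event ring becomes a plain rte_ring whose element size is sizeof(struct rte_event), and the burst calls collapse into the generic elem API. A small usage sketch with an invented function name (the wrapper signatures are taken from the diff below):

static unsigned int
example_event_roundtrip(struct rte_event_ring *er,
		struct rte_event *evs, uint16_t n)
{
	uint16_t space, avail;
	unsigned int enq, deq;

	/* both wrappers now delegate to rte_ring_xxx_burst_elem() */
	enq = rte_event_ring_enqueue_burst(er, evs, n, &space);
	deq = rte_event_ring_dequeue_burst(er, evs, enq, &avail);

	return deq;	/* number of events that made the round trip */
}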
Signed-off-by: Honnappa Nagarahalli Reviewed-by: Gavin Hu Reviewed-by: Ola Liljedahl --- lib/librte_eventdev/rte_event_ring.c | 147 ++------------------------- lib/librte_eventdev/rte_event_ring.h | 45 ++++---- 2 files changed, 24 insertions(+), 168 deletions(-) -- 2.17.1 diff --git a/lib/librte_eventdev/rte_event_ring.c b/lib/librte_eventdev/rte_event_ring.c index 50190de01..d27e23901 100644 --- a/lib/librte_eventdev/rte_event_ring.c +++ b/lib/librte_eventdev/rte_event_ring.c @@ -1,5 +1,6 @@ /* SPDX-License-Identifier: BSD-3-Clause * Copyright(c) 2017 Intel Corporation + * Copyright(c) 2019 Arm Limited */ #include @@ -11,13 +12,6 @@ #include #include "rte_event_ring.h" -TAILQ_HEAD(rte_event_ring_list, rte_tailq_entry); - -static struct rte_tailq_elem rte_event_ring_tailq = { - .name = RTE_TAILQ_EVENT_RING_NAME, -}; -EAL_REGISTER_TAILQ(rte_event_ring_tailq) - int rte_event_ring_init(struct rte_event_ring *r, const char *name, unsigned int count, unsigned int flags) @@ -35,150 +29,21 @@ struct rte_event_ring * rte_event_ring_create(const char *name, unsigned int count, int socket_id, unsigned int flags) { - char mz_name[RTE_MEMZONE_NAMESIZE]; - struct rte_event_ring *r; - struct rte_tailq_entry *te; - const struct rte_memzone *mz; - ssize_t ring_size; - int mz_flags = 0; - struct rte_event_ring_list *ring_list = NULL; - const unsigned int requested_count = count; - int ret; - - ring_list = RTE_TAILQ_CAST(rte_event_ring_tailq.head, - rte_event_ring_list); - - /* for an exact size ring, round up from count to a power of two */ - if (flags & RING_F_EXACT_SZ) - count = rte_align32pow2(count + 1); - else if (!rte_is_power_of_2(count)) { - rte_errno = EINVAL; - return NULL; - } - - ring_size = sizeof(*r) + (count * sizeof(struct rte_event)); - - ret = snprintf(mz_name, sizeof(mz_name), "%s%s", - RTE_RING_MZ_PREFIX, name); - if (ret < 0 || ret >= (int)sizeof(mz_name)) { - rte_errno = ENAMETOOLONG; - return NULL; - } - - te = rte_zmalloc("RING_TAILQ_ENTRY", sizeof(*te), 0); - if (te == NULL) { - RTE_LOG(ERR, RING, "Cannot reserve memory for tailq\n"); - rte_errno = ENOMEM; - return NULL; - } - - rte_mcfg_tailq_write_lock(); - - /* - * reserve a memory zone for this ring. 
If we can't get rte_config or - * we are secondary process, the memzone_reserve function will set - * rte_errno for us appropriately - hence no check in this this function - */ - mz = rte_memzone_reserve(mz_name, ring_size, socket_id, mz_flags); - if (mz != NULL) { - r = mz->addr; - /* Check return value in case rte_ring_init() fails on size */ - int err = rte_event_ring_init(r, name, requested_count, flags); - if (err) { - RTE_LOG(ERR, RING, "Ring init failed\n"); - if (rte_memzone_free(mz) != 0) - RTE_LOG(ERR, RING, "Cannot free memzone\n"); - rte_free(te); - rte_mcfg_tailq_write_unlock(); - return NULL; - } - - te->data = (void *) r; - r->r.memzone = mz; - - TAILQ_INSERT_TAIL(ring_list, te, next); - } else { - r = NULL; - RTE_LOG(ERR, RING, "Cannot reserve memory\n"); - rte_free(te); - } - rte_mcfg_tailq_write_unlock(); - - return r; + return (struct rte_event_ring *)rte_ring_create_elem(name, + sizeof(struct rte_event), + count, socket_id, flags); } struct rte_event_ring * rte_event_ring_lookup(const char *name) { - struct rte_tailq_entry *te; - struct rte_event_ring *r = NULL; - struct rte_event_ring_list *ring_list; - - ring_list = RTE_TAILQ_CAST(rte_event_ring_tailq.head, - rte_event_ring_list); - - rte_mcfg_tailq_read_lock(); - - TAILQ_FOREACH(te, ring_list, next) { - r = (struct rte_event_ring *) te->data; - if (strncmp(name, r->r.name, RTE_RING_NAMESIZE) == 0) - break; - } - - rte_mcfg_tailq_read_unlock(); - - if (te == NULL) { - rte_errno = ENOENT; - return NULL; - } - - return r; + return (struct rte_event_ring *)rte_ring_lookup(name); } /* free the ring */ void rte_event_ring_free(struct rte_event_ring *r) { - struct rte_event_ring_list *ring_list = NULL; - struct rte_tailq_entry *te; - - if (r == NULL) - return; - - /* - * Ring was not created with rte_event_ring_create, - * therefore, there is no memzone to free. 
- */ - if (r->r.memzone == NULL) { - RTE_LOG(ERR, RING, - "Cannot free ring (not created with rte_event_ring_create()"); - return; - } - - if (rte_memzone_free(r->r.memzone) != 0) { - RTE_LOG(ERR, RING, "Cannot free memory\n"); - return; - } - - ring_list = RTE_TAILQ_CAST(rte_event_ring_tailq.head, - rte_event_ring_list); - rte_mcfg_tailq_write_lock(); - - /* find out tailq entry */ - TAILQ_FOREACH(te, ring_list, next) { - if (te->data == (void *) r) - break; - } - - if (te == NULL) { - rte_mcfg_tailq_write_unlock(); - return; - } - - TAILQ_REMOVE(ring_list, te, next); - - rte_mcfg_tailq_write_unlock(); - - rte_free(te); + rte_ring_free((struct rte_ring *)r); } diff --git a/lib/librte_eventdev/rte_event_ring.h b/lib/librte_eventdev/rte_event_ring.h index 827a3209e..c0861b0ec 100644 --- a/lib/librte_eventdev/rte_event_ring.h +++ b/lib/librte_eventdev/rte_event_ring.h @@ -1,5 +1,6 @@ /* SPDX-License-Identifier: BSD-3-Clause * Copyright(c) 2016-2017 Intel Corporation + * Copyright(c) 2019 Arm Limited */ /** @@ -19,6 +20,7 @@ #include #include #include +#include #include "rte_eventdev.h" #define RTE_TAILQ_EVENT_RING_NAME "RTE_EVENT_RING" @@ -88,22 +90,17 @@ rte_event_ring_enqueue_burst(struct rte_event_ring *r, const struct rte_event *events, unsigned int n, uint16_t *free_space) { - uint32_t prod_head, prod_next; - uint32_t free_entries; + unsigned int num; + uint32_t space; - n = __rte_ring_move_prod_head(&r->r, r->r.prod.single, n, - RTE_RING_QUEUE_VARIABLE, - &prod_head, &prod_next, &free_entries); - if (n == 0) - goto end; + num = rte_ring_enqueue_burst_elem(&r->r, events, + sizeof(struct rte_event), n, + &space); - ENQUEUE_PTRS(&r->r, &r[1], prod_head, events, n, struct rte_event); - - update_tail(&r->r.prod, prod_head, prod_next, r->r.prod.single, 1); -end: if (free_space != NULL) - *free_space = free_entries - n; - return n; + *free_space = space; + + return num; } /** @@ -129,23 +126,17 @@ rte_event_ring_dequeue_burst(struct rte_event_ring *r, struct rte_event *events, unsigned int n, uint16_t *available) { - uint32_t cons_head, cons_next; - uint32_t entries; - - n = __rte_ring_move_cons_head(&r->r, r->r.cons.single, n, - RTE_RING_QUEUE_VARIABLE, - &cons_head, &cons_next, &entries); - if (n == 0) - goto end; + unsigned int num; + uint32_t remaining; - DEQUEUE_PTRS(&r->r, &r[1], cons_head, events, n, struct rte_event); + num = rte_ring_dequeue_burst_elem(&r->r, events, + sizeof(struct rte_event), n, + &remaining); - update_tail(&r->r.cons, cons_head, cons_next, r->r.cons.single, 0); - -end: if (available != NULL) - *available = entries - n; - return n; + *available = remaining; + + return num; } /*