From patchwork Thu Jan 16 05:25:06 2020
X-Patchwork-Submitter: Honnappa Nagarahalli
X-Patchwork-Id: 182849
From: Honnappa Nagarahalli <honnappa.nagarahalli@arm.com>
To: olivier.matz@6wind.com, sthemmin@microsoft.com, jerinj@marvell.com,
 bruce.richardson@intel.com, david.marchand@redhat.com,
 pbhagavatula@marvell.com, konstantin.ananyev@intel.com,
 yipeng1.wang@intel.com, honnappa.nagarahalli@arm.com
Cc: dev@dpdk.org, dharmik.thakkar@arm.com, ruifeng.wang@arm.com,
 gavin.hu@arm.com, nd@arm.com
Date: Wed, 15 Jan 2020 23:25:06 -0600
Message-Id: <20200116052511.8557-2-honnappa.nagarahalli@arm.com>
In-Reply-To: <20200116052511.8557-1-honnappa.nagarahalli@arm.com>
References: <20190906190510.11146-1-honnappa.nagarahalli@arm.com>
 <20200116052511.8557-1-honnappa.nagarahalli@arm.com>
Subject: [dpdk-dev] [PATCH v9 1/6] test/ring: use division for cycle count calculation

Use division instead of modulo operation to calculate more
accurate cycle count.
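The change below replaces the integer right shift with floating-point
division, so the per-iteration average keeps its fractional part instead
of being truncated. A minimal standalone sketch of the difference, using
hypothetical cycle counts (not from the patch):

#include <inttypes.h>
#include <stdio.h>
#include <stdint.h>

int main(void)
{
	const unsigned int iter_shift = 24;
	const uint64_t iterations = 1ULL << iter_shift;
	/* hypothetical total: 1.5 cycles per iteration */
	const uint64_t cycles = 3 * (iterations / 2);

	/* old style: shift truncates the fraction, prints 1 */
	printf("shift: %" PRIu64 "\n", cycles >> iter_shift);
	/* new style: division keeps it, prints 1.50 */
	printf("div:   %.2f\n", (double)cycles / iterations);
	return 0;
}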
Signed-off-by: Honnappa Nagarahalli <honnappa.nagarahalli@arm.com>
Acked-by: Olivier Matz <olivier.matz@6wind.com>
---
 app/test/test_ring_perf.c | 22 ++++++++++++----------
 1 file changed, 12 insertions(+), 10 deletions(-)

--
2.17.1

diff --git a/app/test/test_ring_perf.c b/app/test/test_ring_perf.c
index 70ee46ffe..6c2aca483 100644
--- a/app/test/test_ring_perf.c
+++ b/app/test/test_ring_perf.c
@@ -357,10 +357,10 @@ test_single_enqueue_dequeue(struct rte_ring *r)
 	}
 	const uint64_t mc_end = rte_rdtsc();
 
-	printf("SP/SC single enq/dequeue: %"PRIu64"\n",
-			(sc_end-sc_start) >> iter_shift);
-	printf("MP/MC single enq/dequeue: %"PRIu64"\n",
-			(mc_end-mc_start) >> iter_shift);
+	printf("SP/SC single enq/dequeue: %.2F\n",
+			((double)(sc_end-sc_start)) / iterations);
+	printf("MP/MC single enq/dequeue: %.2F\n",
+			((double)(mc_end-mc_start)) / iterations);
 }
 
 /*
@@ -395,13 +395,15 @@ test_burst_enqueue_dequeue(struct rte_ring *r)
 	}
 	const uint64_t mc_end = rte_rdtsc();
 
-	uint64_t mc_avg = ((mc_end-mc_start) >> iter_shift) / bulk_sizes[sz];
-	uint64_t sc_avg = ((sc_end-sc_start) >> iter_shift) / bulk_sizes[sz];
+	double mc_avg = ((double)(mc_end-mc_start) / iterations) /
+			bulk_sizes[sz];
+	double sc_avg = ((double)(sc_end-sc_start) / iterations) /
+			bulk_sizes[sz];
 
-	printf("SP/SC burst enq/dequeue (size: %u): %"PRIu64"\n", bulk_sizes[sz],
-			sc_avg);
-	printf("MP/MC burst enq/dequeue (size: %u): %"PRIu64"\n", bulk_sizes[sz],
-			mc_avg);
+	printf("SP/SC burst enq/dequeue (size: %u): %.2F\n",
+			bulk_sizes[sz], sc_avg);
+	printf("MP/MC burst enq/dequeue (size: %u): %.2F\n",
+			bulk_sizes[sz], mc_avg);
 	}
 }

From patchwork Thu Jan 16 05:25:07 2020
X-Patchwork-Submitter: Honnappa Nagarahalli
X-Patchwork-Id: 182850
From: Honnappa Nagarahalli <honnappa.nagarahalli@arm.com>
To: olivier.matz@6wind.com, sthemmin@microsoft.com, jerinj@marvell.com,
 bruce.richardson@intel.com, david.marchand@redhat.com,
 pbhagavatula@marvell.com, konstantin.ananyev@intel.com,
 yipeng1.wang@intel.com, honnappa.nagarahalli@arm.com
Cc: dev@dpdk.org, dharmik.thakkar@arm.com, ruifeng.wang@arm.com,
 gavin.hu@arm.com, nd@arm.com
Date: Wed, 15 Jan 2020 23:25:07 -0600
Message-Id: <20200116052511.8557-3-honnappa.nagarahalli@arm.com>
In-Reply-To: <20200116052511.8557-1-honnappa.nagarahalli@arm.com>
References: <20190906190510.11146-1-honnappa.nagarahalli@arm.com>
 <20200116052511.8557-1-honnappa.nagarahalli@arm.com>
Subject: [dpdk-dev] [PATCH v9 2/6] lib/ring: apis to support configurable element size

Current APIs assume ring elements to be pointers. However, in many
use cases, the size can be different. Add new APIs to support
configurable ring element sizes.
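To make the new interface concrete, here is a minimal usage sketch
against the APIs declared in this patch; the 16-byte struct, ring name
and ring size are illustrative only:

#include <rte_ring_elem.h>

/* a 16B ring element; esize must be a multiple of 4 */
struct flow_key {
	uint64_t id;
	uint32_t hash;
	uint32_t len;
};

static int
flow_ring_demo(void)
{
	struct flow_key in = {1, 2, 3}, out;
	struct rte_ring *r;

	/* a ring of 1024 16-byte elements, stored by value */
	r = rte_ring_create_elem("flow_ring", sizeof(struct flow_key),
			1024, SOCKET_ID_ANY, 0);
	if (r == NULL)
		return -1;

	/* elements are copied into and out of the ring */
	if (rte_ring_enqueue_bulk_elem(r, &in, sizeof(in), 1, NULL) != 1 ||
	    rte_ring_dequeue_bulk_elem(r, &out, sizeof(out), 1, NULL) != 1) {
		rte_ring_free(r);
		return -1;
	}

	rte_ring_free(r);
	return 0;
}

Unlike the pointer-based APIs, no per-object allocation is needed: the
ring itself owns a copy of each element.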
Signed-off-by: Honnappa Nagarahalli <honnappa.nagarahalli@arm.com>
Reviewed-by: Dharmik Thakkar <dharmik.thakkar@arm.com>
Reviewed-by: Gavin Hu <gavin.hu@arm.com>
Reviewed-by: Ruifeng Wang <ruifeng.wang@arm.com>
---
 lib/librte_ring/Makefile             |    3 +-
 lib/librte_ring/meson.build          |    4 +
 lib/librte_ring/rte_ring.c           |   41 +-
 lib/librte_ring/rte_ring.h           |    1 +
 lib/librte_ring/rte_ring_elem.h      | 1003 ++++++++++++++++++++++++++
 lib/librte_ring/rte_ring_version.map |    2 +
 6 files changed, 1045 insertions(+), 9 deletions(-)
 create mode 100644 lib/librte_ring/rte_ring_elem.h

--
2.17.1

diff --git a/lib/librte_ring/Makefile b/lib/librte_ring/Makefile
index 22454b084..917c560ad 100644
--- a/lib/librte_ring/Makefile
+++ b/lib/librte_ring/Makefile
@@ -6,7 +6,7 @@ include $(RTE_SDK)/mk/rte.vars.mk
 # library name
 LIB = librte_ring.a
 
-CFLAGS += $(WERROR_FLAGS) -I$(SRCDIR) -O3
+CFLAGS += $(WERROR_FLAGS) -I$(SRCDIR) -O3 -DALLOW_EXPERIMENTAL_API
 LDLIBS += -lrte_eal
 
 EXPORT_MAP := rte_ring_version.map
@@ -16,6 +16,7 @@ SRCS-$(CONFIG_RTE_LIBRTE_RING) := rte_ring.c
 
 # install includes
 SYMLINK-$(CONFIG_RTE_LIBRTE_RING)-include := rte_ring.h \
+					rte_ring_elem.h \
 					rte_ring_generic.h \
 					rte_ring_c11_mem.h
diff --git a/lib/librte_ring/meson.build b/lib/librte_ring/meson.build
index ca8a435e9..f2f3ccc88 100644
--- a/lib/librte_ring/meson.build
+++ b/lib/librte_ring/meson.build
@@ -3,5 +3,9 @@
 
 sources = files('rte_ring.c')
 headers = files('rte_ring.h',
+		'rte_ring_elem.h',
 		'rte_ring_c11_mem.h',
 		'rte_ring_generic.h')
+
+# rte_ring_create_elem and rte_ring_get_memsize_elem are experimental
+allow_experimental_apis = true
diff --git a/lib/librte_ring/rte_ring.c b/lib/librte_ring/rte_ring.c
index d9b308036..3e15dc398 100644
--- a/lib/librte_ring/rte_ring.c
+++ b/lib/librte_ring/rte_ring.c
@@ -33,6 +33,7 @@
 #include <rte_tailq.h>
 
 #include "rte_ring.h"
+#include "rte_ring_elem.h"
 
 TAILQ_HEAD(rte_ring_list, rte_tailq_entry);
 
@@ -46,23 +47,38 @@ EAL_REGISTER_TAILQ(rte_ring_tailq)
 
 /* return the size of memory occupied by a ring */
 ssize_t
-rte_ring_get_memsize(unsigned count)
+rte_ring_get_memsize_elem(unsigned int esize, unsigned int count)
 {
 	ssize_t sz;
 
+	/* Check if element size is a multiple of 4B */
+	if (esize % 4 != 0) {
+		RTE_LOG(ERR, RING, "element size is not a multiple of 4\n");
+
+		return -EINVAL;
+	}
+
 	/* count must be a power of 2 */
 	if ((!POWEROF2(count)) || (count > RTE_RING_SZ_MASK )) {
 		RTE_LOG(ERR, RING,
-			"Requested size is invalid, must be power of 2, and "
-			"do not exceed the size limit %u\n", RTE_RING_SZ_MASK);
+			"Requested number of elements is invalid, must be power of 2, and not exceed %u\n",
+			RTE_RING_SZ_MASK);
+
 		return -EINVAL;
 	}
 
-	sz = sizeof(struct rte_ring) + count * sizeof(void *);
+	sz = sizeof(struct rte_ring) + count * esize;
 	sz = RTE_ALIGN(sz, RTE_CACHE_LINE_SIZE);
 	return sz;
 }
 
+/* return the size of memory occupied by a ring */
+ssize_t
+rte_ring_get_memsize(unsigned count)
+{
+	return rte_ring_get_memsize_elem(sizeof(void *), count);
+}
+
 void
 rte_ring_reset(struct rte_ring *r)
 {
@@ -114,10 +130,10 @@ rte_ring_init(struct rte_ring *r, const char *name, unsigned count,
 	return 0;
 }
 
-/* create the ring */
+/* create the ring for a given element size */
 struct rte_ring *
-rte_ring_create(const char *name, unsigned count, int socket_id,
-		unsigned flags)
+rte_ring_create_elem(const char *name, unsigned int esize, unsigned int count,
+		int socket_id, unsigned int flags)
 {
 	char mz_name[RTE_MEMZONE_NAMESIZE];
 	struct rte_ring *r;
@@ -135,7 +151,7 @@ rte_ring_create(const char *name, unsigned count, int socket_id,
 	if (flags & RING_F_EXACT_SZ)
 		count = rte_align32pow2(count + 1);
 
-	ring_size = rte_ring_get_memsize(count);
+	ring_size = rte_ring_get_memsize_elem(esize, count);
 	if (ring_size < 0) {
 		rte_errno = ring_size;
 		return NULL;
@@ -182,6 +198,15 @@ rte_ring_create(const char *name, unsigned count, int socket_id,
 	return r;
 }
 
+/* create the ring */
+struct rte_ring *
+rte_ring_create(const char *name, unsigned count, int socket_id,
+		unsigned flags)
+{
+	return rte_ring_create_elem(name, sizeof(void *), count, socket_id,
+		flags);
+}
+
 /* free the ring */
 void
 rte_ring_free(struct rte_ring *r)
diff --git a/lib/librte_ring/rte_ring.h b/lib/librte_ring/rte_ring.h
index 2a9f768a1..18fc5d845 100644
--- a/lib/librte_ring/rte_ring.h
+++ b/lib/librte_ring/rte_ring.h
@@ -216,6 +216,7 @@ int rte_ring_init(struct rte_ring *r, const char *name, unsigned count,
  */
 struct rte_ring *rte_ring_create(const char *name, unsigned count,
 				 int socket_id, unsigned flags);
+
 /**
  * De-allocate all memory used by the ring.
  *
diff --git a/lib/librte_ring/rte_ring_elem.h b/lib/librte_ring/rte_ring_elem.h
new file mode 100644
index 000000000..15d79bf2a
--- /dev/null
+++ b/lib/librte_ring/rte_ring_elem.h
@@ -0,0 +1,1003 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ *
+ * Copyright (c) 2019 Arm Limited
+ * Copyright (c) 2010-2017 Intel Corporation
+ * Copyright (c) 2007-2009 Kip Macy kmacy@freebsd.org
+ * All rights reserved.
+ * Derived from FreeBSD's bufring.h
+ * Used as BSD-3 Licensed with permission from Kip Macy.
+ */
+
+#ifndef _RTE_RING_ELEM_H_
+#define _RTE_RING_ELEM_H_
+
+/**
+ * @file
+ * RTE Ring with user defined element size
+ */
+
+#ifdef __cplusplus
+extern "C" {
+#endif
+
+#include <stdio.h>
+#include <stdint.h>
+#include <string.h>
+#include <sys/queue.h>
+#include <errno.h>
+#include <rte_common.h>
+#include <rte_config.h>
+#include <rte_memory.h>
+#include <rte_lcore.h>
+#include <rte_atomic.h>
+#include <rte_branch_prediction.h>
+#include <rte_memzone.h>
+#include <rte_pause.h>
+
+#include "rte_ring.h"
+
+/**
+ * @warning
+ * @b EXPERIMENTAL: this API may change without prior notice
+ *
+ * Calculate the memory size needed for a ring with given element size
+ *
+ * This function returns the number of bytes needed for a ring, given
+ * the number of elements in it and the size of the element. This value
+ * is the sum of the size of the structure rte_ring and the size of the
+ * memory needed for storing the elements. The value is aligned to a cache
+ * line size.
+ *
+ * @param esize
+ *   The size of ring element, in bytes. It must be a multiple of 4.
+ * @param count
+ *   The number of elements in the ring (must be a power of 2).
+ * @return
+ *   - The memory size needed for the ring on success.
+ *   - -EINVAL - esize is not a multiple of 4 or count provided is not a
+ *     power of 2.
+ */
+__rte_experimental
+ssize_t rte_ring_get_memsize_elem(unsigned int esize, unsigned int count);
+
+/**
+ * @warning
+ * @b EXPERIMENTAL: this API may change without prior notice
+ *
+ * Create a new ring named *name* that stores elements with given size.
+ *
+ * This function uses ``memzone_reserve()`` to allocate memory. Then it
+ * calls rte_ring_init() to initialize an empty ring.
+ *
+ * The new ring size is set to *count*, which must be a power of
+ * two. Water marking is disabled by default. The real usable ring size
+ * is *count-1* instead of *count* to differentiate a free ring from an
+ * empty ring.
+ *
+ * The ring is added in RTE_TAILQ_RING list.
+ *
+ * @param name
+ *   The name of the ring.
+ * @param esize
+ *   The size of ring element, in bytes. It must be a multiple of 4.
+ * @param count
+ *   The number of elements in the ring (must be a power of 2).
+ * @param socket_id
+ *   The *socket_id* argument is the socket identifier in case of NUMA.
The value can be *SOCKET_ID_ANY* if there is no NUMA + * constraint for the reserved zone. + * @param flags + * An OR of the following: + * - RING_F_SP_ENQ: If this flag is set, the default behavior when + * using ``rte_ring_enqueue()`` or ``rte_ring_enqueue_bulk()`` + * is "single-producer". Otherwise, it is "multi-producers". + * - RING_F_SC_DEQ: If this flag is set, the default behavior when + * using ``rte_ring_dequeue()`` or ``rte_ring_dequeue_bulk()`` + * is "single-consumer". Otherwise, it is "multi-consumers". + * @return + * On success, the pointer to the new allocated ring. NULL on error with + * rte_errno set appropriately. Possible errno values include: + * - E_RTE_NO_CONFIG - function could not get pointer to rte_config structure + * - E_RTE_SECONDARY - function was called from a secondary process instance + * - EINVAL - esize is not a multiple of 4 or count provided is not a + * power of 2. + * - ENOSPC - the maximum number of memzones has already been allocated + * - EEXIST - a memzone with the same name already exists + * - ENOMEM - no appropriate memory area found in which to create memzone + */ +__rte_experimental +struct rte_ring *rte_ring_create_elem(const char *name, unsigned int esize, + unsigned int count, int socket_id, unsigned int flags); + +static __rte_always_inline void +enqueue_elems_32(struct rte_ring *r, const uint32_t size, uint32_t idx, + const void *obj_table, uint32_t n) +{ + unsigned int i; + uint32_t *ring = (uint32_t *)&r[1]; + const uint32_t *obj = (const uint32_t *)obj_table; + if (likely(idx + n < size)) { + for (i = 0; i < (n & ~0x7); i += 8, idx += 8) { + ring[idx] = obj[i]; + ring[idx + 1] = obj[i + 1]; + ring[idx + 2] = obj[i + 2]; + ring[idx + 3] = obj[i + 3]; + ring[idx + 4] = obj[i + 4]; + ring[idx + 5] = obj[i + 5]; + ring[idx + 6] = obj[i + 6]; + ring[idx + 7] = obj[i + 7]; + } + switch (n & 0x7) { + case 7: + ring[idx++] = obj[i++]; /* fallthrough */ + case 6: + ring[idx++] = obj[i++]; /* fallthrough */ + case 5: + ring[idx++] = obj[i++]; /* fallthrough */ + case 4: + ring[idx++] = obj[i++]; /* fallthrough */ + case 3: + ring[idx++] = obj[i++]; /* fallthrough */ + case 2: + ring[idx++] = obj[i++]; /* fallthrough */ + case 1: + ring[idx++] = obj[i++]; /* fallthrough */ + } + } else { + for (i = 0; idx < size; i++, idx++) + ring[idx] = obj[i]; + /* Start at the beginning */ + for (idx = 0; i < n; i++, idx++) + ring[idx] = obj[i]; + } +} + +static __rte_always_inline void +enqueue_elems_64(struct rte_ring *r, uint32_t prod_head, + const void *obj_table, uint32_t n) +{ + unsigned int i; + const uint32_t size = r->size; + uint32_t idx = prod_head & r->mask; + uint64_t *ring = (uint64_t *)&r[1]; + const uint64_t *obj = (const uint64_t *)obj_table; + if (likely(idx + n < size)) { + for (i = 0; i < (n & ~0x3); i += 4, idx += 4) { + ring[idx] = obj[i]; + ring[idx + 1] = obj[i + 1]; + ring[idx + 2] = obj[i + 2]; + ring[idx + 3] = obj[i + 3]; + } + switch (n & 0x3) { + case 3: + ring[idx++] = obj[i++]; /* fallthrough */ + case 2: + ring[idx++] = obj[i++]; /* fallthrough */ + case 1: + ring[idx++] = obj[i++]; + } + } else { + for (i = 0; idx < size; i++, idx++) + ring[idx] = obj[i]; + /* Start at the beginning */ + for (idx = 0; i < n; i++, idx++) + ring[idx] = obj[i]; + } +} + +static __rte_always_inline void +enqueue_elems_128(struct rte_ring *r, uint32_t prod_head, + const void *obj_table, uint32_t n) +{ + unsigned int i; + const uint32_t size = r->size; + uint32_t idx = prod_head & r->mask; + rte_int128_t *ring = (rte_int128_t *)&r[1]; + const 
rte_int128_t *obj = (const rte_int128_t *)obj_table; + if (likely(idx + n < size)) { + for (i = 0; i < (n & ~0x1); i += 2, idx += 2) + memcpy((void *)(ring + idx), + (const void *)(obj + i), 32); + switch (n & 0x1) { + case 1: + memcpy((void *)(ring + idx), + (const void *)(obj + i), 16); + } + } else { + for (i = 0; idx < size; i++, idx++) + memcpy((void *)(ring + idx), + (const void *)(obj + i), 16); + /* Start at the beginning */ + for (idx = 0; i < n; i++, idx++) + memcpy((void *)(ring + idx), + (const void *)(obj + i), 16); + } +} + +/* the actual enqueue of elements on the ring. + * Placed here since identical code needed in both + * single and multi producer enqueue functions. + */ +static __rte_always_inline void +enqueue_elems(struct rte_ring *r, uint32_t prod_head, const void *obj_table, + uint32_t esize, uint32_t num) +{ + /* 8B and 16B copies implemented individually to retain + * the current performance. + */ + if (esize == 8) + enqueue_elems_64(r, prod_head, obj_table, num); + else if (esize == 16) + enqueue_elems_128(r, prod_head, obj_table, num); + else { + uint32_t idx, scale, nr_idx, nr_num, nr_size; + + /* Normalize to uint32_t */ + scale = esize / sizeof(uint32_t); + nr_num = num * scale; + idx = prod_head & r->mask; + nr_idx = idx * scale; + nr_size = r->size * scale; + enqueue_elems_32(r, nr_size, nr_idx, obj_table, nr_num); + } +} + +static __rte_always_inline void +dequeue_elems_32(struct rte_ring *r, const uint32_t size, uint32_t idx, + void *obj_table, uint32_t n) +{ + unsigned int i; + uint32_t *ring = (uint32_t *)&r[1]; + uint32_t *obj = (uint32_t *)obj_table; + if (likely(idx + n < size)) { + for (i = 0; i < (n & ~0x7); i += 8, idx += 8) { + obj[i] = ring[idx]; + obj[i + 1] = ring[idx + 1]; + obj[i + 2] = ring[idx + 2]; + obj[i + 3] = ring[idx + 3]; + obj[i + 4] = ring[idx + 4]; + obj[i + 5] = ring[idx + 5]; + obj[i + 6] = ring[idx + 6]; + obj[i + 7] = ring[idx + 7]; + } + switch (n & 0x7) { + case 7: + obj[i++] = ring[idx++]; /* fallthrough */ + case 6: + obj[i++] = ring[idx++]; /* fallthrough */ + case 5: + obj[i++] = ring[idx++]; /* fallthrough */ + case 4: + obj[i++] = ring[idx++]; /* fallthrough */ + case 3: + obj[i++] = ring[idx++]; /* fallthrough */ + case 2: + obj[i++] = ring[idx++]; /* fallthrough */ + case 1: + obj[i++] = ring[idx++]; /* fallthrough */ + } + } else { + for (i = 0; idx < size; i++, idx++) + obj[i] = ring[idx]; + /* Start at the beginning */ + for (idx = 0; i < n; i++, idx++) + obj[i] = ring[idx]; + } +} + +static __rte_always_inline void +dequeue_elems_64(struct rte_ring *r, uint32_t prod_head, + void *obj_table, uint32_t n) +{ + unsigned int i; + const uint32_t size = r->size; + uint32_t idx = prod_head & r->mask; + uint64_t *ring = (uint64_t *)&r[1]; + uint64_t *obj = (uint64_t *)obj_table; + if (likely(idx + n < size)) { + for (i = 0; i < (n & ~0x3); i += 4, idx += 4) { + obj[i] = ring[idx]; + obj[i + 1] = ring[idx + 1]; + obj[i + 2] = ring[idx + 2]; + obj[i + 3] = ring[idx + 3]; + } + switch (n & 0x3) { + case 3: + obj[i++] = ring[idx++]; /* fallthrough */ + case 2: + obj[i++] = ring[idx++]; /* fallthrough */ + case 1: + obj[i++] = ring[idx++]; /* fallthrough */ + } + } else { + for (i = 0; idx < size; i++, idx++) + obj[i] = ring[idx]; + /* Start at the beginning */ + for (idx = 0; i < n; i++, idx++) + obj[i] = ring[idx]; + } +} + +static __rte_always_inline void +dequeue_elems_128(struct rte_ring *r, uint32_t prod_head, + void *obj_table, uint32_t n) +{ + unsigned int i; + const uint32_t size = r->size; + uint32_t idx = prod_head 
& r->mask; + rte_int128_t *ring = (rte_int128_t *)&r[1]; + rte_int128_t *obj = (rte_int128_t *)obj_table; + if (likely(idx + n < size)) { + for (i = 0; i < (n & ~0x1); i += 2, idx += 2) + memcpy((void *)(obj + i), (void *)(ring + idx), 32); + switch (n & 0x1) { + case 1: + memcpy((void *)(obj + i), (void *)(ring + idx), 16); + } + } else { + for (i = 0; idx < size; i++, idx++) + memcpy((void *)(obj + i), (void *)(ring + idx), 16); + /* Start at the beginning */ + for (idx = 0; i < n; i++, idx++) + memcpy((void *)(obj + i), (void *)(ring + idx), 16); + } +} + +/* the actual dequeue of elements from the ring. + * Placed here since identical code needed in both + * single and multi producer enqueue functions. + */ +static __rte_always_inline void +dequeue_elems(struct rte_ring *r, uint32_t cons_head, void *obj_table, + uint32_t esize, uint32_t num) +{ + /* 8B and 16B copies implemented individually to retain + * the current performance. + */ + if (esize == 8) + dequeue_elems_64(r, cons_head, obj_table, num); + else if (esize == 16) + dequeue_elems_128(r, cons_head, obj_table, num); + else { + uint32_t idx, scale, nr_idx, nr_num, nr_size; + + /* Normalize to uint32_t */ + scale = esize / sizeof(uint32_t); + nr_num = num * scale; + idx = cons_head & r->mask; + nr_idx = idx * scale; + nr_size = r->size * scale; + dequeue_elems_32(r, nr_size, nr_idx, obj_table, nr_num); + } +} + +/* Between load and load. there might be cpu reorder in weak model + * (powerpc/arm). + * There are 2 choices for the users + * 1.use rmb() memory barrier + * 2.use one-direction load_acquire/store_release barrier,defined by + * CONFIG_RTE_USE_C11_MEM_MODEL=y + * It depends on performance test results. + * By default, move common functions to rte_ring_generic.h + */ +#ifdef RTE_USE_C11_MEM_MODEL +#include "rte_ring_c11_mem.h" +#else +#include "rte_ring_generic.h" +#endif + +/** + * @internal Enqueue several objects on the ring + * + * @param r + * A pointer to the ring structure. + * @param obj_table + * A pointer to a table of void * pointers (objects). + * @param esize + * The size of ring element, in bytes. It must be a multiple of 4. + * This must be the same value used while creating the ring. Otherwise + * the results are undefined. + * @param n + * The number of objects to add in the ring from the obj_table. + * @param behavior + * RTE_RING_QUEUE_FIXED: Enqueue a fixed number of items from a ring + * RTE_RING_QUEUE_VARIABLE: Enqueue as many items as possible from ring + * @param is_sp + * Indicates whether to use single producer or multi-producer head update + * @param free_space + * returns the amount of space after the enqueue operation has finished + * @return + * Actual number of objects enqueued. + * If behavior == RTE_RING_QUEUE_FIXED, this will be 0 or n only. + */ +static __rte_always_inline unsigned int +__rte_ring_do_enqueue_elem(struct rte_ring *r, const void *obj_table, + unsigned int esize, unsigned int n, + enum rte_ring_queue_behavior behavior, unsigned int is_sp, + unsigned int *free_space) +{ + uint32_t prod_head, prod_next; + uint32_t free_entries; + + n = __rte_ring_move_prod_head(r, is_sp, n, behavior, + &prod_head, &prod_next, &free_entries); + if (n == 0) + goto end; + + enqueue_elems(r, prod_head, obj_table, esize, n); + + update_tail(&r->prod, prod_head, prod_next, is_sp, 1); +end: + if (free_space != NULL) + *free_space = free_entries - n; + return n; +} + +/** + * @internal Dequeue several objects from the ring + * + * @param r + * A pointer to the ring structure. 
+ * @param obj_table + * A pointer to a table of void * pointers (objects). + * @param esize + * The size of ring element, in bytes. It must be a multiple of 4. + * This must be the same value used while creating the ring. Otherwise + * the results are undefined. + * @param n + * The number of objects to pull from the ring. + * @param behavior + * RTE_RING_QUEUE_FIXED: Dequeue a fixed number of items from a ring + * RTE_RING_QUEUE_VARIABLE: Dequeue as many items as possible from ring + * @param is_sc + * Indicates whether to use single consumer or multi-consumer head update + * @param available + * returns the number of remaining ring entries after the dequeue has finished + * @return + * - Actual number of objects dequeued. + * If behavior == RTE_RING_QUEUE_FIXED, this will be 0 or n only. + */ +static __rte_always_inline unsigned int +__rte_ring_do_dequeue_elem(struct rte_ring *r, void *obj_table, + unsigned int esize, unsigned int n, + enum rte_ring_queue_behavior behavior, unsigned int is_sc, + unsigned int *available) +{ + uint32_t cons_head, cons_next; + uint32_t entries; + + n = __rte_ring_move_cons_head(r, (int)is_sc, n, behavior, + &cons_head, &cons_next, &entries); + if (n == 0) + goto end; + + dequeue_elems(r, cons_head, obj_table, esize, n); + + update_tail(&r->cons, cons_head, cons_next, is_sc, 0); + +end: + if (available != NULL) + *available = entries - n; + return n; +} + +/** + * Enqueue several objects on the ring (multi-producers safe). + * + * This function uses a "compare and set" instruction to move the + * producer index atomically. + * + * @param r + * A pointer to the ring structure. + * @param obj_table + * A pointer to a table of void * pointers (objects). + * @param esize + * The size of ring element, in bytes. It must be a multiple of 4. + * This must be the same value used while creating the ring. Otherwise + * the results are undefined. + * @param n + * The number of objects to add in the ring from the obj_table. + * @param free_space + * if non-NULL, returns the amount of space in the ring after the + * enqueue operation has finished. + * @return + * The number of objects enqueued, either 0 or n + */ +static __rte_always_inline unsigned int +rte_ring_mp_enqueue_bulk_elem(struct rte_ring *r, const void *obj_table, + unsigned int esize, unsigned int n, unsigned int *free_space) +{ + return __rte_ring_do_enqueue_elem(r, obj_table, esize, n, + RTE_RING_QUEUE_FIXED, __IS_MP, free_space); +} + +/** + * Enqueue several objects on a ring + * + * @warning This API is NOT multi-producers safe + * + * @param r + * A pointer to the ring structure. + * @param obj_table + * A pointer to a table of void * pointers (objects). + * @param esize + * The size of ring element, in bytes. It must be a multiple of 4. + * This must be the same value used while creating the ring. Otherwise + * the results are undefined. + * @param n + * The number of objects to add in the ring from the obj_table. + * @param free_space + * if non-NULL, returns the amount of space in the ring after the + * enqueue operation has finished. + * @return + * The number of objects enqueued, either 0 or n + */ +static __rte_always_inline unsigned int +rte_ring_sp_enqueue_bulk_elem(struct rte_ring *r, const void *obj_table, + unsigned int esize, unsigned int n, unsigned int *free_space) +{ + return __rte_ring_do_enqueue_elem(r, obj_table, esize, n, + RTE_RING_QUEUE_FIXED, __IS_SP, free_space); +} + +/** + * Enqueue several objects on a ring. 
+ * + * This function calls the multi-producer or the single-producer + * version depending on the default behavior that was specified at + * ring creation time (see flags). + * + * @param r + * A pointer to the ring structure. + * @param obj_table + * A pointer to a table of void * pointers (objects). + * @param esize + * The size of ring element, in bytes. It must be a multiple of 4. + * This must be the same value used while creating the ring. Otherwise + * the results are undefined. + * @param n + * The number of objects to add in the ring from the obj_table. + * @param free_space + * if non-NULL, returns the amount of space in the ring after the + * enqueue operation has finished. + * @return + * The number of objects enqueued, either 0 or n + */ +static __rte_always_inline unsigned int +rte_ring_enqueue_bulk_elem(struct rte_ring *r, const void *obj_table, + unsigned int esize, unsigned int n, unsigned int *free_space) +{ + return __rte_ring_do_enqueue_elem(r, obj_table, esize, n, + RTE_RING_QUEUE_FIXED, r->prod.single, free_space); +} + +/** + * Enqueue one object on a ring (multi-producers safe). + * + * This function uses a "compare and set" instruction to move the + * producer index atomically. + * + * @param r + * A pointer to the ring structure. + * @param obj + * A pointer to the object to be added. + * @param esize + * The size of ring element, in bytes. It must be a multiple of 4. + * This must be the same value used while creating the ring. Otherwise + * the results are undefined. + * @return + * - 0: Success; objects enqueued. + * - -ENOBUFS: Not enough room in the ring to enqueue; no object is enqueued. + */ +static __rte_always_inline int +rte_ring_mp_enqueue_elem(struct rte_ring *r, void *obj, unsigned int esize) +{ + return rte_ring_mp_enqueue_bulk_elem(r, obj, esize, 1, NULL) ? 0 : + -ENOBUFS; +} + +/** + * Enqueue one object on a ring + * + * @warning This API is NOT multi-producers safe + * + * @param r + * A pointer to the ring structure. + * @param obj + * A pointer to the object to be added. + * @param esize + * The size of ring element, in bytes. It must be a multiple of 4. + * This must be the same value used while creating the ring. Otherwise + * the results are undefined. + * @return + * - 0: Success; objects enqueued. + * - -ENOBUFS: Not enough room in the ring to enqueue; no object is enqueued. + */ +static __rte_always_inline int +rte_ring_sp_enqueue_elem(struct rte_ring *r, void *obj, unsigned int esize) +{ + return rte_ring_sp_enqueue_bulk_elem(r, obj, esize, 1, NULL) ? 0 : + -ENOBUFS; +} + +/** + * Enqueue one object on a ring. + * + * This function calls the multi-producer or the single-producer + * version, depending on the default behaviour that was specified at + * ring creation time (see flags). + * + * @param r + * A pointer to the ring structure. + * @param obj + * A pointer to the object to be added. + * @param esize + * The size of ring element, in bytes. It must be a multiple of 4. + * This must be the same value used while creating the ring. Otherwise + * the results are undefined. + * @return + * - 0: Success; objects enqueued. + * - -ENOBUFS: Not enough room in the ring to enqueue; no object is enqueued. + */ +static __rte_always_inline int +rte_ring_enqueue_elem(struct rte_ring *r, void *obj, unsigned int esize) +{ + return rte_ring_enqueue_bulk_elem(r, obj, esize, 1, NULL) ? 0 : + -ENOBUFS; +} + +/** + * Dequeue several objects from a ring (multi-consumers safe). 
+ * + * This function uses a "compare and set" instruction to move the + * consumer index atomically. + * + * @param r + * A pointer to the ring structure. + * @param obj_table + * A pointer to a table of void * pointers (objects) that will be filled. + * @param esize + * The size of ring element, in bytes. It must be a multiple of 4. + * This must be the same value used while creating the ring. Otherwise + * the results are undefined. + * @param n + * The number of objects to dequeue from the ring to the obj_table. + * @param available + * If non-NULL, returns the number of remaining ring entries after the + * dequeue has finished. + * @return + * The number of objects dequeued, either 0 or n + */ +static __rte_always_inline unsigned int +rte_ring_mc_dequeue_bulk_elem(struct rte_ring *r, void *obj_table, + unsigned int esize, unsigned int n, unsigned int *available) +{ + return __rte_ring_do_dequeue_elem(r, obj_table, esize, n, + RTE_RING_QUEUE_FIXED, __IS_MC, available); +} + +/** + * Dequeue several objects from a ring (NOT multi-consumers safe). + * + * @param r + * A pointer to the ring structure. + * @param obj_table + * A pointer to a table of void * pointers (objects) that will be filled. + * @param esize + * The size of ring element, in bytes. It must be a multiple of 4. + * This must be the same value used while creating the ring. Otherwise + * the results are undefined. + * @param n + * The number of objects to dequeue from the ring to the obj_table, + * must be strictly positive. + * @param available + * If non-NULL, returns the number of remaining ring entries after the + * dequeue has finished. + * @return + * The number of objects dequeued, either 0 or n + */ +static __rte_always_inline unsigned int +rte_ring_sc_dequeue_bulk_elem(struct rte_ring *r, void *obj_table, + unsigned int esize, unsigned int n, unsigned int *available) +{ + return __rte_ring_do_dequeue_elem(r, obj_table, esize, n, + RTE_RING_QUEUE_FIXED, __IS_SC, available); +} + +/** + * Dequeue several objects from a ring. + * + * This function calls the multi-consumers or the single-consumer + * version, depending on the default behaviour that was specified at + * ring creation time (see flags). + * + * @param r + * A pointer to the ring structure. + * @param obj_table + * A pointer to a table of void * pointers (objects) that will be filled. + * @param esize + * The size of ring element, in bytes. It must be a multiple of 4. + * This must be the same value used while creating the ring. Otherwise + * the results are undefined. + * @param n + * The number of objects to dequeue from the ring to the obj_table. + * @param available + * If non-NULL, returns the number of remaining ring entries after the + * dequeue has finished. + * @return + * The number of objects dequeued, either 0 or n + */ +static __rte_always_inline unsigned int +rte_ring_dequeue_bulk_elem(struct rte_ring *r, void *obj_table, + unsigned int esize, unsigned int n, unsigned int *available) +{ + return __rte_ring_do_dequeue_elem(r, obj_table, esize, n, + RTE_RING_QUEUE_FIXED, r->cons.single, available); +} + +/** + * Dequeue one object from a ring (multi-consumers safe). + * + * This function uses a "compare and set" instruction to move the + * consumer index atomically. + * + * @param r + * A pointer to the ring structure. + * @param obj_p + * A pointer to a void * pointer (object) that will be filled. + * @param esize + * The size of ring element, in bytes. It must be a multiple of 4. + * This must be the same value used while creating the ring. 
Otherwise + * the results are undefined. + * @return + * - 0: Success; objects dequeued. + * - -ENOENT: Not enough entries in the ring to dequeue; no object is + * dequeued. + */ +static __rte_always_inline int +rte_ring_mc_dequeue_elem(struct rte_ring *r, void *obj_p, + unsigned int esize) +{ + return rte_ring_mc_dequeue_bulk_elem(r, obj_p, esize, 1, NULL) ? 0 : + -ENOENT; +} + +/** + * Dequeue one object from a ring (NOT multi-consumers safe). + * + * @param r + * A pointer to the ring structure. + * @param obj_p + * A pointer to a void * pointer (object) that will be filled. + * @param esize + * The size of ring element, in bytes. It must be a multiple of 4. + * This must be the same value used while creating the ring. Otherwise + * the results are undefined. + * @return + * - 0: Success; objects dequeued. + * - -ENOENT: Not enough entries in the ring to dequeue, no object is + * dequeued. + */ +static __rte_always_inline int +rte_ring_sc_dequeue_elem(struct rte_ring *r, void *obj_p, + unsigned int esize) +{ + return rte_ring_sc_dequeue_bulk_elem(r, obj_p, esize, 1, NULL) ? 0 : + -ENOENT; +} + +/** + * Dequeue one object from a ring. + * + * This function calls the multi-consumers or the single-consumer + * version depending on the default behaviour that was specified at + * ring creation time (see flags). + * + * @param r + * A pointer to the ring structure. + * @param obj_p + * A pointer to a void * pointer (object) that will be filled. + * @param esize + * The size of ring element, in bytes. It must be a multiple of 4. + * This must be the same value used while creating the ring. Otherwise + * the results are undefined. + * @return + * - 0: Success, objects dequeued. + * - -ENOENT: Not enough entries in the ring to dequeue, no object is + * dequeued. + */ +static __rte_always_inline int +rte_ring_dequeue_elem(struct rte_ring *r, void *obj_p, unsigned int esize) +{ + return rte_ring_dequeue_bulk_elem(r, obj_p, esize, 1, NULL) ? 0 : + -ENOENT; +} + +/** + * Enqueue several objects on the ring (multi-producers safe). + * + * This function uses a "compare and set" instruction to move the + * producer index atomically. + * + * @param r + * A pointer to the ring structure. + * @param obj_table + * A pointer to a table of void * pointers (objects). + * @param esize + * The size of ring element, in bytes. It must be a multiple of 4. + * This must be the same value used while creating the ring. Otherwise + * the results are undefined. + * @param n + * The number of objects to add in the ring from the obj_table. + * @param free_space + * if non-NULL, returns the amount of space in the ring after the + * enqueue operation has finished. + * @return + * - n: Actual number of objects enqueued. + */ +static __rte_always_inline unsigned +rte_ring_mp_enqueue_burst_elem(struct rte_ring *r, const void *obj_table, + unsigned int esize, unsigned int n, unsigned int *free_space) +{ + return __rte_ring_do_enqueue_elem(r, obj_table, esize, n, + RTE_RING_QUEUE_VARIABLE, __IS_MP, free_space); +} + +/** + * Enqueue several objects on a ring + * + * @warning This API is NOT multi-producers safe + * + * @param r + * A pointer to the ring structure. + * @param obj_table + * A pointer to a table of void * pointers (objects). + * @param esize + * The size of ring element, in bytes. It must be a multiple of 4. + * This must be the same value used while creating the ring. Otherwise + * the results are undefined. + * @param n + * The number of objects to add in the ring from the obj_table. 
+ * @param free_space + * if non-NULL, returns the amount of space in the ring after the + * enqueue operation has finished. + * @return + * - n: Actual number of objects enqueued. + */ +static __rte_always_inline unsigned +rte_ring_sp_enqueue_burst_elem(struct rte_ring *r, const void *obj_table, + unsigned int esize, unsigned int n, unsigned int *free_space) +{ + return __rte_ring_do_enqueue_elem(r, obj_table, esize, n, + RTE_RING_QUEUE_VARIABLE, __IS_SP, free_space); +} + +/** + * Enqueue several objects on a ring. + * + * This function calls the multi-producer or the single-producer + * version depending on the default behavior that was specified at + * ring creation time (see flags). + * + * @param r + * A pointer to the ring structure. + * @param obj_table + * A pointer to a table of void * pointers (objects). + * @param esize + * The size of ring element, in bytes. It must be a multiple of 4. + * This must be the same value used while creating the ring. Otherwise + * the results are undefined. + * @param n + * The number of objects to add in the ring from the obj_table. + * @param free_space + * if non-NULL, returns the amount of space in the ring after the + * enqueue operation has finished. + * @return + * - n: Actual number of objects enqueued. + */ +static __rte_always_inline unsigned +rte_ring_enqueue_burst_elem(struct rte_ring *r, const void *obj_table, + unsigned int esize, unsigned int n, unsigned int *free_space) +{ + return __rte_ring_do_enqueue_elem(r, obj_table, esize, n, + RTE_RING_QUEUE_VARIABLE, r->prod.single, free_space); +} + +/** + * Dequeue several objects from a ring (multi-consumers safe). When the request + * objects are more than the available objects, only dequeue the actual number + * of objects + * + * This function uses a "compare and set" instruction to move the + * consumer index atomically. + * + * @param r + * A pointer to the ring structure. + * @param obj_table + * A pointer to a table of void * pointers (objects) that will be filled. + * @param esize + * The size of ring element, in bytes. It must be a multiple of 4. + * This must be the same value used while creating the ring. Otherwise + * the results are undefined. + * @param n + * The number of objects to dequeue from the ring to the obj_table. + * @param available + * If non-NULL, returns the number of remaining ring entries after the + * dequeue has finished. + * @return + * - n: Actual number of objects dequeued, 0 if ring is empty + */ +static __rte_always_inline unsigned +rte_ring_mc_dequeue_burst_elem(struct rte_ring *r, void *obj_table, + unsigned int esize, unsigned int n, unsigned int *available) +{ + return __rte_ring_do_dequeue_elem(r, obj_table, esize, n, + RTE_RING_QUEUE_VARIABLE, __IS_MC, available); +} + +/** + * Dequeue several objects from a ring (NOT multi-consumers safe).When the + * request objects are more than the available objects, only dequeue the + * actual number of objects + * + * @param r + * A pointer to the ring structure. + * @param obj_table + * A pointer to a table of void * pointers (objects) that will be filled. + * @param esize + * The size of ring element, in bytes. It must be a multiple of 4. + * This must be the same value used while creating the ring. Otherwise + * the results are undefined. + * @param n + * The number of objects to dequeue from the ring to the obj_table. + * @param available + * If non-NULL, returns the number of remaining ring entries after the + * dequeue has finished. 
+ * @return
+ *   - n: Actual number of objects dequeued, 0 if ring is empty
+ */
+static __rte_always_inline unsigned
+rte_ring_sc_dequeue_burst_elem(struct rte_ring *r, void *obj_table,
+		unsigned int esize, unsigned int n, unsigned int *available)
+{
+	return __rte_ring_do_dequeue_elem(r, obj_table, esize, n,
+			RTE_RING_QUEUE_VARIABLE, __IS_SC, available);
+}
+
+/**
+ * Dequeue multiple objects from a ring up to a maximum number.
+ *
+ * This function calls the multi-consumers or the single-consumer
+ * version, depending on the default behaviour that was specified at
+ * ring creation time (see flags).
+ *
+ * @param r
+ *   A pointer to the ring structure.
+ * @param obj_table
+ *   A pointer to a table of void * pointers (objects) that will be filled.
+ * @param esize
+ *   The size of ring element, in bytes. It must be a multiple of 4.
+ *   This must be the same value used while creating the ring. Otherwise
+ *   the results are undefined.
+ * @param n
+ *   The number of objects to dequeue from the ring to the obj_table.
+ * @param available
+ *   If non-NULL, returns the number of remaining ring entries after the
+ *   dequeue has finished.
+ * @return
+ *   - Number of objects dequeued
+ */
+static __rte_always_inline unsigned int
+rte_ring_dequeue_burst_elem(struct rte_ring *r, void *obj_table,
+		unsigned int esize, unsigned int n, unsigned int *available)
+{
+	return __rte_ring_do_dequeue_elem(r, obj_table, esize, n,
+			RTE_RING_QUEUE_VARIABLE,
+			r->cons.single, available);
+}
+
+#ifdef __cplusplus
+}
+#endif
+
+#endif /* _RTE_RING_ELEM_H_ */
diff --git a/lib/librte_ring/rte_ring_version.map b/lib/librte_ring/rte_ring_version.map
index 89d84bcf4..7a5328dd5 100644
--- a/lib/librte_ring/rte_ring_version.map
+++ b/lib/librte_ring/rte_ring_version.map
@@ -15,6 +15,8 @@ DPDK_20.0 {
 EXPERIMENTAL {
 	global:
 
+	rte_ring_create_elem;
+	rte_ring_get_memsize_elem;
 	rte_ring_reset;
 };

From patchwork Thu Jan 16 05:25:08 2020
X-Patchwork-Submitter: Honnappa Nagarahalli
X-Patchwork-Id: 182851
From: Honnappa Nagarahalli <honnappa.nagarahalli@arm.com>
To: olivier.matz@6wind.com, sthemmin@microsoft.com, jerinj@marvell.com,
 bruce.richardson@intel.com, david.marchand@redhat.com,
 pbhagavatula@marvell.com, konstantin.ananyev@intel.com,
 yipeng1.wang@intel.com, honnappa.nagarahalli@arm.com
Cc: dev@dpdk.org, dharmik.thakkar@arm.com, ruifeng.wang@arm.com,
 gavin.hu@arm.com, nd@arm.com
Date: Wed, 15 Jan 2020 23:25:08 -0600
Message-Id: <20200116052511.8557-4-honnappa.nagarahalli@arm.com>
In-Reply-To: <20200116052511.8557-1-honnappa.nagarahalli@arm.com>
References: <20190906190510.11146-1-honnappa.nagarahalli@arm.com>
 <20200116052511.8557-1-honnappa.nagarahalli@arm.com>
Subject: [dpdk-dev] [PATCH v9 3/6] test/ring: add functional tests for rte_ring_xxx_elem APIs

Add basic infrastructure to test rte_ring_xxx_elem APIs. Adjust the
existing test cases to test for various ring element sizes.
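A condensed sketch of the resulting test pattern: the suite walks an
array of element sizes, where -1 selects the legacy pointer APIs, and
dispatches through the test_ring.h helpers using an api_type mask.
Assumptions in this sketch: the single-element helpers return 0 on
success (mirroring rte_ring_enqueue()), and the ring size of 32 is
arbitrary.

#include <rte_common.h>
#include <rte_ring.h>
#include <rte_ring_elem.h>
#include "test_ring.h"

/* hypothetical condensed driver: one enqueue/dequeue pair per size */
static int
exercise_all_esizes(void)
{
	static const int esizes[] = {-1, 4, 8, 16, 20};
	const unsigned int api = TEST_RING_THREAD_DEF | TEST_RING_ELEM_SINGLE;
	uint32_t obj[5] = {0};	/* room for the largest (20B) element */
	unsigned int i;

	for (i = 0; i < RTE_DIM(esizes); i++) {
		struct rte_ring *r;

		/* -1: legacy pointer ring; otherwise esize-byte elements */
		if (esizes[i] == -1)
			r = rte_ring_create("tst", 32, SOCKET_ID_ANY, 0);
		else
			r = rte_ring_create_elem("tst", esizes[i], 32,
					SOCKET_ID_ANY, 0);
		if (r == NULL)
			return -1;

		if (test_ring_enqueue(r, obj, esizes[i], 1, api) != 0 ||
		    test_ring_dequeue(r, obj, esizes[i], 1, api) != 0) {
			rte_ring_free(r);
			return -1;
		}
		rte_ring_free(r);
	}
	return 0;
}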
Signed-off-by: Honnappa Nagarahalli Reviewed-by: Gavin Hu --- app/test/test_ring.c | 1342 +++++++++++++++++++++--------------------- app/test/test_ring.h | 187 ++++++ 2 files changed, 850 insertions(+), 679 deletions(-) create mode 100644 app/test/test_ring.h -- 2.17.1 diff --git a/app/test/test_ring.c b/app/test/test_ring.c index aaf1e70ad..c08500eca 100644 --- a/app/test/test_ring.c +++ b/app/test/test_ring.c @@ -23,11 +23,13 @@ #include #include #include +#include #include #include #include #include "test.h" +#include "test_ring.h" /* * Ring @@ -55,8 +57,6 @@ #define RING_SIZE 4096 #define MAX_BULK 32 -static rte_atomic32_t synchro; - #define TEST_RING_VERIFY(exp) \ if (!(exp)) { \ printf("error at %s:%d\tcondition " #exp " failed\n", \ @@ -67,808 +67,792 @@ static rte_atomic32_t synchro; #define TEST_RING_FULL_EMTPY_ITER 8 -/* - * helper routine for test_ring_basic - */ -static int -test_ring_basic_full_empty(struct rte_ring *r, void * const src[], void *dst[]) +static int esize[] = {-1, 4, 8, 16, 20}; + +static void** +test_ring_inc_ptr(void **obj, int esize, unsigned int n) { - unsigned i, rand; - const unsigned rsz = RING_SIZE - 1; - - printf("Basic full/empty test\n"); - - for (i = 0; TEST_RING_FULL_EMTPY_ITER != i; i++) { - - /* random shift in the ring */ - rand = RTE_MAX(rte_rand() % RING_SIZE, 1UL); - printf("%s: iteration %u, random shift: %u;\n", - __func__, i, rand); - TEST_RING_VERIFY(rte_ring_enqueue_bulk(r, src, rand, - NULL) != 0); - TEST_RING_VERIFY(rte_ring_dequeue_bulk(r, dst, rand, - NULL) == rand); - - /* fill the ring */ - TEST_RING_VERIFY(rte_ring_enqueue_bulk(r, src, rsz, NULL) != 0); - TEST_RING_VERIFY(0 == rte_ring_free_count(r)); - TEST_RING_VERIFY(rsz == rte_ring_count(r)); - TEST_RING_VERIFY(rte_ring_full(r)); - TEST_RING_VERIFY(0 == rte_ring_empty(r)); - - /* empty the ring */ - TEST_RING_VERIFY(rte_ring_dequeue_bulk(r, dst, rsz, - NULL) == rsz); - TEST_RING_VERIFY(rsz == rte_ring_free_count(r)); - TEST_RING_VERIFY(0 == rte_ring_count(r)); - TEST_RING_VERIFY(0 == rte_ring_full(r)); - TEST_RING_VERIFY(rte_ring_empty(r)); + /* Legacy queue APIs? */ + if ((esize) == -1) + return ((void **)obj) + n; + else + return (void **)(((uint32_t *)obj) + + (n * esize / sizeof(uint32_t))); +} - /* check data */ - TEST_RING_VERIFY(0 == memcmp(src, dst, rsz)); - rte_ring_dump(stdout, r); - } - return 0; +static void +test_ring_mem_init(void *obj, unsigned int count, int esize) +{ + unsigned int i; + + /* Legacy queue APIs? 
*/ + if (esize == -1) + for (i = 0; i < count; i++) + ((void **)obj)[i] = (void *)(unsigned long)i; + else + for (i = 0; i < (count * esize / sizeof(uint32_t)); i++) + ((uint32_t *)obj)[i] = i; } -static int -test_ring_basic(struct rte_ring *r) +static void +test_ring_print_test_string(const char *istr, unsigned int api_type, int esize) { - void **src = NULL, **cur_src = NULL, **dst = NULL, **cur_dst = NULL; - int ret; - unsigned i, num_elems; + printf("\n%s: ", istr); + + if (esize == -1) + printf("legacy APIs: "); + else + printf("elem APIs: element size %dB ", esize); + + if (api_type == TEST_RING_IGNORE_API_TYPE) + return; + + if (api_type & TEST_RING_THREAD_DEF) + printf(": default enqueue/dequeue: "); + else if (api_type & TEST_RING_THREAD_SPSC) + printf(": SP/SC: "); + else if (api_type & TEST_RING_THREAD_MPMC) + printf(": MP/MC: "); + + if (api_type & TEST_RING_ELEM_SINGLE) + printf("single\n"); + else if (api_type & TEST_RING_ELEM_BULK) + printf("bulk\n"); + else if (api_type & TEST_RING_ELEM_BURST) + printf("burst\n"); +} - /* alloc dummy object pointers */ - src = malloc(RING_SIZE*2*sizeof(void *)); - if (src == NULL) - goto fail; +/* + * Various negative test cases. + */ +static int +test_ring_negative_tests(void) +{ + struct rte_ring *rp = NULL; + struct rte_ring *rt = NULL; + unsigned int i; - for (i = 0; i < RING_SIZE*2 ; i++) { - src[i] = (void *)(unsigned long)i; - } - cur_src = src; - - /* alloc some room for copied objects */ - dst = malloc(RING_SIZE*2*sizeof(void *)); - if (dst == NULL) - goto fail; - - memset(dst, 0, RING_SIZE*2*sizeof(void *)); - cur_dst = dst; - - printf("enqueue 1 obj\n"); - ret = rte_ring_sp_enqueue_bulk(r, cur_src, 1, NULL); - cur_src += 1; - if (ret == 0) - goto fail; - - printf("enqueue 2 objs\n"); - ret = rte_ring_sp_enqueue_bulk(r, cur_src, 2, NULL); - cur_src += 2; - if (ret == 0) - goto fail; - - printf("enqueue MAX_BULK objs\n"); - ret = rte_ring_sp_enqueue_bulk(r, cur_src, MAX_BULK, NULL); - cur_src += MAX_BULK; - if (ret == 0) - goto fail; - - printf("dequeue 1 obj\n"); - ret = rte_ring_sc_dequeue_bulk(r, cur_dst, 1, NULL); - cur_dst += 1; - if (ret == 0) - goto fail; - - printf("dequeue 2 objs\n"); - ret = rte_ring_sc_dequeue_bulk(r, cur_dst, 2, NULL); - cur_dst += 2; - if (ret == 0) - goto fail; - - printf("dequeue MAX_BULK objs\n"); - ret = rte_ring_sc_dequeue_bulk(r, cur_dst, MAX_BULK, NULL); - cur_dst += MAX_BULK; - if (ret == 0) - goto fail; - - /* check data */ - if (memcmp(src, dst, cur_dst - dst)) { - rte_hexdump(stdout, "src", src, cur_src - src); - rte_hexdump(stdout, "dst", dst, cur_dst - dst); - printf("data after dequeue is not the same\n"); - goto fail; - } - cur_src = src; - cur_dst = dst; - - printf("enqueue 1 obj\n"); - ret = rte_ring_mp_enqueue_bulk(r, cur_src, 1, NULL); - cur_src += 1; - if (ret == 0) - goto fail; - - printf("enqueue 2 objs\n"); - ret = rte_ring_mp_enqueue_bulk(r, cur_src, 2, NULL); - cur_src += 2; - if (ret == 0) - goto fail; - - printf("enqueue MAX_BULK objs\n"); - ret = rte_ring_mp_enqueue_bulk(r, cur_src, MAX_BULK, NULL); - cur_src += MAX_BULK; - if (ret == 0) - goto fail; - - printf("dequeue 1 obj\n"); - ret = rte_ring_mc_dequeue_bulk(r, cur_dst, 1, NULL); - cur_dst += 1; - if (ret == 0) - goto fail; - - printf("dequeue 2 objs\n"); - ret = rte_ring_mc_dequeue_bulk(r, cur_dst, 2, NULL); - cur_dst += 2; - if (ret == 0) - goto fail; - - printf("dequeue MAX_BULK objs\n"); - ret = rte_ring_mc_dequeue_bulk(r, cur_dst, MAX_BULK, NULL); - cur_dst += MAX_BULK; - if (ret == 0) - goto fail; - - /* check data 
*/ - if (memcmp(src, dst, cur_dst - dst)) { - rte_hexdump(stdout, "src", src, cur_src - src); - rte_hexdump(stdout, "dst", dst, cur_dst - dst); - printf("data after dequeue is not the same\n"); - goto fail; - } - cur_src = src; - cur_dst = dst; - - printf("fill and empty the ring\n"); - for (i = 0; i= rte_ring_get_size(exact_sz_r)) { + printf("%s: error, std ring (size: %u) is not smaller than exact size one (size %u)\n", + __func__, + rte_ring_get_size(std_r), + rte_ring_get_size(exact_sz_r)); + goto test_fail; + } + /* + * check that the exact_sz_ring can hold one more element + * than the standard ring. (16 vs 15 elements) + */ + for (j = 0; j < ring_sz - 1; j++) { + test_ring_enqueue(std_r, obj, esize[i], 1, + TEST_RING_THREAD_DEF | TEST_RING_ELEM_SINGLE); + test_ring_enqueue(exact_sz_r, obj, esize[i], 1, + TEST_RING_THREAD_DEF | TEST_RING_ELEM_SINGLE); + } + ret = test_ring_enqueue(std_r, obj, esize[i], 1, + TEST_RING_THREAD_DEF | TEST_RING_ELEM_SINGLE); + if (ret != -ENOBUFS) { + printf("%s: error, unexpected successful enqueue\n", + __func__); + goto test_fail; + } + ret = test_ring_enqueue(exact_sz_r, obj, esize[i], 1, + TEST_RING_THREAD_DEF | TEST_RING_ELEM_SINGLE); + if (ret == -ENOBUFS) { + printf("%s: error, enqueue failed\n", __func__); + goto test_fail; + } + + /* check that dequeue returns the expected number of elements */ + ret = test_ring_dequeue(exact_sz_r, obj, esize[i], ring_sz, + TEST_RING_THREAD_DEF | TEST_RING_ELEM_BURST); + if (ret != (int)ring_sz) { + printf("%s: error, failed to dequeue expected nb of elements\n", + __func__); + goto test_fail; + } - /* - * Check that the exact size ring is bigger than the standard ring - */ - if (rte_ring_get_size(std_ring) >= rte_ring_get_size(exact_sz_ring)) { - printf("%s: error, std ring (size: %u) is not smaller than exact size one (size %u)\n", - __func__, - rte_ring_get_size(std_ring), - rte_ring_get_size(exact_sz_ring)); - goto end; - } - /* - * check that the exact_sz_ring can hold one more element than the - * standard ring. 
(16 vs 15 elements) - */ - for (i = 0; i < ring_sz - 1; i++) { - rte_ring_enqueue(std_ring, NULL); - rte_ring_enqueue(exact_sz_ring, NULL); - } - if (rte_ring_enqueue(std_ring, NULL) != -ENOBUFS) { - printf("%s: error, unexpected successful enqueue\n", __func__); - goto end; - } - if (rte_ring_enqueue(exact_sz_ring, NULL) == -ENOBUFS) { - printf("%s: error, enqueue failed\n", __func__); - goto end; - } + /* check that the capacity function returns expected value */ + if (rte_ring_get_capacity(exact_sz_r) != ring_sz) { + printf("%s: error, incorrect ring capacity reported\n", + __func__); + goto test_fail; + } - /* check that dequeue returns the expected number of elements */ - if (rte_ring_dequeue_burst(exact_sz_ring, ptr_array, - RTE_DIM(ptr_array), NULL) != ring_sz) { - printf("%s: error, failed to dequeue expected nb of elements\n", - __func__); - goto end; + rte_free(obj); + rte_ring_free(std_r); + rte_ring_free(exact_sz_r); } - /* check that the capacity function returns expected value */ - if (rte_ring_get_capacity(exact_sz_ring) != ring_sz) { - printf("%s: error, incorrect ring capacity reported\n", - __func__); - goto end; - } + return 0; - ret = 0; /* all ok if we get here */ -end: - rte_ring_free(std_ring); - rte_ring_free(exact_sz_ring); - return ret; +test_fail: + rte_free(obj); + rte_ring_free(std_r); + rte_ring_free(exact_sz_r); + return -1; } static int test_ring(void) { - struct rte_ring *r = NULL; - - /* some more basic operations */ - if (test_ring_basic_ex() < 0) - goto test_fail; - - rte_atomic32_init(&synchro); - - r = rte_ring_create("test", RING_SIZE, SOCKET_ID_ANY, 0); - if (r == NULL) - goto test_fail; - - /* retrieve the ring from its name */ - if (rte_ring_lookup("test") != r) { - printf("Cannot lookup ring from its name\n"); - goto test_fail; - } - - /* burst operations */ - if (test_ring_burst_basic(r) < 0) - goto test_fail; + unsigned int i, j; - /* basic operations */ - if (test_ring_basic(r) < 0) + /* Negative test cases */ + if (test_ring_negative_tests() < 0) goto test_fail; - /* basic operations */ - if ( test_create_count_odd() < 0){ - printf("Test failed to detect odd count\n"); - goto test_fail; - } else - printf("Test detected odd count\n"); - - if ( test_lookup_null() < 0){ - printf("Test failed to detect NULL ring lookup\n"); - goto test_fail; - } else - printf("Test detected NULL ring lookup\n"); - - /* test of creating ring with wrong size */ - if (test_ring_creation_with_wrong_size() < 0) - goto test_fail; - - /* test of creation ring with an used name */ - if (test_ring_creation_with_an_used_name() < 0) + /* Some basic operations */ + if (test_ring_basic_ex() < 0) goto test_fail; if (test_ring_with_exact_size() < 0) goto test_fail; + /* Burst and bulk operations with sp/sc, mp/mc and default. + * The test cases are split into smaller test cases to + * help clang compile faster. 
+ */ + for (j = TEST_RING_ELEM_BULK; j <= TEST_RING_ELEM_BURST; j <<= 1) + for (i = TEST_RING_THREAD_DEF; + i <= TEST_RING_THREAD_MPMC; i <<= 1) + if (test_ring_burst_bulk_tests1(i | j) < 0) + goto test_fail; + + for (j = TEST_RING_ELEM_BULK; j <= TEST_RING_ELEM_BURST; j <<= 1) + for (i = TEST_RING_THREAD_DEF; + i <= TEST_RING_THREAD_MPMC; i <<= 1) + if (test_ring_burst_bulk_tests2(i | j) < 0) + goto test_fail; + + for (j = TEST_RING_ELEM_BULK; j <= TEST_RING_ELEM_BURST; j <<= 1) + for (i = TEST_RING_THREAD_DEF; + i <= TEST_RING_THREAD_MPMC; i <<= 1) + if (test_ring_burst_bulk_tests3(i | j) < 0) + goto test_fail; + + for (j = TEST_RING_ELEM_BULK; j <= TEST_RING_ELEM_BURST; j <<= 1) + for (i = TEST_RING_THREAD_DEF; + i <= TEST_RING_THREAD_MPMC; i <<= 1) + if (test_ring_burst_bulk_tests4(i | j) < 0) + goto test_fail; + /* dump the ring status */ rte_ring_list_dump(stdout); - rte_ring_free(r); - return 0; test_fail: - rte_ring_free(r); return -1; } diff --git a/app/test/test_ring.h b/app/test/test_ring.h new file mode 100644 index 000000000..26716e4f8 --- /dev/null +++ b/app/test/test_ring.h @@ -0,0 +1,187 @@ +/* SPDX-License-Identifier: BSD-3-Clause + * Copyright(c) 2019 Arm Limited + */ + +#include +#include +#include + +/* API type to call + * rte_ring__enqueue_ + * TEST_RING_THREAD_DEF - Uses configured SPSC/MPMC calls + * TEST_RING_THREAD_SPSC - Calls SP or SC API + * TEST_RING_THREAD_MPMC - Calls MP or MC API + */ +#define TEST_RING_THREAD_DEF 1 +#define TEST_RING_THREAD_SPSC 2 +#define TEST_RING_THREAD_MPMC 4 + +/* API type to call + * SL - Calls single element APIs + * BL - Calls bulk APIs + * BR - Calls burst APIs + */ +#define TEST_RING_ELEM_SINGLE 8 +#define TEST_RING_ELEM_BULK 16 +#define TEST_RING_ELEM_BURST 32 + +#define TEST_RING_IGNORE_API_TYPE ~0U + +/* This function is placed here as it is required for both + * performance and functional tests. + */ +static inline struct rte_ring* +test_ring_create(const char *name, int esize, unsigned int count, + int socket_id, unsigned int flags) +{ + /* Legacy queue APIs? */ + if ((esize) == -1) + return rte_ring_create((name), (count), (socket_id), (flags)); + else + return rte_ring_create_elem((name), (esize), (count), + (socket_id), (flags)); +} + +static __rte_always_inline unsigned int +test_ring_enqueue(struct rte_ring *r, void **obj, int esize, unsigned int n, + unsigned int api_type) +{ + /* Legacy queue APIs? 
*/ + if ((esize) == -1) + switch (api_type) { + case (TEST_RING_THREAD_DEF | TEST_RING_ELEM_SINGLE): + return rte_ring_enqueue(r, obj); + case (TEST_RING_THREAD_SPSC | TEST_RING_ELEM_SINGLE): + return rte_ring_sp_enqueue(r, obj); + case (TEST_RING_THREAD_MPMC | TEST_RING_ELEM_SINGLE): + return rte_ring_mp_enqueue(r, obj); + case (TEST_RING_THREAD_DEF | TEST_RING_ELEM_BULK): + return rte_ring_enqueue_bulk(r, obj, n, NULL); + case (TEST_RING_THREAD_SPSC | TEST_RING_ELEM_BULK): + return rte_ring_sp_enqueue_bulk(r, obj, n, NULL); + case (TEST_RING_THREAD_MPMC | TEST_RING_ELEM_BULK): + return rte_ring_mp_enqueue_bulk(r, obj, n, NULL); + case (TEST_RING_THREAD_DEF | TEST_RING_ELEM_BURST): + return rte_ring_enqueue_burst(r, obj, n, NULL); + case (TEST_RING_THREAD_SPSC | TEST_RING_ELEM_BURST): + return rte_ring_sp_enqueue_burst(r, obj, n, NULL); + case (TEST_RING_THREAD_MPMC | TEST_RING_ELEM_BURST): + return rte_ring_mp_enqueue_burst(r, obj, n, NULL); + default: + printf("Invalid API type\n"); + return 0; + } + else + switch (api_type) { + case (TEST_RING_THREAD_DEF | TEST_RING_ELEM_SINGLE): + return rte_ring_enqueue_elem(r, obj, esize); + case (TEST_RING_THREAD_SPSC | TEST_RING_ELEM_SINGLE): + return rte_ring_sp_enqueue_elem(r, obj, esize); + case (TEST_RING_THREAD_MPMC | TEST_RING_ELEM_SINGLE): + return rte_ring_mp_enqueue_elem(r, obj, esize); + case (TEST_RING_THREAD_DEF | TEST_RING_ELEM_BULK): + return rte_ring_enqueue_bulk_elem(r, obj, esize, n, + NULL); + case (TEST_RING_THREAD_SPSC | TEST_RING_ELEM_BULK): + return rte_ring_sp_enqueue_bulk_elem(r, obj, esize, n, + NULL); + case (TEST_RING_THREAD_MPMC | TEST_RING_ELEM_BULK): + return rte_ring_mp_enqueue_bulk_elem(r, obj, esize, n, + NULL); + case (TEST_RING_THREAD_DEF | TEST_RING_ELEM_BURST): + return rte_ring_enqueue_burst_elem(r, obj, esize, n, + NULL); + case (TEST_RING_THREAD_SPSC | TEST_RING_ELEM_BURST): + return rte_ring_sp_enqueue_burst_elem(r, obj, esize, n, + NULL); + case (TEST_RING_THREAD_MPMC | TEST_RING_ELEM_BURST): + return rte_ring_mp_enqueue_burst_elem(r, obj, esize, n, + NULL); + default: + printf("Invalid API type\n"); + return 0; + } +} + +static __rte_always_inline unsigned int +test_ring_dequeue(struct rte_ring *r, void **obj, int esize, unsigned int n, + unsigned int api_type) +{ + /* Legacy queue APIs? 
*/ + if ((esize) == -1) + switch (api_type) { + case (TEST_RING_THREAD_DEF | TEST_RING_ELEM_SINGLE): + return rte_ring_dequeue(r, obj); + case (TEST_RING_THREAD_SPSC | TEST_RING_ELEM_SINGLE): + return rte_ring_sc_dequeue(r, obj); + case (TEST_RING_THREAD_MPMC | TEST_RING_ELEM_SINGLE): + return rte_ring_mc_dequeue(r, obj); + case (TEST_RING_THREAD_DEF | TEST_RING_ELEM_BULK): + return rte_ring_dequeue_bulk(r, obj, n, NULL); + case (TEST_RING_THREAD_SPSC | TEST_RING_ELEM_BULK): + return rte_ring_sc_dequeue_bulk(r, obj, n, NULL); + case (TEST_RING_THREAD_MPMC | TEST_RING_ELEM_BULK): + return rte_ring_mc_dequeue_bulk(r, obj, n, NULL); + case (TEST_RING_THREAD_DEF | TEST_RING_ELEM_BURST): + return rte_ring_dequeue_burst(r, obj, n, NULL); + case (TEST_RING_THREAD_SPSC | TEST_RING_ELEM_BURST): + return rte_ring_sc_dequeue_burst(r, obj, n, NULL); + case (TEST_RING_THREAD_MPMC | TEST_RING_ELEM_BURST): + return rte_ring_mc_dequeue_burst(r, obj, n, NULL); + default: + printf("Invalid API type\n"); + return 0; + } + else + switch (api_type) { + case (TEST_RING_THREAD_DEF | TEST_RING_ELEM_SINGLE): + return rte_ring_dequeue_elem(r, obj, esize); + case (TEST_RING_THREAD_SPSC | TEST_RING_ELEM_SINGLE): + return rte_ring_sc_dequeue_elem(r, obj, esize); + case (TEST_RING_THREAD_MPMC | TEST_RING_ELEM_SINGLE): + return rte_ring_mc_dequeue_elem(r, obj, esize); + case (TEST_RING_THREAD_DEF | TEST_RING_ELEM_BULK): + return rte_ring_dequeue_bulk_elem(r, obj, esize, + n, NULL); + case (TEST_RING_THREAD_SPSC | TEST_RING_ELEM_BULK): + return rte_ring_sc_dequeue_bulk_elem(r, obj, esize, + n, NULL); + case (TEST_RING_THREAD_MPMC | TEST_RING_ELEM_BULK): + return rte_ring_mc_dequeue_bulk_elem(r, obj, esize, + n, NULL); + case (TEST_RING_THREAD_DEF | TEST_RING_ELEM_BURST): + return rte_ring_dequeue_burst_elem(r, obj, esize, + n, NULL); + case (TEST_RING_THREAD_SPSC | TEST_RING_ELEM_BURST): + return rte_ring_sc_dequeue_burst_elem(r, obj, esize, + n, NULL); + case (TEST_RING_THREAD_MPMC | TEST_RING_ELEM_BURST): + return rte_ring_mc_dequeue_burst_elem(r, obj, esize, + n, NULL); + default: + printf("Invalid API type\n"); + return 0; + } +} + +/* This function is placed here as it is required for both + * performance and functional tests. + */ +static __rte_always_inline void * +test_ring_calloc(unsigned int rsize, int esize) +{ + unsigned int sz; + void *p; + + /* Legacy queue APIs? 
*/ + if (esize == -1) + sz = sizeof(void *); + else + sz = esize; + + p = rte_zmalloc(NULL, rsize * sz, RTE_CACHE_LINE_SIZE); + if (p == NULL) + printf("Failed to allocate memory\n"); + + return p; +}
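For reference, here is how the helpers above compose; this is an illustrative sketch, not part of the patch (the function and ring names, the 1024 count and the SP/SC flags are arbitrary, and rte_eal_init() is assumed to have run):

static int
example_elem_roundtrip(void)
{
	struct rte_ring *r;
	void **objs;
	unsigned int ret;

	/* a ring of 16B elements; test_ring_create() maps this to
	 * rte_ring_create_elem() */
	r = test_ring_create("sketch_r", 16, 1024, SOCKET_ID_ANY,
			RING_F_SP_ENQ | RING_F_SC_DEQ);
	if (r == NULL)
		return -1;

	/* room for 8 elements of 16B each */
	objs = test_ring_calloc(8, 16);
	if (objs == NULL) {
		rte_ring_free(r);
		return -1;
	}

	/* bulk enqueue then dequeue through the explicit SP/SC paths */
	ret = test_ring_enqueue(r, objs, 16, 8,
			TEST_RING_THREAD_SPSC | TEST_RING_ELEM_BULK);
	if (ret == 8)
		ret = test_ring_dequeue(r, objs, 16, 8,
				TEST_RING_THREAD_SPSC | TEST_RING_ELEM_BULK);

	rte_free(objs);
	rte_ring_free(r);
	return (ret == 8) ? 0 : -1;
}

Passing esize == -1 instead routes the same calls to the legacy void ** APIs, which is what lets one test body cover both API families.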
From patchwork Thu Jan 16 05:25:09 2020
X-Patchwork-Submitter: Honnappa Nagarahalli
X-Patchwork-Id: 182852
From: Honnappa Nagarahalli
To: olivier.matz@6wind.com, sthemmin@microsoft.com, jerinj@marvell.com, bruce.richardson@intel.com, david.marchand@redhat.com, pbhagavatula@marvell.com, konstantin.ananyev@intel.com, yipeng1.wang@intel.com, honnappa.nagarahalli@arm.com
Cc: dev@dpdk.org, dharmik.thakkar@arm.com, ruifeng.wang@arm.com, gavin.hu@arm.com, nd@arm.com
Date: Wed, 15 Jan 2020 23:25:09 -0600
Message-Id: <20200116052511.8557-5-honnappa.nagarahalli@arm.com>
In-Reply-To: <20200116052511.8557-1-honnappa.nagarahalli@arm.com>
References: <20190906190510.11146-1-honnappa.nagarahalli@arm.com> <20200116052511.8557-1-honnappa.nagarahalli@arm.com>
Subject: [dpdk-dev] [PATCH v9 4/6] test/ring: modify perf test cases to use rte_ring_xxx_elem APIs

Adjust the performance test cases to test rte_ring_xxx_elem APIs.
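The number each case reports is cycles per element: a tight loop is timed with rte_rdtsc() and the delta is divided by iterations * burst size. Condensed to its core, the measurement looks like this (the helper name is illustrative, not from the patch; SPSC is chosen arbitrarily):

static double
cycles_per_element(struct rte_ring *r, void **burst, int esize,
		unsigned int bsize, unsigned int iterations)
{
	unsigned int i;
	const uint64_t start = rte_rdtsc();

	for (i = 0; i < iterations; i++) {
		/* retry until the ring has room/entries, as the perf
		 * loops in this patch do */
		while (test_ring_enqueue(r, burst, esize, bsize,
				TEST_RING_THREAD_SPSC |
				TEST_RING_ELEM_BULK) == 0)
			rte_pause();
		while (test_ring_dequeue(r, burst, esize, bsize,
				TEST_RING_THREAD_SPSC |
				TEST_RING_ELEM_BULK) == 0)
			rte_pause();
	}

	return (double)(rte_rdtsc() - start) / ((double)iterations * bsize);
}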
Signed-off-by: Honnappa Nagarahalli Reviewed-by: Gavin Hu --- app/test/test_ring_perf.c | 454 +++++++++++++++++++++++--------------- 1 file changed, 273 insertions(+), 181 deletions(-) -- 2.17.1 diff --git a/app/test/test_ring_perf.c b/app/test/test_ring_perf.c index 6c2aca483..8d1217951 100644 --- a/app/test/test_ring_perf.c +++ b/app/test/test_ring_perf.c @@ -13,6 +13,7 @@ #include #include "test.h" +#include "test_ring.h" /* * Ring @@ -41,6 +42,35 @@ struct lcore_pair { static volatile unsigned lcore_count = 0; +static void +test_ring_print_test_string(unsigned int api_type, int esize, + unsigned int bsz, double value) +{ + if (esize == -1) + printf("legacy APIs"); + else + printf("elem APIs: element size %dB", esize); + + if (api_type == TEST_RING_IGNORE_API_TYPE) + return; + + if ((api_type & TEST_RING_THREAD_DEF) == TEST_RING_THREAD_DEF) + printf(": default enqueue/dequeue: "); + else if ((api_type & TEST_RING_THREAD_SPSC) == TEST_RING_THREAD_SPSC) + printf(": SP/SC: "); + else if ((api_type & TEST_RING_THREAD_MPMC) == TEST_RING_THREAD_MPMC) + printf(": MP/MC: "); + + if ((api_type & TEST_RING_ELEM_SINGLE) == TEST_RING_ELEM_SINGLE) + printf("single: "); + else if ((api_type & TEST_RING_ELEM_BULK) == TEST_RING_ELEM_BULK) + printf("bulk (size: %u): ", bsz); + else if ((api_type & TEST_RING_ELEM_BURST) == TEST_RING_ELEM_BURST) + printf("burst (size: %u): ", bsz); + + printf("%.2F\n", value); +} + /**** Functions to analyse our core mask to get cores for different tests ***/ static int @@ -117,27 +147,21 @@ get_two_sockets(struct lcore_pair *lcp) /* Get cycle counts for dequeuing from an empty ring. Should be 2 or 3 cycles */ static void -test_empty_dequeue(struct rte_ring *r) +test_empty_dequeue(struct rte_ring *r, const int esize, + const unsigned int api_type) { - const unsigned iter_shift = 26; - const unsigned iterations = 1< enqueue + * flag == 1 -> dequeue */ -static int -enqueue_bulk(void *p) +static __rte_always_inline int +enqueue_dequeue_bulk_helper(const unsigned int flag, const int esize, + struct thread_params *p) { - const unsigned iter_shift = 23; - const unsigned iterations = 1<r; - const unsigned size = params->size; - unsigned i; - void *burst[MAX_BURST] = {0}; + int ret; + const unsigned int iter_shift = 23; + const unsigned int iterations = 1 << iter_shift; + struct rte_ring *r = p->r; + unsigned int bsize = p->size; + unsigned int i; + void *burst = NULL; #ifdef RTE_USE_C11_MEM_MODEL if (__atomic_add_fetch(&lcore_count, 1, __ATOMIC_RELAXED) != 2) @@ -173,23 +199,67 @@ enqueue_bulk(void *p) while(lcore_count != 2) rte_pause(); + burst = test_ring_calloc(MAX_BURST, esize); + if (burst == NULL) + return -1; + const uint64_t sp_start = rte_rdtsc(); for (i = 0; i < iterations; i++) - while (rte_ring_sp_enqueue_bulk(r, burst, size, NULL) == 0) - rte_pause(); + do { + if (flag == 0) + ret = test_ring_enqueue(r, burst, esize, bsize, + TEST_RING_THREAD_SPSC | + TEST_RING_ELEM_BULK); + else if (flag == 1) + ret = test_ring_dequeue(r, burst, esize, bsize, + TEST_RING_THREAD_SPSC | + TEST_RING_ELEM_BULK); + if (ret == 0) + rte_pause(); + } while (!ret); const uint64_t sp_end = rte_rdtsc(); const uint64_t mp_start = rte_rdtsc(); for (i = 0; i < iterations; i++) - while (rte_ring_mp_enqueue_bulk(r, burst, size, NULL) == 0) - rte_pause(); + do { + if (flag == 0) + ret = test_ring_enqueue(r, burst, esize, bsize, + TEST_RING_THREAD_MPMC | + TEST_RING_ELEM_BULK); + else if (flag == 1) + ret = test_ring_dequeue(r, burst, esize, bsize, + TEST_RING_THREAD_MPMC | + TEST_RING_ELEM_BULK); 
+ if (ret == 0) + rte_pause(); + } while (!ret); const uint64_t mp_end = rte_rdtsc(); - params->spsc = ((double)(sp_end - sp_start))/(iterations*size); - params->mpmc = ((double)(mp_end - mp_start))/(iterations*size); + p->spsc = ((double)(sp_end - sp_start))/(iterations * bsize); + p->mpmc = ((double)(mp_end - mp_start))/(iterations * bsize); return 0; } +/* + * Function that uses rdtsc to measure timing for ring enqueue. Needs pair + * thread running dequeue_bulk function + */ +static int +enqueue_bulk(void *p) +{ + struct thread_params *params = p; + + return enqueue_dequeue_bulk_helper(0, -1, params); +} + +static int +enqueue_bulk_16B(void *p) +{ + struct thread_params *params = p; + + return enqueue_dequeue_bulk_helper(0, 16, params); +} + /* * Function that uses rdtsc to measure timing for ring dequeue. Needs pair * thread running enqueue_bulk function @@ -197,49 +267,38 @@ enqueue_bulk(void *p) static int dequeue_bulk(void *p) { - const unsigned iter_shift = 23; - const unsigned iterations = 1<r; - const unsigned size = params->size; - unsigned i; - void *burst[MAX_BURST] = {0}; - -#ifdef RTE_USE_C11_MEM_MODEL - if (__atomic_add_fetch(&lcore_count, 1, __ATOMIC_RELAXED) != 2) -#else - if (__sync_add_and_fetch(&lcore_count, 1) != 2) -#endif - while(lcore_count != 2) - rte_pause(); - const uint64_t sc_start = rte_rdtsc(); - for (i = 0; i < iterations; i++) - while (rte_ring_sc_dequeue_bulk(r, burst, size, NULL) == 0) - rte_pause(); - const uint64_t sc_end = rte_rdtsc(); + return enqueue_dequeue_bulk_helper(1, -1, params); +} - const uint64_t mc_start = rte_rdtsc(); - for (i = 0; i < iterations; i++) - while (rte_ring_mc_dequeue_bulk(r, burst, size, NULL) == 0) - rte_pause(); - const uint64_t mc_end = rte_rdtsc(); +static int +dequeue_bulk_16B(void *p) +{ + struct thread_params *params = p; - params->spsc = ((double)(sc_end - sc_start))/(iterations*size); - params->mpmc = ((double)(mc_end - mc_start))/(iterations*size); - return 0; + return enqueue_dequeue_bulk_helper(1, 16, params); } /* * Function that calls the enqueue and dequeue bulk functions on pairs of cores. * used to measure ring perf between hyperthreads, cores and sockets. 
*/ -static void -run_on_core_pair(struct lcore_pair *cores, struct rte_ring *r, - lcore_function_t f1, lcore_function_t f2) +static int +run_on_core_pair(struct lcore_pair *cores, struct rte_ring *r, const int esize) { + lcore_function_t *f1, *f2; struct thread_params param1 = {0}, param2 = {0}; unsigned i; + + if (esize == -1) { + f1 = enqueue_bulk; + f2 = dequeue_bulk; + } else { + f1 = enqueue_bulk_16B; + f2 = dequeue_bulk_16B; + } + for (i = 0; i < sizeof(bulk_sizes)/sizeof(bulk_sizes[0]); i++) { lcore_count = 0; param1.size = param2.size = bulk_sizes[i]; @@ -251,14 +310,20 @@ run_on_core_pair(struct lcore_pair *cores, struct rte_ring *r, } else { rte_eal_remote_launch(f1, ¶m1, cores->c1); rte_eal_remote_launch(f2, ¶m2, cores->c2); - rte_eal_wait_lcore(cores->c1); - rte_eal_wait_lcore(cores->c2); + if (rte_eal_wait_lcore(cores->c1) < 0) + return -1; + if (rte_eal_wait_lcore(cores->c2) < 0) + return -1; } - printf("SP/SC bulk enq/dequeue (size: %u): %.2F\n", bulk_sizes[i], - param1.spsc + param2.spsc); - printf("MP/MC bulk enq/dequeue (size: %u): %.2F\n", bulk_sizes[i], - param1.mpmc + param2.mpmc); + test_ring_print_test_string( + TEST_RING_THREAD_SPSC | TEST_RING_ELEM_BULK, + esize, bulk_sizes[i], param1.spsc + param2.spsc); + test_ring_print_test_string( + TEST_RING_THREAD_MPMC | TEST_RING_ELEM_BULK, + esize, bulk_sizes[i], param1.mpmc + param2.mpmc); } + + return 0; } static rte_atomic32_t synchro; @@ -267,7 +332,7 @@ static uint64_t queue_count[RTE_MAX_LCORE]; #define TIME_MS 100 static int -load_loop_fn(void *p) +load_loop_fn_helper(struct thread_params *p, const int esize) { uint64_t time_diff = 0; uint64_t begin = 0; @@ -275,7 +340,11 @@ load_loop_fn(void *p) uint64_t lcount = 0; const unsigned int lcore = rte_lcore_id(); struct thread_params *params = p; - void *burst[MAX_BURST] = {0}; + void *burst = NULL; + + burst = test_ring_calloc(MAX_BURST, esize); + if (burst == NULL) + return -1; /* wait synchro for slaves */ if (lcore != rte_get_master_lcore()) @@ -284,22 +353,49 @@ load_loop_fn(void *p) begin = rte_get_timer_cycles(); while (time_diff < hz * TIME_MS / 1000) { - rte_ring_mp_enqueue_bulk(params->r, burst, params->size, NULL); - rte_ring_mc_dequeue_bulk(params->r, burst, params->size, NULL); + test_ring_enqueue(params->r, burst, esize, params->size, + TEST_RING_THREAD_MPMC | TEST_RING_ELEM_BULK); + test_ring_dequeue(params->r, burst, esize, params->size, + TEST_RING_THREAD_MPMC | TEST_RING_ELEM_BULK); lcount++; time_diff = rte_get_timer_cycles() - begin; } queue_count[lcore] = lcount; + + rte_free(burst); + return 0; } static int -run_on_all_cores(struct rte_ring *r) +load_loop_fn(void *p) +{ + struct thread_params *params = p; + + return load_loop_fn_helper(params, -1); +} + +static int +load_loop_fn_16B(void *p) +{ + struct thread_params *params = p; + + return load_loop_fn_helper(params, 16); +} + +static int +run_on_all_cores(struct rte_ring *r, const int esize) { uint64_t total = 0; struct thread_params param; + lcore_function_t *lcore_f; unsigned int i, c; + if (esize == -1) + lcore_f = load_loop_fn; + else + lcore_f = load_loop_fn_16B; + memset(¶m, 0, sizeof(struct thread_params)); for (i = 0; i < RTE_DIM(bulk_sizes); i++) { printf("\nBulk enq/dequeue count on size %u\n", bulk_sizes[i]); @@ -308,13 +404,12 @@ run_on_all_cores(struct rte_ring *r) /* clear synchro and start slaves */ rte_atomic32_set(&synchro, 0); - if (rte_eal_mp_remote_launch(load_loop_fn, ¶m, - SKIP_MASTER) < 0) + if (rte_eal_mp_remote_launch(lcore_f, ¶m, SKIP_MASTER) < 0) return -1; /* start 
synchro and launch test on master */ rte_atomic32_set(&synchro, 1); - load_loop_fn(&param); + lcore_f(&param); rte_eal_mp_wait_lcore(); @@ -335,155 +430,152 @@ run_on_all_cores(struct rte_ring *r) * Test function that determines how long an enqueue + dequeue of a single item * takes on a single lcore. Result is for comparison with the bulk enq+deq. */ -static void -test_single_enqueue_dequeue(struct rte_ring *r) +static int +test_single_enqueue_dequeue(struct rte_ring *r, const int esize, + const unsigned int api_type) { - const unsigned iter_shift = 24; - const unsigned iterations = 1<<iter_shift;
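A side note on the core-pair cases in the patch above: the two lcores rendezvous on a shared counter before their timed loops start, so neither thread measures time spent waiting for its partner to launch. The pattern, shown standalone for clarity (sketch extracted from the helper):

static volatile unsigned int lcore_count;

/* each of the two threads calls this once before its timed loop */
static void
wait_for_pair(void)
{
#ifdef RTE_USE_C11_MEM_MODEL
	if (__atomic_add_fetch(&lcore_count, 1, __ATOMIC_RELAXED) != 2)
#else
	if (__sync_add_and_fetch(&lcore_count, 1) != 2)
#endif
		while (lcore_count != 2)
			rte_pause();
}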
From patchwork Thu Jan 16 05:25:10 2020
X-Patchwork-Submitter: Honnappa Nagarahalli
X-Patchwork-Id: 182853
From: Honnappa Nagarahalli
To: olivier.matz@6wind.com, sthemmin@microsoft.com, jerinj@marvell.com, bruce.richardson@intel.com, david.marchand@redhat.com, pbhagavatula@marvell.com, konstantin.ananyev@intel.com, yipeng1.wang@intel.com, honnappa.nagarahalli@arm.com
Cc: dev@dpdk.org, dharmik.thakkar@arm.com, ruifeng.wang@arm.com, gavin.hu@arm.com, nd@arm.com
Date: Wed, 15 Jan 2020 23:25:10 -0600
Message-Id: <20200116052511.8557-6-honnappa.nagarahalli@arm.com>
In-Reply-To: <20200116052511.8557-1-honnappa.nagarahalli@arm.com>
References: <20190906190510.11146-1-honnappa.nagarahalli@arm.com> <20200116052511.8557-1-honnappa.nagarahalli@arm.com>
Subject: [dpdk-dev] [PATCH v9 5/6] lib/hash: use ring with 32b element size to save memory

The freelist and external bucket indices are 32b. Using rings that use 32b element sizes will save memory.
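On a 64-bit target each ring slot shrinks from an 8B void * handle to a 4B index, roughly halving the footprint of the free-slot and extendable-bucket rings. The shape of the converted freelist, in isolation (illustrative function; error handling trimmed):

static struct rte_ring *
freelist_sketch(const char *name, uint32_t num_key_slots, int socket_id)
{
	struct rte_ring *fl;
	uint32_t i;

	fl = rte_ring_create_elem(name, sizeof(uint32_t),
			rte_align32pow2(num_key_slots), socket_id, 0);
	if (fl == NULL)
		return NULL;

	/* entry 0 stays reserved for key misses, so start at 1 */
	for (i = 1; i < num_key_slots; i++)
		rte_ring_sp_enqueue_elem(fl, &i, sizeof(uint32_t));

	return fl;
}

A consumer then pops a 4B index with rte_ring_sc_dequeue_elem(fl, &slot, sizeof(uint32_t)) instead of round-tripping an index through a (void *)(uintptr_t) cast.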
Signed-off-by: Honnappa Nagarahalli Reviewed-by: Gavin Hu Reviewed-by: Ola Liljedahl --- lib/librte_hash/rte_cuckoo_hash.c | 94 ++++++++++++++++--------------- lib/librte_hash/rte_cuckoo_hash.h | 2 +- 2 files changed, 50 insertions(+), 46 deletions(-) -- 2.17.1 Acked-by: Yipeng Wang diff --git a/lib/librte_hash/rte_cuckoo_hash.c b/lib/librte_hash/rte_cuckoo_hash.c index 87a4c01f2..6c292b6f8 100644 --- a/lib/librte_hash/rte_cuckoo_hash.c +++ b/lib/librte_hash/rte_cuckoo_hash.c @@ -24,7 +24,7 @@ #include #include #include -#include +#include #include #include #include @@ -136,7 +136,6 @@ rte_hash_create(const struct rte_hash_parameters *params) char ring_name[RTE_RING_NAMESIZE]; char ext_ring_name[RTE_RING_NAMESIZE]; unsigned num_key_slots; - unsigned i; unsigned int hw_trans_mem_support = 0, use_local_cache = 0; unsigned int ext_table_support = 0; unsigned int readwrite_concur_support = 0; @@ -145,6 +144,7 @@ rte_hash_create(const struct rte_hash_parameters *params) uint32_t *ext_bkt_to_free = NULL; uint32_t *tbl_chng_cnt = NULL; unsigned int readwrite_concur_lf_support = 0; + uint32_t i; rte_hash_function default_hash_func = (rte_hash_function)rte_jhash; @@ -213,8 +213,8 @@ rte_hash_create(const struct rte_hash_parameters *params) snprintf(ring_name, sizeof(ring_name), "HT_%s", params->name); /* Create ring (Dummy slot index is not enqueued) */ - r = rte_ring_create(ring_name, rte_align32pow2(num_key_slots), - params->socket_id, 0); + r = rte_ring_create_elem(ring_name, sizeof(uint32_t), + rte_align32pow2(num_key_slots), params->socket_id, 0); if (r == NULL) { RTE_LOG(ERR, HASH, "memory allocation failed\n"); goto err; @@ -227,7 +227,7 @@ rte_hash_create(const struct rte_hash_parameters *params) if (ext_table_support) { snprintf(ext_ring_name, sizeof(ext_ring_name), "HT_EXT_%s", params->name); - r_ext = rte_ring_create(ext_ring_name, + r_ext = rte_ring_create_elem(ext_ring_name, sizeof(uint32_t), rte_align32pow2(num_buckets + 1), params->socket_id, 0); @@ -295,7 +295,7 @@ rte_hash_create(const struct rte_hash_parameters *params) * for next bucket */ for (i = 1; i <= num_buckets; i++) - rte_ring_sp_enqueue(r_ext, (void *)((uintptr_t) i)); + rte_ring_sp_enqueue_elem(r_ext, &i, sizeof(uint32_t)); if (readwrite_concur_lf_support) { ext_bkt_to_free = rte_zmalloc(NULL, sizeof(uint32_t) * @@ -434,7 +434,7 @@ rte_hash_create(const struct rte_hash_parameters *params) /* Populate free slots ring. Entry zero is reserved for key misses. */ for (i = 1; i < num_key_slots; i++) - rte_ring_sp_enqueue(r, (void *)((uintptr_t) i)); + rte_ring_sp_enqueue_elem(r, &i, sizeof(uint32_t)); te->data = (void *) h; TAILQ_INSERT_TAIL(hash_list, te, next); @@ -598,13 +598,13 @@ rte_hash_reset(struct rte_hash *h) tot_ring_cnt = h->entries; for (i = 1; i < tot_ring_cnt + 1; i++) - rte_ring_sp_enqueue(h->free_slots, (void *)((uintptr_t) i)); + rte_ring_sp_enqueue_elem(h->free_slots, &i, sizeof(uint32_t)); /* Repopulate the free ext bkt ring. 
*/ if (h->ext_table_support) { for (i = 1; i <= h->num_buckets; i++) - rte_ring_sp_enqueue(h->free_ext_bkts, - (void *)((uintptr_t) i)); + rte_ring_sp_enqueue_elem(h->free_ext_bkts, &i, + sizeof(uint32_t)); } if (h->use_local_cache) { @@ -623,13 +623,14 @@ rte_hash_reset(struct rte_hash *h) static inline void enqueue_slot_back(const struct rte_hash *h, struct lcore_cache *cached_free_slots, - void *slot_id) + uint32_t slot_id) { if (h->use_local_cache) { cached_free_slots->objs[cached_free_slots->len] = slot_id; cached_free_slots->len++; } else - rte_ring_sp_enqueue(h->free_slots, slot_id); + rte_ring_sp_enqueue_elem(h->free_slots, &slot_id, + sizeof(uint32_t)); } /* Search a key from bucket and update its data. @@ -923,9 +924,8 @@ __rte_hash_add_key_with_hash(const struct rte_hash *h, const void *key, uint32_t prim_bucket_idx, sec_bucket_idx; struct rte_hash_bucket *prim_bkt, *sec_bkt, *cur_bkt; struct rte_hash_key *new_k, *keys = h->key_store; - void *slot_id = NULL; - void *ext_bkt_id = NULL; - uint32_t new_idx, bkt_id; + uint32_t slot_id; + uint32_t ext_bkt_id; int ret; unsigned n_slots; unsigned lcore_id; @@ -968,8 +968,9 @@ __rte_hash_add_key_with_hash(const struct rte_hash *h, const void *key, /* Try to get a free slot from the local cache */ if (cached_free_slots->len == 0) { /* Need to get another burst of free slots from global ring */ - n_slots = rte_ring_mc_dequeue_burst(h->free_slots, + n_slots = rte_ring_mc_dequeue_burst_elem(h->free_slots, cached_free_slots->objs, + sizeof(uint32_t), LCORE_CACHE_SIZE, NULL); if (n_slots == 0) { return -ENOSPC; @@ -982,13 +983,13 @@ __rte_hash_add_key_with_hash(const struct rte_hash *h, const void *key, cached_free_slots->len--; slot_id = cached_free_slots->objs[cached_free_slots->len]; } else { - if (rte_ring_sc_dequeue(h->free_slots, &slot_id) != 0) { + if (rte_ring_sc_dequeue_elem(h->free_slots, &slot_id, + sizeof(uint32_t)) != 0) { return -ENOSPC; } } - new_k = RTE_PTR_ADD(keys, (uintptr_t)slot_id * h->key_entry_size); - new_idx = (uint32_t)((uintptr_t) slot_id); + new_k = RTE_PTR_ADD(keys, slot_id * h->key_entry_size); /* The store to application data (by the application) at *data should * not leak after the store of pdata in the key store. i.e. pdata is * the guard variable. Release the application data to the readers. 
@@ -1001,9 +1002,9 @@ __rte_hash_add_key_with_hash(const struct rte_hash *h, const void *key, /* Find an empty slot and insert */ ret = rte_hash_cuckoo_insert_mw(h, prim_bkt, sec_bkt, key, data, - short_sig, new_idx, &ret_val); + short_sig, slot_id, &ret_val); if (ret == 0) - return new_idx - 1; + return slot_id - 1; else if (ret == 1) { enqueue_slot_back(h, cached_free_slots, slot_id); return ret_val; @@ -1011,9 +1012,9 @@ __rte_hash_add_key_with_hash(const struct rte_hash *h, const void *key, /* Primary bucket full, need to make space for new entry */ ret = rte_hash_cuckoo_make_space_mw(h, prim_bkt, sec_bkt, key, data, - short_sig, prim_bucket_idx, new_idx, &ret_val); + short_sig, prim_bucket_idx, slot_id, &ret_val); if (ret == 0) - return new_idx - 1; + return slot_id - 1; else if (ret == 1) { enqueue_slot_back(h, cached_free_slots, slot_id); return ret_val; @@ -1021,10 +1022,10 @@ __rte_hash_add_key_with_hash(const struct rte_hash *h, const void *key, /* Also search secondary bucket to get better occupancy */ ret = rte_hash_cuckoo_make_space_mw(h, sec_bkt, prim_bkt, key, data, - short_sig, sec_bucket_idx, new_idx, &ret_val); + short_sig, sec_bucket_idx, slot_id, &ret_val); if (ret == 0) - return new_idx - 1; + return slot_id - 1; else if (ret == 1) { enqueue_slot_back(h, cached_free_slots, slot_id); return ret_val; @@ -1067,10 +1068,10 @@ __rte_hash_add_key_with_hash(const struct rte_hash *h, const void *key, * and key. */ __atomic_store_n(&cur_bkt->key_idx[i], - new_idx, + slot_id, __ATOMIC_RELEASE); __hash_rw_writer_unlock(h); - return new_idx - 1; + return slot_id - 1; } } } @@ -1078,26 +1079,26 @@ __rte_hash_add_key_with_hash(const struct rte_hash *h, const void *key, /* Failed to get an empty entry from extendable buckets. Link a new * extendable bucket. We first get a free bucket from ring. */ - if (rte_ring_sc_dequeue(h->free_ext_bkts, &ext_bkt_id) != 0) { + if (rte_ring_sc_dequeue_elem(h->free_ext_bkts, &ext_bkt_id, + sizeof(uint32_t)) != 0) { ret = -ENOSPC; goto failure; } - bkt_id = (uint32_t)((uintptr_t)ext_bkt_id) - 1; /* Use the first location of the new bucket */ - (h->buckets_ext[bkt_id]).sig_current[0] = short_sig; + (h->buckets_ext[ext_bkt_id - 1]).sig_current[0] = short_sig; /* Store to signature and key should not leak after * the store to key_idx. i.e. key_idx is the guard variable * for signature and key. */ - __atomic_store_n(&(h->buckets_ext[bkt_id]).key_idx[0], - new_idx, + __atomic_store_n(&(h->buckets_ext[ext_bkt_id - 1]).key_idx[0], + slot_id, __ATOMIC_RELEASE); /* Link the new bucket to sec bucket linked list */ last = rte_hash_get_last_bkt(sec_bkt); - last->next = &h->buckets_ext[bkt_id]; + last->next = &h->buckets_ext[ext_bkt_id - 1]; __hash_rw_writer_unlock(h); - return new_idx - 1; + return slot_id - 1; failure: __hash_rw_writer_unlock(h); @@ -1373,8 +1374,9 @@ remove_entry(const struct rte_hash *h, struct rte_hash_bucket *bkt, unsigned i) /* Cache full, need to free it. */ if (cached_free_slots->len == LCORE_CACHE_SIZE) { /* Need to enqueue the free slots in global ring. */ - n_slots = rte_ring_mp_enqueue_burst(h->free_slots, + n_slots = rte_ring_mp_enqueue_burst_elem(h->free_slots, cached_free_slots->objs, + sizeof(uint32_t), LCORE_CACHE_SIZE, NULL); ERR_IF_TRUE((n_slots == 0), "%s: could not enqueue free slots in global ring\n", @@ -1383,11 +1385,11 @@ remove_entry(const struct rte_hash *h, struct rte_hash_bucket *bkt, unsigned i) } /* Put index of new free slot in cache. 
*/ cached_free_slots->objs[cached_free_slots->len] = - (void *)((uintptr_t)bkt->key_idx[i]); + bkt->key_idx[i]; cached_free_slots->len++; } else { - rte_ring_sp_enqueue(h->free_slots, - (void *)((uintptr_t)bkt->key_idx[i])); + rte_ring_sp_enqueue_elem(h->free_slots, + &bkt->key_idx[i], sizeof(uint32_t)); } } @@ -1551,7 +1553,8 @@ __rte_hash_del_key_with_hash(const struct rte_hash *h, const void *key, */ h->ext_bkt_to_free[ret] = index; else - rte_ring_sp_enqueue(h->free_ext_bkts, (void *)(uintptr_t)index); + rte_ring_sp_enqueue_elem(h->free_ext_bkts, &index, + sizeof(uint32_t)); } __hash_rw_writer_unlock(h); return ret; @@ -1614,7 +1617,8 @@ rte_hash_free_key_with_position(const struct rte_hash *h, uint32_t index = h->ext_bkt_to_free[position]; if (index) { /* Recycle empty ext bkt to free list. */ - rte_ring_sp_enqueue(h->free_ext_bkts, (void *)(uintptr_t)index); + rte_ring_sp_enqueue_elem(h->free_ext_bkts, &index, + sizeof(uint32_t)); h->ext_bkt_to_free[position] = 0; } } @@ -1625,19 +1629,19 @@ rte_hash_free_key_with_position(const struct rte_hash *h, /* Cache full, need to free it. */ if (cached_free_slots->len == LCORE_CACHE_SIZE) { /* Need to enqueue the free slots in global ring. */ - n_slots = rte_ring_mp_enqueue_burst(h->free_slots, + n_slots = rte_ring_mp_enqueue_burst_elem(h->free_slots, cached_free_slots->objs, + sizeof(uint32_t), LCORE_CACHE_SIZE, NULL); RETURN_IF_TRUE((n_slots == 0), -EFAULT); cached_free_slots->len -= n_slots; } /* Put index of new free slot in cache. */ - cached_free_slots->objs[cached_free_slots->len] = - (void *)((uintptr_t)key_idx); + cached_free_slots->objs[cached_free_slots->len] = key_idx; cached_free_slots->len++; } else { - rte_ring_sp_enqueue(h->free_slots, - (void *)((uintptr_t)key_idx)); + rte_ring_sp_enqueue_elem(h->free_slots, &key_idx, + sizeof(uint32_t)); } return 0; diff --git a/lib/librte_hash/rte_cuckoo_hash.h b/lib/librte_hash/rte_cuckoo_hash.h index fb19bb27d..345de6bf9 100644 --- a/lib/librte_hash/rte_cuckoo_hash.h +++ b/lib/librte_hash/rte_cuckoo_hash.h @@ -124,7 +124,7 @@ const rte_hash_cmp_eq_t cmp_jump_table[NUM_KEY_CMP_CASES] = { struct lcore_cache { unsigned len; /**< Cache len */ - void *objs[LCORE_CACHE_SIZE]; /**< Cache objects */ + uint32_t objs[LCORE_CACHE_SIZE]; /**< Cache objects */ } __rte_cache_aligned; /* Structure that stores key-value pair */
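Worth noting in the patch above: because struct lcore_cache now holds uint32_t indices, refilling an empty per-lcore cache from the global ring becomes a single elem-ring burst dequeue with no casts. The pattern, lifted out for clarity (sketch; the function name is illustrative):

/* refill an empty per-lcore cache from the global free-slot ring */
static unsigned int
cache_refill_sketch(struct rte_ring *free_slots, struct lcore_cache *c)
{
	unsigned int n;

	n = rte_ring_mc_dequeue_burst_elem(free_slots, c->objs,
			sizeof(uint32_t), LCORE_CACHE_SIZE, NULL);
	c->len = n;	/* n == 0 means no free slots: the -ENOSPC path */
	return n;
}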
From patchwork Thu Jan 16 05:25:11 2020
X-Patchwork-Submitter: Honnappa Nagarahalli
X-Patchwork-Id: 182854
From: Honnappa Nagarahalli
To: olivier.matz@6wind.com, sthemmin@microsoft.com, jerinj@marvell.com, bruce.richardson@intel.com, david.marchand@redhat.com, pbhagavatula@marvell.com, konstantin.ananyev@intel.com, yipeng1.wang@intel.com, honnappa.nagarahalli@arm.com
Cc: dev@dpdk.org, dharmik.thakkar@arm.com, ruifeng.wang@arm.com, gavin.hu@arm.com, nd@arm.com
Date: Wed, 15 Jan 2020 23:25:11 -0600
Message-Id: <20200116052511.8557-7-honnappa.nagarahalli@arm.com>
In-Reply-To: <20200116052511.8557-1-honnappa.nagarahalli@arm.com>
References: <20190906190510.11146-1-honnappa.nagarahalli@arm.com> <20200116052511.8557-1-honnappa.nagarahalli@arm.com>
Subject: [dpdk-dev] [PATCH v9 6/6] lib/eventdev: use custom element size ring for event rings

Use custom element size ring APIs to replace event ring implementation. This avoids code duplication.
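The caller-visible API is unchanged: an event ring is now a thin wrapper over an rte_ring with 16B elements. A quick sanity sketch of the (unchanged) usage, assuming EAL is initialized; the ring name and count here are arbitrary:

static int
event_ring_sketch(void)
{
	struct rte_event ev = { .u64 = 42 };
	struct rte_event_ring *r;
	uint16_t space;
	int ret = -1;

	r = rte_event_ring_create("sketch_ev", 1024, rte_socket_id(),
			RING_F_SP_ENQ | RING_F_SC_DEQ);
	if (r == NULL)
		return -1;

	/* same calls as before the patch; only the backing store moved */
	if (rte_event_ring_enqueue_burst(r, &ev, 1, &space) == 1 &&
	    rte_event_ring_dequeue_burst(r, &ev, 1, NULL) == 1)
		ret = 0;

	rte_event_ring_free(r);
	return ret;
}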
Signed-off-by: Honnappa Nagarahalli Reviewed-by: Gavin Hu Reviewed-by: Ola Liljedahl --- lib/librte_eventdev/rte_event_ring.c | 147 ++------------------------- lib/librte_eventdev/rte_event_ring.h | 45 ++++---- 2 files changed, 24 insertions(+), 168 deletions(-) -- 2.17.1 Reviewed-by: Jerin Jacob diff --git a/lib/librte_eventdev/rte_event_ring.c b/lib/librte_eventdev/rte_event_ring.c index 50190de01..d27e23901 100644 --- a/lib/librte_eventdev/rte_event_ring.c +++ b/lib/librte_eventdev/rte_event_ring.c @@ -1,5 +1,6 @@ /* SPDX-License-Identifier: BSD-3-Clause * Copyright(c) 2017 Intel Corporation + * Copyright(c) 2019 Arm Limited */ #include @@ -11,13 +12,6 @@ #include #include "rte_event_ring.h" -TAILQ_HEAD(rte_event_ring_list, rte_tailq_entry); - -static struct rte_tailq_elem rte_event_ring_tailq = { - .name = RTE_TAILQ_EVENT_RING_NAME, -}; -EAL_REGISTER_TAILQ(rte_event_ring_tailq) - int rte_event_ring_init(struct rte_event_ring *r, const char *name, unsigned int count, unsigned int flags) @@ -35,150 +29,21 @@ struct rte_event_ring * rte_event_ring_create(const char *name, unsigned int count, int socket_id, unsigned int flags) { - char mz_name[RTE_MEMZONE_NAMESIZE]; - struct rte_event_ring *r; - struct rte_tailq_entry *te; - const struct rte_memzone *mz; - ssize_t ring_size; - int mz_flags = 0; - struct rte_event_ring_list *ring_list = NULL; - const unsigned int requested_count = count; - int ret; - - ring_list = RTE_TAILQ_CAST(rte_event_ring_tailq.head, - rte_event_ring_list); - - /* for an exact size ring, round up from count to a power of two */ - if (flags & RING_F_EXACT_SZ) - count = rte_align32pow2(count + 1); - else if (!rte_is_power_of_2(count)) { - rte_errno = EINVAL; - return NULL; - } - - ring_size = sizeof(*r) + (count * sizeof(struct rte_event)); - - ret = snprintf(mz_name, sizeof(mz_name), "%s%s", - RTE_RING_MZ_PREFIX, name); - if (ret < 0 || ret >= (int)sizeof(mz_name)) { - rte_errno = ENAMETOOLONG; - return NULL; - } - - te = rte_zmalloc("RING_TAILQ_ENTRY", sizeof(*te), 0); - if (te == NULL) { - RTE_LOG(ERR, RING, "Cannot reserve memory for tailq\n"); - rte_errno = ENOMEM; - return NULL; - } - - rte_mcfg_tailq_write_lock(); - - /* - * reserve a memory zone for this ring. 
If we can't get rte_config or - * we are secondary process, the memzone_reserve function will set - * rte_errno for us appropriately - hence no check in this this function - */ - mz = rte_memzone_reserve(mz_name, ring_size, socket_id, mz_flags); - if (mz != NULL) { - r = mz->addr; - /* Check return value in case rte_ring_init() fails on size */ - int err = rte_event_ring_init(r, name, requested_count, flags); - if (err) { - RTE_LOG(ERR, RING, "Ring init failed\n"); - if (rte_memzone_free(mz) != 0) - RTE_LOG(ERR, RING, "Cannot free memzone\n"); - rte_free(te); - rte_mcfg_tailq_write_unlock(); - return NULL; - } - - te->data = (void *) r; - r->r.memzone = mz; - - TAILQ_INSERT_TAIL(ring_list, te, next); - } else { - r = NULL; - RTE_LOG(ERR, RING, "Cannot reserve memory\n"); - rte_free(te); - } - rte_mcfg_tailq_write_unlock(); - - return r; + return (struct rte_event_ring *)rte_ring_create_elem(name, + sizeof(struct rte_event), + count, socket_id, flags); } struct rte_event_ring * rte_event_ring_lookup(const char *name) { - struct rte_tailq_entry *te; - struct rte_event_ring *r = NULL; - struct rte_event_ring_list *ring_list; - - ring_list = RTE_TAILQ_CAST(rte_event_ring_tailq.head, - rte_event_ring_list); - - rte_mcfg_tailq_read_lock(); - - TAILQ_FOREACH(te, ring_list, next) { - r = (struct rte_event_ring *) te->data; - if (strncmp(name, r->r.name, RTE_RING_NAMESIZE) == 0) - break; - } - - rte_mcfg_tailq_read_unlock(); - - if (te == NULL) { - rte_errno = ENOENT; - return NULL; - } - - return r; + return (struct rte_event_ring *)rte_ring_lookup(name); } /* free the ring */ void rte_event_ring_free(struct rte_event_ring *r) { - struct rte_event_ring_list *ring_list = NULL; - struct rte_tailq_entry *te; - - if (r == NULL) - return; - - /* - * Ring was not created with rte_event_ring_create, - * therefore, there is no memzone to free. 
- */ - if (r->r.memzone == NULL) { - RTE_LOG(ERR, RING, - "Cannot free ring (not created with rte_event_ring_create()"); - return; - } - - if (rte_memzone_free(r->r.memzone) != 0) { - RTE_LOG(ERR, RING, "Cannot free memory\n"); - return; - } - - ring_list = RTE_TAILQ_CAST(rte_event_ring_tailq.head, - rte_event_ring_list); - rte_mcfg_tailq_write_lock(); - - /* find out tailq entry */ - TAILQ_FOREACH(te, ring_list, next) { - if (te->data == (void *) r) - break; - } - - if (te == NULL) { - rte_mcfg_tailq_write_unlock(); - return; - } - - TAILQ_REMOVE(ring_list, te, next); - - rte_mcfg_tailq_write_unlock(); - - rte_free(te); + rte_ring_free((struct rte_ring *)r); } diff --git a/lib/librte_eventdev/rte_event_ring.h b/lib/librte_eventdev/rte_event_ring.h index 827a3209e..c0861b0ec 100644 --- a/lib/librte_eventdev/rte_event_ring.h +++ b/lib/librte_eventdev/rte_event_ring.h @@ -1,5 +1,6 @@ /* SPDX-License-Identifier: BSD-3-Clause * Copyright(c) 2016-2017 Intel Corporation + * Copyright(c) 2019 Arm Limited */ /** @@ -19,6 +20,7 @@ #include #include #include +#include #include "rte_eventdev.h" #define RTE_TAILQ_EVENT_RING_NAME "RTE_EVENT_RING" @@ -88,22 +90,17 @@ rte_event_ring_enqueue_burst(struct rte_event_ring *r, const struct rte_event *events, unsigned int n, uint16_t *free_space) { - uint32_t prod_head, prod_next; - uint32_t free_entries; + unsigned int num; + uint32_t space; - n = __rte_ring_move_prod_head(&r->r, r->r.prod.single, n, - RTE_RING_QUEUE_VARIABLE, - &prod_head, &prod_next, &free_entries); - if (n == 0) - goto end; + num = rte_ring_enqueue_burst_elem(&r->r, events, + sizeof(struct rte_event), n, + &space); - ENQUEUE_PTRS(&r->r, &r[1], prod_head, events, n, struct rte_event); - - update_tail(&r->r.prod, prod_head, prod_next, r->r.prod.single, 1); -end: if (free_space != NULL) - *free_space = free_entries - n; - return n; + *free_space = space; + + return num; } /** @@ -129,23 +126,17 @@ rte_event_ring_dequeue_burst(struct rte_event_ring *r, struct rte_event *events, unsigned int n, uint16_t *available) { - uint32_t cons_head, cons_next; - uint32_t entries; - - n = __rte_ring_move_cons_head(&r->r, r->r.cons.single, n, - RTE_RING_QUEUE_VARIABLE, - &cons_head, &cons_next, &entries); - if (n == 0) - goto end; + unsigned int num; + uint32_t remaining; - DEQUEUE_PTRS(&r->r, &r[1], cons_head, events, n, struct rte_event); + num = rte_ring_dequeue_burst_elem(&r->r, events, + sizeof(struct rte_event), n, + &remaining); - update_tail(&r->r.cons, cons_head, cons_next, r->r.cons.single, 0); - -end: if (available != NULL) - *available = entries - n; - return n; + *available = remaining; + + return num; } /*
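One design note on the conversion above: the elem APIs take the element size per call, and struct rte_event is a fixed two-64-bit-word type, so every event enqueue/dequeue goes through the ring's 16B copy path. If that size assumption ever needs guarding, a build-time check is cheap (sketch, not in the patch):

#include <rte_eventdev.h>

/* the cast from rte_event_ring to the elem APIs is only valid while
 * rte_event stays a fixed 16B type */
_Static_assert(sizeof(struct rte_event) == 16,
		"event ring conversion assumes 16B events");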