From patchwork Wed Oct  9 02:47:09 2019
X-Patchwork-Submitter: Honnappa Nagarahalli <honnappa.nagarahalli@arm.com>
X-Patchwork-Id: 175586
From: Honnappa Nagarahalli <honnappa.nagarahalli@arm.com>
To: olivier.matz@6wind.com, sthemmin@microsoft.com, jerinj@marvell.com,
 bruce.richardson@intel.com, david.marchand@redhat.com,
 pbhagavatula@marvell.com, konstantin.ananyev@intel.com,
 honnappa.nagarahalli@arm.com
Cc: dev@dpdk.org, dharmik.thakkar@arm.com, ruifeng.wang@arm.com,
 gavin.hu@arm.com
Date: Tue, 8 Oct 2019 21:47:09 -0500
Message-Id: <20191009024709.38144-3-honnappa.nagarahalli@arm.com>
In-Reply-To: <20191009024709.38144-1-honnappa.nagarahalli@arm.com>
References: <20190906190510.11146-1-honnappa.nagarahalli@arm.com>
 <20191009024709.38144-1-honnappa.nagarahalli@arm.com>
Subject: [dpdk-dev] [PATCH v4 2/2] test/ring: add test cases for configurable
 element size ring

Add test cases to exercise the APIs of the configurable element size
ring.
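For reference, this is roughly how the element-size APIs exercised
below are driven. The sketch is illustrative only and not part of the
patch: the rte_ring_create_elem() argument order is an assumption, and
the esize-before-count order in the *_elem calls is inferred from the
calls in the test below.

	#include <rte_ring.h>
	#include <rte_ring_elem.h>

	/* a ring whose slots hold 8-byte elements (assumed arg order) */
	struct rte_ring *r = rte_ring_create_elem("SKETCH", 4096, 8,
			rte_socket_id(), 0);
	uint32_t objs[16] = {0};	/* 16 x 4B = eight 8-byte elements */

	/* enqueue, then dequeue, 8 elements of 8 bytes each */
	rte_ring_sp_enqueue_bulk_elem(r, objs, 8, 8, NULL);
	rte_ring_sc_dequeue_bulk_elem(r, objs, 8, 8, NULL);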
Signed-off-by: Honnappa Nagarahalli <honnappa.nagarahalli@arm.com>
---
 app/test/Makefile              |   1 +
 app/test/meson.build           |   1 +
 app/test/test_ring_perf_elem.c | 419 +++++++++++++++++++++++++++++++++
 3 files changed, 421 insertions(+)
 create mode 100644 app/test/test_ring_perf_elem.c

-- 
2.17.1

diff --git a/app/test/Makefile b/app/test/Makefile
index 26ba6fe2b..e5cb27b75 100644
--- a/app/test/Makefile
+++ b/app/test/Makefile
@@ -78,6 +78,7 @@ SRCS-y += test_rand_perf.c
 SRCS-y += test_ring.c
 SRCS-y += test_ring_perf.c
+SRCS-y += test_ring_perf_elem.c
 SRCS-y += test_pmd_perf.c
 
 ifeq ($(CONFIG_RTE_LIBRTE_TABLE),y)
diff --git a/app/test/meson.build b/app/test/meson.build
index ec40943bd..995ee9bc7 100644
--- a/app/test/meson.build
+++ b/app/test/meson.build
@@ -101,6 +101,7 @@ test_sources = files('commands.c',
 	'test_reorder.c',
 	'test_ring.c',
 	'test_ring_perf.c',
+	'test_ring_perf_elem.c',
 	'test_rwlock.c',
 	'test_sched.c',
 	'test_service_cores.c',
diff --git a/app/test/test_ring_perf_elem.c b/app/test/test_ring_perf_elem.c
new file mode 100644
index 000000000..fc5b82d71
--- /dev/null
+++ b/app/test/test_ring_perf_elem.c
@@ -0,0 +1,419 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(c) 2010-2014 Intel Corporation
+ */
+
+#include <stdio.h>
+#include <inttypes.h>
+#include <rte_ring.h>
+#include <rte_ring_elem.h>
+#include <rte_cycles.h>
+#include <rte_launch.h>
+#include <rte_pause.h>
+
+#include "test.h"
+
+/*
+ * Ring
+ * ====
+ *
+ * Measures performance of various operations using rdtsc
+ *  * Empty ring dequeue
+ *  * Enqueue/dequeue of bursts in 1 thread
+ *  * Enqueue/dequeue of bursts in 2 threads
+ */
+
+#define RING_NAME "RING_PERF"
+#define RING_SIZE 4096
+#define MAX_BURST 64
+
+/*
+ * the sizes to enqueue and dequeue in testing
+ * (marked volatile so they won't be seen as compile-time constants)
+ */
+static const volatile unsigned bulk_sizes[] = { 8, 32 };
+
+struct lcore_pair {
+	unsigned c1, c2;
+};
+
+static volatile unsigned lcore_count;
+
+/**** Functions to analyse our core mask to get cores for different tests ***/
+
+static int
+get_two_hyperthreads(struct lcore_pair *lcp)
+{
+	unsigned id1, id2;
+	unsigned c1, c2, s1, s2;
+	RTE_LCORE_FOREACH(id1) {
+		/* inner loop just re-reads all id's. We could skip the
+		 * first few elements, but since number of cores is small
+		 * there is little point
+		 */
+		RTE_LCORE_FOREACH(id2) {
+			if (id1 == id2)
+				continue;
+
+			c1 = rte_lcore_to_cpu_id(id1);
+			c2 = rte_lcore_to_cpu_id(id2);
+			s1 = rte_lcore_to_socket_id(id1);
+			s2 = rte_lcore_to_socket_id(id2);
+			if ((c1 == c2) && (s1 == s2)) {
+				lcp->c1 = id1;
+				lcp->c2 = id2;
+				return 0;
+			}
+		}
+	}
+	return 1;
+}
+
+static int
+get_two_cores(struct lcore_pair *lcp)
+{
+	unsigned id1, id2;
+	unsigned c1, c2, s1, s2;
+	RTE_LCORE_FOREACH(id1) {
+		RTE_LCORE_FOREACH(id2) {
+			if (id1 == id2)
+				continue;
+
+			c1 = rte_lcore_to_cpu_id(id1);
+			c2 = rte_lcore_to_cpu_id(id2);
+			s1 = rte_lcore_to_socket_id(id1);
+			s2 = rte_lcore_to_socket_id(id2);
+			if ((c1 != c2) && (s1 == s2)) {
+				lcp->c1 = id1;
+				lcp->c2 = id2;
+				return 0;
+			}
+		}
+	}
+	return 1;
+}
+
+static int
+get_two_sockets(struct lcore_pair *lcp)
+{
+	unsigned id1, id2;
+	unsigned s1, s2;
+	RTE_LCORE_FOREACH(id1) {
+		RTE_LCORE_FOREACH(id2) {
+			if (id1 == id2)
+				continue;
+			s1 = rte_lcore_to_socket_id(id1);
+			s2 = rte_lcore_to_socket_id(id2);
+			if (s1 != s2) {
+				lcp->c1 = id1;
+				lcp->c2 = id2;
+				return 0;
+			}
+		}
+	}
+	return 1;
+}
+
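+/*
+ * Each helper above returns the first lcore pair matching one topology
+ * constraint: two sibling hyperthreads on one physical core, two
+ * distinct cores on one socket, or two cores on different sockets.
+ * This lets the two-thread benchmarks compare ring performance at the
+ * three levels of core-to-core distance.
+ */
+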
+/* Get cycle counts for dequeuing from an empty ring.
+ * Should be 2 or 3 cycles
+ */
+static void
+test_empty_dequeue(struct rte_ring *r)
+{
+	const unsigned iter_shift = 26;
+	const unsigned iterations = 1 << iter_shift;
+	unsigned i = 0;
+	uint32_t burst[MAX_BURST];
+
+	const uint64_t sc_start = rte_rdtsc();
+	for (i = 0; i < iterations; i++)
+		rte_ring_sc_dequeue_bulk_elem(r, burst, 8, bulk_sizes[0],
+				NULL);
+	const uint64_t sc_end = rte_rdtsc();
+
+	const uint64_t mc_start = rte_rdtsc();
+	for (i = 0; i < iterations; i++)
+		rte_ring_mc_dequeue_bulk_elem(r, burst, 8, bulk_sizes[0],
+				NULL);
+	const uint64_t mc_end = rte_rdtsc();
+
+	printf("SC empty dequeue: %.2F\n",
+			(double)(sc_end-sc_start) / iterations);
+	printf("MC empty dequeue: %.2F\n",
+			(double)(mc_end-mc_start) / iterations);
+}
+
+/*
+ * for the separate enqueue and dequeue threads they take in one param
+ * and return two. Input = burst size, output = cycle average for sp/sc & mp/mc
+ */
+struct thread_params {
+	struct rte_ring *r;
+	unsigned size;		/* input value, the burst size */
+	double spsc, mpmc;	/* output value, the single/multi timings */
+};
+
+/*
+ * Function that uses rdtsc to measure timing for ring enqueue. Needs pair
+ * thread running dequeue_bulk function
+ */
+static int
+enqueue_bulk(void *p)
+{
+	const unsigned iter_shift = 23;
+	const unsigned iterations = 1 << iter_shift;
+	struct thread_params *params = p;
+	struct rte_ring *r = params->r;
+	const unsigned size = params->size;
+	unsigned i;
+	uint32_t burst[MAX_BURST] = {0};
+
+#ifdef RTE_USE_C11_MEM_MODEL
+	if (__atomic_add_fetch(&lcore_count, 1, __ATOMIC_RELAXED) != 2)
+#else
+	if (__sync_add_and_fetch(&lcore_count, 1) != 2)
+#endif
+		while (lcore_count != 2)
+			rte_pause();
+
+	const uint64_t sp_start = rte_rdtsc();
+	for (i = 0; i < iterations; i++)
+		while (rte_ring_sp_enqueue_bulk_elem(r, burst, 8, size, NULL)
+				== 0)
+			rte_pause();
+	const uint64_t sp_end = rte_rdtsc();
+
+	const uint64_t mp_start = rte_rdtsc();
+	for (i = 0; i < iterations; i++)
+		while (rte_ring_mp_enqueue_bulk_elem(r, burst, 8, size, NULL)
+				== 0)
+			rte_pause();
+	const uint64_t mp_end = rte_rdtsc();
+
+	params->spsc = ((double)(sp_end - sp_start))/(iterations*size);
+	params->mpmc = ((double)(mp_end - mp_start))/(iterations*size);
+	return 0;
+}
+
+/*
+ * Function that uses rdtsc to measure timing for ring dequeue. Needs pair
+ * thread running enqueue_bulk function
+ */
+static int
+dequeue_bulk(void *p)
+{
+	const unsigned iter_shift = 23;
+	const unsigned iterations = 1 << iter_shift;
+	struct thread_params *params = p;
+	struct rte_ring *r = params->r;
+	const unsigned size = params->size;
+	unsigned i;
+	uint32_t burst[MAX_BURST] = {0};
+
+#ifdef RTE_USE_C11_MEM_MODEL
+	if (__atomic_add_fetch(&lcore_count, 1, __ATOMIC_RELAXED) != 2)
+#else
+	if (__sync_add_and_fetch(&lcore_count, 1) != 2)
+#endif
+		while (lcore_count != 2)
+			rte_pause();
+
+	const uint64_t sc_start = rte_rdtsc();
+	for (i = 0; i < iterations; i++)
+		while (rte_ring_sc_dequeue_bulk_elem(r, burst, 8, size, NULL)
+				== 0)
+			rte_pause();
+	const uint64_t sc_end = rte_rdtsc();
+
+	const uint64_t mc_start = rte_rdtsc();
+	for (i = 0; i < iterations; i++)
+		while (rte_ring_mc_dequeue_bulk_elem(r, burst, 8, size, NULL)
+				== 0)
+			rte_pause();
+	const uint64_t mc_end = rte_rdtsc();
+
+	params->spsc = ((double)(sc_end - sc_start))/(iterations*size);
+	params->mpmc = ((double)(mc_end - mc_start))/(iterations*size);
+	return 0;
+}
+
+/*
+ * Function that calls the enqueue and dequeue bulk functions on pairs of
+ * cores. Used to measure ring perf between hyperthreads, cores and sockets.
+ */
+static void
+run_on_core_pair(struct lcore_pair *cores, struct rte_ring *r,
+		lcore_function_t f1, lcore_function_t f2)
+{
+	struct thread_params param1 = {0}, param2 = {0};
+	unsigned i;
+	for (i = 0; i < sizeof(bulk_sizes)/sizeof(bulk_sizes[0]); i++) {
+		lcore_count = 0;
+		param1.size = param2.size = bulk_sizes[i];
+		param1.r = param2.r = r;
+		if (cores->c1 == rte_get_master_lcore()) {
+			rte_eal_remote_launch(f2, &param2, cores->c2);
+			f1(&param1);
+			rte_eal_wait_lcore(cores->c2);
+		} else {
+			rte_eal_remote_launch(f1, &param1, cores->c1);
+			rte_eal_remote_launch(f2, &param2, cores->c2);
+			rte_eal_wait_lcore(cores->c1);
+			rte_eal_wait_lcore(cores->c2);
+		}
+		printf("SP/SC bulk enq/dequeue (size: %u): %.2F\n",
+				bulk_sizes[i], param1.spsc + param2.spsc);
+		printf("MP/MC bulk enq/dequeue (size: %u): %.2F\n",
+				bulk_sizes[i], param1.mpmc + param2.mpmc);
+	}
+}
+
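+/*
+ * The single-lcore benchmarks below run 2^iter_shift iterations and
+ * right-shift the rdtsc delta by iter_shift, i.e. divide by the
+ * iteration count, to report cycles per operation.
+ */
+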
+/*
+ * Test function that determines how long an enqueue + dequeue of a single item
+ * takes on a single lcore. Result is for comparison with the bulk enq+deq.
+ */
+static void
+test_single_enqueue_dequeue(struct rte_ring *r)
+{
+	const unsigned iter_shift = 24;
+	const unsigned iterations = 1 << iter_shift;
+	unsigned i = 0;
+	uint32_t burst[2];
+
+	const uint64_t sc_start = rte_rdtsc();
+	for (i = 0; i < iterations; i++) {
+		rte_ring_sp_enqueue_bulk_elem(r, burst, 8, 1, NULL);
+		rte_ring_sc_dequeue_bulk_elem(r, burst, 8, 1, NULL);
+	}
+	const uint64_t sc_end = rte_rdtsc();
+
+	const uint64_t mc_start = rte_rdtsc();
+	for (i = 0; i < iterations; i++) {
+		rte_ring_mp_enqueue_bulk_elem(r, burst, 8, 1, NULL);
+		rte_ring_mc_dequeue_bulk_elem(r, burst, 8, 1, NULL);
+	}
+	const uint64_t mc_end = rte_rdtsc();
+
+	printf("SP/SC single enq/dequeue: %"PRIu64"\n",
+			(sc_end-sc_start) >> iter_shift);
+	printf("MP/MC single enq/dequeue: %"PRIu64"\n",
+			(mc_end-mc_start) >> iter_shift);
+}
+
+/*
+ * Test that does both enqueue and dequeue on a core using the burst() API
+ * calls instead of the bulk() calls used in other tests. Results should be
+ * the same as for the bulk function called on a single lcore.
+ */
+static void
+test_burst_enqueue_dequeue(struct rte_ring *r)
+{
+	const unsigned iter_shift = 23;
+	const unsigned iterations = 1 << iter_shift;
+	unsigned sz, i = 0;
+	uint32_t burst[MAX_BURST] = {0};
+
+	for (sz = 0; sz < sizeof(bulk_sizes)/sizeof(bulk_sizes[0]); sz++) {
+		const uint64_t sc_start = rte_rdtsc();
+		for (i = 0; i < iterations; i++) {
+			rte_ring_sp_enqueue_burst_elem(r, burst, 8,
+					bulk_sizes[sz], NULL);
+			rte_ring_sc_dequeue_burst_elem(r, burst, 8,
+					bulk_sizes[sz], NULL);
+		}
+		const uint64_t sc_end = rte_rdtsc();
+
+		const uint64_t mc_start = rte_rdtsc();
+		for (i = 0; i < iterations; i++) {
+			rte_ring_mp_enqueue_burst_elem(r, burst, 8,
+					bulk_sizes[sz], NULL);
+			rte_ring_mc_dequeue_burst_elem(r, burst, 8,
+					bulk_sizes[sz], NULL);
+		}
+		const uint64_t mc_end = rte_rdtsc();
+
+		uint64_t mc_avg = ((mc_end-mc_start) >> iter_shift) /
+					bulk_sizes[sz];
+		uint64_t sc_avg = ((sc_end-sc_start) >> iter_shift) /
+					bulk_sizes[sz];
+
+		printf("SP/SC burst enq/dequeue (size: %u): %"PRIu64"\n",
+				bulk_sizes[sz], sc_avg);
+		printf("MP/MC burst enq/dequeue (size: %u): %"PRIu64"\n",
+				bulk_sizes[sz], mc_avg);
+	}
+}
+
+/* Times enqueue and dequeue on a single lcore */
+static void
+test_bulk_enqueue_dequeue(struct rte_ring *r)
+{
+	const unsigned iter_shift = 23;
+	const unsigned iterations = 1 << iter_shift;
+	unsigned sz, i = 0;
+	uint32_t burst[MAX_BURST] = {0};
+
+	for (sz = 0; sz < sizeof(bulk_sizes)/sizeof(bulk_sizes[0]); sz++) {
+		const uint64_t sc_start = rte_rdtsc();
+		for (i = 0; i < iterations; i++) {
+			rte_ring_sp_enqueue_bulk_elem(r, burst, 8,
+					bulk_sizes[sz], NULL);
+			rte_ring_sc_dequeue_bulk_elem(r, burst, 8,
+					bulk_sizes[sz], NULL);
+		}
+		const uint64_t sc_end = rte_rdtsc();
+
+		const uint64_t mc_start = rte_rdtsc();
+		for (i = 0; i < iterations; i++) {
+			rte_ring_mp_enqueue_bulk_elem(r, burst, 8,
+					bulk_sizes[sz], NULL);
+			rte_ring_mc_dequeue_bulk_elem(r, burst, 8,
+					bulk_sizes[sz], NULL);
+		}
+		const uint64_t mc_end = rte_rdtsc();
+
+		double sc_avg = ((double)(sc_end-sc_start) /
+					(iterations * bulk_sizes[sz]));
+		double mc_avg = ((double)(mc_end-mc_start) /
+					(iterations * bulk_sizes[sz]));
+
+		printf("SP/SC bulk enq/dequeue (size: %u): %.2F\n",
+				bulk_sizes[sz], sc_avg);
+		printf("MP/MC bulk enq/dequeue (size: %u): %.2F\n",
+				bulk_sizes[sz], mc_avg);
+	}
+}
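For orientation, since the remainder of the 419-line file is not quoted
above, a driver tying these benchmarks together would look roughly like
the sketch below, modeled on test_ring_perf.c. It is illustrative only,
not the patch's actual code: the rte_ring_create_elem() argument order
and the ring_perf_elem_autotest registration name are assumptions.

	static int
	test_ring_perf_elem(void)
	{
		struct lcore_pair cores;
		struct rte_ring *r = NULL;

		/* ring of RING_SIZE slots, each an 8-byte element
		 * (assumed argument order)
		 */
		r = rte_ring_create_elem(RING_NAME, RING_SIZE, 8,
				rte_socket_id(), 0);
		if (r == NULL)
			return -1;

		printf("### Testing single element and burst enq/deq ###\n");
		test_single_enqueue_dequeue(r);
		test_burst_enqueue_dequeue(r);

		printf("### Testing empty dequeue ###\n");
		test_empty_dequeue(r);

		printf("### Testing using a single lcore ###\n");
		test_bulk_enqueue_dequeue(r);

		if (get_two_hyperthreads(&cores) == 0) {
			printf("### Testing using two hyperthreads ###\n");
			run_on_core_pair(&cores, r, enqueue_bulk,
					dequeue_bulk);
		}
		if (get_two_cores(&cores) == 0) {
			printf("### Testing using two physical cores ###\n");
			run_on_core_pair(&cores, r, enqueue_bulk,
					dequeue_bulk);
		}
		if (get_two_sockets(&cores) == 0) {
			printf("### Testing using two NUMA nodes ###\n");
			run_on_core_pair(&cores, r, enqueue_bulk,
					dequeue_bulk);
		}

		rte_ring_free(r);
		return 0;
	}

	REGISTER_TEST_COMMAND(ring_perf_elem_autotest, test_ring_perf_elem);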