From patchwork Fri Dec 20 04:45:21 2019
X-Patchwork-Submitter: Honnappa Nagarahalli
X-Patchwork-Id: 182214
From: Honnappa Nagarahalli <honnappa.nagarahalli@arm.com>
To: olivier.matz@6wind.com, sthemmin@microsoft.com, jerinj@marvell.com,
	bruce.richardson@intel.com, david.marchand@redhat.com,
	pbhagavatula@marvell.com, konstantin.ananyev@intel.com,
	honnappa.nagarahalli@arm.com
Cc: dev@dpdk.org, dharmik.thakkar@arm.com, ruifeng.wang@arm.com,
	gavin.hu@arm.com, nd@arm.com
Date: Thu, 19 Dec 2019 22:45:21 -0600
Message-Id: <20191220044524.32910-15-honnappa.nagarahalli@arm.com>
X-Mailer: git-send-email 2.17.1
In-Reply-To: <20191220044524.32910-1-honnappa.nagarahalli@arm.com>
References: <20190906190510.11146-1-honnappa.nagarahalli@arm.com>
	<20191220044524.32910-1-honnappa.nagarahalli@arm.com>
Subject: [dpdk-dev] [PATCH v7 14/17] test/ring: modify multi-lcore perf test cases

Modify test cases to test the performance of legacy and
rte_ring_xxx_elem APIs for multi lcore scenarios.

Signed-off-by: Honnappa Nagarahalli <honnappa.nagarahalli@arm.com>
Reviewed-by: Gavin Hu <gavin.hu@arm.com>
---
 app/test/test_ring_perf.c | 175 +++++++++++++++++++++++++-------------
 1 file changed, 115 insertions(+), 60 deletions(-)

--
2.17.1

diff --git a/app/test/test_ring_perf.c b/app/test/test_ring_perf.c
index 0f578c9ae..b893b5779 100644
--- a/app/test/test_ring_perf.c
+++ b/app/test/test_ring_perf.c
@@ -178,19 +178,21 @@ struct thread_params {
 };
 
 /*
- * Function that uses rdtsc to measure timing for ring enqueue. Needs pair
- * thread running dequeue_bulk function
+ * Helper function to call bulk SP/MP enqueue functions.
+ * flag == 0 -> enqueue
+ * flag == 1 -> dequeue
  */
-static int
-enqueue_bulk(void *p)
+static __rte_always_inline int
+enqueue_dequeue_bulk_helper(const unsigned int flag, const int esize,
+		struct thread_params *p)
 {
-	const unsigned iter_shift = 23;
-	const unsigned iterations = 1<<iter_shift;
-	struct thread_params *params = p;
-	struct rte_ring *r = params->r;
-	const unsigned size = params->size;
-	unsigned i;
-	void *burst[MAX_BURST] = {0};
+	int ret;
+	const unsigned int iter_shift = 23;
+	const unsigned int iterations = 1 << iter_shift;
+	struct rte_ring *r = p->r;
+	unsigned int bsize = p->size;
+	unsigned int i;
+	void *burst = NULL;
 
 #ifdef RTE_USE_C11_MEM_MODEL
 	if (__atomic_add_fetch(&lcore_count, 1, __ATOMIC_RELAXED) != 2)
@@ -200,23 +202,55 @@ enqueue_bulk(void *p)
 		while(lcore_count != 2)
 			rte_pause();
 
+	burst = test_ring_calloc(MAX_BURST, esize);
+	if (burst == NULL)
+		return -1;
+
 	const uint64_t sp_start = rte_rdtsc();
 	for (i = 0; i < iterations; i++)
-		while (rte_ring_sp_enqueue_bulk(r, burst, size, NULL) == 0)
-			rte_pause();
+		do {
+			if (flag == 0)
+				TEST_RING_ENQUEUE(r, burst, esize, bsize, ret,
+						TEST_RING_S | TEST_RING_BL);
+			else if (flag == 1)
+				TEST_RING_DEQUEUE(r, burst, esize, bsize, ret,
+						TEST_RING_S | TEST_RING_BL);
+			if (ret == 0)
+				rte_pause();
+		} while (!ret);
 	const uint64_t sp_end = rte_rdtsc();
 
 	const uint64_t mp_start = rte_rdtsc();
 	for (i = 0; i < iterations; i++)
-		while (rte_ring_mp_enqueue_bulk(r, burst, size, NULL) == 0)
-			rte_pause();
+		do {
+			if (flag == 0)
+				TEST_RING_ENQUEUE(r, burst, esize, bsize, ret,
+						TEST_RING_M | TEST_RING_BL);
+			else if (flag == 1)
+				TEST_RING_DEQUEUE(r, burst, esize, bsize, ret,
+						TEST_RING_M | TEST_RING_BL);
+			if (ret == 0)
+				rte_pause();
+		} while (!ret);
 	const uint64_t mp_end = rte_rdtsc();
 
-	params->spsc = ((double)(sp_end - sp_start))/(iterations*size);
-	params->mpmc = ((double)(mp_end - mp_start))/(iterations*size);
+	p->spsc = ((double)(sp_end - sp_start))/(iterations * bsize);
+	p->mpmc = ((double)(mp_end - mp_start))/(iterations * bsize);
 	return 0;
 }
 
+/*
+ * Function that uses rdtsc to measure timing for ring enqueue. Needs pair
+ * thread running dequeue_bulk function
+ */
+static int
+enqueue_bulk(void *p)
+{
+	struct thread_params *params = p;
+
+	return enqueue_dequeue_bulk_helper(0, -1, params);
+}
+
 /*
  * Function that uses rdtsc to measure timing for ring dequeue. Needs pair
  * thread running enqueue_bulk function
@@ -224,45 +258,41 @@ enqueue_bulk(void *p)
 static int
 dequeue_bulk(void *p)
 {
-	const unsigned iter_shift = 23;
-	const unsigned iterations = 1<<iter_shift;
 	struct thread_params *params = p;
-	struct rte_ring *r = params->r;
-	const unsigned size = params->size;
-	unsigned i;
-	void *burst[MAX_BURST] = {0};
 
-#ifdef RTE_USE_C11_MEM_MODEL
-	if (__atomic_add_fetch(&lcore_count, 1, __ATOMIC_RELAXED) != 2)
-#else
-	if (__sync_add_and_fetch(&lcore_count, 1) != 2)
-#endif
-		while(lcore_count != 2)
-			rte_pause();
+	return enqueue_dequeue_bulk_helper(1, -1, params);
+}
 
-	const uint64_t sc_start = rte_rdtsc();
-	for (i = 0; i < iterations; i++)
-		while (rte_ring_sc_dequeue_bulk(r, burst, size, NULL) == 0)
-			rte_pause();
-	const uint64_t sc_end = rte_rdtsc();
+/*
+ * Function that uses rdtsc to measure timing for ring enqueue. Needs pair
+ * thread running dequeue_bulk function
+ */
+static int
+enqueue_bulk_16B(void *p)
+{
+	struct thread_params *params = p;
 
-	const uint64_t mc_start = rte_rdtsc();
-	for (i = 0; i < iterations; i++)
-		while (rte_ring_mc_dequeue_bulk(r, burst, size, NULL) == 0)
-			rte_pause();
-	const uint64_t mc_end = rte_rdtsc();
+	return enqueue_dequeue_bulk_helper(0, 16, params);
+}
 
-	params->spsc = ((double)(sc_end - sc_start))/(iterations*size);
-	params->mpmc = ((double)(mc_end - mc_start))/(iterations*size);
-	return 0;
+/*
+ * Function that uses rdtsc to measure timing for ring dequeue. Needs pair
+ * thread running enqueue_bulk function
+ */
+static int
+dequeue_bulk_16B(void *p)
+{
+	struct thread_params *params = p;
+
+	return enqueue_dequeue_bulk_helper(1, 16, params);
 }
 
 /*
  * Function that calls the enqueue and dequeue bulk functions on pairs of cores.
  * used to measure ring perf between hyperthreads, cores and sockets.
  */
-static void
-run_on_core_pair(struct lcore_pair *cores, struct rte_ring *r,
+static int
+run_on_core_pair(struct lcore_pair *cores, struct rte_ring *r, int esize,
 		lcore_function_t f1, lcore_function_t f2)
 {
 	struct thread_params param1 = {0}, param2 = {0};
@@ -278,14 +308,20 @@ run_on_core_pair(struct lcore_pair *cores, struct rte_ring *r,
 		} else {
 			rte_eal_remote_launch(f1, &param1, cores->c1);
 			rte_eal_remote_launch(f2, &param2, cores->c2);
-			rte_eal_wait_lcore(cores->c1);
-			rte_eal_wait_lcore(cores->c2);
+			if (rte_eal_wait_lcore(cores->c1) < 0)
+				return -1;
+			if (rte_eal_wait_lcore(cores->c2) < 0)
+				return -1;
 		}
-		printf("SP/SC bulk enq/dequeue (size: %u): %.2F\n", bulk_sizes[i],
-				param1.spsc + param2.spsc);
-		printf("MP/MC bulk enq/dequeue (size: %u): %.2F\n", bulk_sizes[i],
-				param1.mpmc + param2.mpmc);
+		test_ring_print_test_string(TEST_RING_S | TEST_RING_BL, esize,
+					bulk_sizes[i],
+					param1.spsc + param2.spsc);
+		test_ring_print_test_string(TEST_RING_M | TEST_RING_BL, esize,
+					bulk_sizes[i],
+					param1.mpmc + param2.mpmc);
 	}
+
+	return 0;
 }
 
 static rte_atomic32_t synchro;
@@ -466,6 +502,24 @@ test_ring_perf(void)
 	printf("\n### Testing empty bulk deq ###\n");
 	test_empty_dequeue(r, -1, TEST_RING_S | TEST_RING_BL);
 	test_empty_dequeue(r, -1, TEST_RING_M | TEST_RING_BL);
+	if (get_two_hyperthreads(&cores) == 0) {
+		printf("\n### Testing using two hyperthreads ###\n");
+		if (run_on_core_pair(&cores, r, -1, enqueue_bulk,
+					dequeue_bulk) < 0)
+			return -1;
+	}
+	if (get_two_cores(&cores) == 0) {
+		printf("\n### Testing using two physical cores ###\n");
+		if (run_on_core_pair(&cores, r, -1, enqueue_bulk,
+					dequeue_bulk) < 0)
+			return -1;
+	}
+	if (get_two_sockets(&cores) == 0) {
+		printf("\n### Testing using two NUMA nodes ###\n");
+		if (run_on_core_pair(&cores, r, -1, enqueue_bulk,
+					dequeue_bulk) < 0)
+			return -1;
+	}
 	rte_ring_free(r);
 
 	TEST_RING_CREATE(RING_NAME, 16, RING_SIZE, rte_socket_id(), 0, r);
@@ -494,29 +548,30 @@ test_ring_perf(void)
 	printf("\n### Testing empty bulk deq ###\n");
 	test_empty_dequeue(r, 16, TEST_RING_S | TEST_RING_BL);
 	test_empty_dequeue(r, 16, TEST_RING_M | TEST_RING_BL);
-	rte_ring_free(r);
-
-	r = rte_ring_create(RING_NAME, RING_SIZE, rte_socket_id(), 0);
-	if (r == NULL)
-		return -1;
-
 	if (get_two_hyperthreads(&cores) == 0) {
 		printf("\n### Testing using two hyperthreads ###\n");
-		run_on_core_pair(&cores, r, enqueue_bulk, dequeue_bulk);
+		if (run_on_core_pair(&cores, r, 16, enqueue_bulk_16B,
+					dequeue_bulk_16B) < 0)
+			return -1;
 	}
 
 	if (get_two_cores(&cores) == 0) {
 		printf("\n### Testing using two physical cores ###\n");
-		run_on_core_pair(&cores, r, enqueue_bulk, dequeue_bulk);
+		if (run_on_core_pair(&cores, r, 16, enqueue_bulk_16B,
+					dequeue_bulk_16B) < 0)
+			return -1;
 	}
 
 	if (get_two_sockets(&cores) == 0) {
 		printf("\n### Testing using two NUMA nodes ###\n");
-		run_on_core_pair(&cores, r, enqueue_bulk, dequeue_bulk);
+		if (run_on_core_pair(&cores, r, 16, enqueue_bulk_16B,
+					dequeue_bulk_16B) < 0)
+			return -1;
 	}
 
 	printf("\n### Testing using all slave nodes ###\n");
 	run_on_all_cores(r);
 
 	rte_ring_free(r);
+	return 0;
 }
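
Note: the "legacy" and rte_ring_xxx_elem APIs compared by these test cases differ in
what the ring stores. The sketch below is illustrative only and is not part of the
patch; it assumes a DPDK application that has already called rte_eal_init(), and the
ring names and struct flow_key are invented for the example. The 16-byte element
mirrors the esize of 16 used by the *_16B test functions above.

/* Illustrative sketch: legacy pointer ring vs. rte_ring_xxx_elem ring.
 * "perf_legacy", "perf_elem" and struct flow_key are made-up names;
 * the element size must be a multiple of 4 bytes.
 */
#include <stdint.h>
#include <rte_lcore.h>
#include <rte_ring.h>
#include <rte_ring_elem.h>

struct flow_key {		/* 16-byte element, same size as the 16B test cases */
	uint64_t id;
	uint32_t src;
	uint32_t dst;
};

static int
ring_api_demo(void)
{
	void *ptrs[8] = {0};
	struct flow_key keys[8] = {0};

	/* Legacy API: the ring stores void * pointers only. */
	struct rte_ring *r = rte_ring_create("perf_legacy", 1024,
					rte_socket_id(), 0);
	if (r == NULL)
		return -1;
	rte_ring_enqueue_bulk(r, ptrs, 8, NULL);	/* 8 pointers in */
	rte_ring_dequeue_bulk(r, ptrs, 8, NULL);	/* 8 pointers out */

	/* Elem API: the ring stores fixed-size objects copied by value. */
	struct rte_ring *re = rte_ring_create_elem("perf_elem",
					sizeof(struct flow_key), 1024,
					rte_socket_id(), 0);
	if (re == NULL) {
		rte_ring_free(r);
		return -1;
	}
	rte_ring_enqueue_bulk_elem(re, keys, sizeof(struct flow_key), 8, NULL);
	rte_ring_dequeue_bulk_elem(re, keys, sizeof(struct flow_key), 8, NULL);

	rte_ring_free(r);
	rte_ring_free(re);
	return 0;
}

With the legacy API every object is a pointer, so a 16-byte payload needs a separate
allocation; the elem API copies the 16 bytes into the ring itself, which is what the
new 16B test cases measure against the pointer path.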
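
The change of run_on_core_pair() from void to int exists so that a failure inside the
worker (for example test_ring_calloc() returning NULL in the helper) propagates to the
caller instead of being ignored. A minimal sketch of that launch-and-wait pattern is
below; launch_pair, lcore1 and lcore2 are placeholder names, not code from this patch.

/* Illustrative sketch: launch one worker per lcore and propagate errors. */
#include <rte_launch.h>

static int
launch_pair(lcore_function_t *enq, lcore_function_t *deq,
		void *arg1, void *arg2,
		unsigned int lcore1, unsigned int lcore2)
{
	/* Start the enqueue and dequeue workers on their own lcores. */
	rte_eal_remote_launch(enq, arg1, lcore1);
	rte_eal_remote_launch(deq, arg2, lcore2);

	/* rte_eal_wait_lcore() returns the worker's return value;
	 * a negative value means the measurement failed. */
	if (rte_eal_wait_lcore(lcore1) < 0)
		return -1;
	if (rte_eal_wait_lcore(lcore2) < 0)
		return -1;

	return 0;
}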