From patchwork Thu Aug 9 07:00:06 2018
Content-Type: text/plain; charset="utf-8"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
X-Patchwork-Submitter: Github ODP bot
X-Patchwork-Id: 143815
Delivered-To: patch@linaro.org
From: Github ODP bot <odpbot@yandex.ru>
To: lng-odp@lists.linaro.org
Date: Thu, 9 Aug 2018 07:00:06 +0000
Message-Id: <1533798006-29905-2-git-send-email-odpbot@yandex.ru>
X-Mailer: git-send-email 2.7.4
In-Reply-To: <1533798006-29905-1-git-send-email-odpbot@yandex.ru>
References: <1533798006-29905-1-git-send-email-odpbot@yandex.ru>
Github-pr-num: 666
Subject: [lng-odp] [PATCH v2 1/1] test: sched_perf: add num queues option
List-Id: "The OpenDataPlane (ODP) List"
Sender: "lng-odp" <lng-odp-bounces@lists.linaro.org>

From: Petri Savolainen

Added an option to set the number of queues per worker thread. The number
of active queues usually affects scheduler performance.

Signed-off-by: Petri Savolainen
---
/** Email created from pull request 666 (psavol:master-sched-perf-numqueue)
 ** https://github.com/Linaro/odp/pull/666
 ** Patch: https://github.com/Linaro/odp/pull/666.patch
 ** Base sha: 7c87b66edc84e8c713fefc68d46464660adaf71e
 ** Merge commit sha: 795644af6ee1c45181c83bd6a115c1722632aa6b
 **/
 test/performance/odp_sched_perf.c | 67 +++++++++++++++++++++----------
 1 file changed, 46 insertions(+), 21 deletions(-)

diff --git a/test/performance/odp_sched_perf.c b/test/performance/odp_sched_perf.c
index e76725cc0..ac2b9005b 100644
--- a/test/performance/odp_sched_perf.c
+++ b/test/performance/odp_sched_perf.c
@@ -14,12 +14,18 @@
 #include <odp_api.h>
 #include <odp/helper/odph_api.h>
 
+#define MAX_QUEUES_PER_CPU 1024
+#define MAX_QUEUES (ODP_THREAD_COUNT_MAX * MAX_QUEUES_PER_CPU)
+
 typedef struct test_options_t {
 	uint32_t num_cpu;
+	uint32_t num_queue;
 	uint32_t num_event;
 	uint32_t num_round;
 	uint32_t max_burst;
 	int queue_type;
+	uint32_t tot_queue;
+	uint32_t tot_event;
 
 } test_options_t;
 
@@ -38,7 +44,7 @@ typedef struct test_global_t {
 	odp_barrier_t barrier;
 	odp_pool_t pool;
 	odp_cpumask_t cpumask;
-	odp_queue_t queue[ODP_THREAD_COUNT_MAX];
+	odp_queue_t queue[MAX_QUEUES];
 	odph_odpthread_t thread_tbl[ODP_THREAD_COUNT_MAX];
 	test_stat_t stat[ODP_THREAD_COUNT_MAX];
 
@@ -53,11 +59,12 @@ static void print_usage(void)
 	       "\n"
 	       "Usage: odp_sched_perf [options]\n"
 	       "\n"
-	       "  -c, --num_cpu   Number of CPUs (worker threads). 0: all available CPUs. Default 1.\n"
+	       "  -c, --num_cpu   Number of CPUs (worker threads). 0: all available CPUs. Default: 1.\n"
+	       "  -q, --num_queue Number of queues per CPU. Default: 1.\n"
 	       "  -e, --num_event Number of events per queue\n"
 	       "  -r, --num_round Number of rounds\n"
 	       "  -b, --burst     Maximum number of events per operation\n"
-	       "  -t, --type      Queue type. 0: parallel, 1: atomic, 2: ordered. Default 0.\n"
+	       "  -t, --type      Queue type. 0: parallel, 1: atomic, 2: ordered. Default: 0.\n"
 	       "  -h, --help      This help\n"
 	       "\n");
 }
 
@@ -70,6 +77,7 @@ static int parse_options(int argc, char *argv[], test_options_t *test_options)
 	static const struct option longopts[] = {
 		{"num_cpu",   required_argument, NULL, 'c'},
+		{"num_queue", required_argument, NULL, 'q'},
 		{"num_event", required_argument, NULL, 'e'},
 		{"num_round", required_argument, NULL, 'r'},
 		{"burst",     required_argument, NULL, 'b'},
@@ -78,9 +86,10 @@ static int parse_options(int argc, char *argv[], test_options_t *test_options)
 		{NULL, 0, NULL, 0}
 	};
 
-	static const char *shortopts = "+c:e:r:b:t:h";
+	static const char *shortopts = "+c:q:e:r:b:t:h";
 
 	test_options->num_cpu = 1;
+	test_options->num_queue = 1;
 	test_options->num_event = 100;
 	test_options->num_round = 100000;
 	test_options->max_burst = 100;
@@ -96,6 +105,9 @@ static int parse_options(int argc, char *argv[], test_options_t *test_options)
 		case 'c':
 			test_options->num_cpu = atoi(optarg);
 			break;
+		case 'q':
+			test_options->num_queue = atoi(optarg);
+			break;
 		case 'e':
 			test_options->num_event = atoi(optarg);
 			break;
@@ -117,6 +129,17 @@ static int parse_options(int argc, char *argv[], test_options_t *test_options)
 		}
 	}
 
+	if (test_options->num_queue > MAX_QUEUES_PER_CPU) {
+		printf("Error: Too many queues per worker. Max supported %i.\n",
+		       MAX_QUEUES_PER_CPU);
+		ret = -1;
+	}
+
+	test_options->tot_queue = test_options->num_queue *
+				  test_options->num_cpu;
+	test_options->tot_event = test_options->tot_queue *
+				  test_options->num_event;
+
 	return ret;
 }
 
@@ -157,18 +180,22 @@ static int create_pool(test_global_t *global)
 	odp_pool_param_t pool_param;
 	odp_pool_t pool;
 	test_options_t *test_options = &global->test_options;
+	uint32_t num_cpu   = test_options->num_cpu;
+	uint32_t num_queue = test_options->num_queue;
 	uint32_t num_event = test_options->num_event;
 	uint32_t num_round = test_options->num_round;
 	uint32_t max_burst = test_options->max_burst;
-	int num_cpu = test_options->num_cpu;
-	uint32_t tot_event = num_event * num_cpu;
+	uint32_t tot_queue = test_options->tot_queue;
+	uint32_t tot_event = test_options->tot_event;
 
 	printf("\nScheduler performance test\n");
-	printf("  num cpu          %i\n", num_cpu);
-	printf("  num rounds       %u\n", num_round);
-	printf("  num events       %u\n", tot_event);
+	printf("  num cpu          %u\n", num_cpu);
+	printf("  queues per cpu   %u\n", num_queue);
 	printf("  events per queue %u\n", num_event);
-	printf("  max burst        %u\n", max_burst);
+	printf("  max burst size   %u\n", max_burst);
+	printf("  num queues       %u\n", tot_queue);
+	printf("  num events       %u\n", tot_event);
+	printf("  num rounds       %u\n", num_round);
 
 	if (odp_pool_capability(&pool_capa)) {
 		printf("Error: Pool capa failed.\n");
@@ -207,7 +234,7 @@ static int create_queues(test_global_t *global)
 	uint32_t i, j;
 	test_options_t *test_options = &global->test_options;
 	uint32_t num_event = test_options->num_event;
-	uint32_t num_queue = test_options->num_cpu;
+	uint32_t tot_queue = test_options->tot_queue;
 	int type = test_options->queue_type;
 	odp_pool_t pool = global->pool;
@@ -222,7 +249,6 @@
 		sync = ODP_SCHED_SYNC_ORDERED;
 	}
 
-	printf("  num queues       %u\n", num_queue);
 	printf("  queue type       %s\n\n", type_str);
 
 	if (odp_queue_capability(&queue_capa)) {
@@ -230,7 +256,7 @@
 		return -1;
 	}
 
-	if (num_queue > queue_capa.sched.max_num) {
+	if (tot_queue > queue_capa.sched.max_num) {
 		printf("Max queues supported %u\n", queue_capa.sched.max_num);
 		return -1;
 	}
@@ -241,9 +267,6 @@
 		return -1;
 	}
 
-	for (i = 0; i < ODP_THREAD_COUNT_MAX; i++)
-		global->queue[i] = ODP_QUEUE_INVALID;
-
 	odp_queue_param_init(&queue_param);
 	queue_param.type = ODP_QUEUE_TYPE_SCHED;
 	queue_param.sched.prio = ODP_SCHED_PRIO_DEFAULT;
@@ -251,18 +274,18 @@
 	queue_param.sched.group = ODP_SCHED_GROUP_ALL;
 	queue_param.size = num_event;
 
-	for (i = 0; i < num_queue; i++) {
+	for (i = 0; i < tot_queue; i++) {
 		queue = odp_queue_create(NULL, &queue_param);
 
+		global->queue[i] = queue;
+
 		if (queue == ODP_QUEUE_INVALID) {
 			printf("Error: Queue create failed %u\n", i);
 			return -1;
 		}
-
-		global->queue[i] = queue;
 	}
 
-	for (i = 0; i < num_queue; i++) {
+	for (i = 0; i < tot_queue; i++) {
 		queue = global->queue[i];
 
 		for (j = 0; j < num_event; j++) {
@@ -288,13 +311,15 @@ static int destroy_queues(test_global_t *global)
 	uint32_t i;
 	odp_event_t ev;
 	uint64_t wait;
+	test_options_t *test_options = &global->test_options;
+	uint32_t tot_queue = test_options->tot_queue;
 
 	wait = odp_schedule_wait_time(200 * ODP_TIME_MSEC_IN_NS);
 
 	while ((ev = odp_schedule(NULL, wait)) != ODP_EVENT_INVALID)
 		odp_event_free(ev);
 
-	for (i = 0; i < ODP_THREAD_COUNT_MAX; i++) {
+	for (i = 0; i < tot_queue; i++) {
 		if (global->queue[i] != ODP_QUEUE_INVALID) {
 			if (odp_queue_destroy(global->queue[i])) {
 				printf("Error: Queue destroy failed %u\n", i);