From patchwork Wed Sep 9 09:00:31 2015
From: Maxim Uvarov <maxim.uvarov@linaro.org>
To: lng-odp@lists.linaro.org
Date: Wed, 9 Sep 2015 12:00:31 +0300
Message-Id: <1441789231-31070-1-git-send-email-maxim.uvarov@linaro.org>
Subject: [lng-odp] [PATCHv3] validation: remove MAX_WORKERS

ODP has an API to request the number of available worker threads, so
there is no longer any need to limit that inside the application.

Signed-off-by: Maxim Uvarov
Reviewed-by: Nicolas Morey-Chaisemartin
---
v3: account for the fact that the thread_tbl calloc can fail.
v2: timer.c: do not subtract -1 from the number of workers.
 test/validation/common/odp_cunit_common.c     | 14 ++++++--
 test/validation/common/odp_cunit_common.h     |  2 --
 test/validation/scheduler/scheduler.c         |  3 --
 test/validation/shmem/shmem.c                 |  6 ++--
 test/validation/synchronizers/synchronizers.c | 48 ++++++++++-----------------
 test/validation/timer/timer.c                 | 10 ++----
 6 files changed, 34 insertions(+), 49 deletions(-)

diff --git a/test/validation/common/odp_cunit_common.c b/test/validation/common/odp_cunit_common.c
index d995ad3..4d61024 100644
--- a/test/validation/common/odp_cunit_common.c
+++ b/test/validation/common/odp_cunit_common.c
@@ -9,7 +9,7 @@
 #include
 #include
 /* Globals */
-static odph_linux_pthread_t thread_tbl[MAX_WORKERS];
+static odph_linux_pthread_t *thread_tbl;

 /*
  * global init/term functions which may be registered
@@ -26,9 +26,15 @@ static struct {
 int odp_cunit_thread_create(void *func_ptr(void *), pthrd_arg *arg)
 {
 	odp_cpumask_t cpumask;
+	int num;

 	/* Create and init additional threads */
-	odp_cpumask_def_worker(&cpumask, arg->numthrds);
+	num = odp_cpumask_def_worker(&cpumask, arg->numthrds);
+	thread_tbl = calloc(sizeof(odph_linux_pthread_t), num);
+	if (!thread_tbl) {
+		fprintf(stderr, "error: thread_tbl memory alloc.\n");
+		return -1;
+	}

 	return odph_linux_pthread_create(thread_tbl, &cpumask, func_ptr,
 					 (void *)arg);
@@ -39,6 +45,10 @@ int odp_cunit_thread_exit(pthrd_arg *arg)
 {
 	/* Wait for other threads to exit */
 	odph_linux_pthread_join(thread_tbl, arg->numthrds);
+	if (thread_tbl) {
+		free(thread_tbl);
+		thread_tbl = NULL;
+	}

 	return 0;
 }
diff --git a/test/validation/common/odp_cunit_common.h b/test/validation/common/odp_cunit_common.h
index 6cafaaa..f94b44e 100644
--- a/test/validation/common/odp_cunit_common.h
+++ b/test/validation/common/odp_cunit_common.h
@@ -16,8 +16,6 @@
 #include
 #include "CUnit/Basic.h"

-#define MAX_WORKERS 32 /**< Maximum number of work threads */
-
 /* the function, called by module main(), to run the testsuites: */
 int odp_cunit_run(CU_SuiteInfo testsuites[]);
diff --git a/test/validation/scheduler/scheduler.c b/test/validation/scheduler/scheduler.c
index 1874889..e12895d 100644
--- a/test/validation/scheduler/scheduler.c
+++ b/test/validation/scheduler/scheduler.c
@@ -8,7 +8,6 @@
 #include "odp_cunit_common.h"
 #include "scheduler.h"

-#define MAX_WORKERS_THREADS 32
 #define MSG_POOL_SIZE (4 * 1024 * 1024)
 #define QUEUES_PER_PRIO 16
 #define BUF_SIZE 64
@@ -1018,8 +1017,6 @@ int scheduler_suite_init(void)
 	memset(globals, 0, sizeof(test_globals_t));

 	globals->num_workers = odp_cpumask_def_worker(&mask, 0);
-	if (globals->num_workers > MAX_WORKERS)
-		globals->num_workers = MAX_WORKERS;

 	shm = odp_shm_reserve(SHM_THR_ARGS_NAME, sizeof(thread_args_t),
 			      ODP_CACHE_LINE_SIZE, 0);
diff --git a/test/validation/shmem/shmem.c b/test/validation/shmem/shmem.c
index 6dc579a..dfa5310 100644
--- a/test/validation/shmem/shmem.c
+++ b/test/validation/shmem/shmem.c
@@ -49,6 +49,7 @@ void shmem_test_odp_shm_sunnyday(void)
 	pthrd_arg thrdarg;
 	odp_shm_t shm;
 	test_shared_data_t *test_shared_data;
+	odp_cpumask_t mask;

 	shm = odp_shm_reserve(TESTNAME, sizeof(test_shared_data_t),
 			      ALIGE_SIZE, 0);
@@ -67,10 +68,7 @@ void shmem_test_odp_shm_sunnyday(void)
 	test_shared_data->foo = TEST_SHARE_FOO;
 	test_shared_data->bar = TEST_SHARE_BAR;

-	thrdarg.numthrds = odp_cpu_count();
-
-	if (thrdarg.numthrds > MAX_WORKERS)
-		thrdarg.numthrds = MAX_WORKERS;
+	thrdarg.numthrds = odp_cpumask_def_worker(&mask, 0);

 	odp_cunit_thread_create(run_shm_thread, &thrdarg);
 	odp_cunit_thread_exit(&thrdarg);
diff --git a/test/validation/synchronizers/synchronizers.c b/test/validation/synchronizers/synchronizers.c
index 0a31a40..914b37e 100644
--- a/test/validation/synchronizers/synchronizers.c
+++ b/test/validation/synchronizers/synchronizers.c
@@ -45,7 +45,7 @@ typedef struct {

 typedef struct {
 	/* Global variables */
-	uint32_t g_num_threads;
+	uint32_t g_num_workers;
 	uint32_t g_iterations;
 	uint32_t g_verbose;
 	uint32_t g_max_num_cores;
@@ -169,7 +169,7 @@ static uint32_t barrier_test(per_thread_mem_t *per_thread_mem,
 	thread_num = odp_thread_id();
 	global_mem = per_thread_mem->global_mem;
-	num_threads = global_mem->g_num_threads;
+	num_threads = global_mem->g_num_workers;
 	iterations = BARRIER_ITERATIONS;

 	barrier_errs = 0;
@@ -710,7 +710,7 @@ static void barrier_test_init(void)
 {
 	uint32_t num_threads, idx;

-	num_threads = global_mem->g_num_threads;
+	num_threads = global_mem->g_num_workers;

 	for (idx = 0; idx < NUM_TEST_BARRIERS; idx++) {
 		odp_barrier_init(&global_mem->test_barriers[idx], num_threads);
@@ -924,7 +924,7 @@ void synchronizers_test_no_barrier_functional(void)
 {
 	pthrd_arg arg;

-	arg.numthrds = global_mem->g_num_threads;
+	arg.numthrds = global_mem->g_num_workers;
 	barrier_test_init();
 	odp_cunit_thread_create(no_barrier_functional_test, &arg);
 	odp_cunit_thread_exit(&arg);
@@ -934,7 +934,7 @@ void synchronizers_test_barrier_functional(void)
 {
 	pthrd_arg arg;

-	arg.numthrds = global_mem->g_num_threads;
+	arg.numthrds = global_mem->g_num_workers;
 	barrier_test_init();
 	odp_cunit_thread_create(barrier_functional_test, &arg);
 	odp_cunit_thread_exit(&arg);
@@ -951,7 +951,7 @@ void synchronizers_test_no_lock_functional(void)
 {
 	pthrd_arg arg;

-	arg.numthrds = global_mem->g_num_threads;
+	arg.numthrds = global_mem->g_num_workers;
 	odp_cunit_thread_create(no_lock_functional_test, &arg);
 	odp_cunit_thread_exit(&arg);
 }
@@ -966,7 +966,7 @@ void synchronizers_test_spinlock_api(void)
 {
 	pthrd_arg arg;

-	arg.numthrds = global_mem->g_num_threads;
+	arg.numthrds = global_mem->g_num_workers;
 	odp_cunit_thread_create(spinlock_api_tests, &arg);
 	odp_cunit_thread_exit(&arg);
 }
@@ -975,7 +975,7 @@ void synchronizers_test_spinlock_functional(void)
 {
 	pthrd_arg arg;

-	arg.numthrds = global_mem->g_num_threads;
+	arg.numthrds = global_mem->g_num_workers;
 	odp_spinlock_init(&global_mem->global_spinlock);
 	odp_cunit_thread_create(spinlock_functional_test, &arg);
 	odp_cunit_thread_exit(&arg);
@@ -992,7 +992,7 @@ void synchronizers_test_ticketlock_api(void)
 {
 	pthrd_arg arg;

-	arg.numthrds = global_mem->g_num_threads;
+	arg.numthrds = global_mem->g_num_workers;
 	odp_cunit_thread_create(ticketlock_api_tests, &arg);
 	odp_cunit_thread_exit(&arg);
 }
@@ -1001,7 +1001,7 @@ void synchronizers_test_ticketlock_functional(void)
 {
 	pthrd_arg arg;

-	arg.numthrds = global_mem->g_num_threads;
+	arg.numthrds = global_mem->g_num_workers;
 	odp_ticketlock_init(&global_mem->global_ticketlock);

 	odp_cunit_thread_create(ticketlock_functional_test, &arg);
@@ -1019,7 +1019,7 @@ void synchronizers_test_rwlock_api(void)
 {
 	pthrd_arg arg;

-	arg.numthrds = global_mem->g_num_threads;
+	arg.numthrds = global_mem->g_num_workers;
 	odp_cunit_thread_create(rwlock_api_tests, &arg);
 	odp_cunit_thread_exit(&arg);
 }
@@ -1028,7 +1028,7 @@ void synchronizers_test_rwlock_functional(void)
 {
 	pthrd_arg arg;

-	arg.numthrds = global_mem->g_num_threads;
+	arg.numthrds = global_mem->g_num_workers;
 	odp_rwlock_init(&global_mem->global_rwlock);
 	odp_cunit_thread_create(rwlock_functional_test, &arg);
 	odp_cunit_thread_exit(&arg);
@@ -1044,7 +1044,7 @@ int synchronizers_suite_init(void)
 {
 	uint32_t num_threads, idx;

-	num_threads = global_mem->g_num_threads;
+	num_threads = global_mem->g_num_workers;
 	odp_barrier_init(&global_mem->global_barrier, num_threads);
 	for (idx = 0; idx < NUM_RESYNC_BARRIERS; idx++)
 		odp_barrier_init(&global_mem->barrier_array[idx], num_threads);
@@ -1054,7 +1054,6 @@ int synchronizers_suite_init(void)

 int synchronizers_init(void)
 {
-	uint32_t workers_count, max_threads;
 	int ret = 0;
 	odp_cpumask_t mask;

@@ -1078,25 +1077,12 @@ int synchronizers_init(void)
 	global_mem = odp_shm_addr(global_shm);
 	memset(global_mem, 0, sizeof(global_shared_mem_t));

-	global_mem->g_num_threads = MAX_WORKERS;
+	global_mem->g_num_workers = odp_cpumask_def_worker(&mask, 0);
 	global_mem->g_iterations = MAX_ITERATIONS;
 	global_mem->g_verbose = VERBOSE;

-	workers_count = odp_cpumask_def_worker(&mask, 0);
-
-	max_threads = (workers_count >= MAX_WORKERS) ?
-			MAX_WORKERS : workers_count;
-
-	if (max_threads < global_mem->g_num_threads) {
-		printf("Requested num of threads is too large\n");
-		printf("reducing from %" PRIu32 " to %" PRIu32 "\n",
-		       global_mem->g_num_threads,
-		       max_threads);
-		global_mem->g_num_threads = max_threads;
-	}
-
-	printf("Num of threads used = %" PRIu32 "\n",
-	       global_mem->g_num_threads);
+	printf("Num of workers used = %" PRIu32 "\n",
+	       global_mem->g_num_workers);

 	return ret;
 }
@@ -1158,7 +1144,7 @@ static void test_atomic_functional(void *func_ptr(void *))
 {
 	pthrd_arg arg;

-	arg.numthrds = global_mem->g_num_threads;
+	arg.numthrds = global_mem->g_num_workers;
 	test_atomic_init();
 	test_atomic_store();
 	odp_cunit_thread_create(func_ptr, &arg);
diff --git a/test/validation/timer/timer.c b/test/validation/timer/timer.c
index 7a8b98a..bcba3d4 100644
--- a/test/validation/timer/timer.c
+++ b/test/validation/timer/timer.c
@@ -34,12 +34,6 @@ static odp_timer_pool_t tp;
 /** @private Count of timeouts delivered too late */
 static odp_atomic_u32_t ndelivtoolate;

-/** @private min() function */
-static int min(int a, int b)
-{
-	return a < b ? a : b;
-}
-
 /* @private Timer helper structure */
 struct test_timer {
 	odp_timer_t tim; /* Timer handle */
@@ -441,10 +435,12 @@ void timer_test_odp_timer_all(void)
 	int rc;
 	odp_pool_param_t params;
 	odp_timer_pool_param_t tparam;
+	odp_cpumask_t mask;
+
 	/* Reserve at least one core for running other processes so the timer
 	 * test hopefully can run undisturbed and thus get better timing
 	 * results. */
-	int num_workers = min(odp_cpu_count() - 1, MAX_WORKERS);
+	int num_workers = odp_cpumask_def_worker(&mask, 0) - 1;
 	/* On a single-CPU machine run at least one thread */
 	if (num_workers < 1)
 		num_workers = 1;