From patchwork Tue May 17 13:04:13 2016
X-Patchwork-Submitter: Christophe Milard
X-Patchwork-Id: 67952
From: Christophe Milard
To: lng-odp@lists.linaro.org, brian.brooks@linaro.org, mike.holmes@linaro.org
Date: Tue, 17 May 2016 15:04:13 +0200
Message-Id: <1463490282-23277-7-git-send-email-christophe.milard@linaro.org>
In-Reply-To: <1463490282-23277-1-git-send-email-christophe.milard@linaro.org>
References: <1463490282-23277-1-git-send-email-christophe.milard@linaro.org>
Subject: [lng-odp] [PATCHv7 06/35] validation: using implementation agnostic
 function for ODP threads
List-Id: "The OpenDataPlane (ODP) List"
Sender: "lng-odp"

cunit_common is changed to use the implementation-agnostic ODP thread
create and join functions from the helpers.
Tests are no longer aware of whether an ODP thread is a Linux process or a
Linux thread under the hood: the helper decides. The function pointer passed
when creating ODP threads now points to a function returning an int instead
of a pointer, so that when ODP threads are processes, the returned int
becomes the process exit code. As before, the return code is only used to
detect errors.

Note that it is now important that ODP threads return a correct status:
when ODP threads are processes, they run on a copy of the memory, so CUnit
assertions made in them are not reflected in the main process summary.
Failing to return a proper status means that errors would be lost when
running the tests in process mode. ODP threads returning an error status
are detected as errors in the main process at join time.

Signed-off-by: Christophe Milard
---
 test/validation/atomic/atomic.c           | 34 ++++++++++++------------
 test/validation/barrier/barrier.c         |  8 +++---
 test/validation/common/odp_cunit_common.c | 14 ++++++----
 test/validation/common/odp_cunit_common.h |  4 +--
 test/validation/lock/lock.c               | 44 +++++++++++++++----------------
 test/validation/scheduler/scheduler.c     |  8 +++---
 test/validation/shmem/shmem.c             |  6 ++---
 test/validation/thread/thread.c           |  4 +--
 test/validation/timer/timer.c             |  4 +--
 9 files changed, 65 insertions(+), 61 deletions(-)

diff --git a/test/validation/atomic/atomic.c b/test/validation/atomic/atomic.c
index 5eec467..0dfd651 100644
--- a/test/validation/atomic/atomic.c
+++ b/test/validation/atomic/atomic.c
@@ -584,7 +584,7 @@ int atomic_init(odp_instance_t *inst)
 }
 
 /* Atomic tests */
-static void *test_atomic_inc_dec_thread(void *arg UNUSED)
+static int test_atomic_inc_dec_thread(void *arg UNUSED)
 {
 	per_thread_mem_t *per_thread_mem;
 
@@ -594,10 +594,10 @@ static void *test_atomic_inc_dec_thread(void *arg UNUSED)
 
 	thread_finalize(per_thread_mem);
 
-	return NULL;
+	return CU_get_number_of_failures();
 }
 
-static void *test_atomic_add_sub_thread(void *arg UNUSED)
+static int test_atomic_add_sub_thread(void *arg UNUSED)
 {
 	per_thread_mem_t *per_thread_mem;
 
@@ -607,10 +607,10 @@ static void *test_atomic_add_sub_thread(void *arg UNUSED)
 
 	thread_finalize(per_thread_mem);
 
-	return NULL;
+	return CU_get_number_of_failures();
 }
 
-static void *test_atomic_fetch_inc_dec_thread(void *arg UNUSED)
+static int test_atomic_fetch_inc_dec_thread(void *arg UNUSED)
 {
 	per_thread_mem_t *per_thread_mem;
 
@@ -620,10 +620,10 @@ static void *test_atomic_fetch_inc_dec_thread(void *arg UNUSED)
 
 	thread_finalize(per_thread_mem);
 
-	return NULL;
+	return CU_get_number_of_failures();
 }
 
-static void *test_atomic_fetch_add_sub_thread(void *arg UNUSED)
+static int test_atomic_fetch_add_sub_thread(void *arg UNUSED)
 {
 	per_thread_mem_t *per_thread_mem;
 
@@ -633,10 +633,10 @@ static void *test_atomic_fetch_add_sub_thread(void *arg UNUSED)
 
 	thread_finalize(per_thread_mem);
 
-	return NULL;
+	return CU_get_number_of_failures();
 }
 
-static void *test_atomic_max_min_thread(void *arg UNUSED)
+static int test_atomic_max_min_thread(void *arg UNUSED)
 {
 	per_thread_mem_t *per_thread_mem;
 
@@ -646,10 +646,10 @@ static void *test_atomic_max_min_thread(void *arg UNUSED)
 
 	thread_finalize(per_thread_mem);
 
-	return NULL;
+	return CU_get_number_of_failures();
 }
 
-static void *test_atomic_cas_inc_dec_thread(void *arg UNUSED)
+static int test_atomic_cas_inc_dec_thread(void *arg UNUSED)
 {
 	per_thread_mem_t *per_thread_mem;
 
@@ -659,10 +659,10 @@ static void *test_atomic_cas_inc_dec_thread(void *arg UNUSED)
 
 	thread_finalize(per_thread_mem);
 
-	return NULL;
+	return CU_get_number_of_failures();
 }
 
-static void *test_atomic_xchg_thread(void *arg UNUSED)
+static int test_atomic_xchg_thread(void *arg UNUSED)
 {
 	per_thread_mem_t *per_thread_mem;
 
@@ -672,10 +672,10 @@ static void *test_atomic_xchg_thread(void *arg UNUSED)
 
 	thread_finalize(per_thread_mem);
 
-	return NULL;
+	return CU_get_number_of_failures();
 }
 
-static void *test_atomic_non_relaxed_thread(void *arg UNUSED)
+static int test_atomic_non_relaxed_thread(void *arg UNUSED)
 {
 	per_thread_mem_t *per_thread_mem;
 
@@ -685,10 +685,10 @@ static void *test_atomic_non_relaxed_thread(void *arg UNUSED)
 
 	thread_finalize(per_thread_mem);
 
-	return NULL;
+	return CU_get_number_of_failures();
 }
 
-static void test_atomic_functional(void *func_ptr(void *), int check)
+static void test_atomic_functional(int func_ptr(void *), int check)
 {
 	pthrd_arg arg;
 
diff --git a/test/validation/barrier/barrier.c b/test/validation/barrier/barrier.c
index be6d22d..2a533dc 100644
--- a/test/validation/barrier/barrier.c
+++ b/test/validation/barrier/barrier.c
@@ -221,7 +221,7 @@ static uint32_t barrier_test(per_thread_mem_t *per_thread_mem,
 	return barrier_errs;
 }
 
-static void *no_barrier_functional_test(void *arg UNUSED)
+static int no_barrier_functional_test(void *arg UNUSED)
 {
 	per_thread_mem_t *per_thread_mem;
 	uint32_t barrier_errs;
@@ -239,10 +239,10 @@ static void *no_barrier_functional_test(void *arg UNUSED)
 	CU_ASSERT(barrier_errs != 0 || global_mem->g_num_threads == 1);
 
 	thread_finalize(per_thread_mem);
-	return NULL;
+	return CU_get_number_of_failures();
 }
 
-static void *barrier_functional_test(void *arg UNUSED)
+static int barrier_functional_test(void *arg UNUSED)
 {
 	per_thread_mem_t *per_thread_mem;
 	uint32_t barrier_errs;
@@ -253,7 +253,7 @@ static void *barrier_functional_test(void *arg UNUSED)
 	CU_ASSERT(barrier_errs == 0);
 
 	thread_finalize(per_thread_mem);
-	return NULL;
+	return CU_get_number_of_failures();
 }
 
 static void barrier_test_init(void)
diff --git a/test/validation/common/odp_cunit_common.c b/test/validation/common/odp_cunit_common.c
index 2a5864f..7df9aa6 100644
--- a/test/validation/common/odp_cunit_common.c
+++ b/test/validation/common/odp_cunit_common.c
@@ -9,7 +9,7 @@
 #include
 #include
 
 /* Globals */
-static odph_linux_pthread_t thread_tbl[MAX_WORKERS];
+static odph_odpthread_t thread_tbl[MAX_WORKERS];
 static odp_instance_t instance;
 
 /*
@@ -26,10 +26,10 @@ static struct {
 static odp_suiteinfo_t *global_testsuites;
 
 /** create test thread */
-int odp_cunit_thread_create(void *func_ptr(void *), pthrd_arg *arg)
+int odp_cunit_thread_create(int func_ptr(void *), pthrd_arg *arg)
 {
 	odp_cpumask_t cpumask;
-	odph_linux_thr_params_t thr_params;
+	odph_odpthread_params_t thr_params;
 
 	memset(&thr_params, 0, sizeof(thr_params));
 	thr_params.start = func_ptr;
@@ -40,14 +40,18 @@ int odp_cunit_thread_create(void *func_ptr(void *), pthrd_arg *arg)
 	/* Create and init additional threads */
 	odp_cpumask_default_worker(&cpumask, arg->numthrds);
 
-	return odph_linux_pthread_create(thread_tbl, &cpumask, &thr_params);
+	return odph_odpthreads_create(thread_tbl, &cpumask, &thr_params);
 }
 
 /** exit from test thread */
 int odp_cunit_thread_exit(pthrd_arg *arg)
 {
 	/* Wait for other threads to exit */
-	odph_linux_pthread_join(thread_tbl, arg->numthrds);
+	if (odph_odpthreads_join(thread_tbl) != arg->numthrds) {
+		fprintf(stderr,
+			"error: odph_odpthreads_join() failed.\n");
+		return -1;
+	}
 
 	return 0;
 }
diff --git a/test/validation/common/odp_cunit_common.h b/test/validation/common/odp_cunit_common.h
index 3812b0f..52fe203 100644
--- a/test/validation/common/odp_cunit_common.h
+++ b/test/validation/common/odp_cunit_common.h
@@ -82,8 +82,8 @@ int odp_cunit_update(odp_suiteinfo_t testsuites[]);
 /* the function, called by module main(), to run the testsuites: */
 int odp_cunit_run(void);
 
-/** create thread fro start_routine function */
-int odp_cunit_thread_create(void *func_ptr(void *), pthrd_arg *arg);
+/** create thread for start_routine function (which returns 0 on success) */
+int odp_cunit_thread_create(int func_ptr(void *), pthrd_arg *arg);
 int odp_cunit_thread_exit(pthrd_arg *);
 
 /**
diff --git a/test/validation/lock/lock.c b/test/validation/lock/lock.c
index e90095c..fb69261 100644
--- a/test/validation/lock/lock.c
+++ b/test/validation/lock/lock.c
@@ -148,7 +148,7 @@ static void spinlock_api_test(odp_spinlock_t *spinlock)
 	CU_ASSERT(odp_spinlock_is_locked(spinlock) == 0);
 }
 
-static void *spinlock_api_tests(void *arg UNUSED)
+static int spinlock_api_tests(void *arg UNUSED)
 {
 	global_shared_mem_t *global_mem;
 	per_thread_mem_t *per_thread_mem;
@@ -164,7 +164,7 @@ static void *spinlock_api_tests(void *arg UNUSED)
 
 	thread_finalize(per_thread_mem);
 
-	return NULL;
+	return CU_get_number_of_failures();
 }
 
 static void spinlock_recursive_api_test(odp_spinlock_recursive_t *spinlock)
@@ -197,7 +197,7 @@ static void spinlock_recursive_api_test(odp_spinlock_recursive_t *spinlock)
 	CU_ASSERT(odp_spinlock_recursive_is_locked(spinlock) == 0);
 }
 
-static void *spinlock_recursive_api_tests(void *arg UNUSED)
+static int spinlock_recursive_api_tests(void *arg UNUSED)
 {
 	global_shared_mem_t *global_mem;
 	per_thread_mem_t *per_thread_mem;
@@ -214,7 +214,7 @@ static void *spinlock_recursive_api_tests(void *arg UNUSED)
 
 	thread_finalize(per_thread_mem);
 
-	return NULL;
+	return CU_get_number_of_failures();
 }
 
 static void ticketlock_api_test(odp_ticketlock_t *ticketlock)
@@ -236,7 +236,7 @@ static void ticketlock_api_test(odp_ticketlock_t *ticketlock)
 	CU_ASSERT(odp_ticketlock_is_locked(ticketlock) == 0);
 }
 
-static void *ticketlock_api_tests(void *arg UNUSED)
+static int ticketlock_api_tests(void *arg UNUSED)
 {
 	global_shared_mem_t *global_mem;
 	per_thread_mem_t *per_thread_mem;
@@ -252,7 +252,7 @@ static void *ticketlock_api_tests(void *arg UNUSED)
 
 	thread_finalize(per_thread_mem);
 
-	return NULL;
+	return CU_get_number_of_failures();
 }
 
 static void rwlock_api_test(odp_rwlock_t *rw_lock)
@@ -286,7 +286,7 @@ static void rwlock_api_test(odp_rwlock_t *rw_lock)
 	odp_rwlock_write_unlock(rw_lock);
 }
 
-static void *rwlock_api_tests(void *arg UNUSED)
+static int rwlock_api_tests(void *arg UNUSED)
 {
 	global_shared_mem_t *global_mem;
 	per_thread_mem_t *per_thread_mem;
@@ -302,7 +302,7 @@ static void *rwlock_api_tests(void *arg UNUSED)
 
 	thread_finalize(per_thread_mem);
 
-	return NULL;
+	return CU_get_number_of_failures();
 }
 
 static void rwlock_recursive_api_test(odp_rwlock_recursive_t *rw_lock)
@@ -337,7 +337,7 @@ static void rwlock_recursive_api_test(odp_rwlock_recursive_t *rw_lock)
 	/* CU_ASSERT(odp_rwlock_is_locked(rw_lock) == 0); */
 }
 
-static void *rwlock_recursive_api_tests(void *arg UNUSED)
+static int rwlock_recursive_api_tests(void *arg UNUSED)
 {
 	global_shared_mem_t *global_mem;
 	per_thread_mem_t *per_thread_mem;
@@ -353,7 +353,7 @@ static void *rwlock_recursive_api_tests(void *arg UNUSED)
 
 	thread_finalize(per_thread_mem);
 
-	return NULL;
+	return CU_get_number_of_failures();
 }
 
 /*
@@ -362,7 +362,7 @@ static void *rwlock_recursive_api_tests(void *arg UNUSED)
 * so we have a fair chance to see that the tested synchronizer
 * does avoid the race condition.
 */
-static void *no_lock_functional_test(void *arg UNUSED)
+static int no_lock_functional_test(void *arg UNUSED)
 {
 	global_shared_mem_t *global_mem;
 	per_thread_mem_t *per_thread_mem;
@@ -479,10 +479,10 @@ static void *no_lock_functional_test(void *arg UNUSED)
 
 	thread_finalize(per_thread_mem);
 
-	return NULL;
+	return CU_get_number_of_failures();
 }
 
-static void *spinlock_functional_test(void *arg UNUSED)
+static int spinlock_functional_test(void *arg UNUSED)
 {
 	global_shared_mem_t *global_mem;
 	per_thread_mem_t *per_thread_mem;
@@ -564,10 +564,10 @@ static void *spinlock_functional_test(void *arg UNUSED)
 
 	thread_finalize(per_thread_mem);
 
-	return NULL;
+	return CU_get_number_of_failures();
 }
 
-static void *spinlock_recursive_functional_test(void *arg UNUSED)
+static int spinlock_recursive_functional_test(void *arg UNUSED)
 {
 	global_shared_mem_t *global_mem;
 	per_thread_mem_t *per_thread_mem;
@@ -673,10 +673,10 @@ static void *spinlock_recursive_functional_test(void *arg UNUSED)
 
 	thread_finalize(per_thread_mem);
 
-	return NULL;
+	return CU_get_number_of_failures();
 }
 
-static void *ticketlock_functional_test(void *arg UNUSED)
+static int ticketlock_functional_test(void *arg UNUSED)
 {
 	global_shared_mem_t *global_mem;
 	per_thread_mem_t *per_thread_mem;
@@ -760,10 +760,10 @@ static void *ticketlock_functional_test(void *arg UNUSED)
 
 	thread_finalize(per_thread_mem);
 
-	return NULL;
+	return CU_get_number_of_failures();
 }
 
-static void *rwlock_functional_test(void *arg UNUSED)
+static int rwlock_functional_test(void *arg UNUSED)
 {
 	global_shared_mem_t *global_mem;
 	per_thread_mem_t *per_thread_mem;
@@ -851,10 +851,10 @@ static void *rwlock_functional_test(void *arg UNUSED)
 
 	thread_finalize(per_thread_mem);
 
-	return NULL;
+	return CU_get_number_of_failures();
 }
 
-static void *rwlock_recursive_functional_test(void *arg UNUSED)
+static int rwlock_recursive_functional_test(void *arg UNUSED)
 {
 	global_shared_mem_t *global_mem;
 	per_thread_mem_t *per_thread_mem;
@@ -981,7 +981,7 @@ static void *rwlock_recursive_functional_test(void *arg UNUSED)
 
 	thread_finalize(per_thread_mem);
 
-	return NULL;
+	return CU_get_number_of_failures();
 }
 
 /* Thread-unsafe tests */
diff --git a/test/validation/scheduler/scheduler.c b/test/validation/scheduler/scheduler.c
index dce21cb..c459698 100644
--- a/test/validation/scheduler/scheduler.c
+++ b/test/validation/scheduler/scheduler.c
@@ -454,7 +454,7 @@ void scheduler_test_groups(void)
 	CU_ASSERT_FATAL(odp_pool_destroy(p) == 0);
 }
 
-static void *chaos_thread(void *arg)
+static int chaos_thread(void *arg)
 {
 	uint64_t i, wait;
 	int rc;
@@ -529,7 +529,7 @@ static void *chaos_thread(void *arg)
 	printf("Thread %d ends, elapsed time = %" PRIu64 "us\n",
 	       odp_thread_id(), odp_time_to_ns(diff) / 1000);
 
-	return NULL;
+	return CU_get_number_of_failures();
 }
 
 static void chaos_run(unsigned int qtype)
@@ -674,7 +674,7 @@ void scheduler_test_chaos(void)
 	chaos_run(3);
 }
 
-static void *schedule_common_(void *arg)
+static int schedule_common_(void *arg)
 {
 	thread_args_t *args = (thread_args_t *)arg;
 	odp_schedule_sync_t sync;
@@ -885,7 +885,7 @@ static void *schedule_common_(void *arg)
 	if (locked)
 		odp_ticketlock_unlock(&globals->lock);
 
-	return NULL;
+	return CU_get_number_of_failures();
 }
 
 static void fill_queues(thread_args_t *args)
diff --git a/test/validation/shmem/shmem.c b/test/validation/shmem/shmem.c
index b4c6847..cbff673 100644
--- a/test/validation/shmem/shmem.c
+++ b/test/validation/shmem/shmem.c
@@ -15,7 +15,7 @@
 
 static odp_barrier_t test_barrier;
 
-static void *run_shm_thread(void *arg)
+static int run_shm_thread(void *arg ODP_UNUSED)
 {
 	odp_shm_info_t info;
 	odp_shm_t shm;
@@ -44,7 +44,7 @@ static void *run_shm_thread(void *arg)
 	odp_shm_print_all();
 	fflush(stdout);
 
-	return arg;
+	return CU_get_number_of_failures();
 }
 
 void shmem_test_odp_shm_sunnyday(void)
@@ -78,7 +78,7 @@ void shmem_test_odp_shm_sunnyday(void)
 	odp_barrier_init(&test_barrier, thrdarg.numthrds);
 
 	odp_cunit_thread_create(run_shm_thread, &thrdarg);
-	odp_cunit_thread_exit(&thrdarg);
+	CU_ASSERT(odp_cunit_thread_exit(&thrdarg) >= 0);
 }
 
 odp_testinfo_t shmem_suite[] = {
diff --git a/test/validation/thread/thread.c b/test/validation/thread/thread.c
index 5cbec83..29ada26 100644
--- a/test/validation/thread/thread.c
+++ b/test/validation/thread/thread.c
@@ -32,7 +32,7 @@ void thread_test_odp_thread_count(void)
 	CU_PASS();
 }
 
-static void *thread_func(void *arg TEST_UNUSED)
+static int thread_func(void *arg TEST_UNUSED)
 {
 	/* indicate that thread has started */
 	odp_barrier_wait(&bar_entry);
@@ -42,7 +42,7 @@ static void *thread_func(void *arg TEST_UNUSED)
 	/* wait for indication that we can exit */
 	odp_barrier_wait(&bar_exit);
 
-	return NULL;
+	return CU_get_number_of_failures();
 }
 
 void thread_test_odp_thrmask_worker(void)
diff --git a/test/validation/timer/timer.c b/test/validation/timer/timer.c
index 378b427..b42f1d5 100644
--- a/test/validation/timer/timer.c
+++ b/test/validation/timer/timer.c
@@ -272,7 +272,7 @@ static void handle_tmo(odp_event_t ev, bool stale, uint64_t prev_tick)
 
 /* @private Worker thread entrypoint which performs timer alloc/set/cancel/free
 * tests */
-static void *worker_entrypoint(void *arg TEST_UNUSED)
+static int worker_entrypoint(void *arg TEST_UNUSED)
 {
 	int thr = odp_thread_id();
 	uint32_t i, allocated;
@@ -449,7 +449,7 @@ static void *worker_entrypoint(void *arg TEST_UNUSED)
 	free(tt);
 	LOG_DBG("Thread %u: exiting\n", thr);
 
-	return NULL;
+	return CU_get_number_of_failures();
 }
 
 /* @private Timer test case entrypoint */