From patchwork Tue Dec 16 22:46:20 2014
X-Patchwork-Submitter: Ola Liljedahl
X-Patchwork-Id: 42364
From: Ola Liljedahl <ola.liljedahl@linaro.org>
To: lng-odp@lists.linaro.org
Date: Tue, 16 Dec 2014 23:46:20 +0100
Message-Id: <1418769980-8244-4-git-send-email-ola.liljedahl@linaro.org>
X-Mailer: git-send-email 1.9.1
In-Reply-To: <1418769980-8244-1-git-send-email-ola.liljedahl@linaro.org>
References: <1418769980-8244-1-git-send-email-ola.liljedahl@linaro.org>
Subject: [lng-odp] [PATCHv2 3/3] test: odp_timer.h: cunit test

A new cunit test program test/validation/odp_timer.c for the updated timer API.
Signed-off-by: Ola Liljedahl <ola.liljedahl@linaro.org>
---
(This document/code contribution attached is provided under the terms of
agreement LES-LTM-21309)

 test/validation/.gitignore  |   1 +
 test/validation/Makefile.am |   4 +
 test/validation/odp_timer.c | 327 ++++++++++++++++++++++++++++++++++++++++++++
 3 files changed, 332 insertions(+)
 create mode 100644 test/validation/odp_timer.c

diff --git a/test/validation/.gitignore b/test/validation/.gitignore
index 32834ae..9c4cd86 100644
--- a/test/validation/.gitignore
+++ b/test/validation/.gitignore
@@ -5,3 +5,4 @@ odp_queue
 odp_crypto
 odp_schedule
 odp_shm
+odp_timer
diff --git a/test/validation/Makefile.am b/test/validation/Makefile.am
index d0b5426..f01a6f1 100644
--- a/test/validation/Makefile.am
+++ b/test/validation/Makefile.am
@@ -7,6 +7,7 @@ if ODP_CUNIT_ENABLED
 TESTS = ${bin_PROGRAMS}
 check_PROGRAMS = ${bin_PROGRAMS}
 bin_PROGRAMS = odp_init odp_queue odp_crypto odp_shm odp_schedule
+bin_PROGRAMS += odp_timer
 odp_init_LDFLAGS = $(AM_LDFLAGS)
 odp_queue_LDFLAGS = $(AM_LDFLAGS)
 odp_crypto_CFLAGS = $(AM_CFLAGS) -I$(srcdir)/crypto
@@ -15,6 +16,8 @@ odp_shm_CFLAGS = $(AM_CFLAGS)
 odp_shm_LDFLAGS = $(AM_LDFLAGS)
 odp_schedule_CFLAGS = $(AM_CFLAGS)
 odp_schedule_LDFLAGS = $(AM_LDFLAGS)
+odp_timer_CFLAGS = $(AM_CFLAGS)
+odp_timer_LDFLAGS = $(AM_LDFLAGS)
 endif
 
 dist_odp_init_SOURCES = odp_init.c
@@ -29,3 +32,4 @@ dist_odp_schedule_SOURCES = odp_schedule.c common/odp_cunit_common.c
 #For Linux generic the unimplemented crypto API functions break the
 #regression TODO: https://bugs.linaro.org/show_bug.cgi?id=975
 XFAIL_TESTS=odp_crypto
+dist_odp_timer_SOURCES = odp_timer.c common/odp_cunit_common.c
diff --git a/test/validation/odp_timer.c b/test/validation/odp_timer.c
new file mode 100644
index 0000000..5ffec29
--- /dev/null
+++ b/test/validation/odp_timer.c
@@ -0,0 +1,327 @@
+/* Copyright (c) 2014, Linaro Limited
+ * All rights reserved.
+ *
+ * SPDX-License-Identifier: BSD-3-Clause
+ */
+
+/**
+ * @file
+ */
+
+#include <assert.h>
+#include <stdio.h>
+#include <stdlib.h>
+#include <string.h>
+#include <unistd.h>
+#include <odp.h>
+#include "odp_cunit_common.h"
+
+/** @private Timeout range in milliseconds (ms) */
+#define RANGE_MS 2000
+
+/** @private Number of timers per thread */
+#define NTIMERS 2000
+
+/** @private Barrier for thread synchronisation */
+static odp_barrier_t test_barrier;
+
+/** @private Timeout buffer pool handle used by all threads */
+static odp_buffer_pool_t tbp;
+
+/** @private Timer pool handle used by all threads */
+static odp_timer_pool_t tp;
+
+/** @private min() function */
+static int min(int a, int b)
+{
+	return a < b ? a : b;
+}
+
+/* @private Timer helper structure */
+struct test_timer {
+	odp_timer_t tim; /* Timer handle */
+	odp_buffer_t buf; /* Timeout buffer */
+	odp_buffer_t buf2; /* Copy of buffer handle */
+	uint64_t tick; /* Expiration tick or ODP_TICK_INVALID */
+};
+
+/* @private Handle a received (timeout) buffer */
+static void handle_tmo(odp_buffer_t buf, bool stale, uint64_t prev_tick)
+{
+	odp_timer_t tim = ODP_TIMER_INVALID;
+	uint64_t tick = ODP_TICK_INVALID;
+	struct test_timer *ttp = NULL;
+
+	/* Use assert() for correctness check of test program itself */
+	assert(buf != ODP_BUFFER_INVALID);
+	if (!odp_timer_tmo_metadata(buf, &tim, &tick, (void **)&ttp)) {
+		/* Not a default timeout buffer */
+		CU_FAIL("Unexpected buffer type received");
+		return;
+	}
+
+	if (tim == ODP_TIMER_INVALID)
+		CU_FAIL("odp_timer_tmo_metadata() invalid timer");
+	if (tick == ODP_TICK_INVALID)
+		CU_FAIL("odp_timer_tmo_metadata() invalid tick");
+	if (ttp == NULL)
+		CU_FAIL("odp_timer_tmo_metadata() null user ptr");
+
+	if (ttp->buf2 != buf)
+		CU_FAIL("odp_timer_tmo_metadata() wrong user ptr");
+	if (ttp->tim != tim)
+		CU_FAIL("odp_timer_tmo_metadata() wrong timer");
+	if (stale) {
+		/* Stale timeout => timer must have invalid tick */
+		if (ttp->tick != ODP_TICK_INVALID)
+			CU_FAIL("Stale timeout for active timer");
+	} else {
+		/* Fresh timeout => timer must have matching tick */
+		if (ttp->tick != tick)
+			CU_FAIL("odp_timer_tmo_metadata() wrong tick");
+		/* Check that timeout was delivered 'timely' */
+		if (tick > odp_timer_current_tick(tp))
+			CU_FAIL("Timeout delivered too early");
+		if (tick < prev_tick)
+			CU_FAIL("Timeout delivered too late");
+	}
+
+	/* Use assert() for correctness check of test program itself */
+	assert(ttp->buf == ODP_BUFFER_INVALID);
+	ttp->buf = buf;
+}
+
+/* @private Worker thread entrypoint which performs timer alloc/set/cancel/free
+ * tests */
+static void *worker_entrypoint(void *arg)
+{
+	int thr = odp_thread_id();
+	uint32_t i;
+	unsigned seed = thr;
+	(void)arg;
+
+	odp_queue_t queue = odp_queue_create("timer_queue",
+					     ODP_QUEUE_TYPE_POLL,
+					     NULL);
+	if (queue == ODP_QUEUE_INVALID)
+		CU_FAIL_FATAL("Queue create failed");
+
+	struct test_timer *tt = malloc(sizeof(struct test_timer) * NTIMERS);
+	if (tt == NULL)
+		perror("malloc"), abort();
+
+	/* Prepare all timers */
+	for (i = 0; i < NTIMERS; i++) {
+		tt[i].tim = odp_timer_alloc(tp, queue, &tt[i]);
+		if (tt[i].tim == ODP_TIMER_INVALID)
+			CU_FAIL_FATAL("Failed to allocate timer");
+		tt[i].buf = odp_buffer_alloc(tbp);
+		if (tt[i].buf == ODP_BUFFER_INVALID)
+			CU_FAIL_FATAL("Failed to allocate timeout buffer");
+		tt[i].buf2 = tt[i].buf;
+		tt[i].tick = ODP_TICK_INVALID;
+	}
+
+	odp_barrier_wait(&test_barrier);
+
+	/* Initially set all timers with a random expiration time */
+	uint32_t nset = 0;
+	for (i = 0; i < NTIMERS; i++) {
+		uint64_t tck = odp_timer_current_tick(tp) + 1 +
+			       odp_timer_ns_to_tick(tp,
+						    (rand_r(&seed) % RANGE_MS)
+						    * 1000000ULL);
+		tt[i].tick = odp_timer_set_abs(tt[i].tim, tck, &tt[i].buf);
+		uint64_t rc = tt[i].tick;
+		if (rc == ODP_TICK_TOOEARLY ||
+		    rc == ODP_TICK_TOOLATE ||
+		    rc == ODP_TICK_INVALID) {
+			CU_FAIL("Failed to set timer");
+		}
+		nset++;
+	}
+
+	/* Step through wall time, 1ms at a time and check for expired timers */
+	uint32_t nrcv = 0;
+	uint32_t nreset = 0;
+	uint32_t ncancel = 0;
+	uint32_t ntoolate = 0;
+	uint32_t ms;
+	uint64_t prev_tick = odp_timer_current_tick(tp);
+	for (ms = 0; ms < 7 * RANGE_MS / 10; ms++) {
+		odp_buffer_t buf;
+		while ((buf = odp_queue_deq(queue)) !=
+		       ODP_BUFFER_INVALID) {
+			handle_tmo(buf, false, prev_tick - 1);
+			nrcv++;
+		}
+		prev_tick = odp_timer_current_tick(tp);
+		i = rand_r(&seed) % NTIMERS;
+		if (tt[i].buf == ODP_BUFFER_INVALID &&
+		    (rand_r(&seed) % 2 == 0)) {
+			/* Timer active, cancel it */
+			tt[i].tick = odp_timer_cancel(tt[i].tim, &tt[i].buf);
+			if (tt[i].buf == ODP_BUFFER_INVALID) {
+				/* Cancel failed, timer already expired */
+				ntoolate++;
+			}
+			ncancel++;
+		} else {
+			if (tt[i].buf != ODP_BUFFER_INVALID)
+				/* Timer inactive => set */
+				nset++;
+			else
+				/* Timer active => reset */
+				nreset++;
+			uint64_t tck = 1 + odp_timer_ns_to_tick(tp,
+				       (rand_r(&seed) % RANGE_MS) * 1000000ULL);
+			tt[i].tick = odp_timer_set_rel(tt[i].tim, tck,
+						       &tt[i].buf);
+			uint64_t rc = tt[i].tick;
+			if (rc == ODP_TICK_TOOEARLY ||
+			    rc == ODP_TICK_TOOLATE) {
+				CU_FAIL("Failed to set timer (tooearly/toolate)");
+			} else if (rc == ODP_TICK_INVALID) {
+				/* Reset failed, timer already expired */
+				ntoolate++;
+			}
+		}
+		if (usleep(1000/*1ms*/) < 0)
+			perror("usleep"), abort();
+	}
+
+	/* Free (including cancel) all timers */
+	uint32_t nstale = 0;
+	for (i = 0; i < NTIMERS; i++) {
+		tt[i].tick = odp_timer_free(tt[i].tim, &tt[i].buf);
+		if (tt[i].buf == ODP_BUFFER_INVALID)
+			/* Cancel/free too late, timer already expired and
+			 * timeout buffer enqueued */
+			nstale++;
+	}
+
+	printf("Thread %u: %u timers set\n", thr, nset);
+	printf("Thread %u: %u timers reset\n", thr, nreset);
+	printf("Thread %u: %u timers cancelled\n", thr, ncancel);
+	printf("Thread %u: %u timers reset/cancelled too late\n",
+	       thr, ntoolate);
+	printf("Thread %u: %u timeouts received\n", thr, nrcv);
+	printf("Thread %u: %u stale timeout(s) after odp_timer_free()\n",
+	       thr, nstale);
+
+	/* Delay some more to ensure timeouts for expired timers can be
+	 * received */
+	usleep(1000/*1ms*/);
+	while (nstale != 0) {
+		odp_buffer_t buf = odp_queue_deq(queue);
+		if (buf != ODP_BUFFER_INVALID) {
+			handle_tmo(buf, true, 0/*Don't care for stale tmo's*/);
+			nstale--;
+		} else {
+			CU_FAIL("Failed to receive stale timeout");
+			break;
+		}
+	}
+	/* Check if there are any more (unexpected) buffers */
+	odp_buffer_t buf = odp_queue_deq(queue);
+	if (buf != ODP_BUFFER_INVALID)
+		CU_FAIL("Unexpected buffer received");
+
+	printf("Thread %u: exiting\n", thr);
+	return NULL;
+}
+
+/* @private Timer test case entrypoint */
+static void test_odp_timer_all(void)
+{
+	odp_buffer_pool_param_t params;
+	int num_workers = min(odp_sys_cpu_count(), MAX_WORKERS);
+
+	/* Create timeout buffer pools */
+	params.buf_size = 0;
+	params.buf_align = ODP_CACHE_LINE_SIZE;
+	params.num_bufs = (NTIMERS + 1) * num_workers;
+	params.buf_type = ODP_BUFFER_TYPE_TIMEOUT;
+	tbp = odp_buffer_pool_create("tmo_pool", ODP_SHM_INVALID, &params);
+	if (tbp == ODP_BUFFER_POOL_INVALID)
+		CU_FAIL_FATAL("Timeout buffer pool create failed");
+
+#define NAME "timer_pool"
+#define RES (10 * ODP_TIME_MSEC / 3)
+#define MIN (10 * ODP_TIME_MSEC / 3)
+#define MAX (1000000 * ODP_TIME_MSEC)
+	/* Create a timer pool */
+	tp = odp_timer_pool_create(NAME, tbp,
+				   RES, MIN, MAX,
+				   num_workers * NTIMERS,
+				   true, ODP_CLOCK_CPU);
+	if (tp == ODP_TIMER_POOL_INVALID)
+		CU_FAIL_FATAL("Timer pool create failed");
+
+	/* Start all created timer pools */
+	odp_timer_pool_start();
+
+	odp_timer_pool_info_t tpinfo;
+	size_t sz = odp_timer_pool_info(tp, &tpinfo, sizeof(tpinfo));
+	if (sz < offsetof(odp_timer_pool_info_t, name) + strlen(NAME) + 1)
+		CU_FAIL("odp_timer_pool_info");
+	CU_ASSERT(strcmp(tpinfo.name, NAME) == 0);
+	CU_ASSERT(tpinfo.resolution == RES);
+	CU_ASSERT(tpinfo.min_tmo == odp_timer_ns_to_tick(tp, MIN));
+	CU_ASSERT(tpinfo.max_tmo == odp_timer_ns_to_tick(tp, MAX));
+	printf("Timer pool\n");
+	printf("----------\n");
+	printf("  name: %s\n", tpinfo.name);
+	printf("  resolution: %"PRIu64" ns (%"PRIu64" us)\n",
+	       tpinfo.resolution, tpinfo.resolution / 1000);
+	printf("  min tmo: %"PRIu64" tick(s)\n", tpinfo.min_tmo);
+	printf("  max tmo: %"PRIu64" ticks\n", tpinfo.max_tmo);
+	printf("\n");
+
+	printf("#timers..: %u\n", NTIMERS);
+	printf("Tmo range: %u ms (%"PRIu64" ticks)\n", RANGE_MS,
+	       odp_timer_ns_to_tick(tp, 1000000ULL * RANGE_MS));
+	printf("\n");
+
+	uint64_t tick;
+	for (tick = 0; tick < 1000000000000ULL; tick += 1000000ULL) {
+		uint64_t ns = odp_timer_tick_to_ns(tp, tick);
+		uint64_t t2 = odp_timer_ns_to_tick(tp, ns);
+		if (tick != t2)
+			CU_FAIL("Invalid conversion tick->ns->tick");
+	}
+
+	/* Initialize barrier used by worker threads for synchronization */
+	odp_barrier_init(&test_barrier, num_workers);
+
+	/* Create and start worker threads */
+	pthrd_arg thrdarg;
+	thrdarg.testcase = 0;
+	thrdarg.numthrds = num_workers;
+	odp_cunit_thread_create(worker_entrypoint, &thrdarg);
+
+	/* Wait for worker threads to exit */
+	odp_cunit_thread_exit(&thrdarg);
+
+	/* Check some statistics after the test */
+	sz = odp_timer_pool_info(tp, &tpinfo, sizeof(tpinfo));
+	if (sz < offsetof(odp_timer_pool_info_t, name) + strlen(NAME) + 1)
+		CU_FAIL("odp_timer_pool_info");
+	CU_ASSERT(tpinfo.num_timers == (unsigned)num_workers * NTIMERS);
+	CU_ASSERT(tpinfo.cur_timers == 0);
+	CU_ASSERT(tpinfo.hwm_timers == (unsigned)num_workers * NTIMERS);
+
+	/* Destroy timer pool, all timers must have been freed */
+	odp_timer_pool_destroy(tp);
+
+	CU_PASS("ODP timer test");
+}
+
+CU_TestInfo test_odp_timer[] = {
+	{"test_odp_timer_all", test_odp_timer_all},
+	CU_TEST_INFO_NULL,
+};
+
+CU_SuiteInfo odp_testsuites[] = {
+	{"Timer", NULL, NULL, NULL, NULL, test_odp_timer},
+	CU_SUITE_INFO_NULL,
+};
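
For context: odp_timer.c does not define main(). The Makefile.am change links common/odp_cunit_common.c into the odp_timer binary, and that shared harness is what registers and runs the odp_testsuites[] array defined above (it also provides odp_cunit_thread_create()/odp_cunit_thread_exit() and MAX_WORKERS). The sketch below shows the usual CUnit pattern such a harness follows; it is an illustration only, not the contents of the common file, and the ODP init/term steps the real harness performs are only summarised in comments.

/*
 * Sketch of a CUnit driver in the style of common/odp_cunit_common.c.
 * Assumption: the real harness additionally initialises ODP
 * (odp_init_global()/odp_init_local()) before running the suites and
 * tears it down afterwards; only the CUnit calls are shown here.
 */
#include <CUnit/Basic.h>

extern CU_SuiteInfo odp_testsuites[]; /* defined in odp_timer.c above */

int main(void)
{
	/* ODP global/local init would happen here in the real harness */

	if (CU_initialize_registry() != CUE_SUCCESS)
		return CU_get_error();

	if (CU_register_suites(odp_testsuites) != CUE_SUCCESS) {
		CU_cleanup_registry();
		return CU_get_error();
	}

	CU_basic_set_mode(CU_BRM_VERBOSE);
	CU_basic_run_tests();

	unsigned failures = CU_get_number_of_failures();
	CU_cleanup_registry();

	/* ODP local/global termination would happen here */
	return failures == 0 ? 0 : 1;
}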