From patchwork Wed Jan 27 17:36:33 2016
Content-Type: text/plain; charset="utf-8"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
X-Patchwork-Submitter: Zoltan Kiss
X-Patchwork-Id: 60634
Delivered-To: patch@linaro.org
From: Zoltan Kiss
To: lng-odp@lists.linaro.org
Date: Wed, 27 Jan 2016 17:36:33 +0000
Message-Id: <1453916193-31377-1-git-send-email-zoltan.kiss@linaro.org>
X-Mailer: git-send-email 1.9.1
In-Reply-To: <1453487050-10913-3-git-send-email-zoltan.kiss@linaro.org>
References: <1453487050-10913-3-git-send-email-zoltan.kiss@linaro.org>
X-Topics: timers patch
Cc:
 Ola.Liljedahl@arm.com, petri.savolainen@nokia.com
Subject: [lng-odp] [API-NEXT PATCH 2/2 v2] validation: timer: handle early exhaustion of pool
List-Id: "The OpenDataPlane \(ODP\) List"
Sender: "lng-odp" <lng-odp-bounces@lists.linaro.org>

As per-thread caches might retain some elements, no particular thread
should assume that a certain number of elements is available at any
time. Also, to make the high watermark test reliable, we should avoid
releasing timers.

Signed-off-by: Zoltan Kiss
Reviewed-by: Ola Liljedahl
---
v2:
- keep the high watermark test by bookkeeping the exact amount of
  allocations. This needs a change in the order of allocation to make
  sure no timer is released, otherwise the watermark becomes
  unpredictable
- use Ola's subject recommendation

 test/validation/timer/timer.c | 38 +++++++++++++++++++++++++++-----------
 1 file changed, 27 insertions(+), 11 deletions(-)

diff --git a/test/validation/timer/timer.c b/test/validation/timer/timer.c
index 5d89700..8f00788 100644
--- a/test/validation/timer/timer.c
+++ b/test/validation/timer/timer.c
@@ -37,6 +37,10 @@ static odp_timer_pool_t tp;
 /** @private Count of timeouts delivered too late */
 static odp_atomic_u32_t ndelivtoolate;
 
+/** @private Sum of all allocated timers from all threads. Thread-local
+ * caches may make this number lower than the capacity of the pool */
+static odp_atomic_u32_t timers_allocated;
+
 /** @private min() function */
 static int min(int a, int b)
 {
@@ -274,7 +278,7 @@ static void handle_tmo(odp_event_t ev, bool stale, uint64_t prev_tick)
 static void *worker_entrypoint(void *arg TEST_UNUSED)
 {
 	int thr = odp_thread_id();
-	uint32_t i;
+	uint32_t i, allocated;
 	unsigned seed = thr;
 	int rc;
 
@@ -290,21 +294,30 @@ static void *worker_entrypoint(void *arg TEST_UNUSED)
 
 	/* Prepare all timers */
 	for (i = 0; i < NTIMERS; i++) {
-		tt[i].tim = odp_timer_alloc(tp, queue, &tt[i]);
-		if (tt[i].tim == ODP_TIMER_INVALID)
-			CU_FAIL_FATAL("Failed to allocate timer");
 		tt[i].ev = odp_timeout_to_event(odp_timeout_alloc(tbp));
-		if (tt[i].ev == ODP_EVENT_INVALID)
-			CU_FAIL_FATAL("Failed to allocate timeout");
+		if (tt[i].ev == ODP_EVENT_INVALID) {
+			LOG_DBG("Failed to allocate timeout (%d/%d)\n",
+				i, NTIMERS);
+			break;
+		}
+		tt[i].tim = odp_timer_alloc(tp, queue, &tt[i]);
+		if (tt[i].tim == ODP_TIMER_INVALID) {
+			LOG_DBG("Failed to allocate timer (%d/%d)\n",
+				i, NTIMERS);
+			odp_timeout_free(tt[i].ev);
+			break;
+		}
 		tt[i].ev2 = tt[i].ev;
 		tt[i].tick = TICK_INVALID;
 	}
+	allocated = i;
+	odp_atomic_fetch_add_u32(&timers_allocated, allocated);
 	odp_barrier_wait(&test_barrier);
 
 	/* Initial set all timers with a random expiration time */
 	uint32_t nset = 0;
-	for (i = 0; i < NTIMERS; i++) {
+	for (i = 0; i < allocated; i++) {
 		uint64_t tck = odp_timer_current_tick(tp) + 1 +
 			       odp_timer_ns_to_tick(tp,
						    (rand_r(&seed) % RANGE_MS)
@@ -336,7 +349,7 @@ static void *worker_entrypoint(void *arg TEST_UNUSED)
 			nrcv++;
 		}
 		prev_tick = odp_timer_current_tick(tp);
-		i = rand_r(&seed) % NTIMERS;
+		i = rand_r(&seed) % allocated;
 		if (tt[i].ev == ODP_EVENT_INVALID &&
 		    (rand_r(&seed) % 2 == 0)) {
 			/* Timer active, cancel it */
@@ -384,7 +397,7 @@ static void *worker_entrypoint(void *arg TEST_UNUSED)
 
 	/* Cancel and free all timers */
 	uint32_t nstale = 0;
-	for (i = 0; i < NTIMERS; i++) {
+	for (i = 0; i < allocated; i++) {
 		(void)odp_timer_cancel(tt[i].tim, &tt[i].ev);
 		tt[i].tick = TICK_INVALID;
 		if (tt[i].ev == ODP_EVENT_INVALID)
@@ -428,7 +441,7 @@ static void *worker_entrypoint(void *arg TEST_UNUSED)
 	rc = odp_queue_destroy(queue);
 	CU_ASSERT(rc == 0);
 
-	for (i = 0; i < NTIMERS; i++) {
+	for (i = 0; i < allocated; i++) {
 		if (tt[i].ev != ODP_EVENT_INVALID)
 			odp_event_free(tt[i].ev);
 	}
@@ -504,6 +517,9 @@ void timer_test_odp_timer_all(void)
 	/* Initialize the shared timeout counter */
 	odp_atomic_init_u32(&ndelivtoolate, 0);
 
+	/* Initialize the number of finally allocated elements */
+	odp_atomic_init_u32(&timers_allocated, 0);
+
 	/* Create and start worker threads */
 	pthrd_arg thrdarg;
 	thrdarg.testcase = 0;
@@ -520,7 +536,7 @@ void timer_test_odp_timer_all(void)
 		CU_FAIL("odp_timer_pool_info");
 	CU_ASSERT(tpinfo.param.num_timers == (unsigned)num_workers * NTIMERS);
 	CU_ASSERT(tpinfo.cur_timers == 0);
-	CU_ASSERT(tpinfo.hwm_timers == (unsigned)num_workers * NTIMERS);
+	CU_ASSERT(tpinfo.hwm_timers == odp_atomic_load_u32(&timers_allocated));
 
 	/* Destroy timer pool, all timers must have been freed */
 	odp_timer_pool_destroy(tp);