From patchwork Fri Sep 26 17:33:38 2014
X-Patchwork-Submitter: Ola Liljedahl
X-Patchwork-Id: 38014
From: Ola Liljedahl <ola.liljedahl@linaro.org>
To: lng-odp@lists.linaro.org
Date: Fri, 26 Sep 2014 19:33:38 +0200
Message-Id: <1411752818-8117-1-git-send-email-ola.liljedahl@linaro.org>
X-Mailer: git-send-email 1.9.1
X-Topics: timers patch
Subject: [lng-odp] [PATCHv3] Timer API and priority queue-based implementation

Signed-off-by: Ola Liljedahl <ola.liljedahl@linaro.org>
---
Summary of changes from v2, based on review feedback from Petri S.

odp_timer.h:
Renamed struct odp_timer_pool to struct odp_timer_pool_s.
Renamed enum odp_timer_pool_clock_source_e to enum odp_timer_clk_src_e.
Replaced ODP_CLOCK_DEFAULT and ODP_CLOCK_NONE with ODP_CLOCK_CPU and ODP_CLOCK_EXT.
Renamed struct odp_timer to struct odp_timer_s.
Removed odp_timer_tick_t, use uint64_t instead, and updated all affected
function prototypes. We do not want to give implementations the possibility
of using a different scalar type.
Returned the min_tmo parameter to odp_timer_pool_create(). Unsure of the
exact implications of this parameter.
All timer set functions now return a status code: success/tooearly/toolate
(a brief usage sketch follows below).
Added odp_timer_set_rel_w_buf() function.
Added odp_timeout_from_buffer() and odp_buffer_from_timeout() functions.
Added odp_timer_return_tmo() function. It should be called for all fresh and
stale timeouts when processing has finished.
Renamed odp_timer_get_handle() to odp_timer_handle().
Renamed odp_timer_get_expiry() to odp_timer_expiration().
Renamed odp_timer_get_userptr() to odp_timer_userptr().
Removed the odp_timer_pool_expire() prototype (needed only for testing
purposes).

odp_timer.c:
Changes caused by updates to odp_timer.h.
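For reviewers, a minimal sketch of the revised one-shot flow implied by the
odp_timer.h changes above. This is not part of the patch; it mirrors the
odp_timer_test.c changes further down, assumes the declarations from this
patch (odp_timer.h, odp_schedule.h) are visible, and that the timer pool 'tp'
has been created and started and 'queue' is a scheduled queue. Names are
illustrative only.

/* Sketch only: exercise the new set/status/return flow */
static void one_shot_sketch(odp_timer_pool_t tp, odp_queue_t queue)
{
	odp_timer_t tim = odp_timer_alloc(tp, queue, NULL);
	if (tim == ODP_TIMER_INVALID)
		return;

	/* Set functions now return a status code instead of a timeout handle */
	uint64_t tick = odp_timer_current_tick(tp) +
			odp_timer_ns_to_tick(tp, 1000000); /* ~1 ms from now */
	if (odp_timer_set_abs(tim, tick) != ODP_TIMER_SET_SUCCESS)
		return; /* ODP_TIMER_SET_TOOEARLY or ODP_TIMER_SET_TOOLATE */

	/* The timeout is delivered as a buffer of type ODP_BUFFER_TYPE_TIMEOUT */
	odp_buffer_t buf = odp_schedule_one(&queue, ODP_SCHED_WAIT);
	odp_timer_tmo_t tmo = odp_timeout_from_buffer(buf);

	if (odp_timer_tmo_status(tmo) == ODP_TMO_FRESH) {
		/* Fresh timeout: do application-specific processing here */
	}
	/* Fresh and stale timeouts must be returned to the timer manager */
	odp_timer_return_tmo(tmo);

	odp_timer_cancel(tim);
	odp_timer_free(tim);
}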
Separate between odp_buffer_t and odp_timer_tmo_t types and use new translation functions. Verify that a timer that has been used with a user-defined timeout buffer (odp_timer_set_abs/rel_w_buf) cannot be used without a user-defined buffer (odp_timer_set_abs/rel). Implemented the checks for too early and too late expiration time based on minimum and maximum tiomeouts specified in odp_timer_pool_create(). Moved some functionality from odp_timer_tmo_status() to new call odp_timer_return_tmo(). odp_timer_ping.c and odp_timer_test.c: Changes because of update to odp_timer.h. example/timer/odp_timer_test.c | 124 +-- platform/linux-generic/Makefile.am | 1 + platform/linux-generic/include/api/odp_timer.h | 550 ++++++++++-- .../include/odp_priority_queue_internal.h | 108 +++ .../linux-generic/include/odp_timer_internal.h | 71 +- platform/linux-generic/odp_priority_queue.c | 283 +++++++ platform/linux-generic/odp_timer.c | 939 ++++++++++++++------- test/api_test/odp_timer_ping.c | 73 +- 8 files changed, 1648 insertions(+), 501 deletions(-) create mode 100644 platform/linux-generic/include/odp_priority_queue_internal.h create mode 100644 platform/linux-generic/odp_priority_queue.c diff --git a/example/timer/odp_timer_test.c b/example/timer/odp_timer_test.c index 6e1715d..5c3d736 100644 --- a/example/timer/odp_timer_test.c +++ b/example/timer/odp_timer_test.c @@ -41,67 +41,88 @@ typedef struct { /** @private Barrier for test synchronisation */ static odp_barrier_t test_barrier; -/** @private Timer handle*/ -static odp_timer_t test_timer; +/** @private Timer pool handle*/ +static odp_timer_pool_t tp; +static const char *const status2str[] = { + "fresh", "stale", "orphaned" +}; + /** @private test timeout */ static void test_abs_timeouts(int thr, test_args_t *args) { - uint64_t tick; uint64_t period; uint64_t period_ns; odp_queue_t queue; - odp_buffer_t buf; - int num; + int remain = args->tmo_count; + odp_timer_t hdl; + uint64_t tick; ODP_DBG(" [%i] test_timeouts\n", thr); queue = odp_queue_lookup("timer_queue"); period_ns = args->period_us*ODP_TIME_USEC; - period = odp_timer_ns_to_tick(test_timer, period_ns); + period = odp_timer_ns_to_tick(tp, period_ns); ODP_DBG(" [%i] period %"PRIu64" ticks, %"PRIu64" ns\n", thr, period, period_ns); - tick = odp_timer_current_tick(test_timer); - - ODP_DBG(" [%i] current tick %"PRIu64"\n", thr, tick); - - tick += period; + ODP_DBG(" [%i] current tick %"PRIu64"\n", thr, + odp_timer_current_tick(tp)); - if (odp_timer_absolute_tmo(test_timer, tick, queue, ODP_BUFFER_INVALID) - == ODP_TIMER_TMO_INVALID){ - ODP_DBG("Timeout request failed\n"); + odp_timer_t test_timer; + test_timer = odp_timer_alloc(tp, queue, NULL); + if (test_timer == ODP_TIMER_INVALID) { + ODP_ERR("Failed to allocate timer\n"); return; } + tick = odp_timer_current_tick(tp); + hdl = test_timer; - num = args->tmo_count; - - while (1) { - odp_timeout_t tmo; - - buf = odp_schedule_one(&queue, ODP_SCHED_WAIT); - - tmo = odp_timeout_from_buffer(buf); - tick = odp_timeout_tick(tmo); - - ODP_DBG(" [%i] timeout, tick %"PRIu64"\n", thr, tick); - - odp_buffer_free(buf); - - num--; - - if (num == 0) - break; + while (remain != 0) { + odp_buffer_t buf; + odp_timer_tmo_t tmo; + odp_timer_tmo_status_t stat; + odp_timer_set_t rc; tick += period; + rc = odp_timer_set_abs(hdl, tick); + if (odp_unlikely(rc != ODP_TIMER_SET_SUCCESS)) { + ODP_ERR("odp_timer_set_abs() failed (%u)\n", rc); + abort(); + } - odp_timer_absolute_tmo(test_timer, tick, - queue, ODP_BUFFER_INVALID); + /* Get the next ready buffer/timeout */ + buf 
= odp_schedule_one(&queue, ODP_SCHED_WAIT); + if (odp_unlikely(odp_buffer_type(buf) != + ODP_BUFFER_TYPE_TIMEOUT)) { + ODP_ERR("Unexpected buffer type received\n"); + abort(); + } + tmo = odp_timeout_from_buffer(buf); + stat = odp_timer_tmo_status(tmo); + tick = odp_timer_expiration(tmo); + hdl = odp_timer_handle(tmo); + ODP_DBG(" [%i] timeout, tick %"PRIu64", status %s\n", + thr, tick, status2str[stat]); + /* if (stat == ODP_TMO_FRESH) - do your thing! */ + if (odp_likely(stat == ODP_TMO_ORPHAN)) { + /* Some other thread freed the corresponding + timer after the timeout was already + enqueued */ + /* Timeout handle is invalid, use our own timer */ + hdl = test_timer; + } + /* Return timeout to timer manager, regardless of status */ + odp_timer_return_tmo(tmo); + remain--; } + odp_timer_cancel(test_timer); + odp_timer_free(test_timer); + if (odp_queue_sched_type(queue) == ODP_SCHED_SYNC_ATOMIC) odp_schedule_release_atomic(); } @@ -155,7 +176,6 @@ static void print_usage(void) printf("Options:\n"); printf(" -c, --count core count, core IDs start from 1\n"); printf(" -r, --resolution timeout resolution in usec\n"); - printf(" -m, --min minimum timeout in usec\n"); printf(" -x, --max maximum timeout in usec\n"); printf(" -p, --period timeout period in usec\n"); printf(" -t, --timeouts timeout repeat count\n"); @@ -190,14 +210,14 @@ static void parse_args(int argc, char *argv[], test_args_t *args) /* defaults */ args->core_count = 0; /* all cores */ args->resolution_us = 10000; - args->min_us = args->resolution_us; + args->min_us = 0; args->max_us = 10000000; args->period_us = 1000000; args->tmo_count = 30; while (1) { opt = getopt_long(argc, argv, "+c:r:m:x:p:t:h", - longopts, &long_index); + longopts, &long_index); if (opt == -1) break; /* No more options */ @@ -321,10 +341,25 @@ int main(int argc, char *argv[]) ODP_BUFFER_TYPE_TIMEOUT); if (pool == ODP_BUFFER_POOL_INVALID) { - ODP_ERR("Pool create failed.\n"); + ODP_ERR("Buffer pool create failed.\n"); return -1; } + tp = odp_timer_pool_create("timer_pool", pool, + args.resolution_us*ODP_TIME_USEC, + args.min_us*ODP_TIME_USEC, + args.max_us*ODP_TIME_USEC, + num_workers, /* One timer per worker */ + true, + ODP_CLOCK_CPU); + if (tp == ODP_TIMER_POOL_INVALID) { + ODP_ERR("Timer pool create failed.\n"); + return -1; + } + odp_timer_pool_start(); + + odp_shm_print_all(); + /* * Create a queue for timer test */ @@ -340,19 +375,6 @@ int main(int argc, char *argv[]) return -1; } - test_timer = odp_timer_create("test_timer", pool, - args.resolution_us*ODP_TIME_USEC, - args.min_us*ODP_TIME_USEC, - args.max_us*ODP_TIME_USEC); - - if (test_timer == ODP_TIMER_INVALID) { - ODP_ERR("Timer create failed.\n"); - return -1; - } - - - odp_shm_print_all(); - printf("CPU freq %"PRIu64" hz\n", odp_sys_cpu_hz()); printf("Cycles vs nanoseconds:\n"); ns = 0; diff --git a/platform/linux-generic/Makefile.am b/platform/linux-generic/Makefile.am index 25c82ea..26964d8 100644 --- a/platform/linux-generic/Makefile.am +++ b/platform/linux-generic/Makefile.am @@ -62,6 +62,7 @@ __LIB__libodp_la_SOURCES = \ odp_packet_flags.c \ odp_packet_io.c \ odp_packet_socket.c \ + odp_priority_queue.c \ odp_queue.c \ odp_ring.c \ odp_rwlock.c \ diff --git a/platform/linux-generic/include/api/odp_timer.h b/platform/linux-generic/include/api/odp_timer.h index 01db839..d571766 100644 --- a/platform/linux-generic/include/api/odp_timer.h +++ b/platform/linux-generic/include/api/odp_timer.h @@ -1,4 +1,4 @@ -/* Copyright (c) 2013, Linaro Limited +/* Copyright (c) 2014, Linaro Limited * All 
rights reserved. * * SPDX-License-Identifier: BSD-3-Clause @@ -8,7 +8,182 @@ /** * @file * - * ODP timer + * ODP timer service + * + +//Example #1 Retransmission timer (e.g. for reliable connections) + +//Create timer pool for reliable connections +#define SEC 1000000000ULL //1s expressed in nanoseconds +odp_timer_pool_t tcp_tpid = + odp_timer_pool_create("TCP", + buffer_pool, + 1000000,//resolution 1ms + 0,//min tmo + 7200 * SEC,//max tmo length 2hours + 40000,//num_timers + true,//shared + ODP_CLOCK_CPU + ); +if (tcp_tpid == ODP_TIMER_POOL_INVALID) +{ + //Failed to create timer pool => fatal error +} + + +//Setting up a new connection +//Allocate retransmission timeout (identical for supervision timeout) +//The user pointer points back to the connection context +conn->ret_tim = odp_timer_alloc(tcp_tpid, queue, conn); +//Check if all resources were successfully allocated +if (conn->ret_tim == ODP_TIMER_INVALID) +{ + //Failed to allocate all resources for connection => tear down + //Destroy timeout + odp_timer_free(conn->ret_tim); + //Tear down connection + ... + return false; +} +//All necessary resources successfully allocated +//Compute initial retransmission length in timer ticks +conn->ret_len = odp_timer_ns_to_tick(tcp_tpid, 3 * SEC);//Per RFC1122 +//Arm the timer +odp_timer_set_rel(conn->ret_tim, conn->ret_len); +return true; + + +//A packet for the connection has just been transmitted +//Reset the retransmission timer +odp_timer_set_rel(conn->ret_tim, conn->ret_len); + + +//A retransmission timeout buffer for the connection has been received +odp_timer_tmo_t tmo = odp_timeout_from_buffer(buf); +odp_timer_tmo_status_t stat = odp_timer_tmo_status(tmo); +//Check if timeout is fresh or stale, for stale timeouts we need to reset the +//timer +if (stat == ODP_TMO_FRESH) { + //Fresh timeout, last transmitted packet not acked in time => + retransmit + //Get connection from timeout event + conn = odp_timer_get_userptr(tmo); + //Retransmit last packet (e.g. TCP segment) + ... + //Re-arm timer using original delta value + odp_timer_set_rel(conn->ret_tim, conn->ret_len); +} else if (stat == ODP_TMO_ORPHAN) { + odp_free_buffer(buf); + return;//Get out of here +} // else stat == ODP_TMO_STALE, do nothing +//Finished processing, return timeout +odp_timer_return_tmo(tmo); + + +//Example #2 Periodic tick + +//Create timer pool for periodic ticks +odp_timer_pool_t per_tpid = + odp_timer_pool_create("periodic-tick", + buffer_pool, + 1,//resolution 1ns + 1,//minimum timeout length 1ns + 1000000000,//maximum timeout length 1s + 10,//num_timers + false,//not shared + ODP_CLOCK_CPU + ); +if (per_tpid == ODP_TIMER_POOL_INVALID) +{ + //Failed to create timer pool => fatal error +} + + +//Allocate periodic timer +tim_1733 = odp_timer_alloc(per_tpid, queue, NULL); +//Check if all resources were successfully allocated +if (tim_1733 == ODP_TIMER_INVALID) +{ + //Failed to allocate all resources => tear down + //Destroy timeout + odp_timer_free(tim_1733); + //Tear down other state + ... 
+ return false; +} +//All necessary resources successfully allocated +//Compute tick period in timer ticks +period_1733 = odp_timer_ns_to_tick(per_tpid, 1000000000U / 1733U);//1733Hz +//Compute when next tick should expire +next_1733 = odp_timer_current_tick(per_tpid) + period_1733; +//Arm the periodic timer +odp_timer_set_abs(tim_1733, next_1733); +return true; + + + +//A periodic timer timeout has been received +odp_timer_tmo_t tmo = odp_timeout_from_buffer(buf); +//Get status of timeout +odp_timer_tmo_status_t stat = odp_timer_tmo_status(tmo); +//We expect the timeout is always fresh since we are not calling set or cancel +on active or expired timers in this example +assert(stat == ODP_TMO_FRESH); +//Do processing driven by timeout *before* +... +do { + //Compute when the timer should expire next + next_1733 += period_1733; + //Check that this is in the future + if (likely(next_1733 > odp_timer_current_tick(per_tpid)) + break;//Yes, done + //Else we missed a timeout + //Optionally attempt some recovery and/or logging of the problem + ... +} while (0); +//Re-arm periodic timer +odp_timer_set_abs(tim_1733, next_1733); +//Or do processing driven by timeout *after* +... +odp_timer_return_tmo(tmo); +return; + +//Example #3 Tear down of flow +//ctx points to flow context data structure owned by application +//Free the timer, cancelling any timeout +odp_timer_free(ctx->timer);//Any enqueued timeout will be made invalid +//Continue tearing down and eventually freeing context +... +return; + +//A timeout has been received, check status +odp_timer_tmo_t tmo = odp_timeout_from_buffer(buf); +switch (odp_timer_tmo_status(tmo)) +{ + case ODP_TMO_FRESH : + //A flow has timed out, tear it down + //Find flow context from timeout + ctx = (context *)odp_timer_get_userptr(tmo); + //Free the supervision timer, any enqueued timeout will remain + odp_timer_free(ctx->tim); + //Free other flow related resources + ... 
+ //Free the timeout buffer + odp_buffer_free(buf); + //Flow torn down + break; + case ODP_TMO_STALE : + //A stale timeout was received, return timeout and update timer + odp_timer_return_tmo(tmo); + break; + case ODP_TMO_ORPHAN : + //Orphaned timeout (from previously torn down flow) + //No corresponding timer or flow context + //Free the timeout buffer + odp_buffer_free(buf); + break; +} + */ #ifndef ODP_TIMER_H_ @@ -18,144 +193,395 @@ extern "C" { #endif +#include #include #include #include #include +/** +* ODP timer pool handle (platform dependent) +*/ +struct odp_timer_pool_s; +typedef struct odp_timer_pool_s *odp_timer_pool_t; /** - * ODP timer handle + * Invalid timer pool handle (platform dependent) */ -typedef uint32_t odp_timer_t; +#define ODP_TIMER_POOL_INVALID NULL -/** Invalid timer */ -#define ODP_TIMER_INVALID 0 +typedef enum odp_timer_clk_src_e { + ODP_CLOCK_CPU, + ODP_CLOCK_EXT + /* Platform dependent which other clock sources exist */ +} odp_timer_clk_src_t; +/** +* ODP timer handle (platform dependent) +*/ +struct odp_timer_s; +typedef struct odp_timer_s *odp_timer_t; /** - * ODP timeout handle + * Invalid timer handle (platform dependent) */ -typedef odp_buffer_t odp_timer_tmo_t; - -/** Invalid timeout */ -#define ODP_TIMER_TMO_INVALID 0 +#define ODP_TIMER_INVALID NULL +/** + * ODP timer set returns + */ +typedef enum odp_timer_set_e { + ODP_TIMER_SET_SUCCESS, /* Set operation successful */ + ODP_TIMER_SET_TOOEARLY,/* Set operation failed, expiration too early */ + ODP_TIMER_SET_TOOLATE /* Set operation failed, expiration too late */ +} odp_timer_set_t; /** - * Timeout notification + * ODP timeout event handle */ -typedef odp_buffer_t odp_timeout_t; +typedef odp_buffer_t odp_timer_tmo_t; +/** + * ODP timeout status + */ +typedef enum odp_timer_tmo_status_e { + ODP_TMO_FRESH, /* Timeout is fresh, process it */ + ODP_TMO_STALE, /* Timer reset or cancelled, do nothing */ + ODP_TMO_ORPHAN /* Timer deleted, free timeout */ +} odp_timer_tmo_status_t; /** - * Create a timer + * Create a timer pool * - * Creates a new timer with requested properties. + * Create a new timer pool. * * @param name Name - * @param pool Buffer pool for allocating timeout notifications + * @param buf_pool Buffer pool for allocating timeouts (and only timeouts) * @param resolution Timeout resolution in nanoseconds - * @param min_tmo Minimum timeout duration in nanoseconds - * @param max_tmo Maximum timeout duration in nanoseconds + * @param min_tmo Minimum relative timeout in nanoseconds + * @param max_tmo Maximum relative timeout in nanoseconds + * @param num_timers Number of supported timers (minimum) + * @param shared Shared or private timer pool. + * Operations on shared timers will include the necessary + * mutual exclusion, operations on private timers may not + * (mutual exclusion is the responsibility of the caller). + * @param clk_src Clock source to use + * + * @return Timer pool handle if successful, otherwise ODP_TIMER_POOL_INVALID + * and errno set + */ +odp_timer_pool_t +odp_timer_pool_create(const char *name, + odp_buffer_pool_t buf_pool, + uint64_t resolution, + uint64_t min_tmo, + uint64_t max_tmo, + uint32_t num_timers, + bool shared, + odp_timer_clk_src_t clk_src); + +/** + * Start a timer pool + * + * Start all created timer pools, enabling the allocation of timers. + * The purpose of this call is to coordinate the creation of multiple timer + * pools that may use the same underlying HW resources. + * This function may be called multiple times. 
+ */ +void odp_timer_pool_start(void); + +/** + * Destroy a timer pool * - * @return Timer handle if successful, otherwise ODP_TIMER_INVALID + * Destroy a timer pool, freeing all resources. + * All timers must have been freed. + * + * @param tpid Timer pool identifier */ -odp_timer_t odp_timer_create(const char *name, odp_buffer_pool_t pool, - uint64_t resolution, uint64_t min_tmo, - uint64_t max_tmo); +void odp_timer_pool_destroy(odp_timer_pool_t tpid); /** * Convert timer ticks to nanoseconds * - * @param timer Timer + * @param tpid Timer pool identifier * @param ticks Timer ticks * * @return Nanoseconds */ -uint64_t odp_timer_tick_to_ns(odp_timer_t timer, uint64_t ticks); +uint64_t odp_timer_tick_to_ns(odp_timer_pool_t tpid, uint64_t ticks); /** * Convert nanoseconds to timer ticks * - * @param timer Timer + * @param tpid Timer pool identifier * @param ns Nanoseconds * * @return Timer ticks */ -uint64_t odp_timer_ns_to_tick(odp_timer_t timer, uint64_t ns); +uint64_t odp_timer_ns_to_tick(odp_timer_pool_t tpid, uint64_t ns); /** - * Timer resolution in nanoseconds + * Current tick value * - * @param timer Timer + * @param tpid Timer pool identifier * - * @return Resolution in nanoseconds + * @return Current time in timer ticks */ -uint64_t odp_timer_resolution(odp_timer_t timer); +uint64_t odp_timer_current_tick(odp_timer_pool_t tpid); /** - * Maximum timeout in timer ticks + * ODP timer configurations + */ + +typedef enum odp_timer_pool_conf_e { + ODP_TIMER_NAME, /* Return name of timer pool */ + ODP_TIMER_RESOLUTION,/* Return the timer resolution (in ns) */ + ODP_TIMER_MIN_TICKS, /* Return the min supported rel timeout (ticks) */ + ODP_TIMER_MAX_TICKS, /* Return the max supported rel timeout (ticks) */ + ODP_TIMER_NUM_TIMERS,/* Return number of supported timers */ + ODP_TIMER_SHARED /* Return shared flag */ +} odp_timer_pool_conf_t; + +/** + * Query different timer pool configurations, e.g. + * Timer resolution in nanoseconds + * Maximum timeout in timer ticks + * Number of supported timers + * Shared or private timer pool * - * @param timer Timer + * @param tpid Timer pool identifier + * @param item Configuration item being queried * - * @return Maximum timeout in timer ticks + * @return the requested piece of information or 0 for unknown item. */ -uint64_t odp_timer_maximum_tmo(odp_timer_t timer); +uintptr_t odp_timer_pool_query_conf(odp_timer_pool_t tpid, + odp_timer_pool_conf_t item); /** - * Current timer tick + * Allocate a timer * - * @param timer Timer + * Create a timer (allocating all necessary resources e.g. timeout event) from + * the timer pool. * - * @return Current time in timer ticks + * @param tpid Timer pool identifier + * @param queue Destination queue for timeout notifications + * @param user_ptr User defined pointer or NULL (copied to timeouts) + * + * @return Timer handle if successful, otherwise ODP_TIMER_INVALID and + * errno set. */ -uint64_t odp_timer_current_tick(odp_timer_t timer); +odp_timer_t odp_timer_alloc(odp_timer_pool_t tpid, + odp_queue_t queue, + void *user_ptr); /** - * Request timeout with an absolute timer tick + * Free a timer + * + * Free (destroy) a timer, freeing all associated resources (e.g. default + * timeout event). An expired and enqueued timeout event will not be freed. + * It is the responsibility of the application to free this timeout when it + * is received. 
+ * + * @param tim Timer handle + */ +void odp_timer_free(odp_timer_t tim); + +/** + * Set a timer (absolute time) with a user-defined timeout buffer + * + * Set (arm) the timer to expire at specific time. The user-defined + * buffer will be enqueued when the timer expires. + * Arming may fail (if the timer is in state EXPIRED), an earlier timeout + * will then be received. odp_timer_tmo_status() must be used to check if + * the received timeout is valid. * - * When tick reaches tmo_tick, the timer enqueues the timeout notification into - * the destination queue. + * Note: any invalid parameters will be treated as programming errors and will + * cause the application to abort. * - * @param timer Timer - * @param tmo_tick Absolute timer tick value which triggers the timeout - * @param queue Destination queue for the timeout notification - * @param buf User defined timeout notification buffer. When - * ODP_BUFFER_INVALID, default timeout notification is used. + * @param tim Timer + * @param abs_tck Expiration time in absolute timer ticks + * @param user_buf The buffer to use as timeout event * - * @return Timeout handle if successful, otherwise ODP_TIMER_TMO_INVALID + * @return Success or failure code */ -odp_timer_tmo_t odp_timer_absolute_tmo(odp_timer_t timer, uint64_t tmo_tick, - odp_queue_t queue, odp_buffer_t buf); +odp_timer_set_t odp_timer_set_abs_w_buf(odp_timer_t tim, + uint64_t abs_tck, + odp_buffer_t user_buf); /** - * Cancel a timeout + * Set a timer with an absolute expiration time * - * @param timer Timer - * @param tmo Timeout to cancel + * Set (arm) the timer to expire at a specific time. + * Arming may fail (if the timer is in state EXPIRED), an earlier timeout + * will then be received. odp_timer_tmo_status() must be used to check if + * the received timeout is valid. * - * @return 0 if successful + * Note: any invalid parameters will be treated as programming errors and will + * cause the application to abort. + * + * @param tim Timer + * @param abs_tck Expiration time in absolute timer ticks + * + * @return Success or failure code + */ +odp_timer_set_t odp_timer_set_abs(odp_timer_t tim, uint64_t abs_tck); + +/** + * Set a timer with a relative expiration time and user-defined buffer. + * + * Set (arm) the timer to expire at a relative future time. + * Arming may fail (if the timer is in state EXPIRED), + * an earlier timeout will then be received. odp_timer_tmo_status() must + * be used to check if the received timeout is valid. + * + * Note: any invalid parameters will be treated as programming errors and will + * cause the application to abort. + * + * @param tim Timer + * @param rel_tck Expiration time in timer ticks relative to current time of + * the timer pool the timer belongs to + * @param user_buf The buffer to use as timeout event + * + * @return Success or failure code + */ +odp_timer_set_t odp_timer_set_rel_w_buf(odp_timer_t tim, + uint64_t rel_tck, + odp_buffer_t user_buf); +/** + * Set a timer with a relative expiration time + * + * Set (arm) the timer to expire at a relative future time. + * Arming may fail (if the timer is in state EXPIRED), + * an earlier timeout will then be received. odp_timer_tmo_status() must + * be used to check if the received timeout is valid. + * + * Note: any invalid parameters will be treated as programming errors and will + * cause the application to abort. 
+ * + * @param tim Timer + * @param rel_tck Expiration time in timer ticks relative to current time of + * the timer pool the timer belongs to + * + * @return Success or failure code + */ +odp_timer_set_t odp_timer_set_rel(odp_timer_t tim, uint64_t rel_tck); + +/** + * Cancel a timer + * + * Cancel a timer, preventing future expiration and delivery. + * + * A timer that has already expired and been enqueued for delivery may be + * impossible to cancel and will instead be delivered to the destination queue. + * Use odp_timer_tmo_status() the check whether a received timeout is fresh or + * stale (cancelled). Stale timeouts will automatically be recycled. + * + * Note: any invalid parameters will be treated as programming errors and will + * cause the application to abort. + * + * @param tim Timer handle */ -int odp_timer_cancel_tmo(odp_timer_t timer, odp_timer_tmo_t tmo); +void odp_timer_cancel(odp_timer_t tim); /** - * Convert buffer handle to timeout handle + * Translate from buffer to timeout + * + * Return the timeout handle that corresponds to the specified buffer handle. + * The buffer must be of time ODP_BUFFER_TYPE_TIMEOUT. * - * @param buf Buffer handle + * @param buf Buffer handle to translate. * - * @return Timeout buffer handle + * @return The corresponding timeout handle. */ -odp_timeout_t odp_timeout_from_buffer(odp_buffer_t buf); +static inline odp_timer_tmo_t odp_timeout_from_buffer(odp_buffer_t buf) +{ + if (odp_unlikely(odp_buffer_type(buf) != ODP_BUFFER_TYPE_TIMEOUT)) { + ODP_ERR("Buffer type %u not timeout\n", buf); + abort(); + } + /* In this implementation, timeout == buffer */ + return (odp_timer_tmo_t)buf; +} /** - * Return absolute timeout tick + * Translate from timeout to buffer + * + * Return the buffer handle that corresponds to the specified timeout handle. + * + * @param tmo Timeout handle to translate. + * + * @return The corresponding buffer handle. + */ +static inline odp_buffer_t odp_buffer_from_timeout(odp_timer_tmo_t tmo) +{ + /* In this implementation, buffer == timeout */ + return (odp_buffer_t)tmo; +} + +/** + * Return timeout to timer + * + * Return a received timeout for reuse with the parent timer. + * Note: odp_timer_return_tmo() must be called on all received timeouts! + * (Excluding user defined timeout buffers). + * The timeout must not be accessed after this call, the semantics is + * equivalent to a free call. + * + * @param tmo Timeout + */ +void odp_timer_return_tmo(odp_timer_tmo_t tmo); + +/** + * Return fresh/stale/orphan status of timeout. + * + * Check a received timeout for orphaness (i.e. parent timer freed) and + * staleness (i.e. parent timer has been reset or cancelled after the timeout + * expired and was enqueued). + * If the timeout is fresh, it should be processed. + * If the timeout is stale or orphaned, it should be ignored. + * All timeouts must be returned using the odp_timer_return_tmo() call. + * + * @param tmo Timeout + * + * @return One of ODP_TMO_FRESH, ODP_TMO_STALE or ODP_TMO_ORPHAN. + */ +odp_timer_tmo_status_t odp_timer_tmo_status(odp_timer_tmo_t tmo); + +/** + * Get timer handle + * + * Return Handle of parent timer. + * + * @param tmo Timeout + * + * @return Timer handle or ODP_TIMER_INVALID for orphaned timeouts. + * Note that the parent timer could be freed by some other thread + * at any time and thus the timeout becomes orphaned. + */ +odp_timer_t odp_timer_handle(odp_timer_tmo_t tmo); + +/** + * Get expiration time + * + * Return (requested) expiration time of timeout. 
+ * + * @param tmo Timeout + * + * @return Expiration time + */ +uint64_t odp_timer_expiration(odp_timer_tmo_t tmo); + +/** + * Get user pointer + * + * Return User pointer of timer associated with timeout. + * The user pointer is often used to point to some associated context. * - * @param tmo Timeout buffer handle + * @param tmo Timeout * - * @return Absolute timeout tick + * @return User pointer */ -uint64_t odp_timeout_tick(odp_timeout_t tmo); +void *odp_timer_userptr(odp_timer_tmo_t tmo); #ifdef __cplusplus } diff --git a/platform/linux-generic/include/odp_priority_queue_internal.h b/platform/linux-generic/include/odp_priority_queue_internal.h new file mode 100644 index 0000000..7d7f3a2 --- /dev/null +++ b/platform/linux-generic/include/odp_priority_queue_internal.h @@ -0,0 +1,108 @@ +#ifndef _PRIORITY_QUEUE_H +#define _PRIORITY_QUEUE_H + +#include +#include +#include +#include +#include + +#define INVALID_INDEX ~0U +#define INVALID_PRIORITY ((pq_priority_t)~0ULL) + +typedef uint64_t pq_priority_t; + +struct heap_node; + +typedef struct priority_queue { + uint32_t max_elems;/* Number of elements in heap */ + /* Number of registered elements (active + inactive) */ + uint32_t reg_elems; + uint32_t num_elems;/* Number of active elements */ + struct heap_node *heap; + struct heap_node *org_ptr; +} priority_queue ODP_ALIGNED(sizeof(uint64_t)); + +/* The user gets a pointer to this structure */ +typedef struct { + /* Set when pq_element registered with priority queue */ + priority_queue *pq; + uint32_t index;/* Index into heap array */ + pq_priority_t prio; +} pq_element; + +/*** Operations on pq_element ***/ + +static inline void pq_element_con(pq_element *this) +{ + this->pq = NULL; + this->index = INVALID_INDEX; + this->prio = 0U; +} + +static inline void pq_element_des(pq_element *this) +{ + (void)this; + assert(this->index == INVALID_INDEX); +} + +static inline priority_queue *get_pq(const pq_element *this) +{ + return this->pq; +} + +static inline pq_priority_t get_prio(const pq_element *this) +{ + return this->prio; +} + +static inline uint32_t get_index(const pq_element *this) +{ + return this->index; +} + +static inline bool is_active(const pq_element *this) +{ + return this->index != INVALID_INDEX; +} + +/*** Operations on priority_queue ***/ + +extern uint32_t pq_smallest_child(priority_queue *, uint32_t, pq_priority_t); +extern void pq_bubble_down(priority_queue *, pq_element *); +extern void pq_bubble_up(priority_queue *, pq_element *); + +static inline bool valid_index(priority_queue *this, uint32_t idx) +{ + return idx < this->num_elems; +} + +extern void priority_queue_con(priority_queue *, uint32_t _max_elems); +extern void priority_queue_des(priority_queue *); + +/* Register pq_element with priority queue */ +/* Return false if priority queue full */ +extern bool pq_register_element(priority_queue *, pq_element *); + +/* Activate and add pq_element to priority queue */ +/* Element must be disarmed */ +extern void pq_activate_element(priority_queue *, pq_element *, pq_priority_t); + +/* Reset (increase) priority for pq_element */ +/* Element may be active or inactive (released) */ +extern void pq_reset_element(priority_queue *, pq_element *, pq_priority_t); + +/* Deactivate and remove element from priority queue */ +/* Element may be active or inactive (released) */ +extern void pq_deactivate_element(priority_queue *, pq_element *); + +/* Unregister pq_element */ +extern void pq_unregister_element(priority_queue *, pq_element *); + +/* Return priority of first element 
(lowest numerical value) */ +extern pq_priority_t pq_first_priority(const priority_queue *); + +/* Deactivate and return first element if it's prio is <= threshold */ +extern pq_element *pq_release_element(priority_queue *, pq_priority_t thresh); + +#endif /* _PRIORITY_QUEUE_H */ diff --git a/platform/linux-generic/include/odp_timer_internal.h b/platform/linux-generic/include/odp_timer_internal.h index ad28f53..d86e274 100644 --- a/platform/linux-generic/include/odp_timer_internal.h +++ b/platform/linux-generic/include/odp_timer_internal.h @@ -1,4 +1,4 @@ -/* Copyright (c) 2013, Linaro Limited +/* Copyright (c) 2014, Linaro Limited * All rights reserved. * * SPDX-License-Identifier: BSD-3-Clause @@ -8,72 +8,51 @@ /** * @file * - * ODP timer timeout descriptor - implementation internal + * ODP timeout descriptor - implementation internal */ #ifndef ODP_TIMER_INTERNAL_H_ #define ODP_TIMER_INTERNAL_H_ -#ifdef __cplusplus -extern "C" { -#endif - -#include -#include -#include +#include +#include #include #include #include -struct timeout_t; - -typedef struct timeout_t { - struct timeout_t *next; - int timer_id; - int tick; - uint64_t tmo_tick; - odp_queue_t queue; - odp_buffer_t buf; - odp_buffer_t tmo_buf; -} timeout_t; - - -struct odp_timeout_hdr_t; - /** - * Timeout notification header + * Internal Timeout header */ -typedef struct odp_timeout_hdr_t { +typedef struct { + /* common buffer header */ odp_buffer_hdr_t buf_hdr; - timeout_t meta; - - uint8_t buf_data[]; + /* Requested expiration time */ + uint64_t expiration; + /* User ptr inherited from parent timer */ + void *user_ptr; + /* Parent timer */ + odp_timer_t timer; + /* Tag inherited from parent timer at time of expiration */ + uint32_t tag; + /* Gen-cnt inherited from parent timer at time of creation */ + uint16_t gc; + uint16_t pad; + uint8_t buf_data[0]; } odp_timeout_hdr_t; - - ODP_STATIC_ASSERT(sizeof(odp_timeout_hdr_t) == - ODP_OFFSETOF(odp_timeout_hdr_t, buf_data), - "ODP_TIMEOUT_HDR_T__SIZE_ERR"); - + ODP_OFFSETOF(odp_timeout_hdr_t, buf_data), + "sizeof(odp_timeout_hdr_t) == ODP_OFFSETOF(odp_timeout_hdr_t, buf_data)"); ODP_STATIC_ASSERT(sizeof(odp_timeout_hdr_t) % sizeof(uint64_t) == 0, - "ODP_TIMEOUT_HDR_T__SIZE_ERR2"); - + "sizeof(odp_timeout_hdr_t) % sizeof(uint64_t) == 0"); /** - * Return timeout header + * Return the timeout header */ -static inline odp_timeout_hdr_t *odp_timeout_hdr(odp_timeout_t tmo) +static inline odp_timeout_hdr_t *odp_timeout_hdr(odp_buffer_t buf) { - odp_buffer_hdr_t *buf_hdr = odp_buf_to_hdr((odp_buffer_t)tmo); - return (odp_timeout_hdr_t *)(uintptr_t)buf_hdr; -} - - - -#ifdef __cplusplus + return (odp_timeout_hdr_t *)odp_buf_to_hdr(buf); } -#endif #endif diff --git a/platform/linux-generic/odp_priority_queue.c b/platform/linux-generic/odp_priority_queue.c new file mode 100644 index 0000000..b72c26f --- /dev/null +++ b/platform/linux-generic/odp_priority_queue.c @@ -0,0 +1,283 @@ +#define NDEBUG /* Enabled by default by ODP build system */ +#include +#include +#include +#include +#include +#include +#include +#include + +#include "odp_priority_queue_internal.h" + + +#define NUM_CHILDREN 4 +#define CHILD(n) (NUM_CHILDREN * (n) + 1) +#define PARENT(n) (((n) - 1) / NUM_CHILDREN) + +/* Internal nodes in the array */ +typedef struct heap_node { + pq_element *elem; + /* Copy of elem->prio so we avoid unnecessary dereferencing */ + pq_priority_t prio; +} heap_node; + +static void pq_assert_heap(priority_queue *this); + +#define ALIGNMENT(p) (1U << ((unsigned)ffs((int)p) - 1U)) + +void 
priority_queue_con(priority_queue *this, uint32_t _max_elems) +{ + this->max_elems = _max_elems; + this->reg_elems = 0; + this->num_elems = 0; + this->org_ptr = malloc((_max_elems + 64 / sizeof(heap_node)) * + sizeof(heap_node)); + if (odp_unlikely(this->org_ptr == NULL)) { + ODP_ERR("malloc failed\n"); + abort(); + } + this->heap = this->org_ptr; + assert((size_t)&this->heap[1] % 8 == 0); + /* Increment base address until first child (index 1) is cache line */ + /* aligned and thus all children (e.g. index 1-4) stored in the */ + /* same cache line. We are not interested in the alignment of */ + /* heap[0] as this is a lone node */ + while ((size_t)&this->heap[1] % ODP_CACHE_LINE_SIZE != 0) { + /* Cast to ptr to struct member with the greatest alignment */ + /* requirement */ + this->heap = (heap_node *)((pq_priority_t *)this->heap + 1); + } + pq_assert_heap(this); +} + +void priority_queue_des(priority_queue *this) +{ + pq_assert_heap(this); + free(this->org_ptr); +} + +#ifndef NDEBUG +static uint32_t +pq_assert_elem(priority_queue *this, uint32_t index, bool recurse) +{ + uint32_t num = 1; + const pq_element *elem = this->heap[index].elem; + assert(elem->index == index); + assert(elem->prio == this->heap[index].prio); + uint32_t child = CHILD(index); + uint32_t i; + for (i = 0; i < NUM_CHILDREN; i++, child++) { + if (valid_index(this, child)) { + assert(this->heap[child].elem != NULL); + assert(this->heap[child].prio >= elem->prio); + if (recurse) + num += pq_assert_elem(this, child, recurse); + } + } + return num; +} +#endif + +static void +pq_assert_heap(priority_queue *this) +{ + (void)this; +#ifndef NDEBUG + uint32_t num = 0; + if (odp_likely(this->num_elems != 0)) { + assert(this->heap[0].elem != NULL); + num += pq_assert_elem(this, 0, true); + } + assert(num == this->num_elems); + unsigned i; + for (i = 0; i < this->num_elems; i++) { + assert(this->heap[i].elem != NULL); + assert(this->heap[i].prio != INVALID_PRIORITY); + } +#endif +} + +/* Bubble up to proper position */ +void +pq_bubble_up(priority_queue *this, pq_element *elem) +{ + assert(this->heap[elem->index].elem == elem); + assert(this->heap[elem->index].prio == elem->prio); + uint32_t current = elem->index; + pq_priority_t prio = elem->prio; + assert(current == 0 || this->heap[PARENT(current)].elem != NULL); + /* Move up into proper position */ + while (current != 0 && this->heap[PARENT(current)].prio > prio) { + uint32_t parent = PARENT(current); + assert(this->heap[parent].elem != NULL); + /* Swap current with parent */ + /* 1) Move parent down */ + this->heap[current].elem = this->heap[parent].elem; + this->heap[current].prio = this->heap[parent].prio; + this->heap[current].elem->index = current; + /* 2) Move current up to parent */ + this->heap[parent].elem = elem; + this->heap[parent].prio = prio; + this->heap[parent].elem->index = parent; + /* Continue moving elem until it is in the right place */ + current = parent; + } + pq_assert_heap(this); +} + +/* Find the smallest child that is smaller than the specified priority */ +/* Very hot function, can we decrease the number of cache misses? 
*/ +uint32_t pq_smallest_child(priority_queue *this, + uint32_t index, + pq_priority_t val) +{ + uint32_t smallest = index; + uint32_t child = CHILD(index); +#if NUM_CHILDREN == 4 + /* Unroll loop when all children exist */ + if (odp_likely(valid_index(this, child + 3))) { + if (this->heap[child + 0].prio < val) + val = this->heap[smallest = child + 0].prio; + if (this->heap[child + 1].prio < val) + val = this->heap[smallest = child + 1].prio; + if (this->heap[child + 2].prio < val) + val = this->heap[smallest = child + 2].prio; + if (this->heap[child + 3].prio < val) + (void)this->heap[smallest = child + 3].prio; + return smallest; + } +#endif + uint32_t i; + for (i = 0; i < NUM_CHILDREN; i++) { + if (odp_unlikely(!valid_index(this, child + i))) + break; + if (this->heap[child + i].prio < val) { + smallest = child + i; + val = this->heap[smallest].prio; + } + } + return smallest; +} + +/* Very hot function, can it be optimised? */ +void +pq_bubble_down(priority_queue *this, pq_element *elem) +{ + assert(this->heap[elem->index].elem == elem); + assert(this->heap[elem->index].prio == elem->prio); + uint32_t current = elem->index; + pq_priority_t prio = elem->prio; + for (;;) { + uint32_t child = pq_smallest_child(this, current, prio); + if (current == child) { + /* No smaller child, we are done */ + pq_assert_heap(this); + return; + } + /* Element larger than smaller child, must move down */ + assert(this->heap[child].elem != NULL); + /* 1) Move child up to current */ + this->heap[current].elem = this->heap[child].elem; + this->heap[current].prio = this->heap[child].prio; + /* 2) Move current down to child */ + this->heap[child].elem = elem; + this->heap[child].prio = prio; + this->heap[child].elem->index = child; + + this->heap[current].elem->index = current; /* cache misses! 
*/ + /* Continue moving element until it is in the right place */ + current = child; + } +} + +bool +pq_register_element(priority_queue *this, pq_element *elem) +{ + if (odp_likely(this->reg_elems < this->max_elems)) { + elem->pq = this; + this->reg_elems++; + return true; + } + return false; +} + +void +pq_unregister_element(priority_queue *this, pq_element *elem) +{ + assert(elem->pq == this); + if (is_active(elem)) + pq_deactivate_element(this, elem); + this->reg_elems--; +} + +void +pq_activate_element(priority_queue *this, pq_element *elem, pq_priority_t prio) +{ + assert(elem->index == INVALID_INDEX); + /* Insert element at end */ + uint32_t index = this->num_elems++; + this->heap[index].elem = elem; + this->heap[index].prio = prio; + elem->index = index; + elem->prio = prio; + pq_bubble_up(this, elem); +} + +void +pq_deactivate_element(priority_queue *this, pq_element *elem) +{ + assert(elem->pq == this); + if (odp_likely(is_active(elem))) { + /* Swap element with last element */ + uint32_t current = elem->index; + uint32_t last = --this->num_elems; + if (odp_likely(last != current)) { + /* Move last element to current */ + this->heap[current].elem = this->heap[last].elem; + this->heap[current].prio = this->heap[last].prio; + this->heap[current].elem->index = current; + /* Bubble down old 'last' element to its proper place*/ + if (this->heap[current].prio < elem->prio) + pq_bubble_up(this, this->heap[current].elem); + else + pq_bubble_down(this, this->heap[current].elem); + } + elem->index = INVALID_INDEX; + pq_assert_heap(this); + } +} + +void +pq_reset_element(priority_queue *this, pq_element *elem, pq_priority_t prio) +{ + assert(prio != INVALID_PRIORITY); + if (odp_likely(is_active(elem))) { + assert(prio >= elem->prio); + elem->prio = prio; + this->heap[elem->index].prio = prio;/* cache misses here! */ + pq_bubble_down(this, elem); + pq_assert_heap(this); + } else { + pq_activate_element(this, elem, prio); + } +} + +pq_priority_t pq_first_priority(const priority_queue *this) +{ + return this->num_elems != 0 ? this->heap[0].prio : INVALID_PRIORITY; +} + +pq_element * +pq_release_element(priority_queue *this, pq_priority_t threshold) +{ + if (odp_likely(this->num_elems != 0 && + this->heap[0].prio <= threshold)) { + pq_element *elem = this->heap[0].elem; + /* Remove element from heap */ + pq_deactivate_element(this, elem); + assert(elem->prio <= threshold); + return elem; + } + return NULL; +} diff --git a/platform/linux-generic/odp_timer.c b/platform/linux-generic/odp_timer.c index 313c713..b0a1487 100644 --- a/platform/linux-generic/odp_timer.c +++ b/platform/linux-generic/odp_timer.c @@ -1,431 +1,744 @@ -/* Copyright (c) 2013, Linaro Limited +/* Copyright (c) 2014, Linaro Limited * All rights reserved. 
* * SPDX-License-Identifier: BSD-3-Clause */ -#include -#include -#include -#include -#include -#include -#include -#include -#include - -#include -#include +/** + * @file + * + * ODP timer service + * + */ +#include +#include #include - -#define NUM_TIMERS 1 -#define MAX_TICKS 1024 -#define MAX_RES ODP_TIME_SEC -#define MIN_RES (100*ODP_TIME_USEC) - - -typedef struct { - odp_spinlock_t lock; - timeout_t *list; -} tick_t; - -typedef struct { - int allocated; - volatile int active; - volatile uint64_t cur_tick; - timer_t timerid; - odp_timer_t timer_hdl; - odp_buffer_pool_t pool; - uint64_t resolution_ns; - uint64_t max_ticks; - tick_t tick[MAX_TICKS]; - -} timer_ring_t; - -typedef struct { - odp_spinlock_t lock; - int num_timers; - timer_ring_t timer[NUM_TIMERS]; - -} timer_global_t; - -/* Global */ -static timer_global_t odp_timer; - -static void add_tmo(tick_t *tick, timeout_t *tmo) +#include +#include +#include +#include "odp_std_types.h" +#include "odp_buffer.h" +#include "odp_buffer_pool.h" +#include "odp_queue.h" +#include "odp_hints.h" +#include "odp_sync.h" +#include "odp_spinlock.h" +#include "odp_debug.h" +#include "odp_align.h" +#include "odp_shared_memory.h" +#include "odp_hints.h" +#include "odp_internal.h" +#include "odp_time.h" +#include "odp_timer.h" +#include "odp_timer_internal.h" +#include "odp_priority_queue_internal.h" + +/****************************************************************************** + * Translation between timeout and timeout header + *****************************************************************************/ + +static inline odp_timeout_hdr_t *odp_tmo_to_hdr(odp_timer_tmo_t tmo) { - odp_spinlock_lock(&tick->lock); - - tmo->next = tick->list; - tick->list = tmo; + odp_buffer_t buf = odp_buffer_from_timeout(tmo); + odp_timeout_hdr_t *tmo_hdr = (odp_timeout_hdr_t *)odp_buf_to_hdr(buf); + return tmo_hdr; +} - odp_spinlock_unlock(&tick->lock); +/****************************************************************************** + * odp_timer abstract datatype + *****************************************************************************/ + +typedef struct odp_timer_s { + pq_element pqelem;/* Base class */ + uint64_t req_tmo;/* Requested timeout tick */ + odp_buffer_t tmo_buf;/* ODP_BUFFER_INVALID if timeout enqueued */ + odp_queue_t queue;/* ODP_QUEUE_INVALID if timer is free */ + uint32_t tag;/* Reusing tag as next pointer/index when timer is free */ + uint16_t gc;/* Smaller to make place for user_buf flag */ + unsigned int user_buf:1; /* User-defined buffer? 
*/ +} odp_timer; + +/* Constructor */ +static inline void odp_timer_con(odp_timer *this) +{ + pq_element_con(&this->pqelem); + this->tmo_buf = ODP_BUFFER_INVALID; + this->queue = ODP_QUEUE_INVALID; + this->gc = 0; } -static timeout_t *rem_tmo(tick_t *tick) +/* Destructor */ +static inline void odp_timer_des(odp_timer *this) { - timeout_t *tmo; + assert(this->tmo_buf == ODP_BUFFER_INVALID); + assert(this->queue == ODP_QUEUE_INVALID); + pq_element_des(&this->pqelem); +} - odp_spinlock_lock(&tick->lock); +/* Setup when timer is allocated */ +static void setup(odp_timer *this, + odp_queue_t _q, + void *_up, + odp_buffer_t _tmo) +{ + this->req_tmo = INVALID_PRIORITY; + this->tmo_buf = _tmo; + this->queue = _q; + this->tag = 0; + this->user_buf = false; + /* Initialise constant fields of timeout event */ + odp_timeout_hdr_t *tmo_hdr = + odp_tmo_to_hdr(odp_timeout_from_buffer(this->tmo_buf)); + tmo_hdr->gc = this->gc; + tmo_hdr->timer = this; + tmo_hdr->user_ptr = _up; + /* tmo_hdr->tag set at expiration time */ + /* tmo_hdr->expiration set at expiration time */ + assert(this->queue != ODP_QUEUE_INVALID); +} - tmo = tick->list; +/* Teardown when timer is freed */ +static odp_buffer_t teardown(odp_timer *this) +{ + /* Increase generation count to make pending timeout orphaned */ + ++this->gc; + odp_buffer_t buf = this->tmo_buf; + this->tmo_buf = ODP_BUFFER_INVALID; + this->queue = ODP_QUEUE_INVALID; + return buf; +} - if (tmo) - tick->list = tmo->next; +static inline uint32_t get_next_free(odp_timer *this) +{ + assert(this->queue == ODP_QUEUE_INVALID); + return this->tag; +} - odp_spinlock_unlock(&tick->lock); +static inline void set_next_free(odp_timer *this, uint32_t nf) +{ + assert(this->queue == ODP_QUEUE_INVALID); + this->tag = nf; +} - if (tmo) - tmo->next = NULL; +/****************************************************************************** + * odp_timer_pool abstract datatype + * Inludes alloc and free timer + *****************************************************************************/ + +typedef struct odp_timer_pool_s { + priority_queue pq; + uint64_t cur_tick;/* Current tick value */ + uint64_t min_tick;/* Current expiration lower bound */ + uint64_t max_tick;/* Current expiration higher bound */ + bool shared; + odp_spinlock_t lock; + const char *name; + odp_buffer_pool_t buf_pool; + uint64_t resolution_ns; + uint64_t min_tmo_tck; + uint64_t max_tmo_tck; + odp_timer *timers; + uint32_t num_alloc;/* Current number of allocated timers */ + uint32_t max_timers;/* Max number of timers */ + uint32_t first_free;/* 0..max_timers-1 => free timer */ + timer_t timerid; + odp_timer_clk_src_t clk_src; +} odp_timer_pool; + +/* Forward declarations */ +static void timer_init(odp_timer_pool *tp); +static void timer_exit(odp_timer_pool *tp); + +static void odp_timer_pool_con(odp_timer_pool *this, + const char *_n, + odp_buffer_pool_t _bp, + uint64_t _r, + uint64_t _mint, + uint64_t _maxt, + uint32_t _mt, + bool _s, + odp_timer_clk_src_t _cs) +{ + priority_queue_con(&this->pq, _mt); + this->cur_tick = 0; + this->shared = _s; + this->name = strdup(_n); + this->buf_pool = _bp; + this->resolution_ns = _r; + this->min_tmo_tck = odp_timer_ns_to_tick(this, _mint); + this->max_tmo_tck = odp_timer_ns_to_tick(this, _maxt); + this->min_tick = this->cur_tick + this->min_tmo_tck; + this->max_tick = this->cur_tick + this->max_tmo_tck; + this->num_alloc = 0; + this->max_timers = _mt; + this->first_free = 0; + this->clk_src = _cs; + this->timers = malloc(sizeof(odp_timer) * this->max_timers); + if (this->timers 
== NULL) { + ODP_ERR("%s: malloc failed\n", _n); + abort(); + } + uint32_t i; + for (i = 0; i < this->max_timers; i++) + odp_timer_con(&this->timers[i]); + for (i = 0; i < this->max_timers; i++) + set_next_free(&this->timers[i], i + 1); + odp_spinlock_init(&this->lock); + if (this->clk_src == ODP_CLOCK_CPU) + timer_init(this); + /* Make sure timer pool initialisation is globally observable */ + /* before we return a pointer to it */ + odp_sync_stores(); +} - return tmo; +static odp_timer_pool *odp_timer_pool_new( + const char *_n, + odp_buffer_pool_t _bp, + uint64_t _r, + uint64_t _mint, + uint64_t _maxt, + uint32_t _mt, + bool _s, + odp_timer_clk_src_t _cs) +{ + odp_timer_pool *this = malloc(sizeof(odp_timer_pool)); + if (odp_unlikely(this == NULL)) { + ODP_ERR("%s: timer pool malloc failed\n", _n); + abort(); + } + odp_timer_pool_con(this, _n, _bp, _r, _mint, _maxt, _mt, _s, _cs); + return this; } -/** - * Search and delete tmo entry from timeout list - * return -1 : on error.. handle not in list - * 0 : success - */ -static int find_and_del_tmo(timeout_t **tmo, odp_timer_tmo_t handle) +static void odp_timer_pool_des(odp_timer_pool *this) { - timeout_t *cur, *prev; - prev = NULL; + if (this->shared) + odp_spinlock_lock(&this->lock); + if (this->num_alloc != 0) { + /* It's a programming error to attempt to destroy a */ + /* timer pool which is still in use */ + ODP_ERR("%s: timers in use\n", this->name); + abort(); + } + if (this->clk_src == ODP_CLOCK_CPU) + timer_exit(this); + uint32_t i; + for (i = 0; i < this->max_timers; i++) + odp_timer_des(&this->timers[i]); + free(this->timers); + priority_queue_des(&this->pq); + odp_sync_stores(); +} - for (cur = *tmo; cur != NULL; prev = cur, cur = cur->next) { - if (cur->tmo_buf == handle) { - if (prev == NULL) - *tmo = cur->next; - else - prev->next = cur->next; +static void odp_timer_pool_del(odp_timer_pool *this) +{ + odp_timer_pool_des(this); + free(this); +} - break; +static inline odp_timer *timer_alloc(odp_timer_pool *this, + odp_queue_t queue, + void *user_ptr, + odp_buffer_t tmo_buf) +{ + odp_timer *tim = ODP_TIMER_INVALID; + if (odp_likely(this->shared)) + odp_spinlock_lock(&this->lock); + if (odp_likely(this->num_alloc < this->max_timers)) { + this->num_alloc++; + /* Remove first unused timer from free list */ + assert(this->first_free != this->max_timers); + tim = &this->timers[this->first_free]; + this->first_free = get_next_free(tim); + /* Insert timer into priority queue */ + if (odp_unlikely(!pq_register_element(&this->pq, + &tim->pqelem))) { + /* Unexpected internal error */ + abort(); } + /* Create timer */ + setup(tim, queue, user_ptr, tmo_buf); + } else { + errno = ENFILE; /* Reusing file table overvlow */ } - - if (!cur) - /* couldn't find tmo in list */ - return -1; - - /* application to free tmo_buf provided by absolute_tmo call */ - return 0; + if (odp_likely(this->shared)) + odp_spinlock_unlock(&this->lock); + return tim; } -int odp_timer_cancel_tmo(odp_timer_t timer_hdl, odp_timer_tmo_t tmo) +static inline void timer_free(odp_timer_pool *this, odp_timer *tim) { - int id; - int tick_idx; - timeout_t *cancel_tmo; - odp_timeout_hdr_t *tmo_hdr; - tick_t *tick; - - /* get id */ - id = (int)timer_hdl - 1; - - tmo_hdr = odp_timeout_hdr((odp_timeout_t) tmo); - /* get tmo_buf to cancel */ - cancel_tmo = &tmo_hdr->meta; + if (odp_likely(this->shared)) + odp_spinlock_lock(&this->lock); + if (odp_unlikely(tim->queue == ODP_QUEUE_INVALID)) { + ODP_ERR("Invalid timer %p\n", tim); + abort(); + } + /* Destroy timer */ + odp_buffer_t 
buf = teardown(tim); + /* Remove timer from priority queue */ + pq_unregister_element(&this->pq, &tim->pqelem); + /* Insert timer into free list */ + set_next_free(tim, this->first_free); + this->first_free = (tim - &this->timers[0]) / sizeof(this->timers[0]); + assert(this->num_alloc != 0); + this->num_alloc--; + if (odp_likely(this->shared)) + odp_spinlock_unlock(&this->lock); + if (buf != ODP_BUFFER_INVALID) + odp_buffer_free(buf); +} - tick_idx = cancel_tmo->tick; - tick = &odp_timer.timer[id].tick[tick_idx]; +/****************************************************************************** + * Operations on timers + * reset/reset_w_buf/cancel timer, return timeout + *****************************************************************************/ - odp_spinlock_lock(&tick->lock); - /* search and delete tmo from tick list */ - if (find_and_del_tmo(&tick->list, tmo) != 0) { - odp_spinlock_unlock(&tick->lock); - ODP_DBG("Couldn't find the tmo (%d) in tick list\n", (int)tmo); - return -1; +static inline void timer_expire(odp_timer *tim) +{ + assert(tim->req_tmo != INVALID_PRIORITY); + /* Timer expired, is there actually any timeout event */ + /* we can enqueue? */ + if (odp_likely(tim->tmo_buf != ODP_BUFFER_INVALID)) { + /* Swap out timeout buffer */ + odp_buffer_t buf = tim->tmo_buf; + tim->tmo_buf = ODP_BUFFER_INVALID; + if (odp_likely(!tim->user_buf)) { + odp_timeout_hdr_t *tmo_hdr = + odp_tmo_to_hdr(odp_timeout_from_buffer(buf)); + /* Copy tag and requested expiration tick from timer */ + tmo_hdr->tag = tim->tag; + tmo_hdr->expiration = tim->req_tmo; + } + /* Else don't touch user-defined buffer */ + int rc = odp_queue_enq(tim->queue, buf); + if (odp_unlikely(rc != 0)) { + ODP_ERR("Failed to enqueue timeout buffer (%d)\n", rc); + abort(); + } + /* Mark timer as inactive */ + tim->req_tmo = INVALID_PRIORITY; } - odp_spinlock_unlock(&tick->lock); + /* No, timeout event already enqueued or unavailable */ + /* Keep timer active, odp_timer_return_tmo() will patch up */ +} - return 0; +static odp_timer_set_t timer_reset(odp_timer_pool *tp, + odp_timer *tim, + uint64_t abs_tck) +{ + assert(tim->user_buf == false); + if (odp_unlikely(abs_tck < tp->min_tick)) + return ODP_TIMER_SET_TOOEARLY; + if (odp_unlikely(abs_tck > tp->max_tick)) + return ODP_TIMER_SET_TOOLATE; + + if (odp_likely(tp->shared)) + odp_spinlock_lock(&tp->lock); + + if (odp_unlikely(tim->queue == ODP_QUEUE_INVALID)) { + ODP_ERR("Invalid timer %p\n", tim); + abort(); + } + if (odp_unlikely(tim->user_buf)) { + ODP_ERR("Timer %p has user buffer\n", tim); + abort(); + } + /* Increase timer tag to make any pending timeout stale */ + tim->tag++; + /* Save requested timeout */ + tim->req_tmo = abs_tck; + /* Update timer position in priority queue */ + pq_reset_element(&tp->pq, &tim->pqelem, abs_tck); + + if (odp_likely(tp->shared)) + odp_spinlock_unlock(&tp->lock); + return ODP_TIMER_SET_SUCCESS; } -static void notify_function(union sigval sigval) +static odp_timer_set_t timer_reset_w_buf(odp_timer_pool *tp, + odp_timer *tim, + uint64_t abs_tck, + odp_buffer_t user_buf) { - uint64_t cur_tick; - timeout_t *tmo; - tick_t *tick; - timer_ring_t *timer; + if (odp_unlikely(abs_tck < tp->min_tick)) + return ODP_TIMER_SET_TOOEARLY; + if (odp_unlikely(abs_tck > tp->max_tick)) + return ODP_TIMER_SET_TOOLATE; - timer = sigval.sival_ptr; + if (odp_likely(tp->shared)) + odp_spinlock_lock(&tp->lock); - if (timer->active == 0) { - ODP_DBG("Timer (%u) not active\n", timer->timer_hdl); - return; + if (odp_unlikely(tim->queue == ODP_QUEUE_INVALID)) { + 
ODP_ERR("Invalid timer %p\n", tim); + abort(); } + /* Increase timer tag to make any pending timeout stale */ + tim->tag++; + /* Save requested timeout */ + tim->req_tmo = abs_tck; + /* Set flag indicating presence of user defined buffer */ + tim->user_buf = true; + /* Swap in new buffer, save any old buffer pointer */ + odp_buffer_t old_buf = tim->tmo_buf; + tim->tmo_buf = user_buf; + /* Update timer position in priority queue */ + pq_reset_element(&tp->pq, &tim->pqelem, abs_tck); + + if (odp_likely(tp->shared)) + odp_spinlock_unlock(&tp->lock); + + /* Free old buffer if present */ + if (odp_unlikely(old_buf != ODP_BUFFER_INVALID)) + odp_buffer_free(old_buf); + return ODP_TIMER_SET_SUCCESS; +} - /* ODP_DBG("Tick\n"); */ +static inline void timer_cancel(odp_timer_pool *tp, + odp_timer *tim) +{ + odp_buffer_t old_buf = ODP_BUFFER_INVALID; + if (odp_likely(tp->shared)) + odp_spinlock_lock(&tp->lock); - cur_tick = timer->cur_tick++; + if (odp_unlikely(tim->queue == ODP_QUEUE_INVALID)) { + ODP_ERR("Invalid timer %p\n", tim); + abort(); + } + if (odp_unlikely(tim->user_buf)) { + /* Swap out old user buffer */ + old_buf = tim->tmo_buf; + tim->tmo_buf = ODP_BUFFER_INVALID; + /* tim->user_buf stays true */ + } + /* Else a normal timer (no user-defined buffer) */ + /* Increase timer tag to make any pending timeout stale */ + tim->tag++; + /* Clear requested timeout, mark timer inactive */ + tim->req_tmo = INVALID_PRIORITY; + /* Remove timer from the priority queue */ + pq_deactivate_element(&tp->pq, &tim->pqelem); + + if (odp_likely(tp->shared)) + odp_spinlock_unlock(&tp->lock); + /* Free user-defined buffer if present */ + if (odp_unlikely(old_buf != ODP_BUFFER_INVALID)) + odp_buffer_free(old_buf); +} - odp_sync_stores(); +static inline void timer_return(odp_timer_pool *tp, + odp_timer *tim, + odp_timer_tmo_t tmo, + const odp_timeout_hdr_t *tmo_hdr) +{ + odp_buffer_t tmo_buf = odp_buffer_from_timeout(tmo); + if (odp_likely(tp->shared)) + odp_spinlock_lock(&tp->lock); + if (odp_unlikely(tim->user_buf)) { + ODP_ERR("Timer %p has user-defined buffer\n", tim); + abort(); + } + if (odp_likely(tmo_hdr->gc == tim->gc)) { + assert(tim->tmo_buf == ODP_BUFFER_INVALID); + /* Save returned buffer for use when timer expires next time */ + tim->tmo_buf = tmo_buf; + tmo_buf = ODP_BUFFER_INVALID; + /* Check if timer is active and should have expired */ + if (odp_unlikely(tim->req_tmo != INVALID_PRIORITY && + tim->req_tmo <= tp->cur_tick)) { + /* Expire timer now since we have restored the timeout + buffer */ + timer_expire(tim); + } + /* Else timer inactive or expires in the future */ + } + /* Else timeout orphaned, free buffer later */ + if (odp_likely(tp->shared)) + odp_spinlock_unlock(&tp->lock); + if (odp_unlikely(tmo_buf != ODP_BUFFER_INVALID)) + odp_buffer_free(tmo_buf); +} - tick = &timer->tick[cur_tick % MAX_TICKS]; +/* Semi-public API not in odp_timer.h, must declare somewhere */ +unsigned odp_timer_pool_expire(odp_timer_pool_t tpid, uint64_t tick); - while ((tmo = rem_tmo(tick)) != NULL) { - odp_queue_t queue; - odp_buffer_t buf; +unsigned odp_timer_pool_expire(odp_timer_pool_t tpid, uint64_t tick) +{ + if (odp_likely(tpid->shared)) + odp_spinlock_lock(&tpid->lock); + + unsigned nexp = 0; + odp_timer_t tim; + tpid->cur_tick = tick; + tpid->min_tick = tick + tpid->min_tmo_tck; + tpid->max_tick = tick + tpid->max_tmo_tck; + while ((tim = (odp_timer_t)pq_release_element(&tpid->pq, tick)) != + ODP_TIMER_INVALID) { + assert(get_prio(&tim->pqelem) <= tick); + timer_expire(tim); + nexp++; + } - queue = 
-		buf = tmo->buf;
+	if (odp_likely(tpid->shared))
+		odp_spinlock_unlock(&tpid->lock);
+	return nexp;
+}
-		if (buf != tmo->tmo_buf)
-			odp_buffer_free(tmo->tmo_buf);
+/******************************************************************************
+ * POSIX timer support
+ * Functions that use Linux/POSIX per-process timers and related facilities
+ *****************************************************************************/
-		odp_queue_enq(queue, buf);
-	}
+static void timer_notify(union sigval sigval)
+{
+	odp_timer_pool *tp = (odp_timer_pool *)sigval.sival_ptr;
+	uint64_t new_tick = tp->cur_tick + 1;
+	(void)odp_timer_pool_expire(tp, new_tick);
 }
-static void timer_start(timer_ring_t *timer)
+static void timer_init(odp_timer_pool *tp)
 {
 	struct sigevent sigev;
 	struct itimerspec ispec;
 	uint64_t res, sec, nsec;
-	ODP_DBG("\nTimer (%u) starts\n", timer->timer_hdl);
+	ODP_DBG("Creating POSIX timer for timer pool %s, period %"
+		PRIu64" ns\n", tp->name, tp->resolution_ns);
 	memset(&sigev, 0, sizeof(sigev));
 	memset(&ispec, 0, sizeof(ispec));
 	sigev.sigev_notify = SIGEV_THREAD;
-	sigev.sigev_notify_function = notify_function;
-	sigev.sigev_value.sival_ptr = timer;
+	sigev.sigev_notify_function = timer_notify;
+	sigev.sigev_value.sival_ptr = tp;
-	if (timer_create(CLOCK_MONOTONIC, &sigev, &timer->timerid)) {
-		ODP_DBG("Timer create failed\n");
-		return;
+	if (timer_create(CLOCK_MONOTONIC, &sigev, &tp->timerid)) {
+		perror("timer_create");
+		abort();
 	}
-	res = timer->resolution_ns;
+	res = tp->resolution_ns;
 	sec = res / ODP_TIME_SEC;
-	nsec = res - sec*ODP_TIME_SEC;
+	nsec = res - sec * ODP_TIME_SEC;
 	ispec.it_interval.tv_sec = (time_t)sec;
 	ispec.it_interval.tv_nsec = (long)nsec;
 	ispec.it_value.tv_sec = (time_t)sec;
 	ispec.it_value.tv_nsec = (long)nsec;
-	if (timer_settime(timer->timerid, 0, &ispec, NULL)) {
-		ODP_DBG("Timer set failed\n");
-		return;
+	if (timer_settime(tp->timerid, 0, &ispec, NULL)) {
+		perror("timer_settime");
+		abort();
 	}
-
-	return;
 }
-int odp_timer_init_global(void)
-{
-	ODP_DBG("Timer init ...");
-
-	memset(&odp_timer, 0, sizeof(timer_global_t));
-
-	odp_spinlock_init(&odp_timer.lock);
-
-	ODP_DBG("done\n");
-
-	return 0;
-}
-
-int odp_timer_disarm_all(void)
+static void timer_exit(odp_timer_pool *tp)
 {
-	int timers;
-	struct itimerspec ispec;
-
-	odp_spinlock_lock(&odp_timer.lock);
-
-	timers = odp_timer.num_timers;
-
-	ispec.it_interval.tv_sec = 0;
-	ispec.it_interval.tv_nsec = 0;
-	ispec.it_value.tv_sec = 0;
-	ispec.it_value.tv_nsec = 0;
-
-	for (; timers >= 0; timers--) {
-		if (timer_settime(odp_timer.timer[timers].timerid,
-				  0, &ispec, NULL)) {
-			ODP_DBG("Timer reset failed\n");
-			odp_spinlock_unlock(&odp_timer.lock);
-			return -1;
-		}
-		odp_timer.num_timers--;
+	if (timer_delete(tp->timerid) != 0) {
+		perror("timer_delete");
+		abort();
 	}
-
-	odp_spinlock_unlock(&odp_timer.lock);
-
-	return 0;
 }
-odp_timer_t odp_timer_create(const char *name, odp_buffer_pool_t pool,
-			     uint64_t resolution_ns, uint64_t min_ns,
-			     uint64_t max_ns)
+/******************************************************************************
+ * Public API functions
+ * Some parameter checks and error messages
+ * No modifications of internal state
+ *****************************************************************************/
+odp_timer_pool_t
+odp_timer_pool_create(const char *name,
+		      odp_buffer_pool_t buf_pool,
+		      uint64_t resolution_ns,
+		      uint64_t min_timeout,
+		      uint64_t max_timeout,
+		      uint32_t num_timers,
+		      bool shared,
+		      odp_timer_clk_src_t clk_src)
 {
-	uint32_t id;
-	timer_ring_t *timer;
-	
odp_timer_t timer_hdl; - int i; - uint64_t max_ticks; - (void) name; - - if (resolution_ns < MIN_RES) - resolution_ns = MIN_RES; - - if (resolution_ns > MAX_RES) - resolution_ns = MAX_RES; - - max_ticks = max_ns / resolution_ns; - - if (max_ticks > MAX_TICKS) { - ODP_DBG("Maximum timeout too long: %"PRIu64" ticks\n", - max_ticks); - return ODP_TIMER_INVALID; - } - - if (min_ns < resolution_ns) { - ODP_DBG("Min timeout %"PRIu64" ns < resolution %"PRIu64" ns\n", - min_ns, resolution_ns); - return ODP_TIMER_INVALID; + /* Verify that buffer pool can be used for timeouts */ + odp_buffer_t buf = odp_buffer_alloc(buf_pool); + if (buf == ODP_BUFFER_INVALID) { + ODP_ERR("%s: Failed to allocate buffer\n", name); + abort(); } - - odp_spinlock_lock(&odp_timer.lock); - - if (odp_timer.num_timers >= NUM_TIMERS) { - odp_spinlock_unlock(&odp_timer.lock); - ODP_DBG("All timers allocated\n"); - return ODP_TIMER_INVALID; - } - - for (id = 0; id < NUM_TIMERS; id++) { - if (odp_timer.timer[id].allocated == 0) - break; - } - - timer = &odp_timer.timer[id]; - timer->allocated = 1; - odp_timer.num_timers++; - - odp_spinlock_unlock(&odp_timer.lock); - - timer_hdl = id + 1; - - timer->timer_hdl = timer_hdl; - timer->pool = pool; - timer->resolution_ns = resolution_ns; - timer->max_ticks = MAX_TICKS; - - for (i = 0; i < MAX_TICKS; i++) { - odp_spinlock_init(&timer->tick[i].lock); - timer->tick[i].list = NULL; + if (odp_buffer_type(buf) != ODP_BUFFER_TYPE_TIMEOUT) { + ODP_ERR("%s: Buffer pool wrong type\n", name); + abort(); } + odp_buffer_free(buf); + odp_timer_pool_t tp = odp_timer_pool_new(name, buf_pool, resolution_ns, + min_timeout, max_timeout, num_timers, + shared, clk_src); + return tp; +} - timer->active = 1; - odp_sync_stores(); +void odp_timer_pool_start(void) +{ + /* Nothing to do here, timer pools are started by the create call */ +} - timer_start(timer); +void odp_timer_pool_destroy(odp_timer_pool_t tpid) +{ + odp_timer_pool_del(tpid); +} - return timer_hdl; +uint64_t odp_timer_tick_to_ns(odp_timer_pool_t tpid, uint64_t ticks) +{ + return ticks * tpid->resolution_ns; } -odp_timer_tmo_t odp_timer_absolute_tmo(odp_timer_t timer_hdl, uint64_t tmo_tick, - odp_queue_t queue, odp_buffer_t buf) +uint64_t odp_timer_ns_to_tick(odp_timer_pool_t tpid, uint64_t ns) { - int id; - uint64_t tick; - uint64_t cur_tick; - timeout_t *new_tmo; - odp_buffer_t tmo_buf; - odp_timeout_hdr_t *tmo_hdr; - timer_ring_t *timer; + return (uint64_t)(ns / tpid->resolution_ns); +} - id = (int)timer_hdl - 1; - timer = &odp_timer.timer[id]; +uint64_t odp_timer_current_tick(odp_timer_pool_t tpid) +{ + return tpid->cur_tick; +} - cur_tick = timer->cur_tick; - if (tmo_tick <= cur_tick) { - ODP_DBG("timeout too close\n"); - return ODP_TIMER_TMO_INVALID; +uintptr_t odp_timer_pool_query_conf(odp_timer_pool_t tpid, + odp_timer_pool_conf_t item) +{ + switch (item) { + case ODP_TIMER_NAME: + return (uintptr_t)(tpid->name); + case ODP_TIMER_RESOLUTION: + return tpid->resolution_ns; + case ODP_TIMER_MIN_TICKS: + return tpid->min_tmo_tck; + case ODP_TIMER_MAX_TICKS: + return tpid->max_tmo_tck; + case ODP_TIMER_NUM_TIMERS: + return tpid->max_timers; + case ODP_TIMER_SHARED: + return tpid->shared; + default: + return 0; } +} - if ((tmo_tick - cur_tick) > MAX_TICKS) { - ODP_DBG("timeout too far: cur %"PRIu64" tmo %"PRIu64"\n", - cur_tick, tmo_tick); - return ODP_TIMER_TMO_INVALID; +odp_timer_t odp_timer_alloc(odp_timer_pool_t tpid, + odp_queue_t queue, + void *user_ptr) +{ + /* We check this because ODP_QUEUE_INVALID is used */ + /* to indicate a free 
timer */ + if (odp_unlikely(queue == ODP_QUEUE_INVALID)) { + ODP_ERR("%s: Invalid queue handle\n", tpid->name); + abort(); } - - tick = tmo_tick % MAX_TICKS; - - tmo_buf = odp_buffer_alloc(timer->pool); - if (tmo_buf == ODP_BUFFER_INVALID) { - ODP_DBG("tmo buffer alloc failed\n"); - return ODP_TIMER_TMO_INVALID; + odp_buffer_t tmo_buf = odp_buffer_alloc(tpid->buf_pool); + if (odp_likely(tmo_buf != ODP_BUFFER_INVALID)) { + odp_timer *tim = timer_alloc(tpid, queue, user_ptr, tmo_buf); + if (tim != ODP_TIMER_INVALID) { + /* Success */ + assert(tim->queue != ODP_QUEUE_INVALID); + return tim; + } + odp_buffer_free(tmo_buf); } + /* Else failed to allocate timeout event */ + /* errno set by odp_buffer_alloc() or timer_alloc () */ + return ODP_TIMER_INVALID; +} - tmo_hdr = odp_timeout_hdr((odp_timeout_t) tmo_buf); - new_tmo = &tmo_hdr->meta; - - new_tmo->timer_id = id; - new_tmo->tick = (int)tick; - new_tmo->tmo_tick = tmo_tick; - new_tmo->queue = queue; - new_tmo->tmo_buf = tmo_buf; - - if (buf != ODP_BUFFER_INVALID) - new_tmo->buf = buf; - else - new_tmo->buf = tmo_buf; - - add_tmo(&timer->tick[tick], new_tmo); - - return tmo_buf; +void odp_timer_free(odp_timer_t tim) +{ + odp_timer_pool *tp = (odp_timer_pool *)get_pq(&tim->pqelem); + timer_free(tp, tim); } -uint64_t odp_timer_tick_to_ns(odp_timer_t timer_hdl, uint64_t ticks) +odp_timer_set_t odp_timer_set_abs_w_buf(odp_timer_t tim, + uint64_t abs_tck, + odp_buffer_t user_buf) { - uint32_t id; + odp_timer_pool *tp = (odp_timer_pool *)get_pq(&tim->pqelem); + odp_timer_set_t rc = timer_reset_w_buf(tp, tim, abs_tck, user_buf); + return rc; +} - id = timer_hdl - 1; - return ticks * odp_timer.timer[id].resolution_ns; +odp_timer_set_t odp_timer_set_abs(odp_timer_t tim, uint64_t abs_tck) +{ + odp_timer_pool *tp = (odp_timer_pool *)get_pq(&tim->pqelem); + odp_timer_set_t rc = timer_reset(tp, tim, abs_tck); + return rc; } -uint64_t odp_timer_ns_to_tick(odp_timer_t timer_hdl, uint64_t ns) +odp_timer_set_t odp_timer_set_rel_w_buf(odp_timer_t tim, + uint64_t rel_tck, + odp_buffer_t user_buf) { - uint32_t id; + odp_timer_pool *tp = (odp_timer_pool *)get_pq(&tim->pqelem); + odp_timer_set_t rc = timer_reset_w_buf(tp, tim, tp->cur_tick + rel_tck, + user_buf); + return rc; +} - id = timer_hdl - 1; - return ns / odp_timer.timer[id].resolution_ns; +odp_timer_set_t odp_timer_set_rel(odp_timer_t tim, uint64_t rel_tck) +{ + odp_timer_pool *tp = (odp_timer_pool *)get_pq(&tim->pqelem); + odp_timer_set_t rc = timer_reset(tp, tim, tp->cur_tick + rel_tck); + return rc; } -uint64_t odp_timer_resolution(odp_timer_t timer_hdl) +void odp_timer_cancel(odp_timer_t tim) { - uint32_t id; + odp_timer_pool *tp = (odp_timer_pool *)get_pq(&tim->pqelem); + timer_cancel(tp, tim); +} - id = timer_hdl - 1; - return odp_timer.timer[id].resolution_ns; +void odp_timer_return_tmo(odp_timer_tmo_t tmo) +{ + const odp_timeout_hdr_t *tmo_hdr = odp_tmo_to_hdr(tmo); + odp_timer *tim = tmo_hdr->timer; + odp_timer_pool *tp = (odp_timer_pool *)get_pq(&tim->pqelem); + timer_return(tp, tim, tmo, tmo_hdr); } -uint64_t odp_timer_maximum_tmo(odp_timer_t timer_hdl) +odp_timer_tmo_status_t odp_timer_tmo_status(odp_timer_tmo_t tmo) { - uint32_t id; + const odp_timeout_hdr_t *tmo_hdr = odp_tmo_to_hdr(tmo); + odp_timer *tim = tmo_hdr->timer; - id = timer_hdl - 1; - return odp_timer.timer[id].max_ticks; + /* Compare generation count (gc) of timeout and parent timer (if any)*/ + if (odp_unlikely(tmo_hdr->gc != tim->gc)) { + /* Generation counters differ => timer has been freed */ + return ODP_TMO_ORPHAN; + } + 
/* Else gen-cnts match => parent timer exists */ + + /* Compare tags of timeout and parent timer */ + if (odp_likely(tim->tag == tmo_hdr->tag)) + return ODP_TMO_FRESH; + else + return ODP_TMO_STALE; } -uint64_t odp_timer_current_tick(odp_timer_t timer_hdl) +odp_timer_t odp_timer_handle(odp_timer_tmo_t tmo) { - uint32_t id; + odp_timeout_hdr_t *tmo_hdr = odp_tmo_to_hdr(tmo); + odp_timer_t tim = tmo_hdr->timer; + if (odp_likely(tmo_hdr->gc == tim->gc)) + return tim; + else + return ODP_TIMER_INVALID; +} - id = timer_hdl - 1; - return odp_timer.timer[id].cur_tick; +uint64_t odp_timer_expiration(odp_timer_tmo_t tmo) +{ + odp_timeout_hdr_t *tmo_hdr = odp_tmo_to_hdr(tmo); + return tmo_hdr->expiration; } -odp_timeout_t odp_timeout_from_buffer(odp_buffer_t buf) +void *odp_timer_userptr(odp_timer_tmo_t tmo) { - return (odp_timeout_t) buf; + odp_timeout_hdr_t *tmo_hdr = odp_tmo_to_hdr(tmo); + return tmo_hdr->user_ptr; } -uint64_t odp_timeout_tick(odp_timeout_t tmo) +int odp_timer_init_global(void) { - odp_timeout_hdr_t *tmo_hdr = odp_timeout_hdr(tmo); - return tmo_hdr->meta.tmo_tick; + return 0; } diff --git a/test/api_test/odp_timer_ping.c b/test/api_test/odp_timer_ping.c index 7406a45..2617b5c 100644 --- a/test/api_test/odp_timer_ping.c +++ b/test/api_test/odp_timer_ping.c @@ -20,6 +20,8 @@ * Otherwise timeout may happen bcz of slow nw speed */ +#include +#include #include #include #include @@ -41,14 +43,15 @@ #define MSG_POOL_SIZE (4*1024*1024) #define BUF_SIZE 8 #define PING_CNT 10 -#define PING_THRD 2 /* Send and Rx Ping thread */ +#define PING_THRD 2 /* send_ping and rx_ping threads */ /* Nanoseconds */ #define RESUS 10000 #define MINUS 10000 #define MAXUS 10000000 -static odp_timer_t test_timer_ping; +static odp_timer_pool_t tp; +static odp_timer_t test_timer_ping = ODP_TIMER_INVALID; static odp_timer_tmo_t test_ping_tmo; #define PKTSIZE 64 @@ -128,15 +131,7 @@ static int listen_to_pingack(void) (socklen_t *)&len); if (bytes > 0) { /* pkt rxvd therefore cancel the timeout */ - if (odp_timer_cancel_tmo(test_timer_ping, - test_ping_tmo) != 0) { - ODP_ERR("cancel_tmo failed ..exiting listner thread\n"); - /* avoid exiting from here even if tmo - * failed for current ping, - * allow subsequent ping_rx request */ - err = -1; - - } + odp_timer_cancel(test_timer_ping); /* cruel bad hack used for sender, listner ipc.. * euwww.. FIXME .. */ @@ -160,7 +155,6 @@ static int send_ping_request(struct sockaddr_in *addr) uint64_t tick; odp_queue_t queue; - odp_buffer_t buf; int err = 0; @@ -184,8 +178,16 @@ static int send_ping_request(struct sockaddr_in *addr) /* get the ping queue */ queue = odp_queue_lookup("ping_timer_queue"); + test_timer_ping = odp_timer_alloc(tp, queue, NULL); + if (test_timer_ping == ODP_TIMER_INVALID) { + ODP_ERR("Failed to allocate timer.\n"); + err = -1; + goto err; + } for (i = 0; i < PING_CNT; i++) { + odp_buffer_t buf; + odp_timer_tmo_t tmo; /* prepare icmp pkt */ bzero(&pckt, sizeof(pckt)); pckt.hdr.type = ICMP_ECHO; @@ -209,12 +211,10 @@ static int send_ping_request(struct sockaddr_in *addr) printf(" icmp_sent msg_cnt %d\n", i); /* arm the timer */ - tick = odp_timer_current_tick(test_timer_ping); + tick = odp_timer_current_tick(tp); tick += 1000; - test_ping_tmo = odp_timer_absolute_tmo(test_timer_ping, tick, - queue, - ODP_BUFFER_INVALID); + odp_timer_set_abs(test_timer_ping, tick); /* wait for timeout event */ while ((buf = odp_queue_deq(queue)) == ODP_BUFFER_INVALID) { /* flag true means ack rxvd.. 
a cruel hack as I @@ -229,17 +229,28 @@ static int send_ping_request(struct sockaddr_in *addr) break; } } + assert(odp_buffer_type(buf) == ODP_BUFFER_TYPE_TIMEOUT); + tmo = odp_timeout_from_buffer(buf); - /* free tmo_buf for timeout case */ - if (buf != ODP_BUFFER_INVALID) { - ODP_DBG(" timeout msg_cnt [%i] \n", i); + switch (odp_timer_tmo_status(tmo)) { + case ODP_TMO_FRESH: + ODP_DBG(" timeout msg_cnt [%i]\n", i); /* so to avoid seg fault commented */ - odp_buffer_free(buf); err = -1; + break; + case ODP_TMO_STALE: + /* Ignore stale timeouts */ + break; + case ODP_TMO_ORPHAN: + ODP_ERR("Received orphaned timeout!\n"); + abort(); } + odp_timer_return_tmo(tmo); } err: + if (test_timer_ping != ODP_TIMER_INVALID) + odp_timer_free(test_timer_ping); return err; } @@ -340,9 +351,9 @@ int main(int argc ODP_UNUSED, char *argv[] ODP_UNUSED) pool = odp_buffer_pool_create("msg_pool", pool_base, MSG_POOL_SIZE, BUF_SIZE, ODP_CACHE_LINE_SIZE, - ODP_BUFFER_TYPE_RAW); + ODP_BUFFER_TYPE_TIMEOUT); if (pool == ODP_BUFFER_POOL_INVALID) { - ODP_ERR("Pool create failed.\n"); + ODP_ERR("Buffer pool create failed.\n"); return -1; } @@ -357,15 +368,19 @@ int main(int argc ODP_UNUSED, char *argv[] ODP_UNUSED) return -1; } - test_timer_ping = odp_timer_create("ping_timer", pool, - RESUS*ODP_TIME_USEC, - MINUS*ODP_TIME_USEC, - MAXUS*ODP_TIME_USEC); - - if (test_timer_ping == ODP_TIMER_INVALID) { - ODP_ERR("Timer create failed.\n"); + /* + * Create timer pool + */ + tp = odp_timer_pool_create("timer_pool", pool, + RESUS*ODP_TIME_USEC, + MINUS*ODP_TIME_USEC, + MAXUS*ODP_TIME_USEC, + 1, false, ODP_CLOCK_CPU); + if (tp == ODP_TIMER_POOL_INVALID) { + ODP_ERR("Timer pool create failed.\n"); return -1; } + odp_timer_pool_start(); odp_shm_print_all();