From patchwork Sun Feb 21 22:58:38 2016
X-Patchwork-Submitter: Bill Fischofer
X-Patchwork-Id: 62462
From: Bill Fischofer <bill.fischofer@linaro.org>
To: lng-odp@lists.linaro.org
Date: Sun, 21 Feb 2016 16:58:38 -0600
Message-Id: <1456095520-1567-4-git-send-email-bill.fischofer@linaro.org>
X-Mailer: git-send-email 2.5.0
In-Reply-To: <1456095520-1567-1-git-send-email-bill.fischofer@linaro.org>
References: <1456095520-1567-1-git-send-email-bill.fischofer@linaro.org>
Subject: [lng-odp] [API-NEXT PATCHv5 4/6] linux-generic: tm: correct some old comments and stop using cpu cycles

This patch fixes some incorrect comments regarding the format of the
fixed point integer types used to represent the shaper's token bucket
count values. Since the TM code and the TM tests were changed to use
odp_time_local() instead of the CPU cycles APIs, it also renames fields
such as current_cycles to current_time. Finally, this patch makes some
minor cosmetic style improvements.

Signed-off-by: Bill Fischofer <bill.fischofer@linaro.org>
---
 .../include/odp_traffic_mngr_internal.h | 90 +++++++++++-----------
 1 file changed, 45 insertions(+), 45 deletions(-)

diff --git a/platform/linux-generic/include/odp_traffic_mngr_internal.h b/platform/linux-generic/include/odp_traffic_mngr_internal.h
index e09c60a..786d51f 100644
--- a/platform/linux-generic/include/odp_traffic_mngr_internal.h
+++ b/platform/linux-generic/include/odp_traffic_mngr_internal.h
@@ -43,9 +43,6 @@ typedef struct stat file_stat_t;
 #define TM_QUEUE_MAGIC_NUM 0xBABEBABE
 #define TM_NODE_MAGIC_NUM 0xBEEFBEEF
 
-/**> @todo Fill this in with what it's supposed to be */
-#define ODP_CYCLES_PER_SEC 1000000000
-
 /* Macros to convert handles to internal pointers and vice versa. */
 #define MAKE_ODP_TM_HANDLE(tm_system) ((odp_tm_t)(uintptr_t)tm_system)
 
@@ -150,10 +147,10 @@ typedef struct {
 } tm_prop_t;
 
 typedef struct {
-	uint64_t commit_rate;
-	uint64_t peak_rate;
-	int64_t max_commit; /* Byte cnt as a fp integer with 26 bits. */
-	int64_t max_peak;
+	uint64_t commit_rate; /* Bytes per nanosecond as a 26 bit fp integer */
+	uint64_t peak_rate; /* Same as commit_rate */
+	int64_t max_commit; /* Byte cnt as a fp integer with 26 bits. */
+	int64_t max_peak; /* Same as max_commit */
 	uint64_t max_commit_time_delta;
 	uint64_t max_peak_time_delta;
 	uint32_t min_time_delta;
@@ -171,15 +168,17 @@ typedef struct {
 	tm_shaper_params_t *shaper_params;
 	tm_sched_params_t *sched_params;
 
-	uint64_t last_update_time; /* In clock cycles. */
+	uint64_t last_update_time;
 	uint64_t callback_time;
 
 	/* The shaper token bucket counters are represented as a number of
 	 * bytes in a 64-bit fixed point format where the decimal point is at
-	 * bit 24. (aka int64_24). In other words, the number of bytes that
-	 * commit_cnt represents is "commit_cnt / 2**24". Hence the
-	 * commit_rate and peak_rate are in units of bytes per cycle = "8 *
-	 * bits per sec / cycles per sec"
+	 * bit 26. (aka int64_26). In other words, the number of bytes that
+	 * commit_cnt represents is "commit_cnt / 2**26". The commit_rate and
+	 * peak_rate are in units of bytes per nanosecond, again using a 26-bit
+	 * fixed point integer. Alternatively, ignoring the fixed point,
+	 * the number of bytes that x nanoseconds represents is equal to
+	 * "(rate * nanoseconds) / 2**26".
 	 */
 	int64_t commit_cnt; /* Note token counters can go slightly negative */
 	int64_t peak_cnt; /* Note token counters can go slightly negative */
@@ -246,63 +245,64 @@ struct tm_queue_obj_s {
 };
 
 struct tm_node_obj_s {
-	uint32_t magic_num;
-	tm_wred_node_t *tm_wred_node;
-	tm_shaper_obj_t shaper_obj;
+	uint32_t             magic_num;
+	tm_wred_node_t      *tm_wred_node;
+	tm_shaper_obj_t      shaper_obj;
 	tm_schedulers_obj_t *schedulers_obj;
-	_odp_int_name_t name_tbl_id;
-	uint32_t max_fanin;
-	uint8_t level; /* Primarily for debugging */
-	uint8_t tm_idx;
-	uint8_t marked;
+	_odp_int_name_t      name_tbl_id;
+	uint32_t             max_fanin;
+	uint8_t              level; /* Primarily for debugging */
+	uint8_t              tm_idx;
+	uint8_t              marked;
 };
 
 typedef struct {
 	tm_queue_obj_t *tm_queue_obj;
-	odp_packet_t pkt;
+	odp_packet_t    pkt;
 } input_work_item_t;
 
 typedef struct {
-	uint64_t total_enqueues;
-	uint64_t enqueue_fail_cnt;
-	uint64_t total_dequeues;
-	odp_atomic_u32_t queue_cnt;
-	uint32_t peak_cnt;
-	uint32_t head_idx;
-	uint32_t tail_idx;
-	odp_ticketlock_t lock;
+	uint64_t          total_enqueues;
+	uint64_t          enqueue_fail_cnt;
+	uint64_t          total_dequeues;
+	odp_atomic_u32_t  queue_cnt;
+	uint32_t          peak_cnt;
+	uint32_t          head_idx;
+	uint32_t          tail_idx;
+	odp_ticketlock_t  lock;
 	input_work_item_t work_ring[INPUT_WORK_RING_SIZE];
 } input_work_queue_t;
 
 typedef struct {
 	uint32_t next_random_byte;
-	uint8_t buf[256];
+	uint8_t  buf[256];
 } tm_random_data_t;
 
 typedef struct {
 	tm_queue_thresholds_t *threshold_params;
-	tm_queue_cnts_t queue_cnts;
+	tm_queue_cnts_t        queue_cnts;
 } tm_queue_info_t;
 
 typedef struct {
 	odp_ticketlock_t tm_system_lock;
-	odp_barrier_t tm_system_barrier;
-	odp_barrier_t tm_system_destroy_barrier;
+	odp_barrier_t    tm_system_barrier;
+	odp_barrier_t    tm_system_destroy_barrier;
 	odp_atomic_u32_t destroying;
-	_odp_int_name_t name_tbl_id;
+	_odp_int_name_t  name_tbl_id;
 
-	uint32_t next_queue_num;
-	tm_queue_obj_t **queue_num_tbl;
+	void               *trace_buffer;
+	uint32_t            next_queue_num;
+	tm_queue_obj_t    **queue_num_tbl;
 	input_work_queue_t *input_work_queue;
-	tm_queue_cnts_t priority_queue_cnts;
-	tm_queue_cnts_t total_queue_cnts;
-	pkt_desc_t egress_pkt_desc;
+	tm_queue_cnts_t     priority_queue_cnts;
+	tm_queue_cnts_t     total_queue_cnts;
+	pkt_desc_t          egress_pkt_desc;
 
-	_odp_int_queue_pool_t _odp_int_queue_pool;
-	_odp_timer_wheel_t _odp_int_timer_wheel;
+	_odp_int_queue_pool_t  _odp_int_queue_pool;
+	_odp_timer_wheel_t     _odp_int_timer_wheel;
 	_odp_int_sorted_pool_t _odp_int_sorted_pool;
 
-	odp_tm_egress_t egress;
+	odp_tm_egress_t     egress;
 	odp_tm_capability_t capability;
 
 	tm_queue_info_t total_info;
@@ -310,9 +310,9 @@ typedef struct {
 
 	tm_random_data_t tm_random_data;
 
-	uint64_t current_cycles;
-	uint8_t tm_idx;
-	uint8_t first_enq;
+	uint64_t   current_time;
+	uint8_t    tm_idx;
+	uint8_t    first_enq;
 	odp_bool_t is_idle;
 
 	uint64_t shaper_green_cnt;
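
For readers less familiar with the int64_26 convention the corrected
comments describe, here is a minimal standalone sketch of the token
bucket arithmetic. It is an illustration, not part of the patch: the
names TM_FP_BITS, rate_to_fp and update_cnt are hypothetical, and only
the relation "bytes = (rate * nanoseconds) / 2**26" comes from the
comment text above.

#include <inttypes.h>
#include <stdint.h>
#include <stdio.h>

#define TM_FP_BITS 26 /* Decimal point position of the int64_26 format */

/* Convert a rate in bytes per nanosecond to the 26-bit fixed point form. */
static uint64_t rate_to_fp(double bytes_per_ns)
{
	return (uint64_t)(bytes_per_ns * (double)(1ULL << TM_FP_BITS));
}

/* Accrue tokens for time_delta_ns nanoseconds, capping the bucket at
 * max_cnt. All counters are byte counts in int64_26 fixed point. */
static int64_t update_cnt(int64_t cnt, uint64_t rate_fp,
			  uint64_t time_delta_ns, int64_t max_cnt)
{
	cnt += (int64_t)(rate_fp * time_delta_ns);
	return (cnt > max_cnt) ? max_cnt : cnt;
}

int main(void)
{
	/* 1 Gbps commit rate = 0.125 bytes per nanosecond. */
	uint64_t commit_rate = rate_to_fp(0.125);
	int64_t max_commit = (int64_t)1500 << TM_FP_BITS; /* 1500 bytes */
	int64_t commit_cnt = 0;

	/* After 10 usec, (rate * 10000) / 2**26 = 1250 bytes of tokens. */
	commit_cnt = update_cnt(commit_cnt, commit_rate, 10000, max_commit);
	printf("tokens: %" PRId64 " bytes\n", commit_cnt >> TM_FP_BITS);
	return 0;
}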
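
Likewise, the current_time field renamed above is, per the commit
message, now driven by odp_time_local() rather than a CPU cycle
counter. A sketch of that pattern follows: odp_time_local(),
odp_time_diff() and odp_time_to_ns() are the ODP time API calls the
commit message refers to, but the helper itself and its parameter name
are illustrative only, not the actual tm_system code.

#include <odp.h>

/* Illustrative helper: nanoseconds elapsed since a previous
 * odp_time_local() sample, as used to drive shaper updates. */
static uint64_t time_delta_ns(odp_time_t last_update_time)
{
	odp_time_t now = odp_time_local();

	/* odp_time_diff(t2, t1) returns t2 - t1. */
	return odp_time_to_ns(odp_time_diff(now, last_update_time));
}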