From patchwork Wed Sep 9 10:06:37 2015
Content-Type: text/plain; charset="utf-8"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
X-Patchwork-Submitter: Maxim Uvarov <maxim.uvarov@linaro.org>
X-Patchwork-Id: 53311
From: Maxim Uvarov <maxim.uvarov@linaro.org>
To: lng-odp@lists.linaro.org
Date: Wed, 9 Sep 2015 13:06:37 +0300
Message-Id: <1441793199-30966-5-git-send-email-maxim.uvarov@linaro.org>
X-Mailer: git-send-email 1.9.1
In-Reply-To: <1441793199-30966-1-git-send-email-maxim.uvarov@linaro.org>
References: <1441793199-30966-1-git-send-email-maxim.uvarov@linaro.org>
Subject: [lng-odp] [PATCH 4/6] linux-generic: ring: remove ODPH_ prefix

Signed-off-by: Maxim Uvarov <maxim.uvarov@linaro.org>
---
 platform/linux-generic/include/odp_ring_internal.h | 20 +++---
 platform/linux-generic/pktio/ring.c                | 80 +++++++++++-----------
 platform/linux-generic/test/ring/odp_ring_test.c   | 16 ++---
 3 files changed, 58 insertions(+), 58 deletions(-)

diff --git a/platform/linux-generic/include/odp_ring_internal.h b/platform/linux-generic/include/odp_ring_internal.h
index 596bf9b..202d0a0 100644
--- a/platform/linux-generic/include/odp_ring_internal.h
+++ b/platform/linux-generic/include/odp_ring_internal.h
@@ -89,8 +89,8 @@
  *
  */
 
-#ifndef ODPH_RING_H_
-#define ODPH_RING_H_
+#ifndef RING_H_
+#define RING_H_
 
 #ifdef __cplusplus
 extern "C" {
@@ -104,14 +104,14 @@ extern "C" {
 #include
 
 enum shm_ring_queue_behavior {
-	ODPH_RING_QUEUE_FIXED = 0, /**< Enq/Deq a fixed number
+	RING_QUEUE_FIXED = 0, /**< Enq/Deq a fixed number
					of items from a ring */
-	ODPH_RING_QUEUE_VARIABLE /**< Enq/Deq as many items
+	RING_QUEUE_VARIABLE /**< Enq/Deq as many items
					a possible from ring */
 };
 
-#define ODPH_RING_NAMESIZE 32 /**< The maximum length of a ring name. */
+#define RING_NAMESIZE 32 /**< The maximum length of a ring name. */
 
 /**
  * An ODP ring structure.
@@ -128,7 +128,7 @@ typedef struct shm_ring {
 	TAILQ_ENTRY(shm_ring) next;
 
 	/** @private Name of the ring. */
-	char name[ODPH_RING_NAMESIZE];
+	char name[RING_NAMESIZE];
 
 	/** @private Flags supplied at creation. */
 	int flags;
@@ -156,10 +156,10 @@
 } shm_ring_t;
 
-#define ODPH_RING_F_SP_ENQ 0x0001 /* The default enqueue is "single-producer".*/
-#define ODPH_RING_F_SC_DEQ 0x0002 /* The default dequeue is "single-consumer".*/
-#define ODPH_RING_QUOT_EXCEED (1 << 31) /* Quota exceed for burst ops */
-#define ODPH_RING_SZ_MASK (unsigned)(0x0fffffff) /* Ring size mask */
+#define RING_F_SP_ENQ 0x0001 /* The default enqueue is "single-producer".*/
+#define RING_F_SC_DEQ 0x0002 /* The default dequeue is "single-consumer".*/
+#define RING_QUOT_EXCEED (1 << 31) /* Quota exceed for burst ops */
+#define RING_SZ_MASK (unsigned)(0x0fffffff) /* Ring size mask */
 
 /**
diff --git a/platform/linux-generic/pktio/ring.c b/platform/linux-generic/pktio/ring.c
index 9ff457f..70685db 100644
--- a/platform/linux-generic/pktio/ring.c
+++ b/platform/linux-generic/pktio/ring.c
@@ -78,7 +78,7 @@
 #include
 #include
 #include
-#include
+#include
 
 static TAILQ_HEAD(, shm_ring) odp_ring_list;
 
@@ -155,15 +155,15 @@ void shm_ring_tailq_init(void)
 shm_ring_t *
 shm_ring_create(const char *name, unsigned count, unsigned flags)
 {
-	char ring_name[ODPH_RING_NAMESIZE];
+	char ring_name[RING_NAMESIZE];
 	shm_ring_t *r;
 	size_t ring_size;
 	odp_shm_t shm;
 
 	/* count must be a power of 2 */
-	if (!RING_VAL_IS_POWER_2(count) || (count > ODPH_RING_SZ_MASK)) {
-		ODPH_ERR("Requested size is invalid, must be power of 2, and do not exceed the size limit %u\n",
-			 ODPH_RING_SZ_MASK);
+	if (!RING_VAL_IS_POWER_2(count) || (count > RING_SZ_MASK)) {
+		ODP_ERR("Requested size is invalid, must be power of 2, and do not exceed the size limit %u\n",
+			RING_SZ_MASK);
 		return NULL;
 	}
 
@@ -181,8 +181,8 @@ shm_ring_create(const char *name, unsigned count, unsigned flags)
 		snprintf(r->name, sizeof(r->name), "%s", name);
 		r->flags = flags;
 		r->prod.watermark = count;
-		r->prod.sp_enqueue = !!(flags & ODPH_RING_F_SP_ENQ);
-		r->cons.sc_dequeue = !!(flags & ODPH_RING_F_SC_DEQ);
+		r->prod.sp_enqueue = !!(flags & RING_F_SP_ENQ);
+		r->cons.sc_dequeue = !!(flags & RING_F_SC_DEQ);
 		r->prod.size = count;
 		r->cons.size = count;
 		r->prod.mask = count-1;
@@ -194,7 +194,7 @@ shm_ring_create(const char *name, unsigned count, unsigned flags)
 
 		TAILQ_INSERT_TAIL(&odp_ring_list, r, next);
 	} else {
-		ODPH_ERR("Cannot reserve memory\n");
+		ODP_ERR("Cannot reserve memory\n");
 	}
 
 	odp_rwlock_write_unlock(&qlock);
@@ -247,7 +247,7 @@ int __shm_ring_mp_do_enqueue(shm_ring_t *r, void * const *obj_table,
 
 		/* check that we have enough room in ring */
 		if (odp_unlikely(n > free_entries)) {
-			if (behavior == ODPH_RING_QUEUE_FIXED) {
+			if (behavior == RING_QUEUE_FIXED) {
 				return -ENOBUFS;
 			} else {
 				/* No free entry available */
@@ -272,10 +272,10 @@ int __shm_ring_mp_do_enqueue(shm_ring_t *r, void * const *obj_table,
 
 	/* if we exceed the watermark */
 	if (odp_unlikely(((mask + 1) - free_entries + n) > r->prod.watermark)) {
-		ret = (behavior == ODPH_RING_QUEUE_FIXED) ? -EDQUOT :
-				(int)(n | ODPH_RING_QUOT_EXCEED);
+		ret = (behavior == RING_QUEUE_FIXED) ? -EDQUOT :
+				(int)(n | RING_QUOT_EXCEED);
 	} else {
-		ret = (behavior == ODPH_RING_QUEUE_FIXED) ? 0 : n;
+		ret = (behavior == RING_QUEUE_FIXED) ? 0 : n;
 	}
 
 	/*
@@ -313,7 +313,7 @@ int __shm_ring_sp_do_enqueue(shm_ring_t *r, void * const *obj_table,
 
 	/* check that we have enough room in ring */
 	if (odp_unlikely(n > free_entries)) {
-		if (behavior == ODPH_RING_QUEUE_FIXED) {
+		if (behavior == RING_QUEUE_FIXED) {
 			return -ENOBUFS;
 		} else {
 			/* No free entry available */
@@ -332,10 +332,10 @@ int __shm_ring_sp_do_enqueue(shm_ring_t *r, void * const *obj_table,
 
 	/* if we exceed the watermark */
 	if (odp_unlikely(((mask + 1) - free_entries + n) > r->prod.watermark)) {
-		ret = (behavior == ODPH_RING_QUEUE_FIXED) ? -EDQUOT :
-				(int)(n | ODPH_RING_QUOT_EXCEED);
+		ret = (behavior == RING_QUEUE_FIXED) ? -EDQUOT :
+				(int)(n | RING_QUOT_EXCEED);
 	} else {
-		ret = (behavior == ODPH_RING_QUEUE_FIXED) ? 0 : n;
+		ret = (behavior == RING_QUEUE_FIXED) ? 0 : n;
 	}
 
 	/* Release our entries and the memory they refer to */
@@ -373,7 +373,7 @@ int __shm_ring_mc_do_dequeue(shm_ring_t *r, void **obj_table,
 
 		/* Set the actual entries for dequeue */
 		if (n > entries) {
-			if (behavior == ODPH_RING_QUEUE_FIXED) {
+			if (behavior == RING_QUEUE_FIXED) {
 				return -ENOENT;
 			} else {
 				if (odp_unlikely(entries == 0))
@@ -406,7 +406,7 @@ int __shm_ring_mc_do_dequeue(shm_ring_t *r, void **obj_table,
 	__atomic_thread_fence(__ATOMIC_RELEASE);
 	r->cons.tail = cons_next;
 
-	return behavior == ODPH_RING_QUEUE_FIXED ? 0 : n;
+	return behavior == RING_QUEUE_FIXED ? 0 : n;
 }
 
 /**
@@ -429,7 +429,7 @@ int __shm_ring_sc_do_dequeue(shm_ring_t *r, void **obj_table,
 	entries = prod_tail - cons_head;
 
 	if (n > entries) {
-		if (behavior == ODPH_RING_QUEUE_FIXED) {
+		if (behavior == RING_QUEUE_FIXED) {
 			return -ENOENT;
 		} else {
 			if (odp_unlikely(entries == 0))
@@ -448,7 +448,7 @@ int __shm_ring_sc_do_dequeue(shm_ring_t *r, void **obj_table,
 	DEQUEUE_PTRS();
 
 	r->cons.tail = cons_next;
-	return behavior == ODPH_RING_QUEUE_FIXED ? 0 : n;
+	return behavior == RING_QUEUE_FIXED ? 0 : n;
 }
 
 /**
@@ -458,7 +458,7 @@ int shm_ring_mp_enqueue_bulk(shm_ring_t *r, void * const *obj_table,
			     unsigned n)
 {
 	return __shm_ring_mp_do_enqueue(r, obj_table, n,
-					ODPH_RING_QUEUE_FIXED);
+					RING_QUEUE_FIXED);
 }
 
 /**
@@ -468,7 +468,7 @@ int shm_ring_sp_enqueue_bulk(shm_ring_t *r, void * const *obj_table,
			     unsigned n)
 {
 	return __shm_ring_sp_do_enqueue(r, obj_table, n,
-					ODPH_RING_QUEUE_FIXED);
+					RING_QUEUE_FIXED);
 }
 
 /**
@@ -477,7 +477,7 @@ int shm_ring_sp_enqueue_bulk(shm_ring_t *r, void * const *obj_table,
 int shm_ring_mc_dequeue_bulk(shm_ring_t *r, void **obj_table, unsigned n)
 {
 	return __shm_ring_mc_do_dequeue(r, obj_table, n,
-					ODPH_RING_QUEUE_FIXED);
+					RING_QUEUE_FIXED);
 }
 
 /**
@@ -486,7 +486,7 @@ int shm_ring_mc_dequeue_bulk(shm_ring_t *r, void **obj_table, unsigned n)
 int shm_ring_sc_dequeue_bulk(shm_ring_t *r, void **obj_table, unsigned n)
 {
 	return __shm_ring_sc_do_dequeue(r, obj_table, n,
-					ODPH_RING_QUEUE_FIXED);
+					RING_QUEUE_FIXED);
 }
 
 /**
@@ -532,19 +532,19 @@ unsigned shm_ring_free_count(const shm_ring_t *r)
 /* dump the status of the ring on the console */
 void shm_ring_dump(const shm_ring_t *r)
 {
-	ODPH_DBG("ring <%s>@%p\n", r->name, r);
-	ODPH_DBG(" flags=%x\n", r->flags);
-	ODPH_DBG(" size=%" PRIu32 "\n", r->prod.size);
-	ODPH_DBG(" ct=%" PRIu32 "\n", r->cons.tail);
-	ODPH_DBG(" ch=%" PRIu32 "\n", r->cons.head);
-	ODPH_DBG(" pt=%" PRIu32 "\n", r->prod.tail);
-	ODPH_DBG(" ph=%" PRIu32 "\n", r->prod.head);
-	ODPH_DBG(" used=%u\n", shm_ring_count(r));
-	ODPH_DBG(" avail=%u\n", shm_ring_free_count(r));
+	ODP_DBG("ring <%s>@%p\n", r->name, r);
+	ODP_DBG(" flags=%x\n", r->flags);
+	ODP_DBG(" size=%" PRIu32 "\n", r->prod.size);
+	ODP_DBG(" ct=%" PRIu32 "\n", r->cons.tail);
+	ODP_DBG(" ch=%" PRIu32 "\n", r->cons.head);
+	ODP_DBG(" pt=%" PRIu32 "\n", r->prod.tail);
+	ODP_DBG(" ph=%" PRIu32 "\n", r->prod.head);
+	ODP_DBG(" used=%u\n", shm_ring_count(r));
+	ODP_DBG(" avail=%u\n", shm_ring_free_count(r));
 	if (r->prod.watermark == r->prod.size)
-		ODPH_DBG(" watermark=0\n");
+		ODP_DBG(" watermark=0\n");
 	else
-		ODPH_DBG(" watermark=%" PRIu32 "\n", r->prod.watermark);
+		ODP_DBG(" watermark=%" PRIu32 "\n", r->prod.watermark);
 }
 
 /* dump the status of all rings on the console */
@@ -568,7 +568,7 @@ shm_ring_t *shm_ring_lookup(const char *name)
 
 	odp_rwlock_read_lock(&qlock);
 	TAILQ_FOREACH(r, &odp_ring_list, next) {
-		if (strncmp(name, r->name, ODPH_RING_NAMESIZE) == 0)
+		if (strncmp(name, r->name, RING_NAMESIZE) == 0)
 			break;
 	}
 	odp_rwlock_read_unlock(&qlock);
@@ -583,7 +583,7 @@ int shm_ring_mp_enqueue_burst(shm_ring_t *r, void * const *obj_table,
			      unsigned n)
 {
 	return __shm_ring_mp_do_enqueue(r, obj_table, n,
-					ODPH_RING_QUEUE_VARIABLE);
+					RING_QUEUE_VARIABLE);
 }
 
 /**
@@ -593,7 +593,7 @@ int shm_ring_sp_enqueue_burst(shm_ring_t *r, void * const *obj_table,
			      unsigned n)
 {
 	return __shm_ring_sp_do_enqueue(r, obj_table, n,
-					ODPH_RING_QUEUE_VARIABLE);
+					RING_QUEUE_VARIABLE);
 }
 
 /**
@@ -614,7 +614,7 @@ int shm_ring_enqueue_burst(shm_ring_t *r, void * const *obj_table,
 int shm_ring_mc_dequeue_burst(shm_ring_t *r, void **obj_table, unsigned n)
 {
 	return __shm_ring_mc_do_dequeue(r, obj_table, n,
-					ODPH_RING_QUEUE_VARIABLE);
+					RING_QUEUE_VARIABLE);
 }
 
 /**
@@ -623,7 +623,7 @@ int shm_ring_mc_dequeue_burst(shm_ring_t *r, void **obj_table, unsigned n)
 int shm_ring_sc_dequeue_burst(shm_ring_t *r, void **obj_table, unsigned n)
 {
 	return __shm_ring_sc_do_dequeue(r, obj_table, n,
-					ODPH_RING_QUEUE_VARIABLE);
+					RING_QUEUE_VARIABLE);
 }
 
 /**
diff --git a/platform/linux-generic/test/ring/odp_ring_test.c b/platform/linux-generic/test/ring/odp_ring_test.c
index 799f5c6..7b9a81e 100644
--- a/platform/linux-generic/test/ring/odp_ring_test.c
+++ b/platform/linux-generic/test/ring/odp_ring_test.c
@@ -88,7 +88,7 @@ static int test_ring_basic(shm_ring_t *r)
 	printf("enqueue 1 obj\n");
 	ret = shm_ring_sp_enqueue_burst(r, cur_src, 1);
 	cur_src += 1;
-	if ((ret & ODPH_RING_SZ_MASK) != 1) {
+	if ((ret & RING_SZ_MASK) != 1) {
 		LOG_ERR("sp_enq for 1 obj failed\n");
 		goto fail;
 	}
@@ -96,14 +96,14 @@ static int test_ring_basic(shm_ring_t *r)
 	printf("enqueue 2 objs\n");
 	ret = shm_ring_sp_enqueue_burst(r, cur_src, 2);
 	cur_src += 2;
-	if ((ret & ODPH_RING_SZ_MASK) != 2) {
+	if ((ret & RING_SZ_MASK) != 2) {
 		LOG_ERR("sp_enq for 2 obj failed\n");
 		goto fail;
 	}
 
 	printf("enqueue MAX_BULK objs\n");
 	ret = shm_ring_sp_enqueue_burst(r, cur_src, MAX_BULK);
-	if ((ret & ODPH_RING_SZ_MASK) != MAX_BULK) {
+	if ((ret & RING_SZ_MASK) != MAX_BULK) {
 		LOG_ERR("sp_enq for %d obj failed\n", MAX_BULK);
 		goto fail;
 	}
@@ -111,7 +111,7 @@ static int test_ring_basic(shm_ring_t *r)
 	printf("dequeue 1 obj\n");
 	ret = shm_ring_sc_dequeue_burst(r, cur_dst, 1);
 	cur_dst += 1;
-	if ((ret & ODPH_RING_SZ_MASK) != 1) {
+	if ((ret & RING_SZ_MASK) != 1) {
 		LOG_ERR("sc_deq for 1 obj failed\n");
 		goto fail;
 	}
@@ -119,7 +119,7 @@ static int test_ring_basic(shm_ring_t *r)
 	printf("dequeue 2 objs\n");
 	ret = shm_ring_sc_dequeue_burst(r, cur_dst, 2);
 	cur_dst += 2;
-	if ((ret & ODPH_RING_SZ_MASK) != 2) {
+	if ((ret & RING_SZ_MASK) != 2) {
 		LOG_ERR("sc_deq for 2 obj failed\n");
 		goto fail;
 	}
@@ -127,7 +127,7 @@ static int test_ring_basic(shm_ring_t *r)
 	printf("dequeue MAX_BULK objs\n");
 	ret = shm_ring_sc_dequeue_burst(r, cur_dst, MAX_BULK);
 	cur_dst += MAX_BULK;
-	if ((ret & ODPH_RING_SZ_MASK) != MAX_BULK) {
+	if ((ret & RING_SZ_MASK) != MAX_BULK) {
 		LOG_ERR("sc_deq for %d obj failed\n", MAX_BULK);
 		goto fail;
 	}
@@ -355,7 +355,7 @@ static void *test_ring(void *arg)
 {
 	ring_arg_t *parg = (ring_arg_t *)arg;
 	int thr;
-	char ring_name[ODPH_RING_NAMESIZE];
+	char ring_name[RING_NAMESIZE];
 	shm_ring_t *r;
 	int result = 0;
@@ -438,7 +438,7 @@ int main(int argc __attribute__((__unused__)),
 	rarg.thrdarg.testcase = ODP_RING_TEST_STRESS;
 	rarg.stress_type = one_enq_one_deq;
 	/* rarg.stress_type = multi_enq_multi_deq;*/
-	char ring_name[ODPH_RING_NAMESIZE];
+	char ring_name[RING_NAMESIZE];
 
 	printf("starting stess test type : %d..\n", rarg.stress_type);
 	/* create a ring */
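
For reviewers, a minimal usage sketch of the ring API as it looks after this rename (not part of the patch). The include path, the ring name and the burst size below are illustrative assumptions; ODP global/local init and odp_shm cleanup are omitted for brevity. Burst return values are masked with RING_SZ_MASK, as in the test file above.

/* Illustrative sketch only -- not part of this patch. Uses the renamed
 * identifiers from odp_ring_internal.h; init/term and cleanup omitted. */
#include <stdint.h>
#include <stdio.h>
#include <odp_ring_internal.h>	/* assumed include path for the internal ring API */

#define SKETCH_BULK 8		/* hypothetical burst size for this example */

static int ring_roundtrip(void)
{
	void *src[SKETCH_BULK], *dst[SKETCH_BULK];
	shm_ring_t *r;
	unsigned i;
	int ret;

	for (i = 0; i < SKETCH_BULK; i++)
		src[i] = (void *)(uintptr_t)(i + 1);

	/* Ring size must be a power of two and must not exceed RING_SZ_MASK;
	 * RING_F_SP_ENQ / RING_F_SC_DEQ replace the former ODPH_ flags. */
	r = shm_ring_create("sketch_ring", 1024,
			    RING_F_SP_ENQ | RING_F_SC_DEQ);
	if (r == NULL)
		return -1;

	/* Burst calls return the number of objects actually moved in the
	 * low bits, so mask with RING_SZ_MASK before comparing. */
	ret = shm_ring_sp_enqueue_burst(r, src, SKETCH_BULK);
	if ((ret & RING_SZ_MASK) != SKETCH_BULK)
		return -1;

	ret = shm_ring_sc_dequeue_burst(r, dst, SKETCH_BULK);
	if ((ret & RING_SZ_MASK) != SKETCH_BULK)
		return -1;

	printf("moved %d objects through the ring\n", SKETCH_BULK);
	return 0;
}

The fixed-size *_bulk variants keep their all-or-nothing semantics (0 on success, -ENOBUFS/-ENOENT on failure), so existing callers are affected only by the prefix change.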