From patchwork Tue Jan 20 11:18:09 2015
X-Patchwork-Submitter: Petri Savolainen
X-Patchwork-Id: 43357
From: Petri Savolainen <petri.savolainen@linaro.org>
To: lng-odp@lists.linaro.org
Date: Tue, 20 Jan 2015 13:18:09 +0200
Message-Id: <1421752692-24439-14-git-send-email-petri.savolainen@linaro.org>
In-Reply-To: <1421752692-24439-1-git-send-email-petri.savolainen@linaro.org>
References: <1421752692-24439-1-git-send-email-petri.savolainen@linaro.org>
Subject: [lng-odp] [PATCH v2 13/16] api: pool: Rename pool params and remove buffer types

* Renamed odp_buffer_pool_param_t to odp_pool_param_t
* Moved buffer pool parameters into a "buf" struct
* Left other structs for the other types (pkt and tmo) to be added and
  implemented later
* The pool type field is common to all pool types
* Removed the buffer types; use ODP_EVENT_XXX for event types and
  ODP_POOL_XXX for pool types instead. An event type may thus not be
  associated with a pool (and may not have a corresponding pool type).
Signed-off-by: Petri Savolainen <petri.savolainen@linaro.org>
---
 example/generator/odp_generator.c               | 10 +--
 example/ipsec/odp_ipsec.c                       | 28 ++++----
 example/l2fwd/odp_l2fwd.c                       | 10 +--
 example/packet/odp_pktio.c                      | 10 +--
 example/timer/odp_timer_test.c                  | 10 +--
 platform/linux-generic/include/api/odp_pool.h   | 81 ++++++++++++++--------
 .../linux-generic/include/odp_buffer_inlines.h  |  2 +-
 .../linux-generic/include/odp_buffer_internal.h |  6 +-
 .../include/odp_buffer_pool_internal.h          |  6 +-
 platform/linux-generic/odp_buffer_pool.c        | 76 ++++++++++----------
 platform/linux-generic/odp_event.c              | 11 +--
 platform/linux-generic/odp_packet.c             | 12 ++--
 platform/linux-generic/odp_schedule.c           | 10 +--
 platform/linux-generic/odp_timer.c              |  2 +-
 test/performance/odp_scheduling.c               | 10 +--
 test/validation/buffer/odp_buffer_pool_test.c   | 66 ++++++++++--------
 test/validation/buffer/odp_buffer_test.c        | 12 ++--
 test/validation/buffer/odp_packet_test.c        | 14 ++--
 test/validation/odp_crypto.c                    | 18 ++---
 test/validation/odp_pktio.c                     | 22 +++---
 test/validation/odp_queue.c                     | 10 +--
 test/validation/odp_schedule.c                  | 10 +--
 test/validation/odp_timer.c                     | 10 +--
 23 files changed, 231 insertions(+), 215 deletions(-)

diff --git a/example/generator/odp_generator.c b/example/generator/odp_generator.c
index 5df8868..de639a4 100644
--- a/example/generator/odp_generator.c
+++ b/example/generator/odp_generator.c
@@ -545,8 +545,8 @@ int main(int argc, char *argv[])
 	int i;
 	odp_shm_t shm;
 	odp_cpumask_t cpumask;
-	odp_buffer_pool_param_t params;
 	char cpumaskstr[64];
+	odp_pool_param_t params;
 
 	/* Init ODP before calling anything else */
 	if (odp_init_global(NULL, NULL)) {
@@ -603,10 +603,10 @@ int main(int argc, char *argv[])
 	printf("cpu mask: %s\n", cpumaskstr);
 
 	/* Create packet pool */
-	params.buf_size = SHM_PKT_POOL_BUF_SIZE;
-	params.buf_align = 0;
-	params.num_bufs = SHM_PKT_POOL_SIZE/SHM_PKT_POOL_BUF_SIZE;
-	params.buf_type = ODP_BUFFER_TYPE_PACKET;
+	params.buf.size = SHM_PKT_POOL_BUF_SIZE;
+	params.buf.align = 0;
+	params.buf.num = SHM_PKT_POOL_SIZE/SHM_PKT_POOL_BUF_SIZE;
+	params.type = ODP_POOL_PACKET;
 
 	pool = odp_buffer_pool_create("packet_pool", ODP_SHM_NULL, &params);
 
diff --git a/example/ipsec/odp_ipsec.c b/example/ipsec/odp_ipsec.c
index e375396..389c106 100644
--- a/example/ipsec/odp_ipsec.c
+++ b/example/ipsec/odp_ipsec.c
@@ -366,7 +366,7 @@ static void ipsec_init_pre(void)
 {
 	odp_queue_param_t qparam;
-	odp_buffer_pool_param_t params;
+	odp_pool_param_t params;
 
 	/*
 	 * Create queues
@@ -399,10 +399,10 @@ void ipsec_init_pre(void)
 	}
 
 	/* Create output buffer pool */
-	params.buf_size = SHM_OUT_POOL_BUF_SIZE;
-	params.buf_align = 0;
-	params.num_bufs = SHM_PKT_POOL_BUF_COUNT;
-	params.buf_type = ODP_BUFFER_TYPE_PACKET;
+	params.buf.size = SHM_OUT_POOL_BUF_SIZE;
+	params.buf.align = 0;
+	params.buf.num = SHM_PKT_POOL_BUF_COUNT;
+	params.type = ODP_POOL_PACKET;
 
 	out_pool = odp_buffer_pool_create("out_pool", ODP_SHM_NULL, &params);
 
@@ -1175,8 +1175,8 @@ main(int argc, char *argv[])
 	int stream_count;
 	odp_shm_t shm;
 	odp_cpumask_t cpumask;
-	odp_buffer_pool_param_t params;
 	char cpumaskstr[64];
+	odp_pool_param_t params;
 
 	/* Init ODP before calling anything else */
 	if (odp_init_global(NULL, NULL)) {
@@ -1234,10 +1234,10 @@ main(int argc, char *argv[])
 	odp_barrier_init(&sync_barrier, num_workers);
 
 	/* Create packet buffer pool */
-	params.buf_size = SHM_PKT_POOL_BUF_SIZE;
-	params.buf_align = 0;
-	params.num_bufs = SHM_PKT_POOL_BUF_COUNT;
-	params.buf_type = ODP_BUFFER_TYPE_PACKET;
+	params.buf.size = SHM_PKT_POOL_BUF_SIZE;
+	params.buf.align = 0;
+	params.buf.num = SHM_PKT_POOL_BUF_COUNT;
+	params.type = ODP_POOL_PACKET;
 
 	pkt_pool = odp_buffer_pool_create("packet_pool", ODP_SHM_NULL, &params);
 
@@ -1248,10 +1248,10 @@ main(int argc, char *argv[])
 	}
 
 	/* Create context buffer pool */
-	params.buf_size = SHM_CTX_POOL_BUF_SIZE;
-	params.buf_align = 0;
-	params.num_bufs = SHM_CTX_POOL_BUF_COUNT;
-	params.buf_type = ODP_BUFFER_TYPE_RAW;
+	params.buf.size = SHM_CTX_POOL_BUF_SIZE;
+	params.buf.align = 0;
+	params.buf.num = SHM_CTX_POOL_BUF_COUNT;
+	params.type = ODP_POOL_BUFFER;
 
 	ctx_pool = odp_buffer_pool_create("ctx_pool", ODP_SHM_NULL, &params);
 
diff --git a/example/l2fwd/odp_l2fwd.c b/example/l2fwd/odp_l2fwd.c
index 445e869..f1c53f3 100644
--- a/example/l2fwd/odp_l2fwd.c
+++ b/example/l2fwd/odp_l2fwd.c
@@ -292,8 +292,8 @@ int main(int argc, char *argv[])
 	int num_workers;
 	odp_shm_t shm;
 	odp_cpumask_t cpumask;
-	odp_buffer_pool_param_t params;
 	char cpumaskstr[64];
+	odp_pool_param_t params;
 
 	/* Init ODP before calling anything else */
 	if (odp_init_global(NULL, NULL)) {
@@ -352,10 +352,10 @@ int main(int argc, char *argv[])
 	}
 
 	/* Create packet pool */
-	params.buf_size = SHM_PKT_POOL_BUF_SIZE;
-	params.buf_align = 0;
-	params.num_bufs = SHM_PKT_POOL_SIZE/SHM_PKT_POOL_BUF_SIZE;
-	params.buf_type = ODP_BUFFER_TYPE_PACKET;
+	params.buf.size = SHM_PKT_POOL_BUF_SIZE;
+	params.buf.align = 0;
+	params.buf.num = SHM_PKT_POOL_SIZE/SHM_PKT_POOL_BUF_SIZE;
+	params.type = ODP_POOL_PACKET;
 
 	pool = odp_buffer_pool_create("packet pool", ODP_SHM_NULL, &params);
 
diff --git a/example/packet/odp_pktio.c b/example/packet/odp_pktio.c
index e3a26db..e3b374f 100644
--- a/example/packet/odp_pktio.c
+++ b/example/packet/odp_pktio.c
@@ -281,8 +281,8 @@ int main(int argc, char *argv[])
 	int i;
 	int cpu;
 	odp_cpumask_t cpumask;
-	odp_buffer_pool_param_t params;
 	char cpumaskstr[64];
+	odp_pool_param_t params;
 
 	args = calloc(1, sizeof(args_t));
 	if (args == NULL) {
@@ -325,10 +325,10 @@ int main(int argc, char *argv[])
 	printf("cpu mask: %s\n", cpumaskstr);
 
 	/* Create packet pool */
-	params.buf_size = SHM_PKT_POOL_BUF_SIZE;
-	params.buf_align = 0;
-	params.num_bufs = SHM_PKT_POOL_SIZE/SHM_PKT_POOL_BUF_SIZE;
-	params.buf_type = ODP_BUFFER_TYPE_PACKET;
+	params.buf.size = SHM_PKT_POOL_BUF_SIZE;
+	params.buf.align = 0;
+	params.buf.num = SHM_PKT_POOL_SIZE/SHM_PKT_POOL_BUF_SIZE;
+	params.type = ODP_POOL_PACKET;
 
 	pool = odp_buffer_pool_create("packet_pool", ODP_SHM_NULL, &params);
 
diff --git a/example/timer/odp_timer_test.c b/example/timer/odp_timer_test.c
index f4fa9a2..3395123 100644
--- a/example/timer/odp_timer_test.c
+++ b/example/timer/odp_timer_test.c
@@ -304,7 +304,7 @@ int main(int argc, char *argv[])
 	odp_queue_t queue;
 	uint64_t cycles, ns;
 	odp_queue_param_t param;
-	odp_buffer_pool_param_t params;
+	odp_pool_param_t params;
 	odp_timer_pool_param_t tparams;
 	odp_timer_pool_info_t tpinfo;
 	odp_cpumask_t cpumask;
@@ -364,10 +364,10 @@ int main(int argc, char *argv[])
 	/*
 	 * Create buffer pool for timeouts
 	 */
-	params.buf_size = 0;
-	params.buf_align = 0;
-	params.num_bufs = MSG_NUM_BUFS;
-	params.buf_type = ODP_BUFFER_TYPE_TIMEOUT;
+	params.buf.size = 0;
+	params.buf.align = 0;
+	params.buf.num = MSG_NUM_BUFS;
+	params.type = ODP_POOL_TIMEOUT;
 
 	pool = odp_buffer_pool_create("msg_pool", ODP_SHM_NULL, &params);
 
diff --git a/platform/linux-generic/include/api/odp_pool.h b/platform/linux-generic/include/api/odp_pool.h
index 30a805f..1ca569b 100644
--- a/platform/linux-generic/include/api/odp_pool.h
+++ b/platform/linux-generic/include/api/odp_pool.h
@@ -23,37 +23,58 @@ extern "C" {
 #include
 #include
 #include
+#include
 
 /** @addtogroup odp_buffer
- *  Operations on a buffer pool.
+ *  Operations on a pool.
  *  @{
  */
 
 /** Maximum queue name lenght in chars */
-#define ODP_BUFFER_POOL_NAME_LEN  32
+#define ODP_POOL_NAME_LEN  32
 
 /**
- * Buffer pool parameters
- * Used to communicate buffer pool creation options.
+ * Pool parameters
+ * Used to communicate pool creation options.
  */
-typedef struct odp_buffer_pool_param_t {
-	uint32_t buf_size;  /**< Buffer size in bytes. The maximum
-			         number of bytes application will
-			         store in each buffer. For packets, this
-			         is the maximum packet data length, and
-			         configured headroom and tailroom will be
-			         added to this number */
-	uint32_t buf_align; /**< Minimum buffer alignment in bytes.
-			         Valid values are powers of two. Use 0
-			         for default alignment. Default will
-			         always be a multiple of 8. */
-	uint32_t num_bufs;  /**< Number of buffers in the pool */
-	int      buf_type;  /**< Buffer type */
-} odp_buffer_pool_param_t;
-
-#define ODP_BUFFER_TYPE_RAW     1 /**< Raw buffer, no additional metadata */
-#define ODP_BUFFER_TYPE_PACKET  2 /**< Packet buffer */
-#define ODP_BUFFER_TYPE_TIMEOUT 3 /**< Timeout buffer */
+typedef struct odp_pool_param_t {
+	union {
+		struct {
+			uint32_t size;  /**< Buffer size in bytes. The
+					     maximum number of bytes
+					     application will store in each
+					     buffer. */
+			uint32_t align; /**< Minimum buffer alignment in bytes.
+					     Valid values are powers of two.
+					     Use 0 for default alignment.
+					     Default will always be a multiple
+					     of 8. */
+			uint32_t num;   /**< Number of buffers in the pool */
+		} buf;
+/* Reserved for packet and timeout specific params
+		struct {
+			uint32_t seg_size;
+			uint32_t seg_align;
+			uint32_t num;
+		} pkt;
+		struct {
+		} tmo;
+*/
+	};
+
+	int type; /**< Pool type */
+
+} odp_pool_param_t;
+
+/**< Invalid pool type */
+#define ODP_POOL_TYPE_INVALID ODP_EVENT_TYPE_INVALID
+/**< Packet pool*/
+#define ODP_POOL_PACKET       ODP_EVENT_PACKET
+/**< Buffer pool */
+#define ODP_POOL_BUFFER       ODP_EVENT_BUFFER
+/**< Timeout pool */
+#define ODP_POOL_TIMEOUT      ODP_EVENT_TIMEOUT
+
 
 /**
  * Create a buffer pool
@@ -78,7 +99,7 @@ typedef struct odp_buffer_pool_param_t {
 odp_buffer_pool_t odp_buffer_pool_create(const char *name,
 					 odp_shm_t shm,
-					 odp_buffer_pool_param_t *params);
+					 odp_pool_param_t *params);
 
 /**
  * Destroy a buffer pool previously created by odp_buffer_pool_create()
@@ -117,13 +138,13 @@ odp_buffer_pool_t odp_buffer_pool_lookup(const char *name);
  * Used to get information about a buffer pool.
  */
 typedef struct odp_buffer_pool_info_t {
-	const char *name;                 /**< pool name */
-	odp_shm_t shm;                    /**< handle of shared memory area
-					       supplied by application to
-					       contain buffer pool, or
-					       ODP_SHM_INVALID if this pool is
-					       managed by ODP */
-	odp_buffer_pool_param_t params;   /**< pool parameters */
+	const char *name;          /**< pool name */
+	odp_shm_t shm;             /**< handle of shared memory area
+					supplied by application to
+					contain buffer pool, or
+					ODP_SHM_INVALID if this pool is
+					managed by ODP */
+	odp_pool_param_t params;   /**< pool parameters */
 } odp_buffer_pool_info_t;
 
 /**
diff --git a/platform/linux-generic/include/odp_buffer_inlines.h b/platform/linux-generic/include/odp_buffer_inlines.h
index ee264a6..000e673 100644
--- a/platform/linux-generic/include/odp_buffer_inlines.h
+++ b/platform/linux-generic/include/odp_buffer_inlines.h
@@ -116,7 +116,7 @@ static inline odp_buffer_hdr_t *validate_buf(odp_buffer_t buf)
 
 	/* A valid buffer index must be on stride, and must be in range */
 	if ((handle.index % buf_stride != 0) ||
-	    ((uint32_t)(handle.index / buf_stride) >= pool->s.params.num_bufs))
+	    ((uint32_t)(handle.index / buf_stride) >= pool->s.params.buf.num))
 		return NULL;
 
 	buf_hdr = (odp_buffer_hdr_t *)(void *)
diff --git a/platform/linux-generic/include/odp_buffer_internal.h b/platform/linux-generic/include/odp_buffer_internal.h
index c3784ba..14c32c1 100644
--- a/platform/linux-generic/include/odp_buffer_internal.h
+++ b/platform/linux-generic/include/odp_buffer_internal.h
@@ -28,6 +28,7 @@ extern "C" {
 #include
 #include
 #include
+#include
 
 #define ODP_BITSIZE(x) \
@@ -163,11 +164,6 @@ odp_buffer_t buffer_alloc(odp_buffer_pool_t pool, size_t size);
  */
 int _odp_buffer_type(odp_buffer_t buf);
 
-#define _ODP_BUFFER_TYPE_INVALID (-1) /**< Buffer type invalid */
-#define _ODP_BUFFER_TYPE_ANY      0   /**< Buffer that can hold any other
-					   buffer type */
-
-
 #ifdef __cplusplus
 }
diff --git a/platform/linux-generic/include/odp_buffer_pool_internal.h b/platform/linux-generic/include/odp_buffer_pool_internal.h
index b0e696e..4ace3c3 100644
--- a/platform/linux-generic/include/odp_buffer_pool_internal.h
+++ b/platform/linux-generic/include/odp_buffer_pool_internal.h
@@ -84,8 +84,8 @@ struct pool_entry_s {
 	odp_spinlock_t          lock ODP_ALIGNED_CACHE;
 #endif
 
-	char                    name[ODP_BUFFER_POOL_NAME_LEN];
-	odp_buffer_pool_param_t params;
+	char                    name[ODP_POOL_NAME_LEN];
+	odp_pool_param_t        params;
 	_odp_buffer_pool_init_t init_params;
 	odp_buffer_pool_t       pool_hdl;
 	uint32_t                pool_id;
@@ -239,7 +239,7 @@ static inline void ret_buf(struct pool_entry_s *pool, odp_buffer_hdr_t *buf)
 	buf->allocator = ODP_FREEBUF;  /* Mark buffer free */
 
-	if (!buf->flags.hdrdata && buf->type != ODP_BUFFER_TYPE_RAW) {
+	if (!buf->flags.hdrdata && buf->type != ODP_EVENT_BUFFER) {
 		while (buf->segcount > 0) {
 			if (buffer_is_secure(buf) || pool_is_secure(pool))
 				memset(buf->addr[buf->segcount - 1],
diff --git a/platform/linux-generic/odp_buffer_pool.c b/platform/linux-generic/odp_buffer_pool.c
index d243045..fb65c2d 100644
--- a/platform/linux-generic/odp_buffer_pool.c
+++ b/platform/linux-generic/odp_buffer_pool.c
@@ -101,7 +101,7 @@ int odp_buffer_pool_init_global(void)
 
 odp_buffer_pool_t odp_buffer_pool_create(const char *name,
 					 odp_shm_t shm,
-					 odp_buffer_pool_param_t *params)
+					 odp_pool_param_t *params)
 {
 	odp_buffer_pool_t pool_hdl = ODP_BUFFER_POOL_INVALID;
 	pool_entry_t *pool;
@@ -131,7 +131,7 @@ odp_buffer_pool_t odp_buffer_pool_create(const char *name,
 		0;
 
 	uint32_t blk_size, buf_stride;
-	uint32_t buf_align = params->buf_align;
+	uint32_t buf_align = params->buf.align;
 
 	/* Validate requested buffer alignment */
 	if (buf_align > ODP_CONFIG_BUFFER_ALIGN_MAX ||
@@ -145,36 +145,35 @@ odp_buffer_pool_t odp_buffer_pool_create(const char *name,
 		buf_align = ODP_CONFIG_BUFFER_ALIGN_MIN;
 
 	/* Calculate space needed for buffer blocks and metadata */
-	switch (params->buf_type) {
-	case ODP_BUFFER_TYPE_RAW:
-	case ODP_BUFFER_TYPE_TIMEOUT:
-		blk_size = params->buf_size;
+	switch (params->type) {
+	case ODP_POOL_BUFFER:
+	case ODP_POOL_TIMEOUT:
+		blk_size = params->buf.size;
 
 		/* Optimize small raw buffers */
-		if (blk_size > ODP_MAX_INLINE_BUF || params->buf_align != 0)
+		if (blk_size > ODP_MAX_INLINE_BUF || params->buf.align != 0)
 			blk_size = ODP_ALIGN_ROUNDUP(blk_size, buf_align);
 
-		buf_stride = params->buf_type == ODP_BUFFER_TYPE_RAW ?
+		buf_stride = params->type == ODP_POOL_BUFFER ?
 			sizeof(odp_buffer_hdr_stride) :
 			sizeof(odp_timeout_hdr_stride);
 		break;
 
-	case ODP_BUFFER_TYPE_PACKET:
-	case _ODP_BUFFER_TYPE_ANY:
+	case ODP_POOL_PACKET:
 		headroom = ODP_CONFIG_PACKET_HEADROOM;
 		tailroom = ODP_CONFIG_PACKET_TAILROOM;
-		unsegmented = params->buf_size > ODP_CONFIG_PACKET_BUF_LEN_MAX;
+		unsegmented = params->buf.size > ODP_CONFIG_PACKET_BUF_LEN_MAX;
 
 		if (unsegmented)
 			blk_size = ODP_ALIGN_ROUNDUP(
-				headroom + params->buf_size + tailroom,
+				headroom + params->buf.size + tailroom,
 				buf_align);
 		else
 			blk_size = ODP_ALIGN_ROUNDUP(
-				headroom + params->buf_size + tailroom,
+				headroom + params->buf.size + tailroom,
 				ODP_CONFIG_PACKET_BUF_LEN_MIN);
 
-		buf_stride = params->buf_type == ODP_BUFFER_TYPE_PACKET ?
+		buf_stride = params->type == ODP_POOL_PACKET ?
 			sizeof(odp_packet_hdr_stride) :
 			sizeof(odp_any_hdr_stride);
 		break;
@@ -184,7 +183,7 @@ odp_buffer_pool_t odp_buffer_pool_create(const char *name,
 	}
 
 	/* Validate requested number of buffers against addressable limits */
-	if (params->num_bufs >
+	if (params->buf.num >
 	    (ODP_BUFFER_MAX_BUFFERS / (buf_stride / ODP_CACHE_LINE_SIZE)))
 		return ODP_BUFFER_POOL_INVALID;
@@ -207,8 +206,8 @@ odp_buffer_pool_t odp_buffer_pool_create(const char *name,
 			pool->s.name[0] = 0;
 		} else {
 			strncpy(pool->s.name, name,
-				ODP_BUFFER_POOL_NAME_LEN - 1);
-			pool->s.name[ODP_BUFFER_POOL_NAME_LEN - 1] = 0;
+				ODP_POOL_NAME_LEN - 1);
+			pool->s.name[ODP_POOL_NAME_LEN - 1] = 0;
 			pool->s.flags.has_name = 1;
 		}
@@ -221,13 +220,13 @@ odp_buffer_pool_t odp_buffer_pool_create(const char *name,
 			block_size = 0;
 			pool->s.buf_align = blk_size == 0 ? 0 : sizeof(void *);
 		} else {
-			block_size = params->num_bufs * blk_size;
+			block_size = params->buf.num * blk_size;
 			pool->s.buf_align = buf_align;
 		}
 
 		pad_size = ODP_CACHE_LINE_SIZE_ROUNDUP(block_size) - block_size;
-		mdata_size = params->num_bufs * buf_stride;
-		udata_size = params->num_bufs * udata_stride;
+		mdata_size = params->buf.num * buf_stride;
+		udata_size = params->buf.num * udata_stride;
 
 		pool->s.pool_size = ODP_PAGE_SIZE_ROUNDUP(block_size +
 							  pad_size +
@@ -310,7 +309,7 @@ odp_buffer_pool_t odp_buffer_pool_create(const char *name,
 			tmp->flags.zeroized = zeroized;
 			tmp->size = 0;
 			odp_atomic_store_u32(&tmp->ref_count, 0);
-			tmp->type = params->buf_type;
+			tmp->type = params->type;
 			tmp->pool_hdl = pool->s.pool_hdl;
 			tmp->udata_addr = (void *)udat;
 			tmp->udata_size = init_params->udata_size;
@@ -365,8 +364,8 @@ odp_buffer_pool_t odp_buffer_pool_create(const char *name,
 		pool->s.tailroom = tailroom;
 
 		/* Watermarks are hard-coded for now to control caching */
-		pool->s.high_wm = params->num_bufs / 2;
-		pool->s.low_wm = params->num_bufs / 4;
+		pool->s.high_wm = params->buf.num / 2;
+		pool->s.low_wm = params->buf.num / 4;
 
 		pool_hdl = pool->s.pool_hdl;
 		break;
@@ -408,10 +407,10 @@ int odp_buffer_pool_info(odp_buffer_pool_t pool_hdl,
 	info->name = pool->s.name;
 	info->shm = pool->s.flags.user_supplied_shm ?
 		pool->s.pool_shm : ODP_SHM_INVALID;
-	info->params.buf_size = pool->s.params.buf_size;
-	info->params.buf_align = pool->s.params.buf_align;
-	info->params.num_bufs = pool->s.params.num_bufs;
-	info->params.buf_type = pool->s.params.buf_type;
+	info->params.buf.size = pool->s.params.buf.size;
+	info->params.buf.align = pool->s.params.buf.align;
+	info->params.buf.num = pool->s.params.buf.num;
+	info->params.type = pool->s.params.type;
 
 	return 0;
 }
@@ -437,7 +436,7 @@ int odp_buffer_pool_destroy(odp_buffer_pool_t pool_hdl)
 	flush_cache(&local_cache[pool_id], &pool->s);
 
 	/* Call fails if pool has allocated buffers */
-	if (odp_atomic_load_u32(&pool->s.bufcount) < pool->s.params.num_bufs) {
+	if (odp_atomic_load_u32(&pool->s.bufcount) < pool->s.params.buf.num) {
 		POOL_UNLOCK(&pool->s.lock);
 		return -1;
 	}
@@ -494,7 +493,7 @@ odp_buffer_t buffer_alloc(odp_buffer_pool_t pool_hdl, size_t size)
 	/* By default, buffers inherit their pool's zeroization setting */
 	buf->buf.flags.zeroized = pool->s.flags.zeroized;
 
-	if (buf->buf.type == ODP_BUFFER_TYPE_PACKET) {
+	if (buf->buf.type == ODP_EVENT_PACKET) {
 		packet_init(pool, &buf->pkt, size);
 
 		if (pool->s.init_params.buf_init != NULL)
@@ -509,7 +508,7 @@ odp_buffer_t buffer_alloc(odp_buffer_pool_t pool_hdl, size_t size)
 odp_buffer_t odp_buffer_alloc(odp_buffer_pool_t pool_hdl)
 {
 	return buffer_alloc(pool_hdl,
-			    odp_pool_to_entry(pool_hdl)->s.params.buf_size);
+			    odp_pool_to_entry(pool_hdl)->s.params.buf.size);
 }
 
 void odp_buffer_free(odp_buffer_t buf)
@@ -558,11 +557,10 @@ void odp_buffer_pool_print(odp_buffer_pool_t pool_hdl)
 	ODP_DBG(" name            %s\n",
 		pool->s.flags.has_name ? pool->s.name : "Unnamed Pool");
 	ODP_DBG(" pool type       %s\n",
-		pool->s.params.buf_type == ODP_BUFFER_TYPE_RAW ? "raw" :
-		(pool->s.params.buf_type == ODP_BUFFER_TYPE_PACKET ? "packet" :
-		(pool->s.params.buf_type == ODP_BUFFER_TYPE_TIMEOUT ? "timeout" :
-		(pool->s.params.buf_type == _ODP_BUFFER_TYPE_ANY ? "any" :
-		 "unknown"))));
+		pool->s.params.type == ODP_POOL_BUFFER ? "buffer" :
+		(pool->s.params.type == ODP_POOL_PACKET ? "packet" :
+		(pool->s.params.type == ODP_POOL_TIMEOUT ? "timeout" :
+		 "unknown")));
 	ODP_DBG(" pool storage    %sODP managed\n",
 		pool->s.flags.user_supplied_shm ?
 		"application provided, " : "");
@@ -578,14 +576,14 @@ void odp_buffer_pool_print(odp_buffer_pool_t pool_hdl)
 	ODP_DBG(" pool mdata base %p\n", pool->s.pool_mdata_addr);
 	ODP_DBG(" udata size      %zu\n", pool->s.init_params.udata_size);
 	ODP_DBG(" headroom        %u\n", pool->s.headroom);
-	ODP_DBG(" buf size        %zu\n", pool->s.params.buf_size);
+	ODP_DBG(" buf size        %zu\n", pool->s.params.buf.size);
 	ODP_DBG(" tailroom        %u\n", pool->s.tailroom);
 	ODP_DBG(" buf align       %u requested, %u used\n",
-		pool->s.params.buf_align, pool->s.buf_align);
-	ODP_DBG(" num bufs        %u\n", pool->s.params.num_bufs);
+		pool->s.params.buf.align, pool->s.buf_align);
+	ODP_DBG(" num bufs        %u\n", pool->s.params.buf.num);
 	ODP_DBG(" bufs available  %u %s\n", bufcount,
 		pool->s.low_wm_assert ? " **low wm asserted**" : "");
-	ODP_DBG(" bufs in use     %u\n", pool->s.params.num_bufs - bufcount);
+	ODP_DBG(" bufs in use     %u\n", pool->s.params.buf.num - bufcount);
 	ODP_DBG(" buf allocs      %lu\n", bufallocs);
 	ODP_DBG(" buf frees       %lu\n", buffrees);
 	ODP_DBG(" buf empty       %lu\n", bufempty);
diff --git a/platform/linux-generic/odp_event.c b/platform/linux-generic/odp_event.c
index f3e70b9..c4291f8 100644
--- a/platform/linux-generic/odp_event.c
+++ b/platform/linux-generic/odp_event.c
@@ -15,14 +15,5 @@ int odp_event_type(odp_event_t event)
 
 	buf = odp_buffer_from_event(event);
 
-	switch (_odp_buffer_type(buf)) {
-	case ODP_BUFFER_TYPE_RAW:
-		return ODP_EVENT_BUFFER;
-	case ODP_BUFFER_TYPE_PACKET:
-		return ODP_EVENT_PACKET;
-	case ODP_BUFFER_TYPE_TIMEOUT:
-		return ODP_EVENT_TIMEOUT;
-	default:
-		return ODP_EVENT_TYPE_INVALID;
-	}
+	return _odp_buffer_type(buf);
 }
diff --git a/platform/linux-generic/odp_packet.c b/platform/linux-generic/odp_packet.c
index 3ce6f47..6b38b80 100644
--- a/platform/linux-generic/odp_packet.c
+++ b/platform/linux-generic/odp_packet.c
@@ -29,17 +29,17 @@ odp_packet_t odp_packet_alloc(odp_buffer_pool_t pool_hdl, uint32_t len)
 {
 	pool_entry_t *pool = odp_pool_to_entry(pool_hdl);
 
-	if (pool->s.params.buf_type != ODP_BUFFER_TYPE_PACKET)
+	if (pool->s.params.type != ODP_POOL_PACKET)
 		return ODP_PACKET_INVALID;
 
 	/* Handle special case for zero-length packets */
 	if (len == 0) {
 		odp_packet_t pkt =
 			(odp_packet_t)buffer_alloc(pool_hdl,
-						   pool->s.params.buf_size);
+						   pool->s.params.buf.size);
 		if (pkt != ODP_PACKET_INVALID)
 			pull_tail(odp_packet_hdr(pkt),
-				  pool->s.params.buf_size);
+				  pool->s.params.buf.size);
 
 		return pkt;
 	}
@@ -581,7 +581,7 @@ int odp_packet_is_valid(odp_packet_t pkt)
 {
 	odp_buffer_hdr_t *buf = validate_buf((odp_buffer_t)pkt);
 
-	return (buf != NULL && buf->type == ODP_BUFFER_TYPE_PACKET);
+	return (buf != NULL && buf->type == ODP_EVENT_PACKET);
 }
 
 /*
@@ -627,11 +627,11 @@ odp_packet_t _odp_packet_alloc(odp_buffer_pool_t pool_hdl)
 {
 	pool_entry_t *pool = odp_pool_to_entry(pool_hdl);
 
-	if (pool->s.params.buf_type != ODP_BUFFER_TYPE_PACKET)
+	if (pool->s.params.type != ODP_POOL_PACKET)
 		return ODP_PACKET_INVALID;
 
 	return (odp_packet_t)buffer_alloc(pool_hdl,
-					  pool->s.params.buf_size);
+					  pool->s.params.buf.size);
 }
 
 /**
diff --git a/platform/linux-generic/odp_schedule.c b/platform/linux-generic/odp_schedule.c
index 9ca08b7..2f6021d 100644
--- a/platform/linux-generic/odp_schedule.c
+++ b/platform/linux-generic/odp_schedule.c
@@ -85,7 +85,7 @@ int odp_schedule_init_global(void)
 	odp_shm_t shm;
 	odp_buffer_pool_t pool;
 	int i, j;
-	odp_buffer_pool_param_t params;
+	odp_pool_param_t params;
 
 	ODP_DBG("Schedule init ... ");
@@ -100,10 +100,10 @@ int odp_schedule_init_global(void)
 		return -1;
 	}
 
-	params.buf_size = sizeof(queue_desc_t);
-	params.buf_align = 0;
-	params.num_bufs = SCHED_POOL_SIZE/sizeof(queue_desc_t);
-	params.buf_type = ODP_BUFFER_TYPE_RAW;
+	params.buf.size = sizeof(queue_desc_t);
+	params.buf.align = 0;
+	params.buf.num = SCHED_POOL_SIZE/sizeof(queue_desc_t);
+	params.type = ODP_POOL_BUFFER;
 
 	pool = odp_buffer_pool_create("odp_sched_pool", ODP_SHM_NULL, &params);
 
diff --git a/platform/linux-generic/odp_timer.c b/platform/linux-generic/odp_timer.c
index 29027e6..3194310 100644
--- a/platform/linux-generic/odp_timer.c
+++ b/platform/linux-generic/odp_timer.c
@@ -557,7 +557,7 @@ static unsigned timer_expire(odp_timer_pool *tp, uint32_t idx, uint64_t tick)
 #endif
 	if (odp_likely(tmo_buf != ODP_BUFFER_INVALID)) {
 		/* Fill in metadata fields in system timeout buffer */
-		if (_odp_buffer_type(tmo_buf) == ODP_BUFFER_TYPE_TIMEOUT) {
+		if (_odp_buffer_type(tmo_buf) == ODP_EVENT_TIMEOUT) {
 			/* Convert from buffer to timeout hdr */
 			odp_timeout_hdr_t *tmo_hdr =
 				timeout_hdr_from_buf(tmo_buf);
diff --git a/test/performance/odp_scheduling.c b/test/performance/odp_scheduling.c
index 9f6eca1..bf6cf10 100644
--- a/test/performance/odp_scheduling.c
+++ b/test/performance/odp_scheduling.c
@@ -828,8 +828,8 @@ int main(int argc, char *argv[])
 	int prios;
 	odp_shm_t shm;
 	test_globals_t *globals;
-	odp_buffer_pool_param_t params;
 	char cpumaskstr[64];
+	odp_pool_param_t params;
 
 	printf("\nODP example starts\n\n");
@@ -904,10 +904,10 @@ int main(int argc, char *argv[])
 	 * Create message pool
 	 */
 
-	params.buf_size = sizeof(test_message_t);
-	params.buf_align = 0;
-	params.num_bufs = MSG_POOL_SIZE/sizeof(test_message_t);
-	params.buf_type = ODP_BUFFER_TYPE_RAW;
+	params.buf.size = sizeof(test_message_t);
+	params.buf.align = 0;
+	params.buf.num = MSG_POOL_SIZE/sizeof(test_message_t);
+	params.type = ODP_POOL_BUFFER;
 
 	pool = odp_buffer_pool_create("msg_pool", ODP_SHM_NULL, &params);
 
diff --git a/test/validation/buffer/odp_buffer_pool_test.c b/test/validation/buffer/odp_buffer_pool_test.c
index 850f856..4960693 100644
--- a/test/validation/buffer/odp_buffer_pool_test.c
+++ b/test/validation/buffer/odp_buffer_pool_test.c
@@ -13,12 +13,14 @@ static const int default_buffer_num = 1000;
 odp_buffer_pool_t pool_create(int buf_num, int buf_size, int buf_type)
 {
 	odp_buffer_pool_t pool;
-	char pool_name[ODP_BUFFER_POOL_NAME_LEN];
-	odp_buffer_pool_param_t params = {
-		.buf_size = buf_size,
-		.buf_align = ODP_CACHE_LINE_SIZE,
-		.num_bufs = buf_num,
-		.buf_type = buf_type,
+	char pool_name[ODP_POOL_NAME_LEN];
+	odp_pool_param_t params = {
+		.buf = {
+			.size = buf_size,
+			.align = ODP_CACHE_LINE_SIZE,
+			.num = buf_num,
+		},
+		.type = buf_type,
 	};
 
 	snprintf(pool_name, sizeof(pool_name),
@@ -40,32 +42,34 @@ static void pool_create_destroy_type(int type)
 
 static void pool_create_destroy_raw(void)
 {
-	pool_create_destroy_type(ODP_BUFFER_TYPE_RAW);
+	pool_create_destroy_type(ODP_POOL_BUFFER);
 }
 
 static void pool_create_destroy_packet(void)
 {
-	pool_create_destroy_type(ODP_BUFFER_TYPE_PACKET);
+	pool_create_destroy_type(ODP_POOL_PACKET);
 }
 
 static void pool_create_destroy_timeout(void)
 {
-	pool_create_destroy_type(ODP_BUFFER_TYPE_TIMEOUT);
+	pool_create_destroy_type(ODP_POOL_TIMEOUT);
 }
 
 static void pool_create_destroy_raw_shm(void)
 {
 	odp_buffer_pool_t pool;
 	odp_shm_t test_shm;
-	odp_buffer_pool_param_t params = {
-		.buf_size = 1500,
-		.buf_align = ODP_CACHE_LINE_SIZE,
-		.num_bufs = 10,
-		.buf_type = ODP_BUFFER_TYPE_RAW,
+	odp_pool_param_t params = {
+		.buf = {
+			.size = 1500,
+			.align = ODP_CACHE_LINE_SIZE,
+			.num = 10,
+		},
+		.type = ODP_POOL_BUFFER,
 	};
 
 	test_shm = odp_shm_reserve("test_shm",
-				   params.buf_size * params.num_bufs * 2,
+				   params.buf.size * params.buf.num * 2,
 				   ODP_CACHE_LINE_SIZE, 0);
 	CU_ASSERT_FATAL(test_shm != ODP_SHM_INVALID);
@@ -82,11 +86,13 @@ static void pool_lookup_info_print(void)
 	odp_buffer_pool_t pool;
 	const char pool_name[] = "pool_for_lookup_test";
 	odp_buffer_pool_info_t info;
-	odp_buffer_pool_param_t params = {
-		.buf_size = default_buffer_size,
-		.buf_align = ODP_CACHE_LINE_SIZE,
-		.num_bufs = default_buffer_num,
-		.buf_type = ODP_BUFFER_TYPE_RAW,
+	odp_pool_param_t params = {
+		.buf = {
+			.size = default_buffer_size,
+			.align = ODP_CACHE_LINE_SIZE,
+			.num = default_buffer_num,
+		},
+		.type = ODP_POOL_BUFFER,
 	};
 
 	pool = odp_buffer_pool_create(pool_name, ODP_SHM_INVALID, &params);
@@ -98,10 +104,10 @@ static void pool_lookup_info_print(void)
 	CU_ASSERT_FATAL(odp_buffer_pool_info(pool, &info) == 0);
 	CU_ASSERT(strncmp(pool_name, info.name, sizeof(pool_name)) == 0);
 	CU_ASSERT(info.shm == ODP_SHM_INVALID);
-	CU_ASSERT(params.buf_size <= info.params.buf_size);
-	CU_ASSERT(params.buf_align <= info.params.buf_align);
-	CU_ASSERT(params.num_bufs <= info.params.num_bufs);
-	CU_ASSERT(params.buf_type == info.params.buf_type);
+	CU_ASSERT(params.buf.size <= info.params.buf.size);
+	CU_ASSERT(params.buf.align <= info.params.buf.align);
+	CU_ASSERT(params.buf.num <= info.params.buf.num);
+	CU_ASSERT(params.type == info.params.type);
 
 	odp_buffer_pool_print(pool);
@@ -125,7 +131,7 @@ static void pool_alloc_type(int type)
 	/* Try to allocate num items from the pool */
 	for (index = 0; index < num; index++) {
 		switch (type) {
-		case ODP_BUFFER_TYPE_RAW:
+		case ODP_POOL_BUFFER:
 			buffer[index] = odp_buffer_alloc(pool);
 
 			if (buffer[index] == ODP_BUFFER_INVALID)
@@ -140,7 +146,7 @@ static void pool_alloc_type(int type)
 				odp_buffer_print(buffer[index]);
 			break;
 
-		case ODP_BUFFER_TYPE_PACKET:
+		case ODP_POOL_PACKET:
 			packet[index] = odp_packet_alloc(pool, size);
 
 			if (packet[index] == ODP_PACKET_INVALID)
@@ -176,24 +182,24 @@ static void pool_alloc_type(int type)
 
 static void pool_alloc_buffer_raw(void)
 {
-	pool_alloc_type(ODP_BUFFER_TYPE_RAW);
+	pool_alloc_type(ODP_POOL_BUFFER);
 }
 
 static void pool_alloc_buffer_packet(void)
 {
-	pool_alloc_type(ODP_BUFFER_TYPE_PACKET);
+	pool_alloc_type(ODP_POOL_PACKET);
 }
 
 static void pool_alloc_buffer_timeout(void)
 {
-	pool_alloc_type(ODP_BUFFER_TYPE_TIMEOUT);
+	pool_alloc_type(ODP_POOL_TIMEOUT);
 }
 
 static void pool_free_buffer(void)
 {
 	odp_buffer_pool_t pool;
 	odp_buffer_t buffer;
-	pool = pool_create(1, 64, ODP_BUFFER_TYPE_RAW);
+	pool = pool_create(1, 64, ODP_POOL_BUFFER);
 
 	/* Allocate the only buffer from the pool */
 	buffer = odp_buffer_alloc(pool);
diff --git a/test/validation/buffer/odp_buffer_test.c b/test/validation/buffer/odp_buffer_test.c
index 9fd5bb8..b004a94 100644
--- a/test/validation/buffer/odp_buffer_test.c
+++ b/test/validation/buffer/odp_buffer_test.c
@@ -12,11 +12,13 @@ static const size_t raw_buffer_size = 1500;
 
 int buffer_testsuite_init(void)
 {
-	odp_buffer_pool_param_t params = {
-		.buf_size = raw_buffer_size,
-		.buf_align = ODP_CACHE_LINE_SIZE,
-		.num_bufs = 100,
-		.buf_type = ODP_BUFFER_TYPE_RAW,
+	odp_pool_param_t params = {
+		.buf = {
+			.size = raw_buffer_size,
+			.align = ODP_CACHE_LINE_SIZE,
+			.num = 100,
+		},
+		.type = ODP_POOL_BUFFER,
 	};
 
 	raw_pool = odp_buffer_pool_create("raw_pool", ODP_SHM_INVALID, &params);
diff --git a/test/validation/buffer/odp_packet_test.c b/test/validation/buffer/odp_packet_test.c
index 6e44443..19e05c3 100644
--- a/test/validation/buffer/odp_packet_test.c
+++ b/test/validation/buffer/odp_packet_test.c
@@ -21,11 +21,13 @@ odp_packet_t test_packet;
 
 int packet_testsuite_init(void)
 {
-	odp_buffer_pool_param_t params = {
-		.buf_size = PACKET_BUF_LEN,
-		.buf_align = ODP_CACHE_LINE_SIZE,
-		.num_bufs = 100,
-		.buf_type = ODP_BUFFER_TYPE_PACKET,
+	odp_pool_param_t params = {
+		.buf = {
+			.size = PACKET_BUF_LEN,
+			.align = ODP_CACHE_LINE_SIZE,
+			.num = 100,
+		},
+		.type = ODP_POOL_PACKET,
 	};
 
 	packet_pool = odp_buffer_pool_create("packet_pool", ODP_SHM_INVALID,
@@ -52,7 +54,7 @@ static void packet_alloc_free(void)
 {
 	odp_buffer_pool_t pool;
 	odp_packet_t packet;
-	pool = pool_create(1, PACKET_BUF_LEN, ODP_BUFFER_TYPE_PACKET);
+	pool = pool_create(1, PACKET_BUF_LEN, ODP_POOL_PACKET);
 
 	/* Allocate the only buffer from the pool */
 	packet = odp_packet_alloc(pool, packet_len);
diff --git a/test/validation/odp_crypto.c b/test/validation/odp_crypto.c
index 72cf0f0..64b9a4f 100644
--- a/test/validation/odp_crypto.c
+++ b/test/validation/odp_crypto.c
@@ -25,14 +25,14 @@ CU_SuiteInfo odp_testsuites[] = {
 
 int tests_global_init(void)
 {
-	odp_buffer_pool_param_t params;
+	odp_pool_param_t params;
 	odp_buffer_pool_t pool;
 	odp_queue_t out_queue;
 
-	params.buf_size = SHM_PKT_POOL_BUF_SIZE;
-	params.buf_align = 0;
-	params.num_bufs = SHM_PKT_POOL_SIZE/SHM_PKT_POOL_BUF_SIZE;
-	params.buf_type = ODP_BUFFER_TYPE_PACKET;
+	params.buf.size = SHM_PKT_POOL_BUF_SIZE;
+	params.buf.align = 0;
+	params.buf.num = SHM_PKT_POOL_SIZE/SHM_PKT_POOL_BUF_SIZE;
+	params.type = ODP_POOL_BUFFER;
 
 	pool = odp_buffer_pool_create("packet_pool", ODP_SHM_NULL, &params);
 
@@ -47,10 +47,10 @@ int tests_global_init(void)
 		return -1;
 	}
 
-	params.buf_size = SHM_COMPL_POOL_BUF_SIZE;
-	params.buf_align = 0;
-	params.num_bufs = SHM_COMPL_POOL_SIZE/SHM_COMPL_POOL_BUF_SIZE;
-	params.buf_type = ODP_BUFFER_TYPE_RAW;
+	params.buf.size = SHM_COMPL_POOL_BUF_SIZE;
+	params.buf.align = 0;
+	params.buf.num = SHM_COMPL_POOL_SIZE/SHM_COMPL_POOL_BUF_SIZE;
+	params.type = ODP_POOL_BUFFER;
 
 	pool = odp_buffer_pool_create("compl_pool", ODP_SHM_NULL, &params);
 
diff --git a/test/validation/odp_pktio.c b/test/validation/odp_pktio.c
index
2e7b50b..70704f4 100644 --- a/test/validation/odp_pktio.c +++ b/test/validation/odp_pktio.c @@ -180,15 +180,15 @@ static int pktio_fixup_checksums(odp_packet_t pkt) static int default_pool_create(void) { - odp_buffer_pool_param_t params; + odp_pool_param_t params; if (default_pkt_pool != ODP_BUFFER_POOL_INVALID) return -1; - params.buf_size = PKT_BUF_SIZE; - params.buf_align = 0; - params.num_bufs = PKT_BUF_NUM; - params.buf_type = ODP_BUFFER_TYPE_PACKET; + params.buf.size = PKT_BUF_SIZE; + params.buf.align = 0; + params.buf.num = PKT_BUF_NUM; + params.type = ODP_POOL_PACKET; default_pkt_pool = odp_buffer_pool_create("pkt_pool_default", ODP_SHM_NULL, ¶ms); @@ -202,13 +202,13 @@ static odp_pktio_t create_pktio(const char *iface) { odp_buffer_pool_t pool; odp_pktio_t pktio; - char pool_name[ODP_BUFFER_POOL_NAME_LEN]; - odp_buffer_pool_param_t params; + char pool_name[ODP_POOL_NAME_LEN]; + odp_pool_param_t params; - params.buf_size = PKT_BUF_SIZE; - params.buf_align = 0; - params.num_bufs = PKT_BUF_NUM; - params.buf_type = ODP_BUFFER_TYPE_PACKET; + params.buf.size = PKT_BUF_SIZE; + params.buf.align = 0; + params.buf.num = PKT_BUF_NUM; + params.type = ODP_POOL_PACKET; snprintf(pool_name, sizeof(pool_name), "pkt_pool_%s", iface); pool = odp_buffer_pool_lookup(pool_name); diff --git a/test/validation/odp_queue.c b/test/validation/odp_queue.c index 07e37ee..2863bef 100644 --- a/test/validation/odp_queue.c +++ b/test/validation/odp_queue.c @@ -16,12 +16,12 @@ static int queue_contest = 0xff; static int init_queue_suite(void) { odp_buffer_pool_t pool; - odp_buffer_pool_param_t params; + odp_pool_param_t params; - params.buf_size = 0; - params.buf_align = ODP_CACHE_LINE_SIZE; - params.num_bufs = 1024 * 10; - params.buf_type = ODP_BUFFER_TYPE_RAW; + params.buf.size = 0; + params.buf.align = ODP_CACHE_LINE_SIZE; + params.buf.num = 1024 * 10; + params.type = ODP_POOL_BUFFER; pool = odp_buffer_pool_create("msg_pool", ODP_SHM_NULL, ¶ms); diff --git 
a/test/validation/odp_schedule.c b/test/validation/odp_schedule.c index 74f9b6f..b2502f7 100644 --- a/test/validation/odp_schedule.c +++ b/test/validation/odp_schedule.c @@ -519,12 +519,12 @@ static int schd_suite_init(void) odp_buffer_pool_t pool; test_globals_t *globals; thread_args_t *thr_args; - odp_buffer_pool_param_t params; + odp_pool_param_t params; - params.buf_size = BUF_SIZE; - params.buf_align = 0; - params.num_bufs = MSG_POOL_SIZE/BUF_SIZE; - params.buf_type = ODP_BUFFER_TYPE_RAW; + params.buf.size = BUF_SIZE; + params.buf.align = 0; + params.buf.num = MSG_POOL_SIZE/BUF_SIZE; + params.type = ODP_POOL_BUFFER; pool = odp_buffer_pool_create(MSG_POOL_NAME, ODP_SHM_NULL, ¶ms); diff --git a/test/validation/odp_timer.c b/test/validation/odp_timer.c index 4e4cc5b..25c85c2 100644 --- a/test/validation/odp_timer.c +++ b/test/validation/odp_timer.c @@ -256,17 +256,17 @@ static void *worker_entrypoint(void *arg) /* @private Timer test case entrypoint */ static void test_odp_timer_all(void) { - odp_buffer_pool_param_t params; + odp_pool_param_t params; odp_timer_pool_param_t tparam; /* This is a stressfull test - need to reserve some cpu cycles * @TODO move to test/performance */ int num_workers = min(odp_sys_cpu_count()-1, MAX_WORKERS); /* Create timeout buffer pools */ - params.buf_size = 0; - params.buf_align = ODP_CACHE_LINE_SIZE; - params.num_bufs = (NTIMERS + 1) * num_workers; - params.buf_type = ODP_BUFFER_TYPE_TIMEOUT; + params.buf.size = 0; + params.buf.align = ODP_CACHE_LINE_SIZE; + params.buf.num = (NTIMERS + 1) * num_workers; + params.type = ODP_POOL_TIMEOUT; tbp = odp_buffer_pool_create("tmo_pool", ODP_SHM_INVALID, ¶ms); if (tbp == ODP_BUFFER_POOL_INVALID) CU_FAIL_FATAL("Timeout buffer pool create failed");