From patchwork Wed Jan 21 04:57:46 2015
X-Patchwork-Submitter: Bill Fischofer
X-Patchwork-Id: 43438
From: Bill Fischofer <bill.fischofer@linaro.org>
To: lng-odp@lists.linaro.org
Date: Tue, 20 Jan 2015 22:57:46 -0600
Message-Id: <1421816266-31223-9-git-send-email-bill.fischofer@linaro.org>
In-Reply-To: <1421816266-31223-1-git-send-email-bill.fischofer@linaro.org>
References: <1421816266-31223-1-git-send-email-bill.fischofer@linaro.org>
Subject: [lng-odp] [PATCHv2 8/8] api: pools: enable use of odp_pool_create()

Enables use of the odp_pool_xxx() routines that replace the corresponding
odp_buffer_pool_xxx() routines. Note that this deletes ODP_BUFFER_TYPE_ANY.
Signed-off-by: Bill Fischofer <bill.fischofer@linaro.org>
---
 platform/linux-generic/include/api/odp_buffer.h    |   7 +-
 .../linux-generic/include/odp_buffer_inlines.h     |   2 +-
 .../include/odp_buffer_pool_internal.h             |  22 +--
 platform/linux-generic/odp_buffer_pool.c           | 152 +++++++++++++--------
 platform/linux-generic/odp_packet.c                |  16 +--
 test/validation/buffer/odp_buffer_pool_test.c      |  12 --
 6 files changed, 113 insertions(+), 98 deletions(-)

diff --git a/platform/linux-generic/include/api/odp_buffer.h b/platform/linux-generic/include/api/odp_buffer.h
index f6c2087..8ad9e9b 100644
--- a/platform/linux-generic/include/api/odp_buffer.h
+++ b/platform/linux-generic/include/api/odp_buffer.h
@@ -77,11 +77,10 @@ uint32_t odp_buffer_size(odp_buffer_t buf);
 int odp_buffer_type(odp_buffer_t buf);
 
 #define ODP_BUFFER_TYPE_INVALID (-1) /**< Buffer type invalid */
-#define ODP_BUFFER_TYPE_ANY       0 /**< Buffer that can hold any other
-					  buffer type */
+
+#define ODP_BUFFER_TYPE_PACKET    0 /**< Packet buffer */
 #define ODP_BUFFER_TYPE_RAW       1 /**< Raw buffer, no additional metadata */
-#define ODP_BUFFER_TYPE_PACKET    2 /**< Packet buffer */
-#define ODP_BUFFER_TYPE_TIMEOUT   3 /**< Timeout buffer */
+#define ODP_BUFFER_TYPE_TIMEOUT   2 /**< Timeout buffer */
 
 /**
  * Tests if buffer is valid
diff --git a/platform/linux-generic/include/odp_buffer_inlines.h b/platform/linux-generic/include/odp_buffer_inlines.h
index c120b69..6a30a07 100644
--- a/platform/linux-generic/include/odp_buffer_inlines.h
+++ b/platform/linux-generic/include/odp_buffer_inlines.h
@@ -116,7 +116,7 @@ static inline odp_buffer_hdr_t *validate_buf(odp_buffer_t buf)
 
 	/* A valid buffer index must be on stride, and must be in range */
 	if ((handle.index % buf_stride != 0) ||
-	    ((uint32_t)(handle.index / buf_stride) >= pool->s.params.num_bufs))
+	    ((uint32_t)(handle.index / buf_stride) >= pool->s.params.buf.num))
 		return NULL;
 
 	buf_hdr = (odp_buffer_hdr_t *)(void *)
diff --git a/platform/linux-generic/include/odp_buffer_pool_internal.h b/platform/linux-generic/include/odp_buffer_pool_internal.h
index 2e48ac3..d6f44d9 100644
--- a/platform/linux-generic/include/odp_buffer_pool_internal.h
+++ b/platform/linux-generic/include/odp_buffer_pool_internal.h
@@ -21,7 +21,7 @@ extern "C" {
 #include
 #include
 #include
-#include
+#include
 #include
 #include
 #include
@@ -84,10 +84,10 @@ struct pool_entry_s {
 	odp_spinlock_t          lock ODP_ALIGNED_CACHE;
 #endif
 
-	char                    name[ODP_BUFFER_POOL_NAME_LEN];
-	odp_buffer_pool_param_t params;
+	char                    name[ODP_POOL_NAME_LEN];
+	odp_pool_param_t        params;
 	_odp_buffer_pool_init_t init_params;
-	odp_buffer_pool_t       pool_hdl;
+	odp_pool_t              pool_hdl;
 	uint32_t                pool_id;
 	odp_shm_t               pool_shm;
 	union {
@@ -239,7 +239,7 @@ static inline void ret_buf(struct pool_entry_s *pool, odp_buffer_hdr_t *buf)
 	buf->allocator = ODP_FREEBUF;  /* Mark buffer free */
 
-	if (!buf->flags.hdrdata && buf->type != ODP_BUFFER_TYPE_RAW) {
+	if (!buf->flags.hdrdata && buf->type != ODP_EVENT_BUFFER) {
 		while (buf->segcount > 0) {
 			if (buffer_is_secure(buf) || pool_is_secure(pool))
 				memset(buf->addr[buf->segcount - 1],
@@ -333,12 +333,12 @@ static inline void flush_cache(local_cache_t *buf_cache,
 	buf_cache->buffrees = 0;
 }
 
-static inline odp_buffer_pool_t pool_index_to_handle(uint32_t pool_id)
+static inline odp_pool_t pool_index_to_handle(uint32_t pool_id)
 {
 	return pool_id;
 }
 
-static inline uint32_t pool_handle_to_index(odp_buffer_pool_t pool_hdl)
+static inline uint32_t pool_handle_to_index(odp_pool_t pool_hdl)
 {
 	return pool_hdl;
 }
@@ -348,7 +348,7 @@ static inline void *get_pool_entry(uint32_t pool_id)
 	return pool_entry_ptr[pool_id];
 }
 
-static inline pool_entry_t *odp_pool_to_entry(odp_buffer_pool_t pool)
+static inline pool_entry_t *odp_pool_to_entry(odp_pool_t pool)
 {
 	return (pool_entry_t *)get_pool_entry(pool_handle_to_index(pool));
 }
@@ -358,17 +358,17 @@ static inline pool_entry_t *odp_buf_to_pool(odp_buffer_hdr_t *buf)
 	return odp_pool_to_entry(buf->pool_hdl);
 }
 
-static inline uint32_t odp_buffer_pool_segment_size(odp_buffer_pool_t pool)
+static inline uint32_t odp_buffer_pool_segment_size(odp_pool_t pool)
 {
 	return odp_pool_to_entry(pool)->s.seg_size;
 }
 
-static inline uint32_t odp_buffer_pool_headroom(odp_buffer_pool_t pool)
+static inline uint32_t odp_buffer_pool_headroom(odp_pool_t pool)
 {
 	return odp_pool_to_entry(pool)->s.headroom;
 }
 
-static inline uint32_t odp_buffer_pool_tailroom(odp_buffer_pool_t pool)
+static inline uint32_t odp_buffer_pool_tailroom(odp_pool_t pool)
 {
 	return odp_pool_to_entry(pool)->s.tailroom;
 }
diff --git a/platform/linux-generic/odp_buffer_pool.c b/platform/linux-generic/odp_buffer_pool.c
index caadc7f..6a2e4eb 100644
--- a/platform/linux-generic/odp_buffer_pool.c
+++ b/platform/linux-generic/odp_buffer_pool.c
@@ -5,7 +5,7 @@
  */
 
 #include
-#include
+#include
 #include
 #include
 #include
@@ -96,14 +96,14 @@ int odp_buffer_pool_init_global(void)
 }
 
 /**
- * Buffer pool creation
+ * Pool creation
  */
-odp_buffer_pool_t odp_buffer_pool_create(const char *name,
-					 odp_shm_t shm,
-					 odp_buffer_pool_param_t *params)
+odp_pool_t odp_pool_create(const char *name,
+			   odp_shm_t shm,
+			   odp_pool_param_t *params)
 {
-	odp_buffer_pool_t pool_hdl = ODP_BUFFER_POOL_INVALID;
+	odp_pool_t pool_hdl = ODP_POOL_INVALID;
 	pool_entry_t *pool;
 	uint32_t i, headroom = 0, tailroom = 0;
@@ -117,7 +117,7 @@ odp_buffer_pool_t odp_buffer_pool_create(const char *name,
 	_odp_buffer_pool_init_t *init_params = &default_init_params;
 
 	if (params == NULL)
-		return ODP_BUFFER_POOL_INVALID;
+		return ODP_POOL_INVALID;
 
 	/* Restriction for v1.0: All non-packet buffers are unsegmented */
 	int unsegmented = 1;
@@ -131,12 +131,12 @@ odp_buffer_pool_t odp_buffer_pool_create(const char *name,
 		0;
 
 	uint32_t blk_size, buf_stride;
-	uint32_t buf_align = params->buf_align;
+	uint32_t buf_align = params->buf.align;
 
 	/* Validate requested buffer alignment */
 	if (buf_align > ODP_CONFIG_BUFFER_ALIGN_MAX ||
 	    buf_align != ODP_ALIGN_ROUNDDOWN_POWER_2(buf_align, buf_align))
-		return ODP_BUFFER_POOL_INVALID;
+		return ODP_POOL_INVALID;
 
 	/* Set correct alignment based on input request */
 	if (buf_align == 0)
@@ -145,48 +145,47 @@ odp_buffer_pool_t odp_buffer_pool_create(const char *name,
 		buf_align = ODP_CONFIG_BUFFER_ALIGN_MIN;
 
 	/* Calculate space needed for buffer blocks and metadata */
-	switch (params->buf_type) {
-	case ODP_BUFFER_TYPE_RAW:
-	case ODP_BUFFER_TYPE_TIMEOUT:
-		blk_size = params->buf_size;
+	switch (params->type) {
+	case ODP_POOL_BUFFER:
+	case ODP_POOL_TIMEOUT:
+		blk_size = params->buf.size;
 
 		/* Optimize small raw buffers */
-		if (blk_size > ODP_MAX_INLINE_BUF || params->buf_align != 0)
+		if (blk_size > ODP_MAX_INLINE_BUF || params->buf.align != 0)
 			blk_size = ODP_ALIGN_ROUNDUP(blk_size, buf_align);
 
-		buf_stride = params->buf_type == ODP_BUFFER_TYPE_RAW ?
+		buf_stride = params->type == ODP_POOL_BUFFER ?
 			sizeof(odp_buffer_hdr_stride) :
 			sizeof(odp_timeout_hdr_stride);
 		break;
 
-	case ODP_BUFFER_TYPE_PACKET:
-	case ODP_BUFFER_TYPE_ANY:
+	case ODP_POOL_PACKET:
 		headroom = ODP_CONFIG_PACKET_HEADROOM;
 		tailroom = ODP_CONFIG_PACKET_TAILROOM;
-		unsegmented = params->buf_size > ODP_CONFIG_PACKET_BUF_LEN_MAX;
+		unsegmented = params->buf.size > ODP_CONFIG_PACKET_BUF_LEN_MAX;
 
 		if (unsegmented)
 			blk_size = ODP_ALIGN_ROUNDUP(
-				headroom + params->buf_size + tailroom,
+				headroom + params->buf.size + tailroom,
 				buf_align);
 		else
 			blk_size = ODP_ALIGN_ROUNDUP(
-				headroom + params->buf_size + tailroom,
+				headroom + params->buf.size + tailroom,
 				ODP_CONFIG_PACKET_BUF_LEN_MIN);
 
-		buf_stride = params->buf_type == ODP_BUFFER_TYPE_PACKET ?
+		buf_stride = params->type == ODP_POOL_PACKET ?
 			sizeof(odp_packet_hdr_stride) :
 			sizeof(odp_any_hdr_stride);
 		break;
 
 	default:
-		return ODP_BUFFER_POOL_INVALID;
+		return ODP_POOL_INVALID;
 	}
 
 	/* Validate requested number of buffers against addressable limits */
-	if (params->num_bufs >
+	if (params->buf.num >
 	    (ODP_BUFFER_MAX_BUFFERS / (buf_stride / ODP_CACHE_LINE_SIZE)))
-		return ODP_BUFFER_POOL_INVALID;
+		return ODP_POOL_INVALID;
 
 	/* Find an unused buffer pool slot and iniitalize it as requested */
 	for (i = 0; i < ODP_CONFIG_POOLS; i++) {
@@ -207,8 +206,8 @@ odp_buffer_pool_t odp_buffer_pool_create(const char *name,
 			pool->s.name[0] = 0;
 		} else {
 			strncpy(pool->s.name, name,
-				ODP_BUFFER_POOL_NAME_LEN - 1);
-			pool->s.name[ODP_BUFFER_POOL_NAME_LEN - 1] = 0;
+				ODP_POOL_NAME_LEN - 1);
+			pool->s.name[ODP_POOL_NAME_LEN - 1] = 0;
 			pool->s.flags.has_name = 1;
 		}
@@ -221,13 +220,13 @@ odp_buffer_pool_t odp_buffer_pool_create(const char *name,
 			block_size = 0;
 			pool->s.buf_align = blk_size == 0 ? 0 : sizeof(void *);
 		} else {
-			block_size = params->num_bufs * blk_size;
+			block_size = params->buf.num * blk_size;
 			pool->s.buf_align = buf_align;
 		}
 
 		pad_size = ODP_CACHE_LINE_SIZE_ROUNDUP(block_size) - block_size;
-		mdata_size = params->num_bufs * buf_stride;
-		udata_size = params->num_bufs * udata_stride;
+		mdata_size = params->buf.num * buf_stride;
+		udata_size = params->buf.num * udata_stride;
 
 		pool->s.pool_size = ODP_PAGE_SIZE_ROUNDUP(block_size +
 							  pad_size +
@@ -240,7 +239,7 @@ odp_buffer_pool_t odp_buffer_pool_create(const char *name,
 						 ODP_PAGE_SIZE, 0);
 			if (shm == ODP_SHM_INVALID) {
 				POOL_UNLOCK(&pool->s.lock);
-				return ODP_BUFFER_POOL_INVALID;
+				return ODP_POOL_INVALID;
 			}
 			pool->s.pool_base_addr = odp_shm_addr(shm);
 		} else {
@@ -248,7 +247,7 @@ odp_buffer_pool_t odp_buffer_pool_create(const char *name,
 			if (odp_shm_info(shm, &info) != 0 ||
 			    info.size < pool->s.pool_size) {
 				POOL_UNLOCK(&pool->s.lock);
-				return ODP_BUFFER_POOL_INVALID;
+				return ODP_POOL_INVALID;
 			}
 			pool->s.pool_base_addr = odp_shm_addr(shm);
 			void *page_addr =
@@ -259,7 +258,7 @@ odp_buffer_pool_t odp_buffer_pool_create(const char *name,
 			    ((size_t)page_addr -
 			     (size_t)pool->s.pool_base_addr)) {
 				POOL_UNLOCK(&pool->s.lock);
-				return ODP_BUFFER_POOL_INVALID;
+				return ODP_POOL_INVALID;
 			}
 			pool->s.pool_base_addr = page_addr;
 		}
@@ -310,7 +309,7 @@ odp_buffer_pool_t odp_buffer_pool_create(const char *name,
 			tmp->flags.zeroized = zeroized;
 			tmp->size = 0;
 			odp_atomic_store_u32(&tmp->ref_count, 0);
-			tmp->type = params->buf_type;
+			tmp->type = params->type;
 			tmp->pool_hdl = pool->s.pool_hdl;
 			tmp->udata_addr = (void *)udat;
 			tmp->udata_size = init_params->udata_size;
@@ -365,8 +364,8 @@ odp_buffer_pool_t odp_buffer_pool_create(const char *name,
 		pool->s.tailroom = tailroom;
 
 		/* Watermarks are hard-coded for now to control caching */
-		pool->s.high_wm = params->num_bufs / 2;
-		pool->s.low_wm = params->num_bufs / 4;
+		pool->s.high_wm = params->buf.num / 2;
+		pool->s.low_wm = params->buf.num / 4;
 
 		pool_hdl = pool->s.pool_hdl;
 		break;
@@ -375,8 +374,24 @@ odp_buffer_pool_t odp_buffer_pool_create(const char *name,
 	return pool_hdl;
 }
 
+odp_pool_t odp_buffer_pool_create(const char *name,
+				  odp_shm_t shm,
+				  odp_buffer_pool_param_t *params)
+{
+	odp_pool_param_t pool_params;
+
+	if (params == NULL)
+		return ODP_POOL_INVALID;
 
-odp_buffer_pool_t odp_buffer_pool_lookup(const char *name)
+	pool_params.buf.size = params->buf_size;
+	pool_params.buf.align = params->buf_align;
+	pool_params.buf.num = params->num_bufs;
+	pool_params.type = params->buf_type;
+
+	return odp_pool_create(name, shm, &pool_params);
+}
+
+odp_pool_t odp_pool_lookup(const char *name)
 {
 	uint32_t i;
 	pool_entry_t *pool;
@@ -393,11 +408,15 @@ odp_buffer_pool_t odp_buffer_pool_lookup(const char *name)
 		POOL_UNLOCK(&pool->s.lock);
 	}
 
-	return ODP_BUFFER_POOL_INVALID;
+	return ODP_POOL_INVALID;
+}
+
+odp_pool_t odp_buffer_pool_lookup(const char *name)
+{
+	return odp_pool_lookup(name);
 }
 
-int odp_buffer_pool_info(odp_buffer_pool_t pool_hdl,
-			 odp_buffer_pool_info_t *info)
+int odp_pool_info(odp_pool_t pool_hdl, odp_pool_info_t *info)
 {
 	uint32_t pool_id = pool_handle_to_index(pool_hdl);
 	pool_entry_t *pool = get_pool_entry(pool_id);
@@ -408,15 +427,20 @@ int odp_buffer_pool_info(odp_buffer_pool_t pool_hdl,
 	info->name = pool->s.name;
 	info->shm = pool->s.flags.user_supplied_shm ?
 		pool->s.pool_shm : ODP_SHM_INVALID;
-	info->params.buf_size = pool->s.params.buf_size;
-	info->params.buf_align = pool->s.params.buf_align;
-	info->params.num_bufs = pool->s.params.num_bufs;
-	info->params.buf_type = pool->s.params.buf_type;
+	info->params.buf.size = pool->s.params.buf.size;
+	info->params.buf.align = pool->s.params.buf.align;
+	info->params.buf.num = pool->s.params.buf.num;
+	info->params.type = pool->s.params.type;
 
 	return 0;
 }
 
-int odp_buffer_pool_destroy(odp_buffer_pool_t pool_hdl)
+int odp_buffer_pool_info(odp_pool_t pool_hdl, odp_buffer_pool_info_t *info)
+{
+	return odp_pool_info(pool_hdl, (odp_pool_info_t *)info);
+}
+
+int odp_pool_destroy(odp_pool_t pool_hdl)
 {
 	uint32_t pool_id = pool_handle_to_index(pool_hdl);
 	pool_entry_t *pool = get_pool_entry(pool_id);
@@ -437,7 +461,7 @@ int odp_buffer_pool_destroy(odp_buffer_pool_t pool_hdl)
 	flush_cache(&local_cache[pool_id], &pool->s);
 
 	/* Call fails if pool has allocated buffers */
-	if (odp_atomic_load_u32(&pool->s.bufcount) < pool->s.params.num_bufs) {
+	if (odp_atomic_load_u32(&pool->s.bufcount) < pool->s.params.buf.num) {
 		POOL_UNLOCK(&pool->s.lock);
 		return -1;
 	}
@@ -451,7 +475,12 @@ int odp_buffer_pool_destroy(odp_buffer_pool_t pool_hdl)
 	return 0;
 }
 
-odp_buffer_t buffer_alloc(odp_buffer_pool_t pool_hdl, size_t size)
+int odp_buffer_pool_destroy(odp_pool_t pool_hdl)
+{
+	return odp_pool_destroy(pool_hdl);
+}
+
+odp_buffer_t buffer_alloc(odp_pool_t pool_hdl, size_t size)
 {
 	uint32_t pool_id = pool_handle_to_index(pool_hdl);
 	pool_entry_t *pool = get_pool_entry(pool_id);
@@ -494,7 +523,7 @@ odp_buffer_t buffer_alloc(odp_buffer_pool_t pool_hdl, size_t size)
 	/* By default, buffers inherit their pool's zeroization setting */
 	buf->buf.flags.zeroized = pool->s.flags.zeroized;
 
-	if (buf->buf.type == ODP_BUFFER_TYPE_PACKET) {
+	if (buf->buf.type == ODP_EVENT_PACKET) {
 		packet_init(pool, &buf->pkt, size);
 
 		if (pool->s.init_params.buf_init != NULL)
@@ -506,10 +535,10 @@ odp_buffer_t buffer_alloc(odp_buffer_pool_t pool_hdl, size_t size)
 	return odp_hdr_to_buf(&buf->buf);
 }
 
-odp_buffer_t odp_buffer_alloc(odp_buffer_pool_t pool_hdl)
+odp_buffer_t odp_buffer_alloc(odp_pool_t pool_hdl)
 {
 	return buffer_alloc(pool_hdl,
-			    odp_pool_to_entry(pool_hdl)->s.params.buf_size);
+			    odp_pool_to_entry(pool_hdl)->s.params.buf.size);
 }
 
 void odp_buffer_free(odp_buffer_t buf)
@@ -533,7 +562,7 @@ void _odp_flush_caches(void)
 	}
 }
 
-void odp_buffer_pool_print(odp_buffer_pool_t pool_hdl)
+void odp_pool_print(odp_pool_t pool_hdl)
 {
 	pool_entry_t *pool;
 	uint32_t pool_id;
@@ -558,11 +587,10 @@ void odp_buffer_pool_print(odp_buffer_pool_t pool_hdl)
 	ODP_DBG(" name %s\n",
 		pool->s.flags.has_name ? pool->s.name : "Unnamed Pool");
 	ODP_DBG(" pool type %s\n",
-		pool->s.params.buf_type == ODP_BUFFER_TYPE_RAW ? "raw" :
-		(pool->s.params.buf_type == ODP_BUFFER_TYPE_PACKET ? "packet" :
-		 (pool->s.params.buf_type == ODP_BUFFER_TYPE_TIMEOUT ? "timeout" :
-		  (pool->s.params.buf_type == ODP_BUFFER_TYPE_ANY ? "any" :
-		   "unknown"))));
+		pool->s.params.type == ODP_POOL_BUFFER ? "buffer" :
+		(pool->s.params.type == ODP_POOL_PACKET ? "packet" :
+		 (pool->s.params.type == ODP_POOL_TIMEOUT ? "timeout" :
+		  "unknown")));
 	ODP_DBG(" pool storage %sODP managed\n",
 		pool->s.flags.user_supplied_shm ?
 		"application provided, " : "");
@@ -578,14 +606,14 @@ void odp_buffer_pool_print(odp_buffer_pool_t pool_hdl)
 	ODP_DBG(" pool mdata base %p\n", pool->s.pool_mdata_addr);
 	ODP_DBG(" udata size %zu\n", pool->s.init_params.udata_size);
 	ODP_DBG(" headroom %u\n", pool->s.headroom);
-	ODP_DBG(" buf size %zu\n", pool->s.params.buf_size);
+	ODP_DBG(" buf size %zu\n", pool->s.params.buf.size);
 	ODP_DBG(" tailroom %u\n", pool->s.tailroom);
 	ODP_DBG(" buf align %u requested, %u used\n",
-		pool->s.params.buf_align, pool->s.buf_align);
-	ODP_DBG(" num bufs %u\n", pool->s.params.num_bufs);
+		pool->s.params.buf.align, pool->s.buf_align);
+	ODP_DBG(" num bufs %u\n", pool->s.params.buf.num);
 	ODP_DBG(" bufs available %u %s\n", bufcount,
 		pool->s.low_wm_assert ? " **low wm asserted**" : "");
-	ODP_DBG(" bufs in use %u\n", pool->s.params.num_bufs - bufcount);
+	ODP_DBG(" bufs in use %u\n", pool->s.params.buf.num - bufcount);
 	ODP_DBG(" buf allocs %lu\n", bufallocs);
 	ODP_DBG(" buf frees %lu\n", buffrees);
 	ODP_DBG(" buf empty %lu\n", bufempty);
@@ -601,8 +629,12 @@ void odp_buffer_pool_print(odp_buffer_pool_t pool_hdl)
 	ODP_DBG(" low wm count %lu\n", lowmct);
 }
 
+void odp_buffer_pool_print(odp_pool_t pool_hdl)
+{
+	odp_pool_print(pool_hdl);
+}
 
-odp_buffer_pool_t odp_buffer_pool(odp_buffer_t buf)
+odp_pool_t odp_buffer_pool(odp_buffer_t buf)
 {
 	return odp_buf_to_hdr(buf)->pool_hdl;
 }
diff --git a/platform/linux-generic/odp_packet.c b/platform/linux-generic/odp_packet.c
index 1345618..0cf87b5 100644
--- a/platform/linux-generic/odp_packet.c
+++ b/platform/linux-generic/odp_packet.c
@@ -26,21 +26,20 @@
  */
 odp_packet_t odp_packet_alloc(odp_pool_t pool_hdl, uint32_t len)
-odp_packet_t odp_packet_alloc(odp_buffer_pool_t pool_hdl, uint32_t len)
 {
 	pool_entry_t *pool = odp_pool_to_entry(pool_hdl);
 
-	if (pool->s.params.buf_type != ODP_BUFFER_TYPE_PACKET)
+	if (pool->s.params.type != ODP_POOL_PACKET)
 		return ODP_PACKET_INVALID;
 
 	/* Handle special case for zero-length packets */
 	if (len == 0) {
 		odp_packet_t pkt =
 			(odp_packet_t)buffer_alloc(pool_hdl,
-						   pool->s.params.buf_size);
+						   pool->s.params.buf.size);
 		if (pkt != ODP_PACKET_INVALID)
 			pull_tail(odp_packet_hdr(pkt),
-				  pool->s.params.buf_size);
+				  pool->s.params.buf.size);
 
 		return pkt;
 	}
@@ -211,7 +210,6 @@ void *odp_packet_offset(odp_packet_t pkt, uint32_t offset, uint32_t *len,
  */
 odp_pool_t odp_packet_pool(odp_packet_t pkt)
-odp_buffer_pool_t odp_packet_pool(odp_packet_t pkt)
 {
 	return odp_packet_hdr(pkt)->buf_hdr.pool_hdl;
 }
@@ -466,7 +464,6 @@ odp_packet_t odp_packet_rem_data(odp_packet_t pkt, uint32_t offset,
  */
 odp_packet_t odp_packet_copy(odp_packet_t pkt, odp_pool_t pool)
-odp_packet_t odp_packet_copy(odp_packet_t pkt, odp_buffer_pool_t pool)
 {
 	odp_packet_hdr_t *srchdr = odp_packet_hdr(pkt);
 	uint32_t pktlen = srchdr->frame_len;
@@ -584,7 +581,7 @@ int odp_packet_is_valid(odp_packet_t pkt)
 {
 	odp_buffer_hdr_t *buf = validate_buf((odp_buffer_t)pkt);
 
-	return (buf != NULL && buf->type == ODP_BUFFER_TYPE_PACKET);
+	return (buf != NULL && buf->type == ODP_EVENT_PACKET);
 }
 
 /*
@@ -627,15 +624,14 @@ int _odp_packet_copy_to_packet(odp_packet_t srcpkt, uint32_t srcoffset,
 }
 
 odp_packet_t _odp_packet_alloc(odp_pool_t pool_hdl)
-odp_packet_t _odp_packet_alloc(odp_buffer_pool_t pool_hdl)
 {
 	pool_entry_t *pool = odp_pool_to_entry(pool_hdl);
 
-	if (pool->s.params.buf_type != ODP_BUFFER_TYPE_PACKET)
+	if (pool->s.params.type != ODP_POOL_PACKET)
 		return ODP_PACKET_INVALID;
 
 	return (odp_packet_t)buffer_alloc(pool_hdl,
-					  pool->s.params.buf_size);
+					  pool->s.params.buf.size);
 }
 
 /**
diff --git a/test/validation/buffer/odp_buffer_pool_test.c b/test/validation/buffer/odp_buffer_pool_test.c
index 6f5eaf7..e14896c 100644
--- a/test/validation/buffer/odp_buffer_pool_test.c
+++ b/test/validation/buffer/odp_buffer_pool_test.c
@@ -53,11 +53,6 @@ static void pool_create_destroy_timeout(void)
 	pool_create_destroy_type(ODP_BUFFER_TYPE_TIMEOUT);
 }
 
-static void pool_create_destroy_any(void)
-{
-	pool_create_destroy_type(ODP_BUFFER_TYPE_ANY);
-}
-
 static void pool_create_destroy_raw_shm(void)
 {
 	odp_buffer_pool_t pool;
@@ -168,11 +163,6 @@ static void pool_alloc_buffer_timeout(void)
 	pool_alloc_buffer_type(ODP_BUFFER_TYPE_TIMEOUT);
 }
 
-static void pool_alloc_buffer_any(void)
-{
-	pool_alloc_buffer_type(ODP_BUFFER_TYPE_ANY);
-}
-
 static void pool_free_buffer(void)
 {
 	odp_buffer_pool_t pool;
@@ -200,13 +190,11 @@ CU_TestInfo buffer_pool_tests[] = {
 	_CU_TEST_INFO(pool_create_destroy_raw),
 	_CU_TEST_INFO(pool_create_destroy_packet),
 	_CU_TEST_INFO(pool_create_destroy_timeout),
-	_CU_TEST_INFO(pool_create_destroy_any),
 	_CU_TEST_INFO(pool_create_destroy_raw_shm),
 	_CU_TEST_INFO(pool_lookup_info_print),
 	_CU_TEST_INFO(pool_alloc_buffer_raw),
 	_CU_TEST_INFO(pool_alloc_buffer_packet),
 	_CU_TEST_INFO(pool_alloc_buffer_timeout),
-	_CU_TEST_INFO(pool_alloc_buffer_any),
 	_CU_TEST_INFO(pool_free_buffer),
 	CU_TEST_INFO_NULL,
 };