
ODP buffer pool restructure for v1.0 APIs

Message ID 1417398455-15967-1-git-send-email-bill.fischofer@linaro.org
State New

Commit Message

Bill Fischofer Dec. 1, 2014, 1:47 a.m. UTC
ODP buffer pool restructure to enable support for v1.0 APIs.
Implements the following revised/new APIs:

 odp_buffer_pool_create()
 odp_buffer_pool_destroy()
 odp_buffer_pool_info()

Signed-off-by: Bill Fischofer <bill.fischofer@linaro.org>
---
 example/generator/odp_generator.c                  |  19 +-
 example/ipsec/odp_ipsec.c                          |  57 +-
 example/l2fwd/odp_l2fwd.c                          |  19 +-
 example/odp_example/odp_example.c                  |  18 +-
 example/packet/odp_pktio.c                         |  19 +-
 example/timer/odp_timer_test.c                     |  13 +-
 .../linux-generic/include/api/odp_buffer_pool.h    | 105 ++-
 platform/linux-generic/include/api/odp_config.h    |   6 +
 .../linux-generic/include/api/odp_platform_types.h |   9 +
 .../linux-generic/include/api/odp_shared_memory.h  |  10 +-
 .../linux-generic/include/odp_buffer_inlines.h     | 157 +++++
 .../linux-generic/include/odp_buffer_internal.h    | 136 ++--
 .../include/odp_buffer_pool_internal.h             | 278 ++++++--
 .../linux-generic/include/odp_packet_internal.h    |  50 +-
 .../linux-generic/include/odp_timer_internal.h     |  11 +-
 platform/linux-generic/odp_buffer.c                |  31 +-
 platform/linux-generic/odp_buffer_pool.c           | 740 ++++++++++-----------
 platform/linux-generic/odp_packet.c                |  41 +-
 platform/linux-generic/odp_queue.c                 |   1 +
 platform/linux-generic/odp_schedule.c              |  20 +-
 platform/linux-generic/odp_timer.c                 |   3 +-
 test/api_test/odp_timer_ping.c                     |  19 +-
 test/validation/odp_crypto.c                       |  43 +-
 test/validation/odp_queue.c                        |  19 +-
 24 files changed, 1068 insertions(+), 756 deletions(-)
 create mode 100644 platform/linux-generic/include/odp_buffer_inlines.h

Comments

Anders Roxell Dec. 1, 2014, 9:27 p.m. UTC | #1
Drop "for v1.0 APIs"

You introduced some warnings that we didn't have:
odp_config.h:55: warning: Member ODP_CONFIG_BUF_MAX_SIZE (macro definition) of group odp_compiler_optim is not documented.
odp_config.h:55: warning: Member ODP_CONFIG_BUF_MAX_SIZE (macro definition) of group odp_compiler_optim is not documented.
odp_buffer_pool.h:127: warning: Member name (variable) of class odp_buffer_pool_info_t is not documented.
odp_buffer_pool.h:128: warning: Member params (variable) of class odp_buffer_pool_info_t is not documented.
odp_buffer_pool.h:55: warning: Member buf_size (variable) of class odp_buffer_pool_param_t is not documented.
odp_buffer_pool.h:56: warning: Member buf_align (variable) of class odp_buffer_pool_param_t is not documented.
odp_buffer_pool.h:57: warning: Member num_bufs (variable) of class odp_buffer_pool_param_t is not documented.
odp_buffer_pool.h:58: warning: Member buf_type (variable) of class odp_buffer_pool_param_t is not documented.


On 2014-11-30 19:47, Bill Fischofer wrote:
> ODP buffer pool restricture to enable support for v1.0 APIs.

Remove, doesn't add any value.

> Implements the following revised/new APIs:
> 
>  odp_buffer_pool_create()
>  odp_buffer_pool_destroy()
>  odp_buffer_pool_info()

We said it during last week's call: one patch per API.

Please split this into three patches.

Cheers,
Anders
Bill Fischofer Dec. 2, 2014, 12:15 a.m. UTC | #2
When I run the version of checkpatch that's part of odp.git I get this:

bill@Ubuntu13:~/linaro/v10bufpool$ ./scripts/checkpatch.pl *.patch
total: 0 errors, 0 warnings, 0 checks, 2392 lines checked

NOTE: Ignored message types: DEPRECATED_VARIABLE NEW_TYPEDEFS

0001-ODP-buffer-pool-restructure-for-v1.0-APIs.patch has no obvious style
problems and is ready for submission.
bill@Ubuntu13:~/linaro/v10bufpool$

What version of checkpatch are you running?

The two other routines mentioned represent approximately 30 lines of a
1000+ line patch.  If there's consensus that combining them is a barrier to
timely review of this patch I can split things up, but this seems like busy
work to me as we're not trying to support cherry-picking bits and pieces of
the approved APIs.

Bill

On Mon, Dec 1, 2014 at 4:27 PM, Anders Roxell <anders.roxell@linaro.org>
wrote:

> Drop "for v1.0 APIs"
>
> You introduced some warnings that we didn't have:
> odp_config.h:55: warning: Member ODP_CONFIG_BUF_MAX_SIZE (macro
> definition) of group odp_compiler_optim is not documented.
> odp_config.h:55: warning: Member ODP_CONFIG_BUF_MAX_SIZE (macro
> definition) of group odp_compiler_optim is not documented.
> odp_buffer_pool.h:127: warning: Member name (variable) of class
> odp_buffer_pool_info_t is not documented.
> odp_buffer_pool.h:128: warning: Member params (variable) of class
> odp_buffer_pool_info_t is not documented.
> odp_buffer_pool.h:55: warning: Member buf_size (variable) of class
> odp_buffer_pool_param_t is not documented.
> odp_buffer_pool.h:56: warning: Member buf_align (variable) of class
> odp_buffer_pool_param_t is not documented.
> odp_buffer_pool.h:57: warning: Member num_bufs (variable) of class
> odp_buffer_pool_param_t is not documented.
> odp_buffer_pool.h:58: warning: Member buf_type (variable) of class
> odp_buffer_pool_param_t is not documented.
>
>
> On 2014-11-30 19:47, Bill Fischofer wrote:
> > ODP buffer pool restricture to enable support for v1.0 APIs.
>
> Remove, doesn't add any value.
>
> > Implements the following revised/new APIs:
> >
> >  odp_buffer_pool_create()
> >  odp_buffer_pool_destroy()
> >  odp_buffer_pool_info()
>
> We said it during last weeks call, one patch per API.
>
> Please split this into three patches.
>
> Cheers,
> Anders
>
Anders Roxell Dec. 2, 2014, 6:59 a.m. UTC | #3
On 2 December 2014 at 01:15, Bill Fischofer <bill.fischofer@linaro.org> wrote:
> When I run the version of checkpatch that's part of odp.git I get this:
>
> bill@Ubuntu13:~/linaro/v10bufpool$ ./scripts/checkpatch.pl *.patch
> total: 0 errors, 0 warnings, 0 checks, 2392 lines checked
>
> NOTE: Ignored message types: DEPRECATED_VARIABLE NEW_TYPEDEFS
>
> 0001-ODP-buffer-pool-restructure-for-v1.0-APIs.patch has no obvious style
> problems and is ready for submission.
> bill@Ubuntu13:~/linaro/v10bufpool$
>
> What version of checkpatch are you running?

It's not checkpatch that gives these warnings.
The warnings say that you haven't documented some members; it's doxygen
that produces them.

>
> The two other routines mentioned represent approximately 30 lines of a 1000+
> line patch.  If there's consensus that combining them is a barrier to timely
> review of this patch I can split things up, but this seems like busy work to
> me as we're not trying to support cherry-picking bits and pieces of the
> approved APIs.

That's great, then it will only take you 15 minutes to split the patch up
into three patches!


Go through all the doxygen comments and change "@return" to "@retval",
and if a function can return success or failure, split the documentation
into multiple lines and explain how it can fail.

Cheers,
Anders

>
> Bill
>
> On Mon, Dec 1, 2014 at 4:27 PM, Anders Roxell <anders.roxell@linaro.org>
> wrote:
>>
>> Drop "for v1.0 APIs"
>>
>> You introduced some warnings that we didn't have:
>> odp_config.h:55: warning: Member ODP_CONFIG_BUF_MAX_SIZE (macro
>> definition) of group odp_compiler_optim is not documented.
>> odp_config.h:55: warning: Member ODP_CONFIG_BUF_MAX_SIZE (macro
>> definition) of group odp_compiler_optim is not documented.
>> odp_buffer_pool.h:127: warning: Member name (variable) of class
>> odp_buffer_pool_info_t is not documented.
>> odp_buffer_pool.h:128: warning: Member params (variable) of class
>> odp_buffer_pool_info_t is not documented.
>> odp_buffer_pool.h:55: warning: Member buf_size (variable) of class
>> odp_buffer_pool_param_t is not documented.
>> odp_buffer_pool.h:56: warning: Member buf_align (variable) of class
>> odp_buffer_pool_param_t is not documented.
>> odp_buffer_pool.h:57: warning: Member num_bufs (variable) of class
>> odp_buffer_pool_param_t is not documented.
>> odp_buffer_pool.h:58: warning: Member buf_type (variable) of class
>> odp_buffer_pool_param_t is not documented.
>>
>>
>> On 2014-11-30 19:47, Bill Fischofer wrote:
>> > ODP buffer pool restricture to enable support for v1.0 APIs.
>>
>> Remove, doesn't add any value.
>>
>> > Implements the following revised/new APIs:
>> >
>> >  odp_buffer_pool_create()
>> >  odp_buffer_pool_destroy()
>> >  odp_buffer_pool_info()
>>
>> We said it during last weeks call, one patch per API.
>>
>> Please split this into three patches.
>>
>> Cheers,
>> Anders
>
>
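Anders' @retval suggestion above amounts to rewriting the return documentation along these lines (a hypothetical sketch, not text from the patch):

```c
/**
 * Destroy a buffer pool previously created by odp_buffer_pool_create()
 *
 * @param[in] pool  Handle of the buffer pool to be destroyed
 *
 * @retval 0   Success
 * @retval -1  Failure: the pool handle is invalid, the pool still
 *             contains allocated buffers, or the pool is predefined
 */
int odp_buffer_pool_destroy(odp_buffer_pool_t pool);
```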
Maxim Uvarov Dec. 2, 2014, 9:41 a.m. UTC | #4
On 12/02/2014 09:59 AM, Anders Roxell wrote:
> On 2 December 2014 at 01:15, Bill Fischofer <bill.fischofer@linaro.org> wrote:
>> When I run the version of checkpatch that's part of odp.git I get this:
>>
>> bill@Ubuntu13:~/linaro/v10bufpool$ ./scripts/checkpatch.pl *.patch
>> total: 0 errors, 0 warnings, 0 checks, 2392 lines checked
>>
>> NOTE: Ignored message types: DEPRECATED_VARIABLE NEW_TYPEDEFS
>>
>> 0001-ODP-buffer-pool-restructure-for-v1.0-APIs.patch has no obvious style
>> problems and is ready for submission.
>> bill@Ubuntu13:~/linaro/v10bufpool$
>>
>> What version of checkpatch are you running?
> this is not checkpatch that gives this warnings.
> The warning says that you haven't documented some members, its doxygen
> that gives this warnings.
Bill, you can see those warnings at the top of the output with:

make doxygen-doc  2>&1 |less

Maxim.
>> The two other routines mentioned represent approximately 30 lines of a 1000+
>> line patch.  If there's consensus that combining them is a barrier to timely
>> review of this patch I can split things up, but this seems like busy work to
>> me as we're not trying to support cherry-picking bits and pieces of the
>> approved APIs.
> Thats great, then it will only take you 15min to split the patch up
> into three patches!
>
>
> Go through all the doxygen comments and change "@return" to @retval
> and if it can return success or
> failure split it up into multiple lines and explain how it can fail.
>
> Cheers,
> Anders
>
>> Bill
>>
>> On Mon, Dec 1, 2014 at 4:27 PM, Anders Roxell <anders.roxell@linaro.org>
>> wrote:
>>> Drop "for v1.0 APIs"
>>>
>>> You introduced some warnings that we didn't have:
>>> odp_config.h:55: warning: Member ODP_CONFIG_BUF_MAX_SIZE (macro
>>> definition) of group odp_compiler_optim is not documented.
>>> odp_config.h:55: warning: Member ODP_CONFIG_BUF_MAX_SIZE (macro
>>> definition) of group odp_compiler_optim is not documented.
>>> odp_buffer_pool.h:127: warning: Member name (variable) of class
>>> odp_buffer_pool_info_t is not documented.
>>> odp_buffer_pool.h:128: warning: Member params (variable) of class
>>> odp_buffer_pool_info_t is not documented.
>>> odp_buffer_pool.h:55: warning: Member buf_size (variable) of class
>>> odp_buffer_pool_param_t is not documented.
>>> odp_buffer_pool.h:56: warning: Member buf_align (variable) of class
>>> odp_buffer_pool_param_t is not documented.
>>> odp_buffer_pool.h:57: warning: Member num_bufs (variable) of class
>>> odp_buffer_pool_param_t is not documented.
>>> odp_buffer_pool.h:58: warning: Member buf_type (variable) of class
>>> odp_buffer_pool_param_t is not documented.
>>>
>>>
>>> On 2014-11-30 19:47, Bill Fischofer wrote:
>>>> ODP buffer pool restricture to enable support for v1.0 APIs.
>>> Remove, doesn't add any value.
>>>
>>>> Implements the following revised/new APIs:
>>>>
>>>>   odp_buffer_pool_create()
>>>>   odp_buffer_pool_destroy()
>>>>   odp_buffer_pool_info()
>>> We said it during last weeks call, one patch per API.
>>>
>>> Please split this into three patches.
>>>
>>> Cheers,
>>> Anders
>>
> _______________________________________________
> lng-odp mailing list
> lng-odp@lists.linaro.org
> http://lists.linaro.org/mailman/listinfo/lng-odp
Taras Kondratiuk Dec. 2, 2014, 5:09 p.m. UTC | #5
On 12/01/2014 03:47 AM, Bill Fischofer wrote:
> +int odp_buffer_pool_destroy(odp_buffer_pool_t pool_hdl)
> +{
> +	uint32_t pool_id = pool_handle_to_index(pool_hdl);
> +	pool_entry_t *pool = get_pool_entry(pool_id);
>
> -	if (chunk->chunk.num_bufs == 0) {
> -		/* give the chunk buffer */
> -		local_chunk[pool_id] = NULL;
> -		chunk->buf_hdr.type = pool->s.buf_type;
> +	if (pool == NULL)
> +		return -1;
>
> -		handle = chunk->buf_hdr.handle;
> -	} else {
> -		odp_buffer_hdr_t *hdr;
> -		uint32_t index;
> -		index = rem_buf_index(chunk);
> -		hdr = index_to_hdr(pool, index);
> +	LOCK(&pool->s.lock);
>
> -		handle = hdr->handle;
> +	if (pool->s.pool_shm == ODP_SHM_INVALID ||
> +	    odp_atomic_load_u32(&pool->s.bufcount) > 0 ||
> +	    pool->s.flags.predefined) {
> +		UNLOCK(&pool->s.lock);
> +		return -1;
>   	}
>
> -	return handle.u32;
> -}
> +	if (!pool->s.flags.user_supplied_shm)
> +		odp_shm_free(pool->s.pool_shm);
>
> +	pool->s.pool_shm = 0;

should be ODP_SHM_INVALID instead of 0.

> +	UNLOCK(&pool->s.lock);
Bill Fischofer Dec. 2, 2014, 7:20 p.m. UTC | #6
I've posted v2 of this patch, which incorporates the comments from Taras and
Anders.  It's also now split into three parts that can be applied sequentially
if desired.

Bill

On Tue, Dec 2, 2014 at 12:09 PM, Taras Kondratiuk <
taras.kondratiuk@linaro.org> wrote:

> On 12/01/2014 03:47 AM, Bill Fischofer wrote:
>
>> +int odp_buffer_pool_destroy(odp_buffer_pool_t pool_hdl)
>> +{
>> +       uint32_t pool_id = pool_handle_to_index(pool_hdl);
>> +       pool_entry_t *pool = get_pool_entry(pool_id);
>>
>> -       if (chunk->chunk.num_bufs == 0) {
>> -               /* give the chunk buffer */
>> -               local_chunk[pool_id] = NULL;
>> -               chunk->buf_hdr.type = pool->s.buf_type;
>> +       if (pool == NULL)
>> +               return -1;
>>
>> -               handle = chunk->buf_hdr.handle;
>> -       } else {
>> -               odp_buffer_hdr_t *hdr;
>> -               uint32_t index;
>> -               index = rem_buf_index(chunk);
>> -               hdr = index_to_hdr(pool, index);
>> +       LOCK(&pool->s.lock);
>>
>> -               handle = hdr->handle;
>> +       if (pool->s.pool_shm == ODP_SHM_INVALID ||
>> +           odp_atomic_load_u32(&pool->s.bufcount) > 0 ||
>> +           pool->s.flags.predefined) {
>> +               UNLOCK(&pool->s.lock);
>> +               return -1;
>>         }
>>
>> -       return handle.u32;
>> -}
>> +       if (!pool->s.flags.user_supplied_shm)
>> +               odp_shm_free(pool->s.pool_shm);
>>
>> +       pool->s.pool_shm = 0;
>>
>
> should be ODP_SHM_INVALID instead of 0.
>
>  +       UNLOCK(&pool->s.lock);
>>
>

Patch

diff --git a/example/generator/odp_generator.c b/example/generator/odp_generator.c
index 73b0369..476cbef 100644
--- a/example/generator/odp_generator.c
+++ b/example/generator/odp_generator.c
@@ -522,11 +522,11 @@  int main(int argc, char *argv[])
 	odph_linux_pthread_t thread_tbl[MAX_WORKERS];
 	odp_buffer_pool_t pool;
 	int num_workers;
-	void *pool_base;
 	int i;
 	int first_core;
 	int core_count;
 	odp_shm_t shm;
+	odp_buffer_pool_param_t params;
 
 	/* Init ODP before calling anything else */
 	if (odp_init_global(NULL, NULL)) {
@@ -589,20 +589,13 @@  int main(int argc, char *argv[])
 	printf("First core:         %i\n\n", first_core);
 
 	/* Create packet pool */
-	shm = odp_shm_reserve("shm_packet_pool",
-			      SHM_PKT_POOL_SIZE, ODP_CACHE_LINE_SIZE, 0);
-	pool_base = odp_shm_addr(shm);
+	params.buf_size  = SHM_PKT_POOL_BUF_SIZE;
+	params.buf_align = 0;
+	params.num_bufs  = SHM_PKT_POOL_SIZE/SHM_PKT_POOL_BUF_SIZE;
+	params.buf_type  = ODP_BUFFER_TYPE_PACKET;
 
-	if (pool_base == NULL) {
-		EXAMPLE_ERR("Error: packet pool mem alloc failed.\n");
-		exit(EXIT_FAILURE);
-	}
+	pool = odp_buffer_pool_create("packet_pool", ODP_SHM_NULL, &params);
 
-	pool = odp_buffer_pool_create("packet_pool", pool_base,
-				      SHM_PKT_POOL_SIZE,
-				      SHM_PKT_POOL_BUF_SIZE,
-				      ODP_CACHE_LINE_SIZE,
-				      ODP_BUFFER_TYPE_PACKET);
 	if (pool == ODP_BUFFER_POOL_INVALID) {
 		EXAMPLE_ERR("Error: packet pool create failed.\n");
 		exit(EXIT_FAILURE);
diff --git a/example/ipsec/odp_ipsec.c b/example/ipsec/odp_ipsec.c
index 76d27c5..f96338c 100644
--- a/example/ipsec/odp_ipsec.c
+++ b/example/ipsec/odp_ipsec.c
@@ -367,8 +367,7 @@  static
 void ipsec_init_pre(void)
 {
 	odp_queue_param_t qparam;
-	void *pool_base;
-	odp_shm_t shm;
+	odp_buffer_pool_param_t params;
 
 	/*
 	 * Create queues
@@ -401,16 +400,12 @@  void ipsec_init_pre(void)
 	}
 
 	/* Create output buffer pool */
-	shm = odp_shm_reserve("shm_out_pool",
-			      SHM_OUT_POOL_SIZE, ODP_CACHE_LINE_SIZE, 0);
-
-	pool_base = odp_shm_addr(shm);
+	params.buf_size  = SHM_OUT_POOL_BUF_SIZE;
+	params.buf_align = 0;
+	params.num_bufs  = SHM_PKT_POOL_BUF_COUNT;
+	params.buf_type  = ODP_BUFFER_TYPE_PACKET;
 
-	out_pool = odp_buffer_pool_create("out_pool", pool_base,
-					  SHM_OUT_POOL_SIZE,
-					  SHM_OUT_POOL_BUF_SIZE,
-					  ODP_CACHE_LINE_SIZE,
-					  ODP_BUFFER_TYPE_PACKET);
+	out_pool = odp_buffer_pool_create("out_pool", ODP_SHM_NULL, &params);
 
 	if (ODP_BUFFER_POOL_INVALID == out_pool) {
 		EXAMPLE_ERR("Error: message pool create failed.\n");
@@ -1176,12 +1171,12 @@  main(int argc, char *argv[])
 {
 	odph_linux_pthread_t thread_tbl[MAX_WORKERS];
 	int num_workers;
-	void *pool_base;
 	int i;
 	int first_core;
 	int core_count;
 	int stream_count;
 	odp_shm_t shm;
+	odp_buffer_pool_param_t params;
 
 	/* Init ODP before calling anything else */
 	if (odp_init_global(NULL, NULL)) {
@@ -1241,42 +1236,28 @@  main(int argc, char *argv[])
 	printf("First core:         %i\n\n", first_core);
 
 	/* Create packet buffer pool */
-	shm = odp_shm_reserve("shm_packet_pool",
-			      SHM_PKT_POOL_SIZE, ODP_CACHE_LINE_SIZE, 0);
+	params.buf_size  = SHM_PKT_POOL_BUF_SIZE;
+	params.buf_align = 0;
+	params.num_bufs  = SHM_PKT_POOL_BUF_COUNT;
+	params.buf_type  = ODP_BUFFER_TYPE_PACKET;
 
-	pool_base = odp_shm_addr(shm);
-
-	if (NULL == pool_base) {
-		EXAMPLE_ERR("Error: packet pool mem alloc failed.\n");
-		exit(EXIT_FAILURE);
-	}
+	pkt_pool = odp_buffer_pool_create("packet_pool", ODP_SHM_NULL,
+					  &params);
 
-	pkt_pool = odp_buffer_pool_create("packet_pool", pool_base,
-					  SHM_PKT_POOL_SIZE,
-					  SHM_PKT_POOL_BUF_SIZE,
-					  ODP_CACHE_LINE_SIZE,
-					  ODP_BUFFER_TYPE_PACKET);
 	if (ODP_BUFFER_POOL_INVALID == pkt_pool) {
 		EXAMPLE_ERR("Error: packet pool create failed.\n");
 		exit(EXIT_FAILURE);
 	}
 
 	/* Create context buffer pool */
-	shm = odp_shm_reserve("shm_ctx_pool",
-			      SHM_CTX_POOL_SIZE, ODP_CACHE_LINE_SIZE, 0);
-
-	pool_base = odp_shm_addr(shm);
+	params.buf_size  = SHM_CTX_POOL_BUF_SIZE;
+	params.buf_align = 0;
+	params.num_bufs  = SHM_CTX_POOL_BUF_COUNT;
+	params.buf_type  = ODP_BUFFER_TYPE_RAW;
 
-	if (NULL == pool_base) {
-		EXAMPLE_ERR("Error: context pool mem alloc failed.\n");
-		exit(EXIT_FAILURE);
-	}
+	ctx_pool = odp_buffer_pool_create("ctx_pool", ODP_SHM_NULL,
+					  &params);
 
-	ctx_pool = odp_buffer_pool_create("ctx_pool", pool_base,
-					  SHM_CTX_POOL_SIZE,
-					  SHM_CTX_POOL_BUF_SIZE,
-					  ODP_CACHE_LINE_SIZE,
-					  ODP_BUFFER_TYPE_RAW);
 	if (ODP_BUFFER_POOL_INVALID == ctx_pool) {
 		EXAMPLE_ERR("Error: context pool create failed.\n");
 		exit(EXIT_FAILURE);
diff --git a/example/l2fwd/odp_l2fwd.c b/example/l2fwd/odp_l2fwd.c
index ebac8c5..3c1fd6a 100644
--- a/example/l2fwd/odp_l2fwd.c
+++ b/example/l2fwd/odp_l2fwd.c
@@ -314,12 +314,12 @@  int main(int argc, char *argv[])
 {
 	odph_linux_pthread_t thread_tbl[MAX_WORKERS];
 	odp_buffer_pool_t pool;
-	void *pool_base;
 	int i;
 	int first_core;
 	int core_count;
 	odp_pktio_t pktio;
 	odp_shm_t shm;
+	odp_buffer_pool_param_t params;
 
 	/* Init ODP before calling anything else */
 	if (odp_init_global(NULL, NULL)) {
@@ -383,20 +383,13 @@  int main(int argc, char *argv[])
 	printf("First core:         %i\n\n", first_core);
 
 	/* Create packet pool */
-	shm = odp_shm_reserve("shm_packet_pool",
-			      SHM_PKT_POOL_SIZE, ODP_CACHE_LINE_SIZE, 0);
-	pool_base = odp_shm_addr(shm);
+	params.buf_size  = SHM_PKT_POOL_BUF_SIZE;
+	params.buf_align = 0;
+	params.num_bufs  = SHM_PKT_POOL_SIZE/SHM_PKT_POOL_BUF_SIZE;
+	params.buf_type  = ODP_BUFFER_TYPE_PACKET;
 
-	if (pool_base == NULL) {
-		EXAMPLE_ERR("Error: packet pool mem alloc failed.\n");
-		exit(EXIT_FAILURE);
-	}
+	pool = odp_buffer_pool_create("packet pool", ODP_SHM_NULL, &params);
 
-	pool = odp_buffer_pool_create("packet_pool", pool_base,
-				      SHM_PKT_POOL_SIZE,
-				      SHM_PKT_POOL_BUF_SIZE,
-				      ODP_CACHE_LINE_SIZE,
-				      ODP_BUFFER_TYPE_PACKET);
 	if (pool == ODP_BUFFER_POOL_INVALID) {
 		EXAMPLE_ERR("Error: packet pool create failed.\n");
 		exit(EXIT_FAILURE);
diff --git a/example/odp_example/odp_example.c b/example/odp_example/odp_example.c
index 96a2912..8373f12 100644
--- a/example/odp_example/odp_example.c
+++ b/example/odp_example/odp_example.c
@@ -954,13 +954,13 @@  int main(int argc, char *argv[])
 	test_args_t args;
 	int num_workers;
 	odp_buffer_pool_t pool;
-	void *pool_base;
 	odp_queue_t queue;
 	int i, j;
 	int prios;
 	int first_core;
 	odp_shm_t shm;
 	test_globals_t *globals;
+	odp_buffer_pool_param_t params;
 
 	printf("\nODP example starts\n\n");
 
@@ -1042,19 +1042,13 @@  int main(int argc, char *argv[])
 	/*
 	 * Create message pool
 	 */
-	shm = odp_shm_reserve("msg_pool",
-			      MSG_POOL_SIZE, ODP_CACHE_LINE_SIZE, 0);
 
-	pool_base = odp_shm_addr(shm);
+	params.buf_size  = sizeof(test_message_t);
+	params.buf_align = 0;
+	params.num_bufs  = MSG_POOL_SIZE/sizeof(test_message_t);
+	params.buf_type  = ODP_BUFFER_TYPE_RAW;
 
-	if (pool_base == NULL) {
-		EXAMPLE_ERR("Shared memory reserve failed.\n");
-		return -1;
-	}
-
-	pool = odp_buffer_pool_create("msg_pool", pool_base, MSG_POOL_SIZE,
-				      sizeof(test_message_t),
-				      ODP_CACHE_LINE_SIZE, ODP_BUFFER_TYPE_RAW);
+	pool = odp_buffer_pool_create("msg_pool", ODP_SHM_NULL, &params);
 
 	if (pool == ODP_BUFFER_POOL_INVALID) {
 		EXAMPLE_ERR("Pool create failed.\n");
diff --git a/example/packet/odp_pktio.c b/example/packet/odp_pktio.c
index 1763c84..27318d4 100644
--- a/example/packet/odp_pktio.c
+++ b/example/packet/odp_pktio.c
@@ -331,11 +331,11 @@  int main(int argc, char *argv[])
 	odph_linux_pthread_t thread_tbl[MAX_WORKERS];
 	odp_buffer_pool_t pool;
 	int num_workers;
-	void *pool_base;
 	int i;
 	int first_core;
 	int core_count;
 	odp_shm_t shm;
+	odp_buffer_pool_param_t params;
 
 	/* Init ODP before calling anything else */
 	if (odp_init_global(NULL, NULL)) {
@@ -389,20 +389,13 @@  int main(int argc, char *argv[])
 	printf("First core:         %i\n\n", first_core);
 
 	/* Create packet pool */
-	shm = odp_shm_reserve("shm_packet_pool",
-			      SHM_PKT_POOL_SIZE, ODP_CACHE_LINE_SIZE, 0);
-	pool_base = odp_shm_addr(shm);
+	params.buf_size  = SHM_PKT_POOL_BUF_SIZE;
+	params.buf_align = 0;
+	params.num_bufs  = SHM_PKT_POOL_SIZE/SHM_PKT_POOL_BUF_SIZE;
+	params.buf_type  = ODP_BUFFER_TYPE_PACKET;
 
-	if (pool_base == NULL) {
-		EXAMPLE_ERR("Error: packet pool mem alloc failed.\n");
-		exit(EXIT_FAILURE);
-	}
+	pool = odp_buffer_pool_create("packet_pool", ODP_SHM_NULL, &params);
 
-	pool = odp_buffer_pool_create("packet_pool", pool_base,
-				      SHM_PKT_POOL_SIZE,
-				      SHM_PKT_POOL_BUF_SIZE,
-				      ODP_CACHE_LINE_SIZE,
-				      ODP_BUFFER_TYPE_PACKET);
 	if (pool == ODP_BUFFER_POOL_INVALID) {
 		EXAMPLE_ERR("Error: packet pool create failed.\n");
 		exit(EXIT_FAILURE);
diff --git a/example/timer/odp_timer_test.c b/example/timer/odp_timer_test.c
index 9968bfe..0d6e31a 100644
--- a/example/timer/odp_timer_test.c
+++ b/example/timer/odp_timer_test.c
@@ -244,12 +244,12 @@  int main(int argc, char *argv[])
 	test_args_t args;
 	int num_workers;
 	odp_buffer_pool_t pool;
-	void *pool_base;
 	odp_queue_t queue;
 	int first_core;
 	uint64_t cycles, ns;
 	odp_queue_param_t param;
 	odp_shm_t shm;
+	odp_buffer_pool_param_t params;
 
 	printf("\nODP timer example starts\n");
 
@@ -313,12 +313,13 @@  int main(int argc, char *argv[])
 	 */
 	shm = odp_shm_reserve("msg_pool",
 			      MSG_POOL_SIZE, ODP_CACHE_LINE_SIZE, 0);
-	pool_base = odp_shm_addr(shm);
 
-	pool = odp_buffer_pool_create("msg_pool", pool_base, MSG_POOL_SIZE,
-				      0,
-				      ODP_CACHE_LINE_SIZE,
-				      ODP_BUFFER_TYPE_TIMEOUT);
+	params.buf_size  = 0;
+	params.buf_align = 0;
+	params.num_bufs  = MSG_POOL_SIZE;
+	params.buf_type  = ODP_BUFFER_TYPE_TIMEOUT;
+
+	pool = odp_buffer_pool_create("msg_pool", shm, &params);
 
 	if (pool == ODP_BUFFER_POOL_INVALID) {
 		EXAMPLE_ERR("Pool create failed.\n");
diff --git a/platform/linux-generic/include/api/odp_buffer_pool.h b/platform/linux-generic/include/api/odp_buffer_pool.h
index 30b83e0..d5f138e 100644
--- a/platform/linux-generic/include/api/odp_buffer_pool.h
+++ b/platform/linux-generic/include/api/odp_buffer_pool.h
@@ -36,32 +36,115 @@  extern "C" {
 #define ODP_BUFFER_POOL_INVALID   0
 
 /**
+ * Buffer pool parameters
+ *
+ * @param[in] buf_size   Buffer size in bytes. The maximum number
+ *                       of bytes application will store in each
+ *                       buffer.
+ *
+ * @param[in] buf_align  Minimum buffer alignment in bytes.  Valid values
+ *                       are powers of two. Use 0 for the default
+ *                       alignment. Default alignment is a multiple
+ *                       of 8.
+ *
+ * @param[in] num_bufs   Number of buffers in the pool.
+ *
+ * @param[in] buf_type   Buffer type
+ */
+typedef struct odp_buffer_pool_param_t {
+	size_t   buf_size;
+	size_t   buf_align;
+	uint32_t num_bufs;
+	int      buf_type;
+} odp_buffer_pool_param_t;
+
+/**
  * Create a buffer pool
+ * This routine is used to create a buffer pool. It takes three
+ * arguments: the optional name of the pool to be created, an optional shared
+ * memory handle, and a parameter struct that describes the pool to be
+ * created. If a name is not specified the result is an anonymous pool that
+ * cannot be referenced by odp_buffer_pool_lookup().
  *
- * @param name      Name of the pool (max ODP_BUFFER_POOL_NAME_LEN - 1 chars)
- * @param base_addr Pool base address
- * @param size      Pool size in bytes
- * @param buf_size  Buffer size in bytes
- * @param buf_align Minimum buffer alignment
- * @param buf_type  Buffer type
+ * @param[in] name     Name of the pool, max ODP_BUFFER_POOL_NAME_LEN-1 chars.
+ *                     May be specified as NULL for anonymous pools.
  *
- * @return Buffer pool handle
+ * @param[in] shm      The shared memory object in which to create the pool.
+ *                     Use ODP_SHM_NULL to reserve default memory type
+ *                     for the buffer type.
+ *
+ * @param[in] params   Buffer pool parameters.
+ *
+ * @return Buffer pool handle or ODP_BUFFER_POOL_INVALID if call failed.
  */
+
 odp_buffer_pool_t odp_buffer_pool_create(const char *name,
-					 void *base_addr, uint64_t size,
-					 size_t buf_size, size_t buf_align,
-					 int buf_type);
+					 odp_shm_t shm,
+					 odp_buffer_pool_param_t *params);
 
+/**
+ * Destroy a buffer pool previously created by odp_buffer_pool_create()
+ *
+ * @param[in] pool    Handle of the buffer pool to be destroyed
+ *
+ * @return            0 on Success, -1 on Failure.
+ *
+ * @note This routine destroys a previously created buffer pool. This call
+ * does not destroy any shared memory object passed to
+ * odp_buffer_pool_create() used to store the buffer pool contents. The caller
+ * takes responsibility for that. If no shared memory object was passed as
+ * part of the create call, then this routine will destroy any internal shared
+ * memory objects associated with the buffer pool. Results are undefined if
+ * an attempt is made to destroy a buffer pool that contains allocated or
+ * otherwise active buffers.
+ */
+int odp_buffer_pool_destroy(odp_buffer_pool_t pool);
 
 /**
  * Find a buffer pool by name
  *
- * @param name      Name of the pool
+ * @param[in] name      Name of the pool
  *
  * @return Buffer pool handle, or ODP_BUFFER_POOL_INVALID if not found.
+ *
+ * @note This routine cannot be used to look up an anonymous pool (one created
+ * with no name).
  */
 odp_buffer_pool_t odp_buffer_pool_lookup(const char *name);
 
+/**
+ * @param[out] name   Pointer to the name of the buffer pool. NULL if this
+ *                    pool is anonymous.
+ *
+ * @param[out] shm    Handle of the shared memory object containing this pool,
+ *                    if pool was created from an application supplied shared
+ *                    memory area. Otherwise ODP_SHM_NULL.
+ *
+ * @param[out] params Copy of the odp_buffer_pool_param_t used to create this
+ *                    pool.
+ */
+typedef struct odp_buffer_pool_info_t {
+	const char *name;
+	odp_buffer_pool_param_t params;
+} odp_buffer_pool_info_t;
+
+/**
+ * Retrieve information about a buffer pool
+ *
+ * @param[in]  pool    Buffer pool handle
+ *
+ * @param[out] shm     Receives odp_shm_t supplied by caller at
+ *                     pool creation, or ODP_SHM_NULL if the
+ *                     pool is managed internally.
+ *
+ * @param[out] info    Receives an odp_buffer_pool_info_t object
+ *                     that describes the pool.
+ *
+ * @return 0 on success, -1 if info could not be retrieved.
+ */
+
+int odp_buffer_pool_info(odp_buffer_pool_t pool, odp_shm_t *shm,
+			 odp_buffer_pool_info_t *info);
 
 /**
  * Print buffer pool info
diff --git a/platform/linux-generic/include/api/odp_config.h b/platform/linux-generic/include/api/odp_config.h
index 906897c..65cc5b5 100644
--- a/platform/linux-generic/include/api/odp_config.h
+++ b/platform/linux-generic/include/api/odp_config.h
@@ -49,6 +49,12 @@  extern "C" {
 #define ODP_CONFIG_PKTIO_ENTRIES 64
 
 /**
+ * Packet processing limits
+ */
+#define ODP_CONFIG_BUF_SEG_SIZE (512*3)
+#define ODP_CONFIG_BUF_MAX_SIZE (ODP_CONFIG_BUF_SEG_SIZE*7)
+
+/**
  * @}
  */
 
diff --git a/platform/linux-generic/include/api/odp_platform_types.h b/platform/linux-generic/include/api/odp_platform_types.h
index 4db47d3..b9b3aea 100644
--- a/platform/linux-generic/include/api/odp_platform_types.h
+++ b/platform/linux-generic/include/api/odp_platform_types.h
@@ -65,6 +65,15 @@  typedef uint32_t odp_pktio_t;
 #define ODP_PKTIO_ANY ((odp_pktio_t)~0)
 
 /**
+ * ODP shared memory block
+ */
+typedef uint32_t odp_shm_t;
+
+/** Invalid shared memory block */
+#define ODP_SHM_INVALID 0
+#define ODP_SHM_NULL ODP_SHM_INVALID /**< Synonym for buffer pool use */
+
+/**
  * @}
  */
 
diff --git a/platform/linux-generic/include/api/odp_shared_memory.h b/platform/linux-generic/include/api/odp_shared_memory.h
index 26e208b..f70db5a 100644
--- a/platform/linux-generic/include/api/odp_shared_memory.h
+++ b/platform/linux-generic/include/api/odp_shared_memory.h
@@ -20,6 +20,7 @@  extern "C" {
 
 
 #include <odp_std_types.h>
+#include <odp_platform_types.h>
 
 /** @defgroup odp_shared_memory ODP SHARED MEMORY
  *  Operations on shared memory.
@@ -38,15 +39,6 @@  extern "C" {
 #define ODP_SHM_PROC    0x2 /**< Share with external processes */
 
 /**
- * ODP shared memory block
- */
-typedef uint32_t odp_shm_t;
-
-/** Invalid shared memory block */
-#define ODP_SHM_INVALID 0
-
-
-/**
  * Shared memory block info
  */
 typedef struct odp_shm_info_t {
diff --git a/platform/linux-generic/include/odp_buffer_inlines.h b/platform/linux-generic/include/odp_buffer_inlines.h
new file mode 100644
index 0000000..f33b41d
--- /dev/null
+++ b/platform/linux-generic/include/odp_buffer_inlines.h
@@ -0,0 +1,157 @@ 
+/* Copyright (c) 2014, Linaro Limited
+ * All rights reserved.
+ *
+ * SPDX-License-Identifier:     BSD-3-Clause
+ */
+
+/**
+ * @file
+ *
+ * Inline functions for ODP buffer mgmt routines - implementation internal
+ */
+
+#ifndef ODP_BUFFER_INLINES_H_
+#define ODP_BUFFER_INLINES_H_
+
+#ifdef __cplusplus
+extern "C" {
+#endif
+
+static inline odp_buffer_t odp_buffer_encode_handle(odp_buffer_hdr_t *hdr)
+{
+	odp_buffer_bits_t handle;
+	uint32_t pool_id = pool_handle_to_index(hdr->pool_hdl);
+	struct pool_entry_s *pool = get_pool_entry(pool_id);
+
+	handle.pool_id = pool_id;
+	handle.index = ((uint8_t *)hdr - pool->pool_base_addr) /
+		ODP_CACHE_LINE_SIZE;
+	handle.seg = 0;
+
+	return handle.u32;
+}
+
+static inline odp_buffer_t odp_hdr_to_buf(odp_buffer_hdr_t *hdr)
+{
+	odp_buffer_t hdl = odp_buffer_encode_handle(hdr);
+	if (hdl != hdr->handle.handle) {
+		ODP_DBG("buf %p should have handle %x but is cached as %x\n",
+			hdr, hdl, hdr->handle.handle);
+		hdr->handle.handle = hdl;
+	}
+	return hdr->handle.handle;
+}
+
+static inline odp_buffer_hdr_t *odp_buf_to_hdr(odp_buffer_t buf)
+{
+	odp_buffer_bits_t handle;
+	uint32_t pool_id;
+	uint32_t index;
+	struct pool_entry_s *pool;
+
+	handle.u32 = buf;
+	pool_id    = handle.pool_id;
+	index      = handle.index;
+
+#ifdef POOL_ERROR_CHECK
+	if (odp_unlikely(pool_id > ODP_CONFIG_BUFFER_POOLS)) {
+		ODP_ERR("odp_buf_to_hdr: Bad pool id\n");
+		return NULL;
+	}
+#endif
+
+	pool = get_pool_entry(pool_id);
+
+#ifdef POOL_ERROR_CHECK
+	if (odp_unlikely(index > pool->params.num_bufs - 1)) {
+		ODP_ERR("odp_buf_to_hdr: Bad buffer index\n");
+		return NULL;
+	}
+#endif
+
+	return (odp_buffer_hdr_t *)(void *)
+		(pool->pool_base_addr + (index * ODP_CACHE_LINE_SIZE));
+}
+
+static inline uint32_t odp_buffer_refcount(odp_buffer_hdr_t *buf)
+{
+	return odp_atomic_load_u32(&buf->ref_count);
+}
+
+static inline uint32_t odp_buffer_incr_refcount(odp_buffer_hdr_t *buf,
+						uint32_t val)
+{
+	return odp_atomic_fetch_add_u32(&buf->ref_count, val) + val;
+}
+
+static inline uint32_t odp_buffer_decr_refcount(odp_buffer_hdr_t *buf,
+						uint32_t val)
+{
+	uint32_t tmp;
+
+	tmp = odp_atomic_fetch_sub_u32(&buf->ref_count, val);
+
+	if (tmp < val) {
+		odp_atomic_fetch_add_u32(&buf->ref_count, val - tmp);
+		return 0;
+	} else {
+		return tmp - val;
+	}
+}
+
+static inline odp_buffer_hdr_t *validate_buf(odp_buffer_t buf)
+{
+	odp_buffer_bits_t handle;
+	odp_buffer_hdr_t *buf_hdr;
+	handle.u32 = buf;
+
+	/* For buffer handles, segment index must be 0 */
+	if (handle.seg != 0)
+		return NULL;
+
+	pool_entry_t *pool = odp_pool_to_entry(handle.pool_id);
+
+	/* If pool not created, handle is invalid */
+	if (pool->s.pool_shm == ODP_SHM_INVALID)
+		return NULL;
+
+	uint32_t buf_stride = pool->s.buf_stride / ODP_CACHE_LINE_SIZE;
+
+	/* A valid buffer index must lie on a stride boundary and be in range */
+	if ((handle.index % buf_stride != 0) ||
+	    ((uint32_t)(handle.index / buf_stride) >= pool->s.params.num_bufs))
+		return NULL;
+
+	buf_hdr = (odp_buffer_hdr_t *)(void *)
+		(pool->s.pool_base_addr +
+		 (handle.index * ODP_CACHE_LINE_SIZE));
+
+	/* Handle is valid, so buffer is valid if it is allocated */
+	if (buf_hdr->segsize > 0 && buf_hdr->segcount == 0)
+		return NULL;
+	else
+		return buf_hdr;
+}
+
+int odp_buffer_snprint(char *str, size_t n, odp_buffer_t buf);
+
+static inline void *buffer_map(odp_buffer_hdr_t *buf,
+			       size_t offset,
+			       size_t *seglen,
+			       size_t limit)
+{
+	int seg_index  = offset / buf->segsize;
+	int seg_offset = offset % buf->segsize;
+	size_t buf_left = limit - offset;
+
+	*seglen = buf_left < buf->segsize ?
+		buf_left : buf->segsize - seg_offset;
+
+	return (void *)(seg_offset + (uint8_t *)buf->addr[seg_index]);
+}
+
+#ifdef __cplusplus
+}
+#endif
+
+#endif
diff --git a/platform/linux-generic/include/odp_buffer_internal.h b/platform/linux-generic/include/odp_buffer_internal.h
index 0027bfc..1751a9d 100644
--- a/platform/linux-generic/include/odp_buffer_internal.h
+++ b/platform/linux-generic/include/odp_buffer_internal.h
@@ -24,99 +24,117 @@  extern "C" {
 #include <odp_buffer.h>
 #include <odp_debug.h>
 #include <odp_align.h>
-
-/* TODO: move these to correct files */
-
-typedef uint64_t odp_phys_addr_t;
-
-#define ODP_BUFFER_MAX_INDEX     (ODP_BUFFER_MAX_BUFFERS - 2)
-#define ODP_BUFFER_INVALID_INDEX (ODP_BUFFER_MAX_BUFFERS - 1)
-
-#define ODP_BUFS_PER_CHUNK       16
-#define ODP_BUFS_PER_SCATTER      4
-
-#define ODP_BUFFER_TYPE_CHUNK    0xffff
-
+#include <odp_config.h>
+#include <odp_byteorder.h>
+#include <odp_thread.h>
+
+
+#define ODP_BUFFER_MAX_SEG     (ODP_CONFIG_BUF_MAX_SIZE/ODP_CONFIG_BUF_SEG_SIZE)
+
+ODP_STATIC_ASSERT((ODP_CONFIG_BUF_SEG_SIZE % ODP_CACHE_LINE_SIZE) == 0,
+		  "ODP Segment size must be a multiple of cache line size");
+
+#define ODP_SEGBITS(x) \
+	((x) <    2 ?  1 : \
+	 ((x) <    4 ?  2 : \
+	  ((x) <    8 ?  3 : \
+	   ((x) <   16 ?  4 : \
+	    ((x) <   32 ?  5 : \
+	     ((x) <   64 ?  6 :	\
+	      ((x) <  128 ?  7 : \
+	       ((x) <  256 ?  8 : \
+		((x) <  512 ?  9 : \
+		 ((x) < 1024 ? 10 : \
+		  ((x) < 2048 ? 11 : \
+		   ((x) < 4096 ? 12 : \
+		    (0/0)))))))))))))
+
+ODP_STATIC_ASSERT(ODP_SEGBITS(ODP_BUFFER_MAX_SEG) <
+		  ODP_SEGBITS(ODP_CACHE_LINE_SIZE),
+		  "Number of segments must not exceed log of cache line size");
 
 #define ODP_BUFFER_POOL_BITS   4
-#define ODP_BUFFER_INDEX_BITS  (32 - ODP_BUFFER_POOL_BITS)
+#define ODP_BUFFER_SEG_BITS    ODP_SEGBITS(ODP_CACHE_LINE_SIZE)
+#define ODP_BUFFER_INDEX_BITS  (32 - ODP_BUFFER_POOL_BITS - ODP_BUFFER_SEG_BITS)
+#define ODP_BUFFER_PREFIX_BITS (ODP_BUFFER_POOL_BITS + ODP_BUFFER_INDEX_BITS)
 #define ODP_BUFFER_MAX_POOLS   (1 << ODP_BUFFER_POOL_BITS)
 #define ODP_BUFFER_MAX_BUFFERS (1 << ODP_BUFFER_INDEX_BITS)
 
+#define ODP_BUFFER_MAX_INDEX     (ODP_BUFFER_MAX_BUFFERS - 2)
+#define ODP_BUFFER_INVALID_INDEX (ODP_BUFFER_MAX_BUFFERS - 1)
+
 typedef union odp_buffer_bits_t {
 	uint32_t     u32;
 	odp_buffer_t handle;
 
 	struct {
+#if ODP_BYTE_ORDER == ODP_BIG_ENDIAN
 		uint32_t pool_id:ODP_BUFFER_POOL_BITS;
 		uint32_t index:ODP_BUFFER_INDEX_BITS;
+		uint32_t seg:ODP_BUFFER_SEG_BITS;
+#else
+		uint32_t seg:ODP_BUFFER_SEG_BITS;
+		uint32_t index:ODP_BUFFER_INDEX_BITS;
+		uint32_t pool_id:ODP_BUFFER_POOL_BITS;
+#endif
 	};
-} odp_buffer_bits_t;
 
+	struct {
+#if ODP_BYTE_ORDER == ODP_BIG_ENDIAN
+		uint32_t prefix:ODP_BUFFER_PREFIX_BITS;
+		uint32_t pfxseg:ODP_BUFFER_SEG_BITS;
+#else
+		uint32_t pfxseg:ODP_BUFFER_SEG_BITS;
+		uint32_t prefix:ODP_BUFFER_PREFIX_BITS;
+#endif
+	};
+} odp_buffer_bits_t;
 
 /* forward declaration */
 struct odp_buffer_hdr_t;
 
-
-/*
- * Scatter/gather list of buffers
- */
-typedef struct odp_buffer_scatter_t {
-	/* buffer pointers */
-	struct odp_buffer_hdr_t *buf[ODP_BUFS_PER_SCATTER];
-	int                      num_bufs;   /* num buffers */
-	int                      pos;        /* position on the list */
-	size_t                   total_len;  /* Total length */
-} odp_buffer_scatter_t;
-
-
-/*
- * Chunk of buffers (in single pool)
- */
-typedef struct odp_buffer_chunk_t {
-	uint32_t num_bufs;                      /* num buffers */
-	uint32_t buf_index[ODP_BUFS_PER_CHUNK]; /* buffers */
-} odp_buffer_chunk_t;
-
-
 /* Common buffer header */
 typedef struct odp_buffer_hdr_t {
 	struct odp_buffer_hdr_t *next;       /* next buf in a list */
+	int                      allocator;  /* allocating thread id */
 	odp_buffer_bits_t        handle;     /* handle */
-	odp_phys_addr_t          phys_addr;  /* physical data start address */
-	void                    *addr;       /* virtual data start address */
-	uint32_t                 index;	     /* buf index in the pool */
+	union {
+		uint32_t all;
+		struct {
+			uint32_t zeroized:1; /* Zeroize buf data on free */
+			uint32_t hdrdata:1;  /* Data is in buffer hdr */
+		};
+	} flags;
+	int                      type;       /* buffer type */
 	size_t                   size;       /* max data size */
-	size_t                   cur_offset; /* current offset */
 	odp_atomic_u32_t         ref_count;  /* reference count */
-	odp_buffer_scatter_t     scatter;    /* Scatter/gather list */
-	int                      type;       /* type of next header */
 	odp_buffer_pool_t        pool_hdl;   /* buffer pool handle */
-
+	union {
+		void            *buf_ctx;    /* user context */
+		void            *udata_addr; /* user metadata addr */
+	};
+	size_t                   udata_size; /* size of user metadata */
+	uint32_t                 segcount;   /* segment count */
+	uint32_t                 segsize;    /* segment size */
+	void                    *addr[ODP_BUFFER_MAX_SEG]; /* block addrs */
 } odp_buffer_hdr_t;
 
-/* Ensure next header starts from 8 byte align */
-ODP_STATIC_ASSERT((sizeof(odp_buffer_hdr_t) % 8) == 0, "ODP_BUFFER_HDR_T__SIZE_ERROR");
+typedef struct odp_buffer_hdr_stride {
+	uint8_t pad[ODP_CACHE_LINE_SIZE_ROUNDUP(sizeof(odp_buffer_hdr_t))];
+} odp_buffer_hdr_stride;
 
+typedef struct odp_buf_blk_t {
+	struct odp_buf_blk_t *next;
+	struct odp_buf_blk_t *prev;
+} odp_buf_blk_t;
 
 /* Raw buffer header */
 typedef struct {
 	odp_buffer_hdr_t buf_hdr;    /* common buffer header */
-	uint8_t          buf_data[]; /* start of buffer data area */
 } odp_raw_buffer_hdr_t;
 
-
-/* Chunk header */
-typedef struct odp_buffer_chunk_hdr_t {
-	odp_buffer_hdr_t   buf_hdr;
-	odp_buffer_chunk_t chunk;
-} odp_buffer_chunk_hdr_t;
-
-
-int odp_buffer_snprint(char *str, size_t n, odp_buffer_t buf);
-
-void odp_buffer_copy_scatter(odp_buffer_t buf_dst, odp_buffer_t buf_src);
-
+/* Forward declarations */
+odp_buffer_t buffer_alloc(odp_buffer_pool_t pool, size_t size);
 
 #ifdef __cplusplus
 }
diff --git a/platform/linux-generic/include/odp_buffer_pool_internal.h b/platform/linux-generic/include/odp_buffer_pool_internal.h
index e0210bd..d9c9cfa 100644
--- a/platform/linux-generic/include/odp_buffer_pool_internal.h
+++ b/platform/linux-generic/include/odp_buffer_pool_internal.h
@@ -25,6 +25,35 @@  extern "C" {
 #include <odp_hints.h>
 #include <odp_config.h>
 #include <odp_debug.h>
+#include <odp_shared_memory.h>
+#include <odp_atomic.h>
+#include <odp_atomic_internal.h>
+#include <string.h>
+
+/**
+ * Buffer initialization routine prototype
+ *
+ * @note Routines of this type MAY be passed as part of the
+ * _odp_buffer_pool_init_t structure to be called whenever a
+ * buffer is allocated to initialize the user metadata
+ * associated with that buffer.
+ */
+typedef void (_odp_buf_init_t)(odp_buffer_t buf, void *buf_init_arg);
+
+/**
+ * Buffer pool initialization parameters
+ *
+ * @param[in] udata_size     Size of the user metadata for each buffer
+ * @param[in] buf_init       Function pointer to be called to initialize the
+ *                           user metadata for each buffer in the pool.
+ * @param[in] buf_init_arg   Argument to be passed to buf_init().
+ *
+ */
+typedef struct _odp_buffer_pool_init_t {
+	size_t udata_size;         /**< Size of user metadata for each buffer */
+	_odp_buf_init_t *buf_init; /**< Buffer initialization routine to use */
+	void *buf_init_arg;        /**< Argument to be passed to buf_init() */
+} _odp_buffer_pool_init_t;         /**< Type of buffer initialization struct */
 
 /* Use ticketlock instead of spinlock */
 #define POOL_USE_TICKETLOCK
@@ -39,6 +68,17 @@  extern "C" {
 #include <odp_spinlock.h>
 #endif
 
+#ifdef POOL_USE_TICKETLOCK
+#include <odp_ticketlock.h>
+#define LOCK(a)      odp_ticketlock_lock(a)
+#define UNLOCK(a)    odp_ticketlock_unlock(a)
+#define LOCK_INIT(a) odp_ticketlock_init(a)
+#else
+#include <odp_spinlock.h>
+#define LOCK(a)      odp_spinlock_lock(a)
+#define UNLOCK(a)    odp_spinlock_unlock(a)
+#define LOCK_INIT(a) odp_spinlock_init(a)
+#endif
 
 struct pool_entry_s {
 #ifdef POOL_USE_TICKETLOCK
@@ -47,66 +87,222 @@  struct pool_entry_s {
 	odp_spinlock_t          lock ODP_ALIGNED_CACHE;
 #endif
 
-	odp_buffer_chunk_hdr_t *head;
-	uint64_t                free_bufs;
 	char                    name[ODP_BUFFER_POOL_NAME_LEN];
-
-	odp_buffer_pool_t       pool_hdl ODP_ALIGNED_CACHE;
-	uintptr_t               buf_base;
-	size_t                  buf_size;
-	size_t                  buf_offset;
-	uint64_t                num_bufs;
-	void                   *pool_base_addr;
-	uint64_t                pool_size;
-	size_t                  user_size;
-	size_t                  user_align;
-	int                     buf_type;
-	size_t                  hdr_size;
+	odp_buffer_pool_param_t params;
+	_odp_buffer_pool_init_t init_params;
+	odp_buffer_pool_t       pool_hdl;
+	odp_shm_t               pool_shm;
+	union {
+		uint32_t all;
+		struct {
+			uint32_t has_name:1;
+			uint32_t user_supplied_shm:1;
+			uint32_t unsegmented:1;
+			uint32_t zeroized:1;
+			uint32_t quiesced:1;
+			uint32_t low_wm_assert:1;
+			uint32_t predefined:1;
+		};
+	} flags;
+	uint8_t                *pool_base_addr;
+	size_t                  pool_size;
+	uint32_t                buf_stride;
+	_odp_atomic_ptr_t       buf_freelist;
+	_odp_atomic_ptr_t       blk_freelist;
+	odp_atomic_u32_t        bufcount;
+	odp_atomic_u32_t        blkcount;
+	odp_atomic_u64_t        bufallocs;
+	odp_atomic_u64_t        buffrees;
+	odp_atomic_u64_t        blkallocs;
+	odp_atomic_u64_t        blkfrees;
+	odp_atomic_u64_t        bufempty;
+	odp_atomic_u64_t        blkempty;
+	odp_atomic_u64_t        high_wm_count;
+	odp_atomic_u64_t        low_wm_count;
+	size_t                  seg_size;
+	size_t                  high_wm;
+	size_t                  low_wm;
+	size_t                  headroom;
+	size_t                  tailroom;
 };
 
+typedef union pool_entry_u {
+	struct pool_entry_s s;
+
+	uint8_t pad[ODP_CACHE_LINE_SIZE_ROUNDUP(sizeof(struct pool_entry_s))];
+} pool_entry_t;
 
 extern void *pool_entry_ptr[];
 
+#if defined(ODP_CONFIG_SECURE_POOLS) && (ODP_CONFIG_SECURE_POOLS == 1)
+#define buffer_is_secure(buf) (buf->flags.zeroized)
+#define pool_is_secure(pool) (pool->flags.zeroized)
+#else
+#define buffer_is_secure(buf) 0
+#define pool_is_secure(pool) 0
+#endif
+
+#define TAG_ALIGN ((size_t)16)
 
-static inline void *get_pool_entry(uint32_t pool_id)
+#define odp_cs(ptr, old, new) \
+	_odp_atomic_ptr_cmp_xchg_strong(&ptr, (void **)&old, (void *)new, \
+					_ODP_MEMMODEL_SC, \
+					_ODP_MEMMODEL_SC)
+
+/* Helper functions for pointer tagging to avoid ABA race conditions */
+#define odp_tag(ptr) \
+	(((size_t)ptr) & (TAG_ALIGN - 1))
+
+#define odp_detag(ptr) \
+	((typeof(ptr))(((size_t)ptr) & -TAG_ALIGN))
+
+#define odp_retag(ptr, tag) \
+	((typeof(ptr))(((size_t)ptr) | odp_tag(tag)))
+
+
+static inline void *get_blk(struct pool_entry_s *pool)
 {
-	return pool_entry_ptr[pool_id];
-}
+	void *oldhead, *myhead, *newhead;
+
+	oldhead = _odp_atomic_ptr_load(&pool->blk_freelist, _ODP_MEMMODEL_ACQ);
+
+	do {
+		size_t tag = odp_tag(oldhead);
+		myhead = odp_detag(oldhead);
+		if (myhead == NULL)
+			break;
+		newhead = odp_retag(((odp_buf_blk_t *)myhead)->next, tag + 1);
+	} while (odp_cs(pool->blk_freelist, oldhead, newhead) == 0);
+
+	if (myhead == NULL) {
+		odp_atomic_inc_u64(&pool->blkempty);
+	} else {
+		uint64_t blkcount =
+			odp_atomic_fetch_sub_u32(&pool->blkcount, 1);
 
+		/* Check for low watermark condition */
+		if (blkcount == pool->low_wm) {
+			LOCK(&pool->lock);
+			if (blkcount <= pool->low_wm &&
+			    !pool->flags.low_wm_assert) {
+				pool->flags.low_wm_assert = 1;
+				odp_atomic_inc_u64(&pool->low_wm_count);
+			}
+			UNLOCK(&pool->lock);
+		}
+		odp_atomic_inc_u64(&pool->blkallocs);
+	}
+
+	return (void *)myhead;
+}
 
-static inline odp_buffer_hdr_t *odp_buf_to_hdr(odp_buffer_t buf)
+static inline void ret_blk(struct pool_entry_s *pool, void *block)
 {
-	odp_buffer_bits_t handle;
-	uint32_t pool_id;
-	uint32_t index;
-	struct pool_entry_s *pool;
-	odp_buffer_hdr_t *hdr;
-
-	handle.u32 = buf;
-	pool_id    = handle.pool_id;
-	index      = handle.index;
-
-#ifdef POOL_ERROR_CHECK
-	if (odp_unlikely(pool_id > ODP_CONFIG_BUFFER_POOLS)) {
-		ODP_ERR("odp_buf_to_hdr: Bad pool id\n");
-		return NULL;
+	void *oldhead, *myhead, *myblock;
+
+	oldhead = _odp_atomic_ptr_load(&pool->blk_freelist, _ODP_MEMMODEL_ACQ);
+
+	do {
+		size_t tag = odp_tag(oldhead);
+		myhead = odp_detag(oldhead);
+		((odp_buf_blk_t *)block)->next = myhead;
+		myblock = odp_retag(block, tag + 1);
+	} while (odp_cs(pool->blk_freelist, oldhead, myblock) == 0);
+
+	odp_atomic_inc_u64(&pool->blkfrees);
+	uint64_t blkcount = odp_atomic_fetch_add_u32(&pool->blkcount, 1);
+
+	/* Check if low watermark condition should be deasserted */
+	if (blkcount == pool->high_wm) {
+		LOCK(&pool->lock);
+		if (blkcount == pool->high_wm && pool->flags.low_wm_assert) {
+			pool->flags.low_wm_assert = 0;
+			odp_atomic_inc_u64(&pool->high_wm_count);
+		}
+		UNLOCK(&pool->lock);
 	}
-#endif
+}
 
-	pool = get_pool_entry(pool_id);
+static inline odp_buffer_hdr_t *get_buf(struct pool_entry_s *pool)
+{
+	odp_buffer_hdr_t *oldhead, *myhead, *newhead;
+
+	oldhead = _odp_atomic_ptr_load(&pool->buf_freelist, _ODP_MEMMODEL_ACQ);
 
-#ifdef POOL_ERROR_CHECK
-	if (odp_unlikely(index > pool->num_bufs - 1)) {
-		ODP_ERR("odp_buf_to_hdr: Bad buffer index\n");
-		return NULL;
+	do {
+		size_t tag = odp_tag(oldhead);
+		myhead = odp_detag(oldhead);
+		if (myhead == NULL)
+			break;
+		newhead = odp_retag(myhead->next, tag + 1);
+	} while (odp_cs(pool->buf_freelist, oldhead, newhead) == 0);
+
+	if (myhead != NULL) {
+		myhead->next = myhead;
+		myhead->allocator = odp_thread_id();
+		odp_atomic_inc_u32(&pool->bufcount);
+		odp_atomic_inc_u64(&pool->bufallocs);
+	} else {
+		odp_atomic_inc_u64(&pool->bufempty);
 	}
-#endif
 
-	hdr = (odp_buffer_hdr_t *)(pool->buf_base + index * pool->buf_size);
+	return (void *)myhead;
+}
+
+static inline void ret_buf(struct pool_entry_s *pool, odp_buffer_hdr_t *buf)
+{
+	odp_buffer_hdr_t *oldhead, *myhead, *mybuf;
+
+	if (!buf->flags.hdrdata)
+		while (buf->segcount > 0) {
+			if (buffer_is_secure(buf) || pool_is_secure(pool))
+				memset(buf->addr[buf->segcount - 1],
+				       0, buf->segsize);
+			ret_blk(pool, buf->addr[--buf->segcount]);
+		}
+
+	oldhead = _odp_atomic_ptr_load(&pool->buf_freelist, _ODP_MEMMODEL_ACQ);
+
+	do {
+		size_t tag = odp_tag(oldhead);
+		myhead = odp_detag(oldhead);
+		buf->next = myhead;
+		mybuf = odp_retag(buf, tag + 1);
+	} while (odp_cs(pool->buf_freelist, oldhead, mybuf) == 0);
+
+	odp_atomic_dec_u32(&pool->bufcount);
+	odp_atomic_inc_u64(&pool->buffrees);
+}
+
+static inline odp_buffer_pool_t pool_index_to_handle(uint32_t pool_id)
+{
+	return pool_id + 1;
+}
 
-	return hdr;
+static inline uint32_t pool_handle_to_index(odp_buffer_pool_t pool_hdl)
+{
+	return pool_hdl - 1;
+}
+
+static inline void *get_pool_entry(uint32_t pool_id)
+{
+	return pool_entry_ptr[pool_id];
 }
 
+static inline pool_entry_t *odp_pool_to_entry(odp_buffer_pool_t pool)
+{
+	return (pool_entry_t *)get_pool_entry(pool_handle_to_index(pool));
+}
+
+static inline pool_entry_t *odp_buf_to_pool(odp_buffer_hdr_t *buf)
+{
+	return odp_pool_to_entry(buf->pool_hdl);
+}
+
+static inline size_t odp_buffer_pool_segment_size(odp_buffer_pool_t pool)
+{
+	return odp_pool_to_entry(pool)->s.seg_size;
+}
 
 #ifdef __cplusplus
 }
diff --git a/platform/linux-generic/include/odp_packet_internal.h b/platform/linux-generic/include/odp_packet_internal.h
index 49c59b2..f34a83d 100644
--- a/platform/linux-generic/include/odp_packet_internal.h
+++ b/platform/linux-generic/include/odp_packet_internal.h
@@ -22,6 +22,7 @@  extern "C" {
 #include <odp_debug.h>
 #include <odp_buffer_internal.h>
 #include <odp_buffer_pool_internal.h>
+#include <odp_buffer_inlines.h>
 #include <odp_packet.h>
 #include <odp_packet_io.h>
 
@@ -92,7 +93,8 @@  typedef union {
 	};
 } output_flags_t;
 
-ODP_STATIC_ASSERT(sizeof(output_flags_t) == sizeof(uint32_t), "OUTPUT_FLAGS_SIZE_ERROR");
+ODP_STATIC_ASSERT(sizeof(output_flags_t) == sizeof(uint32_t),
+		  "OUTPUT_FLAGS_SIZE_ERROR");
 
 /**
  * Internal Packet header
@@ -105,25 +107,23 @@  typedef struct {
 	error_flags_t  error_flags;
 	output_flags_t output_flags;
 
-	uint32_t frame_offset; /**< offset to start of frame, even on error */
 	uint32_t l2_offset; /**< offset to L2 hdr, e.g. Eth */
 	uint32_t l3_offset; /**< offset to L3 hdr, e.g. IPv4, IPv6 */
 	uint32_t l4_offset; /**< offset to L4 hdr (TCP, UDP, SCTP, also ICMP) */
 
 	uint32_t frame_len;
+	uint32_t headroom;
+	uint32_t tailroom;
 
 	uint64_t user_ctx;        /* user context */
 
 	odp_pktio_t input;
-
-	uint32_t pad;
-	uint8_t  buf_data[]; /* start of buffer data area */
 } odp_packet_hdr_t;
 
-ODP_STATIC_ASSERT(sizeof(odp_packet_hdr_t) == ODP_OFFSETOF(odp_packet_hdr_t, buf_data),
-	   "ODP_PACKET_HDR_T__SIZE_ERR");
-ODP_STATIC_ASSERT(sizeof(odp_packet_hdr_t) % sizeof(uint64_t) == 0,
-	   "ODP_PACKET_HDR_T__SIZE_ERR2");
+typedef struct odp_packet_hdr_stride {
+	uint8_t pad[ODP_CACHE_LINE_SIZE_ROUNDUP(sizeof(odp_packet_hdr_t))];
+} odp_packet_hdr_stride;
+
 
 /**
  * Return the packet header
@@ -138,6 +138,38 @@  static inline odp_packet_hdr_t *odp_packet_hdr(odp_packet_t pkt)
  */
 void odp_packet_parse(odp_packet_t pkt, size_t len, size_t l2_offset);
 
+/**
+ * Initialize packet buffer
+ */
+static inline void packet_init(pool_entry_t *pool,
+			       odp_packet_hdr_t *pkt_hdr,
+			       size_t size)
+{
+       /*
+	* Reset parser metadata.  Note that we clear via memset to make
+	* this routine independent of future additions to packet metadata.
+	*/
+	const size_t start_offset = ODP_FIELD_SIZEOF(odp_packet_hdr_t, buf_hdr);
+	uint8_t *start;
+	size_t len;
+
+	start = (uint8_t *)pkt_hdr + start_offset;
+	len = sizeof(odp_packet_hdr_t) - start_offset;
+	memset(start, 0, len);
+
+       /*
+	* Packet headroom is set from the pool's headroom
+	* Packet tailroom is rounded up to fill the last
+	* segment occupied by the allocated length.
+	*/
+	pkt_hdr->frame_len = size;
+	pkt_hdr->headroom  = pool->s.headroom;
+	pkt_hdr->tailroom  =
+		(pool->s.seg_size * pkt_hdr->buf_hdr.segcount) -
+		(pool->s.headroom + size);
+}
+
+
 #ifdef __cplusplus
 }
 #endif
diff --git a/platform/linux-generic/include/odp_timer_internal.h b/platform/linux-generic/include/odp_timer_internal.h
index ad28f53..2ff36ce 100644
--- a/platform/linux-generic/include/odp_timer_internal.h
+++ b/platform/linux-generic/include/odp_timer_internal.h
@@ -51,14 +51,9 @@  typedef struct odp_timeout_hdr_t {
 	uint8_t buf_data[];
 } odp_timeout_hdr_t;
 
-
-
-ODP_STATIC_ASSERT(sizeof(odp_timeout_hdr_t) ==
-	   ODP_OFFSETOF(odp_timeout_hdr_t, buf_data),
-	   "ODP_TIMEOUT_HDR_T__SIZE_ERR");
-
-ODP_STATIC_ASSERT(sizeof(odp_timeout_hdr_t) % sizeof(uint64_t) == 0,
-	   "ODP_TIMEOUT_HDR_T__SIZE_ERR2");
+typedef struct odp_timeout_hdr_stride {
+	uint8_t pad[ODP_CACHE_LINE_SIZE_ROUNDUP(sizeof(odp_timeout_hdr_t))];
+} odp_timeout_hdr_stride;
 
 
 /**
diff --git a/platform/linux-generic/odp_buffer.c b/platform/linux-generic/odp_buffer.c
index bcbb99a..366190c 100644
--- a/platform/linux-generic/odp_buffer.c
+++ b/platform/linux-generic/odp_buffer.c
@@ -5,8 +5,9 @@ 
  */
 
 #include <odp_buffer.h>
-#include <odp_buffer_internal.h>
 #include <odp_buffer_pool_internal.h>
+#include <odp_buffer_internal.h>
+#include <odp_buffer_inlines.h>
 
 #include <string.h>
 #include <stdio.h>
@@ -16,7 +17,7 @@  void *odp_buffer_addr(odp_buffer_t buf)
 {
 	odp_buffer_hdr_t *hdr = odp_buf_to_hdr(buf);
 
-	return hdr->addr;
+	return hdr->addr[0];
 }
 
 
@@ -38,11 +39,7 @@  int odp_buffer_type(odp_buffer_t buf)
 
 int odp_buffer_is_valid(odp_buffer_t buf)
 {
-	odp_buffer_bits_t handle;
-
-	handle.u32 = buf;
-
-	return (handle.index != ODP_BUFFER_INVALID_INDEX);
+	return validate_buf(buf) != NULL;
 }
 
 
@@ -63,28 +60,14 @@  int odp_buffer_snprint(char *str, size_t n, odp_buffer_t buf)
 	len += snprintf(&str[len], n-len,
 			"  pool         %i\n",        hdr->pool_hdl);
 	len += snprintf(&str[len], n-len,
-			"  index        %"PRIu32"\n", hdr->index);
-	len += snprintf(&str[len], n-len,
-			"  phy_addr     %"PRIu64"\n", hdr->phys_addr);
-	len += snprintf(&str[len], n-len,
 			"  addr         %p\n",        hdr->addr);
 	len += snprintf(&str[len], n-len,
 			"  size         %zu\n",       hdr->size);
 	len += snprintf(&str[len], n-len,
-			"  cur_offset   %zu\n",       hdr->cur_offset);
-	len += snprintf(&str[len], n-len,
 			"  ref_count    %i\n",
 			odp_atomic_load_u32(&hdr->ref_count));
 	len += snprintf(&str[len], n-len,
 			"  type         %i\n",        hdr->type);
-	len += snprintf(&str[len], n-len,
-			"  Scatter list\n");
-	len += snprintf(&str[len], n-len,
-			"    num_bufs   %i\n",        hdr->scatter.num_bufs);
-	len += snprintf(&str[len], n-len,
-			"    pos        %i\n",        hdr->scatter.pos);
-	len += snprintf(&str[len], n-len,
-			"    total_len  %zu\n",       hdr->scatter.total_len);
 
 	return len;
 }
@@ -101,9 +84,3 @@  void odp_buffer_print(odp_buffer_t buf)
 
 	ODP_PRINT("\n%s\n", str);
 }
-
-void odp_buffer_copy_scatter(odp_buffer_t buf_dst, odp_buffer_t buf_src)
-{
-	(void)buf_dst;
-	(void)buf_src;
-}
diff --git a/platform/linux-generic/odp_buffer_pool.c b/platform/linux-generic/odp_buffer_pool.c
index 6a0a6b2..72f04f3 100644
--- a/platform/linux-generic/odp_buffer_pool.c
+++ b/platform/linux-generic/odp_buffer_pool.c
@@ -6,8 +6,9 @@ 
 
 #include <odp_std_types.h>
 #include <odp_buffer_pool.h>
-#include <odp_buffer_pool_internal.h>
 #include <odp_buffer_internal.h>
+#include <odp_buffer_pool_internal.h>
+#include <odp_buffer_inlines.h>
 #include <odp_packet_internal.h>
 #include <odp_timer_internal.h>
 #include <odp_shared_memory.h>
@@ -16,57 +17,35 @@ 
 #include <odp_config.h>
 #include <odp_hints.h>
 #include <odp_debug.h>
+#include <odp_atomic_internal.h>
 
 #include <string.h>
 #include <stdlib.h>
 
 
-#ifdef POOL_USE_TICKETLOCK
-#include <odp_ticketlock.h>
-#define LOCK(a)      odp_ticketlock_lock(a)
-#define UNLOCK(a)    odp_ticketlock_unlock(a)
-#define LOCK_INIT(a) odp_ticketlock_init(a)
-#else
-#include <odp_spinlock.h>
-#define LOCK(a)      odp_spinlock_lock(a)
-#define UNLOCK(a)    odp_spinlock_unlock(a)
-#define LOCK_INIT(a) odp_spinlock_init(a)
-#endif
-
-
 #if ODP_CONFIG_BUFFER_POOLS > ODP_BUFFER_MAX_POOLS
 #error ODP_CONFIG_BUFFER_POOLS > ODP_BUFFER_MAX_POOLS
 #endif
 
-#define NULL_INDEX ((uint32_t)-1)
 
-union buffer_type_any_u {
+typedef union buffer_type_any_u {
 	odp_buffer_hdr_t  buf;
 	odp_packet_hdr_t  pkt;
 	odp_timeout_hdr_t tmo;
-};
-
-ODP_STATIC_ASSERT((sizeof(union buffer_type_any_u) % 8) == 0,
-	   "BUFFER_TYPE_ANY_U__SIZE_ERR");
+} odp_anybuf_t;
 
 /* Any buffer type header */
 typedef struct {
 	union buffer_type_any_u any_hdr;    /* any buffer type */
-	uint8_t                 buf_data[]; /* start of buffer data area */
 } odp_any_buffer_hdr_t;
 
-
-typedef union pool_entry_u {
-	struct pool_entry_s s;
-
-	uint8_t pad[ODP_CACHE_LINE_SIZE_ROUNDUP(sizeof(struct pool_entry_s))];
-
-} pool_entry_t;
+typedef struct odp_any_hdr_stride {
+	uint8_t pad[ODP_CACHE_LINE_SIZE_ROUNDUP(sizeof(odp_any_buffer_hdr_t))];
+} odp_any_hdr_stride;
 
 
 typedef struct pool_table_t {
 	pool_entry_t pool[ODP_CONFIG_BUFFER_POOLS];
-
 } pool_table_t;
 
 
@@ -77,38 +56,6 @@  static pool_table_t *pool_tbl;
 void *pool_entry_ptr[ODP_CONFIG_BUFFER_POOLS];
 
 
-static __thread odp_buffer_chunk_hdr_t *local_chunk[ODP_CONFIG_BUFFER_POOLS];
-
-
-static inline odp_buffer_pool_t pool_index_to_handle(uint32_t pool_id)
-{
-	return pool_id + 1;
-}
-
-
-static inline uint32_t pool_handle_to_index(odp_buffer_pool_t pool_hdl)
-{
-	return pool_hdl -1;
-}
-
-
-static inline void set_handle(odp_buffer_hdr_t *hdr,
-			      pool_entry_t *pool, uint32_t index)
-{
-	odp_buffer_pool_t pool_hdl = pool->s.pool_hdl;
-	uint32_t          pool_id  = pool_handle_to_index(pool_hdl);
-
-	if (pool_id >= ODP_CONFIG_BUFFER_POOLS)
-		ODP_ABORT("set_handle: Bad pool handle %u\n", pool_hdl);
-
-	if (index > ODP_BUFFER_MAX_INDEX)
-		ODP_ERR("set_handle: Bad buffer index\n");
-
-	hdr->handle.pool_id = pool_id;
-	hdr->handle.index   = index;
-}
-
-
 int odp_buffer_pool_init_global(void)
 {
 	uint32_t i;
@@ -142,269 +89,243 @@  int odp_buffer_pool_init_global(void)
 	return 0;
 }
 
+/**
+ * Buffer pool creation
+ */
 
-static odp_buffer_hdr_t *index_to_hdr(pool_entry_t *pool, uint32_t index)
-{
-	odp_buffer_hdr_t *hdr;
-
-	hdr = (odp_buffer_hdr_t *)(pool->s.buf_base + index * pool->s.buf_size);
-	return hdr;
-}
-
-
-static void add_buf_index(odp_buffer_chunk_hdr_t *chunk_hdr, uint32_t index)
-{
-	uint32_t i = chunk_hdr->chunk.num_bufs;
-	chunk_hdr->chunk.buf_index[i] = index;
-	chunk_hdr->chunk.num_bufs++;
-}
-
-
-static uint32_t rem_buf_index(odp_buffer_chunk_hdr_t *chunk_hdr)
+odp_buffer_pool_t odp_buffer_pool_create(const char *name,
+					 odp_shm_t shm,
+					 odp_buffer_pool_param_t *params)
 {
-	uint32_t index;
+	odp_buffer_pool_t pool_hdl = ODP_BUFFER_POOL_INVALID;
+	pool_entry_t *pool;
 	uint32_t i;
 
-	i = chunk_hdr->chunk.num_bufs - 1;
-	index = chunk_hdr->chunk.buf_index[i];
-	chunk_hdr->chunk.num_bufs--;
-	return index;
-}
-
-
-static odp_buffer_chunk_hdr_t *next_chunk(pool_entry_t *pool,
-					  odp_buffer_chunk_hdr_t *chunk_hdr)
-{
-	uint32_t index;
-
-	index = chunk_hdr->chunk.buf_index[ODP_BUFS_PER_CHUNK-1];
-	if (index == NULL_INDEX)
-		return NULL;
-	else
-		return (odp_buffer_chunk_hdr_t *)index_to_hdr(pool, index);
-}
-
-
-static odp_buffer_chunk_hdr_t *rem_chunk(pool_entry_t *pool)
-{
-	odp_buffer_chunk_hdr_t *chunk_hdr;
-
-	chunk_hdr = pool->s.head;
-	if (chunk_hdr == NULL) {
-		/* Pool is empty */
-		return NULL;
-	}
-
-	pool->s.head = next_chunk(pool, chunk_hdr);
-	pool->s.free_bufs -= ODP_BUFS_PER_CHUNK;
-
-	/* unlink */
-	rem_buf_index(chunk_hdr);
-	return chunk_hdr;
-}
-
+	/* Default initialization parameters */
+	static _odp_buffer_pool_init_t default_init_params = {
+		.udata_size = 0,
+		.buf_init = NULL,
+		.buf_init_arg = NULL,
+	};
 
-static void add_chunk(pool_entry_t *pool, odp_buffer_chunk_hdr_t *chunk_hdr)
-{
-	if (pool->s.head) /* link pool head to the chunk */
-		add_buf_index(chunk_hdr, pool->s.head->buf_hdr.index);
-	else
-		add_buf_index(chunk_hdr, NULL_INDEX);
+	_odp_buffer_pool_init_t *init_params = &default_init_params;
 
-	pool->s.head = chunk_hdr;
-	pool->s.free_bufs += ODP_BUFS_PER_CHUNK;
-}
+	if (params == NULL)
+		return ODP_BUFFER_POOL_INVALID;
 
+	/* Restriction for v1.0: All buffers are unsegmented */
+	const int unsegmented = 1;
 
-static void check_align(pool_entry_t *pool, odp_buffer_hdr_t *hdr)
-{
-	if (!ODP_ALIGNED_CHECK_POWER_2(hdr->addr, pool->s.user_align)) {
-		ODP_ABORT("check_align: user data align error %p, align %zu\n",
-			  hdr->addr, pool->s.user_align);
-	}
-
-	if (!ODP_ALIGNED_CHECK_POWER_2(hdr, ODP_CACHE_LINE_SIZE)) {
-		ODP_ABORT("check_align: hdr align error %p, align %i\n",
-			  hdr, ODP_CACHE_LINE_SIZE);
-	}
-}
+	/* Restriction for v1.0: No zeroization support */
+	const int zeroized = 0;
 
+	/* Restriction for v1.0: No udata support */
+	uint32_t udata_stride = (init_params->udata_size > sizeof(void *)) ?
+		ODP_CACHE_LINE_SIZE_ROUNDUP(init_params->udata_size) :
+		0;
 
-static void fill_hdr(void *ptr, pool_entry_t *pool, uint32_t index,
-		     int buf_type)
-{
-	odp_buffer_hdr_t *hdr = (odp_buffer_hdr_t *)ptr;
-	size_t size = pool->s.hdr_size;
-	uint8_t *buf_data;
+	uint32_t blk_size, buf_stride;
 
-	if (buf_type == ODP_BUFFER_TYPE_CHUNK)
-		size = sizeof(odp_buffer_chunk_hdr_t);
+	switch (params->buf_type) {
+	case ODP_BUFFER_TYPE_RAW:
+		blk_size = params->buf_size;
 
-	switch (pool->s.buf_type) {
-		odp_raw_buffer_hdr_t *raw_hdr;
-		odp_packet_hdr_t *packet_hdr;
-		odp_timeout_hdr_t *tmo_hdr;
-		odp_any_buffer_hdr_t *any_hdr;
+		/* Optimize small raw buffers */
+		if (blk_size > TAG_ALIGN)
+			blk_size = ODP_ALIGN_ROUNDUP(blk_size, TAG_ALIGN);
 
-	case ODP_BUFFER_TYPE_RAW:
-		raw_hdr  = ptr;
-		buf_data = raw_hdr->buf_data;
+		buf_stride = sizeof(odp_buffer_hdr_stride);
 		break;
+
 	case ODP_BUFFER_TYPE_PACKET:
-		packet_hdr = ptr;
-		buf_data   = packet_hdr->buf_data;
+		if (unsegmented)
+			blk_size =
+				ODP_CACHE_LINE_SIZE_ROUNDUP(params->buf_size);
+		else
+			blk_size = ODP_ALIGN_ROUNDUP(params->buf_size,
+						     ODP_CONFIG_BUF_SEG_SIZE);
+		buf_stride = sizeof(odp_packet_hdr_stride);
 		break;
+
 	case ODP_BUFFER_TYPE_TIMEOUT:
-		tmo_hdr  = ptr;
-		buf_data = tmo_hdr->buf_data;
+		blk_size = 0; /* Timeouts have no block data, only metadata */
+		buf_stride = sizeof(odp_timeout_hdr_stride);
 		break;
+
 	case ODP_BUFFER_TYPE_ANY:
-		any_hdr  = ptr;
-		buf_data = any_hdr->buf_data;
+		if (unsegmented)
+			blk_size =
+				ODP_CACHE_LINE_SIZE_ROUNDUP(params->buf_size);
+		else
+			blk_size = ODP_ALIGN_ROUNDUP(params->buf_size,
+						     ODP_CONFIG_BUF_SEG_SIZE);
+		buf_stride = sizeof(odp_any_hdr_stride);
 		break;
-	default:
-		ODP_ABORT("Bad buffer type\n");
-	}
-
-	memset(hdr, 0, size);
-
-	set_handle(hdr, pool, index);
-
-	hdr->addr     = &buf_data[pool->s.buf_offset - pool->s.hdr_size];
-	hdr->index    = index;
-	hdr->size     = pool->s.user_size;
-	hdr->pool_hdl = pool->s.pool_hdl;
-	hdr->type     = buf_type;
 
-	check_align(pool, hdr);
-}
-
-
-static void link_bufs(pool_entry_t *pool)
-{
-	odp_buffer_chunk_hdr_t *chunk_hdr;
-	size_t hdr_size;
-	size_t data_size;
-	size_t data_align;
-	size_t tot_size;
-	size_t offset;
-	size_t min_size;
-	uint64_t pool_size;
-	uintptr_t buf_base;
-	uint32_t index;
-	uintptr_t pool_base;
-	int buf_type;
-
-	buf_type   = pool->s.buf_type;
-	data_size  = pool->s.user_size;
-	data_align = pool->s.user_align;
-	pool_size  = pool->s.pool_size;
-	pool_base  = (uintptr_t) pool->s.pool_base_addr;
-
-	if (buf_type == ODP_BUFFER_TYPE_RAW) {
-		hdr_size = sizeof(odp_raw_buffer_hdr_t);
-	} else if (buf_type == ODP_BUFFER_TYPE_PACKET) {
-		hdr_size = sizeof(odp_packet_hdr_t);
-	} else if (buf_type == ODP_BUFFER_TYPE_TIMEOUT) {
-		hdr_size = sizeof(odp_timeout_hdr_t);
-	} else if (buf_type == ODP_BUFFER_TYPE_ANY) {
-		hdr_size = sizeof(odp_any_buffer_hdr_t);
-	} else
-		ODP_ABORT("odp_buffer_pool_create: Bad type %i\n", buf_type);
-
-
-	/* Chunk must fit into buffer data area.*/
-	min_size = sizeof(odp_buffer_chunk_hdr_t) - hdr_size;
-	if (data_size < min_size)
-		data_size = min_size;
-
-	/* Roundup data size to full cachelines */
-	data_size = ODP_CACHE_LINE_SIZE_ROUNDUP(data_size);
-
-	/* Min cacheline alignment for buffer header and data */
-	data_align = ODP_CACHE_LINE_SIZE_ROUNDUP(data_align);
-	offset     = ODP_CACHE_LINE_SIZE_ROUNDUP(hdr_size);
-
-	/* Multiples of cacheline size */
-	if (data_size > data_align)
-		tot_size = data_size + offset;
-	else
-		tot_size = data_align + offset;
-
-	/* First buffer */
-	buf_base = ODP_ALIGN_ROUNDUP(pool_base + offset, data_align) - offset;
-
-	pool->s.hdr_size   = hdr_size;
-	pool->s.buf_base   = buf_base;
-	pool->s.buf_size   = tot_size;
-	pool->s.buf_offset = offset;
-	index = 0;
-
-	chunk_hdr = (odp_buffer_chunk_hdr_t *)index_to_hdr(pool, index);
-	pool->s.head   = NULL;
-	pool_size     -= buf_base - pool_base;
-
-	while (pool_size > ODP_BUFS_PER_CHUNK * tot_size) {
-		int i;
-
-		fill_hdr(chunk_hdr, pool, index, ODP_BUFFER_TYPE_CHUNK);
-
-		index++;
-
-		for (i = 0; i < ODP_BUFS_PER_CHUNK - 1; i++) {
-			odp_buffer_hdr_t *hdr = index_to_hdr(pool, index);
-
-			fill_hdr(hdr, pool, index, buf_type);
-
-			add_buf_index(chunk_hdr, index);
-			index++;
-		}
-
-		add_chunk(pool, chunk_hdr);
-
-		chunk_hdr = (odp_buffer_chunk_hdr_t *)index_to_hdr(pool,
-								   index);
-		pool->s.num_bufs += ODP_BUFS_PER_CHUNK;
-		pool_size -=  ODP_BUFS_PER_CHUNK * tot_size;
+	default:
+		return ODP_BUFFER_POOL_INVALID;
 	}
-}
-
-
-odp_buffer_pool_t odp_buffer_pool_create(const char *name,
-					 void *base_addr, uint64_t size,
-					 size_t buf_size, size_t buf_align,
-					 int buf_type)
-{
-	odp_buffer_pool_t pool_hdl = ODP_BUFFER_POOL_INVALID;
-	pool_entry_t *pool;
-	uint32_t i;
 
+	/* Find an unused buffer pool slot and initialize it as requested */
 	for (i = 0; i < ODP_CONFIG_BUFFER_POOLS; i++) {
 		pool = get_pool_entry(i);
 
 		LOCK(&pool->s.lock);
+		if (pool->s.pool_shm != ODP_SHM_INVALID) {
+			UNLOCK(&pool->s.lock);
+			continue;
+		}
 
-		if (pool->s.buf_base == 0) {
-			/* found free pool */
+		/* found free pool */
+		size_t block_size, mdata_size, udata_size;
 
+		pool->s.flags.all = 0;
+
+		if (name == NULL) {
+			pool->s.name[0] = 0;
+		} else {
 			strncpy(pool->s.name, name,
 				ODP_BUFFER_POOL_NAME_LEN - 1);
 			pool->s.name[ODP_BUFFER_POOL_NAME_LEN - 1] = 0;
-			pool->s.pool_base_addr = base_addr;
-			pool->s.pool_size      = size;
-			pool->s.user_size      = buf_size;
-			pool->s.user_align     = buf_align;
-			pool->s.buf_type       = buf_type;
-
-			link_bufs(pool);
-
-			UNLOCK(&pool->s.lock);
+			pool->s.flags.has_name = 1;
+		}
 
-			pool_hdl = pool->s.pool_hdl;
-			break;
+		pool->s.params = *params;
+		pool->s.init_params = *init_params;
+
+		mdata_size = params->num_bufs * buf_stride;
+		udata_size = params->num_bufs * udata_stride;
+
+		/* Optimize for short buffers: Data stored in buffer hdr */
+		if (blk_size <= TAG_ALIGN)
+			block_size = 0;
+		else
+			block_size = params->num_bufs * blk_size;
+
+		pool->s.pool_size = ODP_PAGE_SIZE_ROUNDUP(mdata_size +
+							  udata_size +
+							  block_size);
+
+		if (shm == ODP_SHM_NULL) {
+			shm = odp_shm_reserve(pool->s.name,
+					      pool->s.pool_size,
+					      ODP_PAGE_SIZE, 0);
+			if (shm == ODP_SHM_INVALID) {
+				UNLOCK(&pool->s.lock);
+				return ODP_BUFFER_POOL_INVALID;
+			}
+			pool->s.pool_base_addr = odp_shm_addr(shm);
+		} else {
+			odp_shm_info_t info;
+			if (odp_shm_info(shm, &info) != 0 ||
+			    info.size < pool->s.pool_size) {
+				UNLOCK(&pool->s.lock);
+				return ODP_BUFFER_POOL_INVALID;
+			}
+			pool->s.pool_base_addr = odp_shm_addr(shm);
+			void *page_addr =
+				ODP_ALIGN_ROUNDUP_PTR(pool->s.pool_base_addr,
+						      ODP_PAGE_SIZE);
+			if (pool->s.pool_base_addr != page_addr) {
+				if (info.size < pool->s.pool_size +
+				    ((size_t)page_addr -
+				     (size_t)pool->s.pool_base_addr)) {
+					UNLOCK(&pool->s.lock);
+					return ODP_BUFFER_POOL_INVALID;
+				}
+				pool->s.pool_base_addr = page_addr;
+			}
+			pool->s.flags.user_supplied_shm = 1;
 		}
 
+		pool->s.pool_shm = shm;
+
+		/* Now safe to unlock since pool entry has been allocated */
 		UNLOCK(&pool->s.lock);
+
+		pool->s.flags.unsegmented = unsegmented;
+		pool->s.flags.zeroized = zeroized;
+		pool->s.seg_size = unsegmented ?
+			blk_size : ODP_CONFIG_BUF_SEG_SIZE;
+
+		uint8_t *udata_base_addr = pool->s.pool_base_addr + mdata_size;
+		uint8_t *block_base_addr = udata_base_addr + udata_size;
+
+		/* bufcount will decrement down to 0 as we populate freelist */
+		odp_atomic_store_u32(&pool->s.bufcount, params->num_bufs);
+		pool->s.buf_stride = buf_stride;
+		pool->s.high_wm = 0;
+		pool->s.low_wm = 0;
+		pool->s.headroom = 0;
+		pool->s.tailroom = 0;
+		_odp_atomic_ptr_store(&pool->s.buf_freelist, NULL,
+				      _ODP_MEMMODEL_RLX);
+		_odp_atomic_ptr_store(&pool->s.blk_freelist, NULL,
+				      _ODP_MEMMODEL_RLX);
+
+		uint8_t *buf = udata_base_addr - buf_stride;
+		uint8_t *udat = udata_stride == 0 ? NULL :
+			block_base_addr - udata_stride;
+
+		/* Init buffer common header and add to pool buffer freelist */
+		do {
+			odp_buffer_hdr_t *tmp =
+				(odp_buffer_hdr_t *)(void *)buf;
+
+			/* Initialize buffer metadata */
+			tmp->allocator = ODP_CONFIG_MAX_THREADS;
+			tmp->flags.all = 0;
+			tmp->flags.zeroized = zeroized;
+			tmp->size = 0;
+			odp_atomic_store_u32(&tmp->ref_count, 0);
+			tmp->type = params->buf_type;
+			tmp->pool_hdl = pool->s.pool_hdl;
+			tmp->udata_addr = (void *)udat;
+			tmp->udata_size = init_params->udata_size;
+			tmp->segcount = 0;
+			tmp->segsize = pool->s.seg_size;
+			tmp->handle.handle = odp_buffer_encode_handle(tmp);
+
+			/* Set 1st seg addr for zero-len buffers */
+			tmp->addr[0] = NULL;
+
+			/* Special case for short buffer data */
+			if (blk_size <= TAG_ALIGN) {
+				tmp->flags.hdrdata = 1;
+				if (blk_size > 0) {
+					tmp->segcount = 1;
+					tmp->addr[0] = &tmp->addr[1];
+				}
+			}
+
+			/* Push buffer onto pool's freelist */
+			ret_buf(&pool->s, tmp);
+			buf  -= buf_stride;
+			udat -= udata_stride;
+		} while (buf >= pool->s.pool_base_addr);
+
+		/* Form block freelist for pool */
+		uint8_t *blk = pool->s.pool_base_addr + pool->s.pool_size -
+			pool->s.seg_size;
+
+		if (blk_size > TAG_ALIGN)
+			do {
+				ret_blk(&pool->s, blk);
+				blk -= pool->s.seg_size;
+			} while (blk >= block_base_addr);
+
+		/* Initialize pool statistics counters */
+		odp_atomic_store_u64(&pool->s.bufallocs, 0);
+		odp_atomic_store_u64(&pool->s.buffrees, 0);
+		odp_atomic_store_u64(&pool->s.blkallocs, 0);
+		odp_atomic_store_u64(&pool->s.blkfrees, 0);
+		odp_atomic_store_u64(&pool->s.bufempty, 0);
+		odp_atomic_store_u64(&pool->s.blkempty, 0);
+		odp_atomic_store_u64(&pool->s.high_wm_count, 0);
+		odp_atomic_store_u64(&pool->s.low_wm_count, 0);
+
+		pool_hdl = pool->s.pool_hdl;
+		break;
 	}
 
 	return pool_hdl;
@@ -431,145 +352,170 @@  odp_buffer_pool_t odp_buffer_pool_lookup(const char *name)
 	return ODP_BUFFER_POOL_INVALID;
 }
 
-
-odp_buffer_t odp_buffer_alloc(odp_buffer_pool_t pool_hdl)
+int odp_buffer_pool_info(odp_buffer_pool_t pool_hdl,
+			 odp_shm_t *shm,
+			 odp_buffer_pool_info_t *info)
 {
-	pool_entry_t *pool;
-	odp_buffer_chunk_hdr_t *chunk;
-	odp_buffer_bits_t handle;
 	uint32_t pool_id = pool_handle_to_index(pool_hdl);
+	pool_entry_t *pool = get_pool_entry(pool_id);
 
-	pool  = get_pool_entry(pool_id);
-	chunk = local_chunk[pool_id];
+	if (pool == NULL || info == NULL)
+		return -1;
 
-	if (chunk == NULL) {
-		LOCK(&pool->s.lock);
-		chunk = rem_chunk(pool);
-		UNLOCK(&pool->s.lock);
+	*shm = pool->s.flags.user_supplied_shm ?
+		pool->s.pool_shm : ODP_SHM_NULL;
+	info->name = pool->s.name;
+	info->params.buf_size  = pool->s.params.buf_size;
+	info->params.buf_align = pool->s.params.buf_align;
+	info->params.num_bufs  = pool->s.params.num_bufs;
+	info->params.buf_type  = pool->s.params.buf_type;
 
-		if (chunk == NULL)
-			return ODP_BUFFER_INVALID;
+	return 0;
+}
 
-		local_chunk[pool_id] = chunk;
-	}
+int odp_buffer_pool_destroy(odp_buffer_pool_t pool_hdl)
+{
+	uint32_t pool_id = pool_handle_to_index(pool_hdl);
+	pool_entry_t *pool = get_pool_entry(pool_id);
 
-	if (chunk->chunk.num_bufs == 0) {
-		/* give the chunk buffer */
-		local_chunk[pool_id] = NULL;
-		chunk->buf_hdr.type = pool->s.buf_type;
+	if (pool == NULL)
+		return -1;
 
-		handle = chunk->buf_hdr.handle;
-	} else {
-		odp_buffer_hdr_t *hdr;
-		uint32_t index;
-		index = rem_buf_index(chunk);
-		hdr = index_to_hdr(pool, index);
+	LOCK(&pool->s.lock);
 
-		handle = hdr->handle;
+	if (pool->s.pool_shm == ODP_SHM_INVALID ||
+	    odp_atomic_load_u32(&pool->s.bufcount) > 0 ||
+	    pool->s.flags.predefined) {
+		UNLOCK(&pool->s.lock);
+		return -1;
 	}
 
-	return handle.u32;
-}
+	if (!pool->s.flags.user_supplied_shm)
+		odp_shm_free(pool->s.pool_shm);
 
+	pool->s.pool_shm = ODP_SHM_INVALID;
+	UNLOCK(&pool->s.lock);
 
-void odp_buffer_free(odp_buffer_t buf)
-{
-	odp_buffer_hdr_t *hdr;
-	uint32_t pool_id;
-	pool_entry_t *pool;
-	odp_buffer_chunk_hdr_t *chunk_hdr;
-
-	hdr       = odp_buf_to_hdr(buf);
-	pool_id   = pool_handle_to_index(hdr->pool_hdl);
-	pool      = get_pool_entry(pool_id);
-	chunk_hdr = local_chunk[pool_id];
+	return 0;
+}
 
-	if (chunk_hdr && chunk_hdr->chunk.num_bufs == ODP_BUFS_PER_CHUNK - 1) {
-		/* Current chunk is full. Push back to the pool */
-		LOCK(&pool->s.lock);
-		add_chunk(pool, chunk_hdr);
-		UNLOCK(&pool->s.lock);
-		chunk_hdr = NULL;
+odp_buffer_t buffer_alloc(odp_buffer_pool_t pool_hdl, size_t size)
+{
+	pool_entry_t *pool = odp_pool_to_entry(pool_hdl);
+	size_t totsize = pool->s.headroom + size + pool->s.tailroom;
+	odp_anybuf_t *buf;
+	uint8_t *blk;
+
+	if ((pool->s.flags.unsegmented && totsize > pool->s.seg_size) ||
+	    (!pool->s.flags.unsegmented && totsize > ODP_CONFIG_BUF_MAX_SIZE))
+		return ODP_BUFFER_INVALID;
+
+	buf = (odp_anybuf_t *)(void *)get_buf(&pool->s);
+
+	if (buf == NULL)
+		return ODP_BUFFER_INVALID;
+
+	/* Get blocks for this buffer, if pool uses application data */
+	if (!buf->buf.flags.hdrdata)
+		do {
+			blk = get_blk(&pool->s);
+			if (blk == NULL) {
+				ret_buf(&pool->s, &buf->buf);
+				return ODP_BUFFER_INVALID;
+			}
+			buf->buf.addr[buf->buf.segcount++] = blk;
+			totsize -= pool->s.seg_size;
+		} while ((ssize_t)totsize > 0);
+
+	/* By default, buffers inherit their pool's zeroization setting */
+	buf->buf.flags.zeroized = pool->s.flags.zeroized;
+
+	if (buf->buf.type == ODP_BUFFER_TYPE_PACKET) {
+		packet_init(pool, &buf->pkt, size);
+
+		if (pool->s.init_params.buf_init != NULL)
+			(*pool->s.init_params.buf_init)
+				(buf->buf.handle.handle,
+				 pool->s.init_params.buf_init_arg);
 	}
 
-	if (chunk_hdr == NULL) {
-		/* Use this buffer */
-		chunk_hdr = (odp_buffer_chunk_hdr_t *)hdr;
-		local_chunk[pool_id] = chunk_hdr;
-		chunk_hdr->chunk.num_bufs = 0;
-	} else {
-		/* Add to current chunk */
-		add_buf_index(chunk_hdr, hdr->index);
-	}
+	return odp_hdr_to_buf(&buf->buf);
 }
 
-
-odp_buffer_pool_t odp_buffer_pool(odp_buffer_t buf)
+odp_buffer_t odp_buffer_alloc(odp_buffer_pool_t pool_hdl)
 {
-	odp_buffer_hdr_t *hdr;
-
-	hdr = odp_buf_to_hdr(buf);
-	return hdr->pool_hdl;
+	return buffer_alloc(pool_hdl,
+			    odp_pool_to_entry(pool_hdl)->s.params.buf_size);
 }
 
+void odp_buffer_free(odp_buffer_t buf)
+{
+	odp_buffer_hdr_t *buf_hdr = odp_buf_to_hdr(buf);
+	pool_entry_t *pool = odp_buf_to_pool(buf_hdr);
+	ret_buf(&pool->s, buf_hdr);
+}
 
 void odp_buffer_pool_print(odp_buffer_pool_t pool_hdl)
 {
 	pool_entry_t *pool;
-	odp_buffer_chunk_hdr_t *chunk_hdr;
-	uint32_t i;
 	uint32_t pool_id;
 
 	pool_id = pool_handle_to_index(pool_hdl);
 	pool    = get_pool_entry(pool_id);
 
-	ODP_PRINT("Pool info\n");
-	ODP_PRINT("---------\n");
-	ODP_PRINT("  pool          %i\n",           pool->s.pool_hdl);
-	ODP_PRINT("  name          %s\n",           pool->s.name);
-	ODP_PRINT("  pool base     %p\n",           pool->s.pool_base_addr);
-	ODP_PRINT("  buf base      0x%"PRIxPTR"\n", pool->s.buf_base);
-	ODP_PRINT("  pool size     0x%"PRIx64"\n",  pool->s.pool_size);
-	ODP_PRINT("  buf size      %zu\n",          pool->s.user_size);
-	ODP_PRINT("  buf align     %zu\n",          pool->s.user_align);
-	ODP_PRINT("  hdr size      %zu\n",          pool->s.hdr_size);
-	ODP_PRINT("  alloc size    %zu\n",          pool->s.buf_size);
-	ODP_PRINT("  offset to hdr %zu\n",          pool->s.buf_offset);
-	ODP_PRINT("  num bufs      %"PRIu64"\n",    pool->s.num_bufs);
-	ODP_PRINT("  free bufs     %"PRIu64"\n",    pool->s.free_bufs);
-
-	/* first chunk */
-	chunk_hdr = pool->s.head;
-
-	if (chunk_hdr == NULL) {
-		ODP_ERR("  POOL EMPTY\n");
-		return;
-	}
-
-	ODP_PRINT("\n  First chunk\n");
-
-	for (i = 0; i < chunk_hdr->chunk.num_bufs - 1; i++) {
-		uint32_t index;
-		odp_buffer_hdr_t *hdr;
-
-		index = chunk_hdr->chunk.buf_index[i];
-		hdr   = index_to_hdr(pool, index);
-
-		ODP_PRINT("  [%i] addr %p, id %"PRIu32"\n", i, hdr->addr,
-			  index);
-	}
-
-	ODP_PRINT("  [%i] addr %p, id %"PRIu32"\n", i, chunk_hdr->buf_hdr.addr,
-		  chunk_hdr->buf_hdr.index);
-
-	/* next chunk */
-	chunk_hdr = next_chunk(pool, chunk_hdr);
+	uint32_t bufcount  = odp_atomic_load_u32(&pool->s.bufcount);
+	uint32_t blkcount  = odp_atomic_load_u32(&pool->s.blkcount);
+	uint64_t bufallocs = odp_atomic_load_u64(&pool->s.bufallocs);
+	uint64_t buffrees  = odp_atomic_load_u64(&pool->s.buffrees);
+	uint64_t blkallocs = odp_atomic_load_u64(&pool->s.blkallocs);
+	uint64_t blkfrees  = odp_atomic_load_u64(&pool->s.blkfrees);
+	uint64_t bufempty  = odp_atomic_load_u64(&pool->s.bufempty);
+	uint64_t blkempty  = odp_atomic_load_u64(&pool->s.blkempty);
+	uint64_t hiwmct    = odp_atomic_load_u64(&pool->s.high_wm_count);
+	uint64_t lowmct    = odp_atomic_load_u64(&pool->s.low_wm_count);
+
+	ODP_DBG("Pool info\n");
+	ODP_DBG("---------\n");
+	ODP_DBG("  pool           %i\n", pool->s.pool_hdl);
+	ODP_DBG("  name           %s\n",
+		pool->s.flags.has_name ? pool->s.name : "Unnamed Pool");
+	ODP_DBG("  pool type      %s\n",
+		pool->s.params.buf_type == ODP_BUFFER_TYPE_RAW ? "raw" :
+	       (pool->s.params.buf_type == ODP_BUFFER_TYPE_PACKET ? "packet" :
+	       (pool->s.params.buf_type == ODP_BUFFER_TYPE_TIMEOUT ? "timeout" :
+	       (pool->s.params.buf_type == ODP_BUFFER_TYPE_ANY ? "any" :
+		"unknown"))));
+	ODP_DBG("  pool storage   %sODP managed\n",
+		pool->s.flags.user_supplied_shm ?
+		"application provided, " : "");
+	ODP_DBG("  pool status    %s\n",
+		pool->s.flags.quiesced ? "quiesced" : "active");
+	ODP_DBG("  pool opts      %s, %s, %s\n",
+		pool->s.flags.unsegmented ? "unsegmented" : "segmented",
+		pool->s.flags.zeroized ? "zeroized" : "non-zeroized",
+		pool->s.flags.predefined  ? "predefined" : "created");
+	ODP_DBG("  pool base      %p\n",  pool->s.pool_base_addr);
+	ODP_DBG("  pool size      %zu (%zu pages)\n",
+		pool->s.pool_size, pool->s.pool_size / ODP_PAGE_SIZE);
+	ODP_DBG("  udata size     %zu\n", pool->s.init_params.udata_size);
+	ODP_DBG("  buf size       %zu\n", pool->s.params.buf_size);
+	ODP_DBG("  num bufs       %u\n",  pool->s.params.num_bufs);
+	ODP_DBG("  bufs in use    %u\n",  bufcount);
+	ODP_DBG("  buf allocs     %"PRIu64"\n", bufallocs);
+	ODP_DBG("  buf frees      %"PRIu64"\n", buffrees);
+	ODP_DBG("  buf empty      %"PRIu64"\n", bufempty);
+	ODP_DBG("  blk size       %zu\n",
+		pool->s.seg_size > TAG_ALIGN ? pool->s.seg_size : 0);
+	ODP_DBG("  blks available %u\n",  blkcount);
+	ODP_DBG("  blk allocs     %"PRIu64"\n", blkallocs);
+	ODP_DBG("  blk frees      %"PRIu64"\n", blkfrees);
+	ODP_DBG("  blk empty      %"PRIu64"\n", blkempty);
+	ODP_DBG("  high wm count  %"PRIu64"\n", hiwmct);
+	ODP_DBG("  low wm count   %"PRIu64"\n", lowmct);
+}
 
-	if (chunk_hdr) {
-		ODP_PRINT("  Next chunk\n");
-		ODP_PRINT("  addr %p, id %"PRIu32"\n", chunk_hdr->buf_hdr.addr,
-			  chunk_hdr->buf_hdr.index);
-	}
 
-	ODP_PRINT("\n");
+odp_buffer_pool_t odp_buffer_pool(odp_buffer_t buf)
+{
+	return odp_buf_to_hdr(buf)->pool_hdl;
 }
diff --git a/platform/linux-generic/odp_packet.c b/platform/linux-generic/odp_packet.c
index f8fd8ef..8deae3d 100644
--- a/platform/linux-generic/odp_packet.c
+++ b/platform/linux-generic/odp_packet.c
@@ -23,17 +23,9 @@  static inline uint8_t parse_ipv6(odp_packet_hdr_t *pkt_hdr,
 void odp_packet_init(odp_packet_t pkt)
 {
 	odp_packet_hdr_t *const pkt_hdr = odp_packet_hdr(pkt);
-	const size_t start_offset = ODP_FIELD_SIZEOF(odp_packet_hdr_t, buf_hdr);
-	uint8_t *start;
-	size_t len;
-
-	start = (uint8_t *)pkt_hdr + start_offset;
-	len = ODP_OFFSETOF(odp_packet_hdr_t, buf_data) - start_offset;
-	memset(start, 0, len);
+	pool_entry_t *pool = odp_buf_to_pool(&pkt_hdr->buf_hdr);
 
-	pkt_hdr->l2_offset = ODP_PACKET_OFFSET_INVALID;
-	pkt_hdr->l3_offset = ODP_PACKET_OFFSET_INVALID;
-	pkt_hdr->l4_offset = ODP_PACKET_OFFSET_INVALID;
+	packet_init(pool, pkt_hdr, 0);
 }
 
 odp_packet_t odp_packet_from_buffer(odp_buffer_t buf)
@@ -63,7 +55,7 @@  uint8_t *odp_packet_addr(odp_packet_t pkt)
 
 uint8_t *odp_packet_data(odp_packet_t pkt)
 {
-	return odp_packet_addr(pkt) + odp_packet_hdr(pkt)->frame_offset;
+	return odp_packet_addr(pkt) + odp_packet_hdr(pkt)->headroom;
 }
 
 
@@ -130,20 +122,13 @@  void odp_packet_set_l4_offset(odp_packet_t pkt, size_t offset)
 
 int odp_packet_is_segmented(odp_packet_t pkt)
 {
-	odp_buffer_hdr_t *buf_hdr = odp_buf_to_hdr((odp_buffer_t)pkt);
-
-	if (buf_hdr->scatter.num_bufs == 0)
-		return 0;
-	else
-		return 1;
+	return odp_packet_hdr(pkt)->buf_hdr.segcount > 1;
 }
 
 
 int odp_packet_seg_count(odp_packet_t pkt)
 {
-	odp_buffer_hdr_t *buf_hdr = odp_buf_to_hdr((odp_buffer_t)pkt);
-
-	return (int)buf_hdr->scatter.num_bufs + 1;
+	return odp_packet_hdr(pkt)->buf_hdr.segcount;
 }
 
 
@@ -169,7 +154,7 @@  void odp_packet_parse(odp_packet_t pkt, size_t len, size_t frame_offset)
 	uint8_t ip_proto = 0;
 
 	pkt_hdr->input_flags.eth = 1;
-	pkt_hdr->frame_offset = frame_offset;
+	pkt_hdr->l2_offset = frame_offset;
 	pkt_hdr->frame_len = len;
 
 	if (len > ODPH_ETH_LEN_MAX)
@@ -329,8 +314,6 @@  void odp_packet_print(odp_packet_t pkt)
 	len += snprintf(&str[len], n-len,
 			"  output_flags 0x%x\n", hdr->output_flags.all);
 	len += snprintf(&str[len], n-len,
-			"  frame_offset %u\n", hdr->frame_offset);
-	len += snprintf(&str[len], n-len,
 			"  l2_offset    %u\n", hdr->l2_offset);
 	len += snprintf(&str[len], n-len,
 			"  l3_offset    %u\n", hdr->l3_offset);
@@ -357,14 +340,13 @@  int odp_packet_copy(odp_packet_t pkt_dst, odp_packet_t pkt_src)
 	if (pkt_dst == ODP_PACKET_INVALID || pkt_src == ODP_PACKET_INVALID)
 		return -1;
 
-	if (pkt_hdr_dst->buf_hdr.size <
-		pkt_hdr_src->frame_len + pkt_hdr_src->frame_offset)
+	if (pkt_hdr_dst->buf_hdr.size < pkt_hdr_src->frame_len)
 		return -1;
 
 	/* Copy packet header */
 	start_dst = (uint8_t *)pkt_hdr_dst + start_offset;
 	start_src = (uint8_t *)pkt_hdr_src + start_offset;
-	len = ODP_OFFSETOF(odp_packet_hdr_t, buf_data) - start_offset;
+	len = sizeof(odp_packet_hdr_t) - start_offset;
 	memcpy(start_dst, start_src, len);
 
 	/* Copy frame payload */
@@ -373,13 +355,6 @@  int odp_packet_copy(odp_packet_t pkt_dst, odp_packet_t pkt_src)
 	len = pkt_hdr_src->frame_len;
 	memcpy(start_dst, start_src, len);
 
-	/* Copy useful things from the buffer header */
-	pkt_hdr_dst->buf_hdr.cur_offset = pkt_hdr_src->buf_hdr.cur_offset;
-
-	/* Create a copy of the scatter list */
-	odp_buffer_copy_scatter(odp_packet_to_buffer(pkt_dst),
-				odp_packet_to_buffer(pkt_src));
-
 	return 0;
 }
 
diff --git a/platform/linux-generic/odp_queue.c b/platform/linux-generic/odp_queue.c
index 1318bcd..b68a7c7 100644
--- a/platform/linux-generic/odp_queue.c
+++ b/platform/linux-generic/odp_queue.c
@@ -11,6 +11,7 @@ 
 #include <odp_buffer.h>
 #include <odp_buffer_internal.h>
 #include <odp_buffer_pool_internal.h>
+#include <odp_buffer_inlines.h>
 #include <odp_internal.h>
 #include <odp_shared_memory.h>
 #include <odp_schedule_internal.h>
diff --git a/platform/linux-generic/odp_schedule.c b/platform/linux-generic/odp_schedule.c
index cc84e11..a8f1938 100644
--- a/platform/linux-generic/odp_schedule.c
+++ b/platform/linux-generic/odp_schedule.c
@@ -83,8 +83,8 @@  int odp_schedule_init_global(void)
 {
 	odp_shm_t shm;
 	odp_buffer_pool_t pool;
-	void *pool_base;
 	int i, j;
+	odp_buffer_pool_param_t params;
 
 	ODP_DBG("Schedule init ... ");
 
@@ -99,20 +99,12 @@  int odp_schedule_init_global(void)
 		return -1;
 	}
 
-	shm = odp_shm_reserve("odp_sched_pool",
-			      SCHED_POOL_SIZE, ODP_CACHE_LINE_SIZE, 0);
+	params.buf_size  = sizeof(queue_desc_t);
+	params.buf_align = ODP_CACHE_LINE_SIZE;
+	params.num_bufs  = SCHED_POOL_SIZE/sizeof(queue_desc_t);
+	params.buf_type  = ODP_BUFFER_TYPE_RAW;
 
-	pool_base = odp_shm_addr(shm);
-
-	if (pool_base == NULL) {
-		ODP_ERR("Schedule init: Shm reserve failed.\n");
-		return -1;
-	}
-
-	pool = odp_buffer_pool_create("odp_sched_pool", pool_base,
-				      SCHED_POOL_SIZE, sizeof(queue_desc_t),
-				      ODP_CACHE_LINE_SIZE,
-				      ODP_BUFFER_TYPE_RAW);
+	pool = odp_buffer_pool_create("odp_sched_pool", ODP_SHM_NULL, &params);
 
 	if (pool == ODP_BUFFER_POOL_INVALID) {
 		ODP_ERR("Schedule init: Pool create failed.\n");
diff --git a/platform/linux-generic/odp_timer.c b/platform/linux-generic/odp_timer.c
index 313c713..914cb58 100644
--- a/platform/linux-generic/odp_timer.c
+++ b/platform/linux-generic/odp_timer.c
@@ -5,9 +5,10 @@ 
  */
 
 #include <odp_timer.h>
-#include <odp_timer_internal.h>
 #include <odp_time.h>
 #include <odp_buffer_pool_internal.h>
+#include <odp_buffer_inlines.h>
+#include <odp_timer_internal.h>
 #include <odp_internal.h>
 #include <odp_atomic.h>
 #include <odp_spinlock.h>
diff --git a/test/api_test/odp_timer_ping.c b/test/api_test/odp_timer_ping.c
index 7704181..1566f4f 100644
--- a/test/api_test/odp_timer_ping.c
+++ b/test/api_test/odp_timer_ping.c
@@ -319,9 +319,8 @@  int main(int argc ODP_UNUSED, char *argv[] ODP_UNUSED)
 	ping_arg_t pingarg;
 	odp_queue_t queue;
 	odp_buffer_pool_t pool;
-	void *pool_base;
 	int i;
-	odp_shm_t shm;
+	odp_buffer_pool_param_t params;
 
 	if (odp_test_global_init() != 0)
 		return -1;
@@ -334,14 +333,14 @@  int main(int argc ODP_UNUSED, char *argv[] ODP_UNUSED)
 	/*
 	 * Create message pool
 	 */
-	shm = odp_shm_reserve("msg_pool",
-			      MSG_POOL_SIZE, ODP_CACHE_LINE_SIZE, 0);
-	pool_base = odp_shm_addr(shm);
-
-	pool = odp_buffer_pool_create("msg_pool", pool_base, MSG_POOL_SIZE,
-				      BUF_SIZE,
-				      ODP_CACHE_LINE_SIZE,
-				      ODP_BUFFER_TYPE_RAW);
+
+	params.buf_size  = BUF_SIZE;
+	params.buf_align = 0;
+	params.num_bufs  = MSG_POOL_SIZE/BUF_SIZE;
+	params.buf_type  = ODP_BUFFER_TYPE_RAW;
+
+	pool = odp_buffer_pool_create("msg_pool", ODP_SHM_NULL, &params);
+
 	if (pool == ODP_BUFFER_POOL_INVALID) {
 		LOG_ERR("Pool create failed.\n");
 		return -1;
diff --git a/test/validation/odp_crypto.c b/test/validation/odp_crypto.c
index 9342aca..e329b05 100644
--- a/test/validation/odp_crypto.c
+++ b/test/validation/odp_crypto.c
@@ -31,8 +31,7 @@  CU_SuiteInfo suites[] = {
 
 int main(void)
 {
-	odp_shm_t shm;
-	void *pool_base;
+	odp_buffer_pool_param_t params;
 	odp_buffer_pool_t pool;
 	odp_queue_t out_queue;
 
@@ -42,21 +41,13 @@  int main(void)
 	}
 	odp_init_local();
 
-	shm = odp_shm_reserve("shm_packet_pool",
-			      SHM_PKT_POOL_SIZE,
-			      ODP_CACHE_LINE_SIZE, 0);
+	params.buf_size  = SHM_PKT_POOL_BUF_SIZE;
+	params.buf_align = 0;
+	params.num_bufs  = SHM_PKT_POOL_SIZE/SHM_PKT_POOL_BUF_SIZE;
+	params.buf_type  = ODP_BUFFER_TYPE_PACKET;
 
-	pool_base = odp_shm_addr(shm);
-	if (!pool_base) {
-		fprintf(stderr, "Packet pool allocation failed.\n");
-		return -1;
-	}
+	pool = odp_buffer_pool_create("packet_pool", ODP_SHM_NULL, &params);
 
-	pool = odp_buffer_pool_create("packet_pool", pool_base,
-				      SHM_PKT_POOL_SIZE,
-				      SHM_PKT_POOL_BUF_SIZE,
-				      ODP_CACHE_LINE_SIZE,
-				      ODP_BUFFER_TYPE_PACKET);
 	if (ODP_BUFFER_POOL_INVALID == pool) {
 		fprintf(stderr, "Packet pool creation failed.\n");
 		return -1;
@@ -67,20 +58,14 @@  int main(void)
 		fprintf(stderr, "Crypto outq creation failed.\n");
 		return -1;
 	}
-	shm = odp_shm_reserve("shm_compl_pool",
-			     SHM_COMPL_POOL_SIZE,
-			     ODP_CACHE_LINE_SIZE,
-			     ODP_SHM_SW_ONLY);
-	pool_base = odp_shm_addr(shm);
-	if (!pool_base) {
-		fprintf(stderr, "Completion pool allocation failed.\n");
-		return -1;
-	}
-	pool = odp_buffer_pool_create("compl_pool", pool_base,
-				      SHM_COMPL_POOL_SIZE,
-				      SHM_COMPL_POOL_BUF_SIZE,
-				      ODP_CACHE_LINE_SIZE,
-				      ODP_BUFFER_TYPE_RAW);
+
+	params.buf_size  = SHM_COMPL_POOL_BUF_SIZE;
+	params.buf_align = 0;
+	params.num_bufs  = SHM_COMPL_POOL_SIZE/SHM_COMPL_POOL_BUF_SIZE;
+	params.buf_type  = ODP_BUFFER_TYPE_RAW;
+
+	pool = odp_buffer_pool_create("compl_pool", ODP_SHM_NULL, &params);
+
 	if (ODP_BUFFER_POOL_INVALID == pool) {
 		fprintf(stderr, "Completion pool creation failed.\n");
 		return -1;
diff --git a/test/validation/odp_queue.c b/test/validation/odp_queue.c
index 09dba0e..9d0f3d7 100644
--- a/test/validation/odp_queue.c
+++ b/test/validation/odp_queue.c
@@ -16,21 +16,14 @@  static int queue_contest = 0xff;
 static int test_odp_buffer_pool_init(void)
 {
 	odp_buffer_pool_t pool;
-	void *pool_base;
-	odp_shm_t shm;
+	odp_buffer_pool_param_t params;
 
-	shm = odp_shm_reserve("msg_pool",
-			      MSG_POOL_SIZE, ODP_CACHE_LINE_SIZE, 0);
+	params.buf_size  = 0;
+	params.buf_align = ODP_CACHE_LINE_SIZE;
+	params.num_bufs  = 1024 * 10;
+	params.buf_type  = ODP_BUFFER_TYPE_RAW;
 
-	pool_base = odp_shm_addr(shm);
-
-	if (NULL == pool_base) {
-		printf("Shared memory reserve failed.\n");
-		return -1;
-	}
-
-	pool = odp_buffer_pool_create("msg_pool", pool_base, MSG_POOL_SIZE, 0,
-				      ODP_CACHE_LINE_SIZE, ODP_BUFFER_TYPE_RAW);
+	pool = odp_buffer_pool_create("msg_pool", ODP_SHM_NULL, &params);
 
 	if (ODP_BUFFER_POOL_INVALID == pool) {
 		printf("Pool create failed.\n");