diff mbox

Add API odp_buffer_pool_delete

Message ID 1411676727128.92858@caviumnetworks.com
State New
Headers show

Commit Message

Rosenboim, Leonid Sept. 25, 2014, 8:25 p.m. UTC
Ola,


Do you have any specific platform in mind when writing this remark?


I am having some difficulty wrapping my head around this remark, because it is
hard for me to imagine how a buffer pool can keep track of free buffers yet be
unable to tell how many free buffers are currently available. If it cannot
track the number of free buffers, how would it know that it is empty and
refuse an allocation request?

Alternatively, if a pool implements a counter of free buffers rather than
allocated (outstanding) buffers, all that is needed to implement the
functionality I proposed is to save the buffer count that was supplied at
creation time and compare it with the free buffer count, presto.

In other words, what use is a buffer pool if it can't even track the number of
buffers it has to offer for allocation?
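The check described above can be sketched in C roughly as follows. This is a
hypothetical illustration, not the ODP implementation: the structure and field
names (`pool_t`, `num_bufs`, `free_bufs`, `pool_destroy_if_idle`) are invented
for the example.

```c
#include <stddef.h>

/* Hypothetical pool bookkeeping; field names are illustrative only. */
typedef struct {
    size_t num_bufs;   /* buffer count saved at pool creation time */
    size_t free_bufs;  /* buffers currently available for allocation */
} pool_t;

/* Destroy only when every buffer has been returned to the pool;
 * otherwise refuse and leave the pool untouched. */
int pool_destroy_if_idle(pool_t *pool)
{
    if (pool->free_bufs != pool->num_bufs)
        return -1;  /* buffers still outstanding */
    /* ... release pool storage here ... */
    pool->num_bufs = 0;
    pool->free_bufs = 0;
    return 0;
}
```

With a free-buffer counter already in place, the emptiness check is a single
comparison against the saved creation-time count.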


- Leo

Comments

Ola Liljedahl Sept. 26, 2014, 9:25 a.m. UTC | #1
No, no specific platform in mind. But maybe I have more of the SW
perspective in my thinking.
I see a difference in maintaining actual counters of free/used buffers and
maintaining "lists" (however they are implemented) of free buffers that can
be used for allocation. Of course the buffer manager must know which
buffers are free and thus can be allocated.
Bill's proposal was to specifically maintain free/used counters which I
object to.
By traversing its list of free buffers, a buffer manager could find out how
many buffers are free. But such an operation could take time. Imagine SW
traversing a linked list, lots of cache misses.
So a buffer manager should be able to report that buffer pool destroy
cannot succeed because buffers are in use (and you don't need to know the
actual number of used buffers). But I don't expect a buffer manager
necessarily to have an API that reports the number of used and free
buffers. If we should define such an API, it has to be marked as a
potential performance killer. Forcing a (SW) buffer pool implementation to
maintain SW-managed counters of free and used buffers is not a good idea.
Besides, you want the check for used buffers and the destroy operation
to be atomic; a separate API for returning the counters would not maintain
that atomicity, and we would be back in the situation where we might destroy a
buffer pool with buffers in use, with the corresponding undefined behavior.

-- Ola
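The atomicity point above could be sketched like this: a hypothetical SW pool
that marks each buffer free or in use and performs the check and the teardown
under one lock, so no allocation can slip in between them. All names here
(`flag_pool_t`, `flag_pool_destroy`, the `in_use` array) are invented for the
example and are not part of the ODP API.

```c
#include <pthread.h>
#include <stdbool.h>
#include <stddef.h>

/* Hypothetical pool with one in-use flag per buffer. */
typedef struct {
    pthread_mutex_t lock;
    size_t num_bufs;
    bool *in_use;     /* in_use[i] is true while buffer i is allocated */
    bool destroyed;
} flag_pool_t;

/* Atomically verify that no buffer is in use, then mark the pool
 * destroyed. Holding the lock across both steps prevents an alloc
 * from racing in between the check and the teardown. */
int flag_pool_destroy(flag_pool_t *pool)
{
    int ret = 0;

    pthread_mutex_lock(&pool->lock);
    for (size_t i = 0; i < pool->num_bufs; i++) {
        if (pool->in_use[i]) {
            ret = -1;  /* buffers still outstanding; refuse */
            break;
        }
    }
    if (ret == 0)
        pool->destroyed = true;
    pthread_mutex_unlock(&pool->lock);
    return ret;
}
```

Note the traversal cost Ola mentions: destroy is O(n) over the pool, which is
acceptable for a rare teardown path but would not be for per-alloc counters.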

On 25 September 2014 22:25, Rosenboim, Leonid <
Leonid.Rosenboim@caviumnetworks.com> wrote:

> [snip]
>
>  ------------------------------
> *From:* Ola Liljedahl <ola.liljedahl@linaro.org>
> *Sent:* Thursday, September 25, 2014 10:53 AM
> *To:* Bill Fischofer
> *Cc:* Rosenboim, Leonid; lng-odp-forward
>
> *Subject:* Re: [lng-odp] [PATCH] Add API odp_buffer_pool_delete
>
>  Free/use counters updated on every alloc and free call might not be
> possible to support, e.g. for HW buffer pools which allow allocation from
> HW blocks. Also SW implementations of (shared) counters are not scalable.
> Better to attempt to check if any buffers are still in use when buffer pool
> destruction is attempted. Perhaps buffers should be marked as free or in
> use and odp_buffer_pool_destroy() could traverse the buffer pool and check
> the flags.
>
> On 25 September 2014 17:55, Bill Fischofer <bill.fischofer@linaro.org>
> wrote:
>
>> That's an interesting suggestion and something that should be relatively
>> easy to implement since one of the pieces of buffer pool meta data could be
>> an allocation count that is incremented on each buffer_alloc() and
>> decremented on each buffer_free().
>>
>>  Bill
>>
>>  On Thu, Sep 25, 2014 at 10:49 AM, Rosenboim, Leonid <
>> Leonid.Rosenboim@caviumnetworks.com> wrote:
>>
>>>   How about checking that all buffers have been freed to the pool prior
>>> to destroying the pool itself ?
>>>
>>>
>>>  Also, since odp_buffer_pool_create() gets memory from the caller, the
>>> destroy description should spell out that the caller is also responsible
>>> for freeing this memory, to completely reverse the action.
>>>
>>>
>>>  Then, if some buffers have not been returned, what to do ?
>>>
>>>
>>>  One option is to warn and proceed, another option is to bail and
>>> return an error, a third option is to mark the pool as destined for
>>> deletion, which prevents new buffers from being doled out, but allows
>>> buffer frees, and trigger an automatic pool destruction when the last
>>> buffer is returned.
>>>
>>>
>>>  The risk of proceeding with destruction while some buffers are
>>> outstanding is potential memory corruption that is very hard to debug.
>>>
>>>
>>>  ------------------------------
>>> *From:* lng-odp-bounces@lists.linaro.org <
>>> lng-odp-bounces@lists.linaro.org> on behalf of Anders Roxell <
>>> anders.roxell@linaro.org>
>>> *Sent:* Thursday, September 25, 2014 5:48 AM
>>> *To:* lng-odp-forward
>>> *Subject:* Re: [lng-odp] [PATCH] Add API odp_buffer_pool_delete
>>>
>>>
>>>
>>> On 24 September 2014 22:12, Anders Roxell <anders.roxell@linaro.org>
>>> wrote:
>>>
>>>> Signed-off-by: Anders Roxell <anders.roxell@linaro.org>
>>>> ---
>>>>  .../linux-generic/include/api/odp_buffer_pool.h    |  9 ++++++++
>>>>  platform/linux-generic/odp_buffer_pool.c           | 25
>>>> ++++++++++++++++++++++
>>>>  2 files changed, 34 insertions(+)
>>>>
>>>> diff --git a/platform/linux-generic/include/api/odp_buffer_pool.h
>>>> b/platform/linux-generic/include/api/odp_buffer_pool.h
>>>> index fe88898..fca2c62 100644
>>>> --- a/platform/linux-generic/include/api/odp_buffer_pool.h
>>>> +++ b/platform/linux-generic/include/api/odp_buffer_pool.h
>>>> @@ -34,6 +34,15 @@ typedef uint32_t odp_buffer_pool_t;
>>>>
>>>>
>>>>  /**
>>>> + * Delete a buffer pool
>>>> + *
>>>> + * @param pool_hdl Buffer pool handle
>>>> + *
>>>> + * @return 0 if successful else -1
>>>> + */
>>>> +int odp_buffer_pool_delete(odp_buffer_pool_t pool_hdl);
>>>> +
>>>> +/**
>>>>   * Create a buffer pool
>>>>   *
>>>>   * @param name      Name of the pool (max ODP_BUFFER_POOL_NAME_LEN - 1
>>>> chars)
>>>> diff --git a/platform/linux-generic/odp_buffer_pool.c
>>>> b/platform/linux-generic/odp_buffer_pool.c
>>>> index 4d9ff45..7ba793d 100644
>>>> --- a/platform/linux-generic/odp_buffer_pool.c
>>>> +++ b/platform/linux-generic/odp_buffer_pool.c
>>>> @@ -375,6 +375,31 @@ static void link_bufs(pool_entry_t *pool)
>>>>         }
>>>>  }
>>>>
>>>> +int odp_buffer_pool_delete(odp_buffer_pool_t pool_hdl)
>>>> +{
>>>> +       pool_entry_t *pool;
>>>> +       if (odp_unlikely(ODP_CONFIG_BUFFER_POOLS <= pool_hdl))
>>>> +               ODP_ERR("set_handle: Bad pool handle %u\n", pool_hdl);
>>>> +
>>>> +       pool = get_pool_entry(pool_hdl);
>>>> +       pool->s.free_bufs = 0;
>>>> +       strcpy(pool->s.name, "");
>>>> +       /* Need to restore this because this is setup in
>>>> +        * odp_buffer_pool_init_global
>>>> +        * */
>>>> +       pool->s.pool_hdl = pool_hdl;
>>>> +       pool->s.buf_base = 0;
>>>> +       pool->s.buf_size = 0;
>>>> +       pool->s.buf_offset = 0;
>>>> +       pool->s.num_bufs = 0;
>>>> +       pool->s.pool_base_addr = NULL;
>>>> +       pool->s.pool_size = 0;
>>>> +       pool->s.user_size = 0;
>>>> +       pool->s.user_align = 0;
>>>> +       pool->s.buf_type = ODP_BUFFER_POOL_INVALID;
>>>> +       pool->s.hdr_size = 0;
>>>> +       return 0;
>>>> +}
>>>>
>>>>  odp_buffer_pool_t odp_buffer_pool_create(const char *name,
>>>>                                          void *base_addr, uint64_t size,
>>>> --
>>>> 2.1.0
>>>>
>>>>
>>>
>>>  Hi all,
>>>
>>>  I had a chat with Ola and he proposed destroy instead of delete
>>>
>>>  I think I like odp_buffer_pool_destroy more than what I've done in the
>>> patch odp_buffer_pool_delete... =)
>>>
>>>  Comments?
>>>
>>>  Cheers,
>>> Anders
>>>
>>>  _______________________________________________
>>> lng-odp mailing list
>>> lng-odp@lists.linaro.org
>>> http://lists.linaro.org/mailman/listinfo/lng-odp
>>>
>>>
>>
>>
>>
>
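The third option Leonid raises in the quoted thread above, marking the pool as
doomed so that no new buffers are doled out and the pool is torn down when the
last buffer comes back, could look roughly like this. It is a hypothetical
sketch (names `deferred_pool_t`, `deferred_alloc`, etc. are invented), and it
relies on exactly the kind of SW-managed outstanding counter that Ola cautions
against on the fast path.

```c
#include <stdbool.h>
#include <stddef.h>

/* Hypothetical pool supporting deferred destruction. */
typedef struct {
    size_t outstanding;  /* buffers currently allocated */
    bool doomed;         /* destroy requested, pending last free */
    bool destroyed;
} deferred_pool_t;

/* Refuse new allocations once destruction has been requested. */
int deferred_alloc(deferred_pool_t *pool)
{
    if (pool->doomed)
        return -1;
    pool->outstanding++;
    return 0;
}

/* Freeing the last outstanding buffer completes a pending destroy. */
void deferred_free(deferred_pool_t *pool)
{
    pool->outstanding--;
    if (pool->doomed && pool->outstanding == 0)
        pool->destroyed = true;
}

/* Destroy immediately if idle, otherwise defer to the last free. */
void deferred_destroy(deferred_pool_t *pool)
{
    pool->doomed = true;
    if (pool->outstanding == 0)
        pool->destroyed = true;
}
```

This avoids undefined behavior from destroying a pool with buffers in use, at
the cost of maintaining a counter on every alloc and free.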
Bill Fischofer Sept. 26, 2014, 10:39 a.m. UTC | #2
From the ODP Buffer Management API Design doc:
odp_buffer_pool_destroy

/**
* Destroy a buffer pool previously created by odp_buffer_pool_create()
*
* @param pool[in]    Handle of the buffer pool to be destroyed
*
* @return Success or ODP_BUFFER_POOL_INVALID if pool is unknown
*                 or ODP_BUFFER_POOL_PREDEFINED if pool is predefined
*/
int odp_buffer_pool_destroy (odp_buffer_pool_t pool);

This routine destroys a previously created buffer pool.  Note that attempts
to destroy a predefined buffer pool will be rejected since the application
did not create it.  Results are undefined if an attempt is made to destroy
a buffer pool that contains allocated or otherwise active buffers.

--------

Is this not sufficient for ODP v1.0?

Bill

On Fri, Sep 26, 2014 at 4:25 AM, Ola Liljedahl <ola.liljedahl@linaro.org>
wrote:

> [snip]
Anders Roxell Sept. 26, 2014, 10:53 a.m. UTC | #3
On 2014-09-26 05:39, Bill Fischofer wrote:
> [snip]

I think this discussion is interesting and important. However, I
propose that we continue the general discussion in a new mail thread so that
we can come to a conclusion about the patch.

For the patch I propose that we only change s/delete/destroy/. This means we
can get the patch applied, and the unit tests along with it, so more people
can get involved and contribute to the unit tests.


Cheers,
Anders

> [snip]
Bill Fischofer Sept. 26, 2014, 10:57 a.m. UTC | #4
That's a good suggestion.  +1

On Fri, Sep 26, 2014 at 5:53 AM, Anders Roxell <anders.roxell@linaro.org>
wrote:

> On 2014-09-26 05:39, Bill Fischofer wrote:
> > From the ODP Buffer Management API Design doc:
> > odp_buffer_pool_destroy
> >
> > /**
> > * Destroy a buffer pool previously created by odp_buffer_pool_create()
> > *
> > * @param pool[in]    Handle of the buffer pool to be destroyed
> > *
> > * @return Success or ODP_BUFFER_POOL_INVALID if pool is unknown
> > *                 or ODP_BUFFER_POOL_PREDEFINED if pool is predefined
> > */
> > int odp_buffer_pool_destroy (odp_buffer_pool_t pool);
> >
> > This routine destroys a previously created buffer pool.  Note that
> attempts
> > to destroy a predefined buffer pool will be rejected since the
> application
> > did not create it.  Results are undefined if an attempt is made to
> destroy
> > a buffer pool that contains allocated or otherwise active buffers.
>
> I think this discussion is interesting and important. However, I
> propose that we continue the general discussion in a new mail thread so
> that
> we can come to a conclusion about the patch.
>
> For the patch I propose that we only change s/delete/destroy/ . This means
> that
> we can get the patch applied + that we can get the unit tests applied. So
> more
> people can get involved and contribute to the unit tests.
>
>
> Cheers,
> Anders
>
> >
> > --------
> >
> > Is this not sufficient for ODP v1.0?
> >
> > Bill
> >
> > On Fri, Sep 26, 2014 at 4:25 AM, Ola Liljedahl <ola.liljedahl@linaro.org
> >
> > wrote:
> >
> > > No, no specific platform in mind. But maybe I have more of the SW
> > > perspective in my thinking.
> > > I see a difference in maintaining actual counters of free/used buffers
> and
> > > maintaining "lists" (however they are implemented) of free buffers
> that can
> > > be used for allocation. Of course the buffer manager must know which
> > > buffers are free and thus can be allocated.
> > > Bill's proposal was to specifically maintain free/used counters which I
> > > object to.
> > > By traversing its list of free buffers, a buffer manager could find out
> > > how many buffers are free. But such an operation could take time.
> Imagine
> > > SW traversing a linked list, lots of cache misses.
> > > So a buffer manager should be able to report that buffer pool destroy
> > > cannot succeed because buffers are in use (and you don't need to know
> the
> > > actual number of used buffers). But I don't expect a buffer manager
> > > necessarily to have an API that reports the number of used and free
> > > buffers. If we should define such an API, it has to be marked as a
> > > potential performance killer. Forcing a (SW) buffer pool
> implementation to
> > > maintain SW-managed counters of free and user buffers is not a good
> idea.
> > > And besides, you want the check for used buffers and the destroy
> operation
> > > to be atomic, a separate API for returning the counters would not
> maintain
> > > the atomicity and we would return to the situation where we might
> destroy a
> > > buffer poll with buffers in use and corresponding undefined behavior.
> > >
> > > -- Ola
> > >
> > > On 25 September 2014 22:25, Rosenboim, Leonid <
> > > Leonid.Rosenboim@caviumnetworks.com> wrote:
> > >
> > >>  Ola,
> > >>
> > >>
> > >>  Do you have any specific platform in mind when writing this remark ?
> > >>
> > >>
> > >>  I am having some difficulty wrapping my head around this remark,
> > >>
> > >> because it is hard for me to imagine how a buffer pool can keep track
> of
> > >> free buffers,
> > >>
> > >> yet unable to tell how many such free buffers currently available in
> > >> storage.
> > >>
> > >> If it can not track the number of free buffers, how would it know to
> tell
> > >> that it is
> > >>
> > >> empty and refuse an allocation request.
> > >>
> > >>
> > >>  Alternatively, if a pool implements a counter of free buffers rather
> > >> than allocated (outstanding) buffers, all that is needed to implement
> the
> > >> functionality I proposed, os to save the buffer count that was
> supplied at
> > >> creation time, and compare that with free buffer count, presto.
> > >>
> > >>
> > >>  In other words, what use is a buffer pool if it cant even track the
> > >> number of buffers it has to offer for allocation ?
> > >>
> > >>
> > >>  - Leo
> > >>
> > >>
> > >>  ------------------------------
> > >> *From:* Ola Liljedahl <ola.liljedahl@linaro.org>
> > >> *Sent:* Thursday, September 25, 2014 10:53 AM
> > >> *To:* Bill Fischofer
> > >> *Cc:* Rosenboim, Leonid; lng-odp-forward
> > >>
> > >> *Subject:* Re: [lng-odp] [PATCH] Add API odp_buffer_pool_delete
> > >>
> > >>  Free/use counters updated on every alloc and free call might not be
> > >> possible to support, e.g. for HW buffer pools which allow allocation
> from
> > >> HW blocks. Also SW implementations of (shared) counters are not
> scalable.
> > >> Better to attempt to check if any buffers are still in use when
> buffer pool
> > >> destruction is attempted. Perhaps buffers should be marked as free or
> in
> > >> use and odp_buffer_pool_destroy() could traverse the buffer pool and
> check
> > >> the flags.
> > >>
> > >> On 25 September 2014 17:55, Bill Fischofer <bill.fischofer@linaro.org
> >
> > >> wrote:
> > >>
> > >>> That's an interesting suggestion and something that should be
> relatively
> > >>> easy to implement since one of the pieces of buffer pool meta data
> could be
> > >>> an allocation count that is incremented on each buffer_alloc() and
> > >>> decremented on each buffer_free().
> > >>>
> > >>>  Bill
> > >>>
> > >>>  On Thu, Sep 25, 2014 at 10:49 AM, Rosenboim, Leonid <
> > >>> Leonid.Rosenboim@caviumnetworks.com> wrote:
> > >>>
> > >>>>   How about checking that all buffers have been freed to the pool
> > >>>> prior to destroying the pool itself ?
> > >>>>
> > >>>>
> > >>>>  Also, since odp_buffer_pool_create() gets memory from the caller,
> the
> > >>>> destroy description should spell out that the caller is also
> responsable
> > >>>> for freeing this memory for complete reverse of the action.
> > >>>>
> > >>>>
> > >>>>  Then, if some buffers have not been returned, what to do ?
> > >>>>
> > >>>>
> > >>>>  One option is to warn and proceed, another option is to bail and
> > >>>> return an error, a third option is to mark the pool as destined for
> > >>>> deletion, which prevents new buffers from being doled out, but
> allows
> > >>>> buffer frees, and trigger an automatic pool destruction when the
> last
> > >>>> buffer is returned.
> > >>>>
> > >>>>
> > >>>>  The risk of proceeding with destruction while some buffers are
> > >>>> outstanding is potential memory corruption that is very hard to
> debug.
> > >>>>
> > >>>>
> > >>>>  ------------------------------
> > >>>> *From:* lng-odp-bounces@lists.linaro.org <
> > >>>> lng-odp-bounces@lists.linaro.org> on behalf of Anders Roxell <
> > >>>> anders.roxell@linaro.org>
> > >>>> *Sent:* Thursday, September 25, 2014 5:48 AM
> > >>>> *To:* lng-odp-forward
> > >>>> *Subject:* Re: [lng-odp] [PATCH] Add API odp_buffer_pool_delete
> > >>>>
> > >>>>
> > >>>>
> > >>>> On 24 September 2014 22:12, Anders Roxell <anders.roxell@linaro.org
> >
> > >>>> wrote:
> > >>>>
> > >>>>> Signed-off-by: Anders Roxell <anders.roxell@linaro.org>
> > >>>>> ---
> > >>>>>  .../linux-generic/include/api/odp_buffer_pool.h    |  9 ++++++++
> > >>>>>  platform/linux-generic/odp_buffer_pool.c           | 25
> > >>>>> ++++++++++++++++++++++
> > >>>>>  2 files changed, 34 insertions(+)
> > >>>>>
> > >>>>> diff --git a/platform/linux-generic/include/api/odp_buffer_pool.h
> > >>>>> b/platform/linux-generic/include/api/odp_buffer_pool.h
> > >>>>> index fe88898..fca2c62 100644
> > >>>>> --- a/platform/linux-generic/include/api/odp_buffer_pool.h
> > >>>>> +++ b/platform/linux-generic/include/api/odp_buffer_pool.h
> > >>>>> @@ -34,6 +34,15 @@ typedef uint32_t odp_buffer_pool_t;
> > >>>>>
> > >>>>>
> > >>>>>  /**
> > >>>>> + * Delete a buffer pool
> > >>>>> + *
> > >>>>> + * @param pool_hdl Buffer pool handle
> > >>>>> + *
> > >>>>> + * @return 0 if successful else -1
> > >>>>> + */
> > >>>>> +int odp_buffer_pool_delete(odp_buffer_pool_t pool_hdl);
> > >>>>> +
> > >>>>> +/**
> > >>>>>   * Create a buffer pool
> > >>>>>   *
> > >>>>>   * @param name      Name of the pool (max
> ODP_BUFFER_POOL_NAME_LEN -
> > >>>>> 1 chars)
> > >>>>> diff --git a/platform/linux-generic/odp_buffer_pool.c
> > >>>>> b/platform/linux-generic/odp_buffer_pool.c
> > >>>>> index 4d9ff45..7ba793d 100644
> > >>>>> --- a/platform/linux-generic/odp_buffer_pool.c
> > >>>>> +++ b/platform/linux-generic/odp_buffer_pool.c
> > >>>>> @@ -375,6 +375,31 @@ static void link_bufs(pool_entry_t *pool)
> > >>>>>         }
> > >>>>>  }
> > >>>>>
> > >>>>> +int odp_buffer_pool_delete(odp_buffer_pool_t pool_hdl)
> > >>>>> +{
> > >>>>> +       pool_entry_t *pool;
> > >>>>> +       if (odp_unlikely(ODP_CONFIG_BUFFER_POOLS <= pool_hdl))
> > >>>>> +               ODP_ERR("set_handle: Bad pool handle %u\n",
> pool_hdl);
> > >>>>> +
> > >>>>> +       pool = get_pool_entry(pool_hdl);
> > >>>>> +       pool->s.free_bufs = 0;
> > >>>>> +       strcpy(pool->s.name, "");
> > >>>>> +       /* Need to restore this because this is setup in
> > >>>>> +        * odp_buffer_pool_init_global
> > >>>>> +        * */
> > >>>>> +       pool->s.pool_hdl = pool_hdl;
> > >>>>> +       pool->s.buf_base = 0;
> > >>>>> +       pool->s.buf_size = 0;
> > >>>>> +       pool->s.buf_offset = 0;
> > >>>>> +       pool->s.num_bufs = 0;
> > >>>>> +       pool->s.pool_base_addr = NULL;
> > >>>>> +       pool->s.pool_size = 0;
> > >>>>> +       pool->s.user_size = 0;
> > >>>>> +       pool->s.user_align = 0;
> > >>>>> +       pool->s.buf_type = ODP_BUFFER_POOL_INVALID;
> > >>>>> +       pool->s.hdr_size = 0;
> > >>>>> +       return 0;
> > >>>>> +}
> > >>>>>
> > >>>>>  odp_buffer_pool_t odp_buffer_pool_create(const char *name,
> > >>>>>                                          void *base_addr, uint64_t
> > >>>>> size,
> > >>>>> --
> > >>>>> 2.1.0
> > >>>>>
> > >>>>>
> > >>>>
> > >>>>  Hi all,
> > >>>>
> > >>>>  I had a chat with Ola and he proposed destroy instead of delete
> > >>>>
> > >>>>  I think I like odp_buffer_pool_destroy more than what I've done in
> > >>>> the patch odp_buffer_pool_delete... =)
> > >>>>
> > >>>>  Comments?
> > >>>>
> > >>>>  Cheers,
> > >>>> Anders
> > >>>>
> > >>>>  _______________________________________________
> > >>>> lng-odp mailing list
> > >>>> lng-odp@lists.linaro.org
> > >>>> http://lists.linaro.org/mailman/listinfo/lng-odp
> > >>>>
> > >>>>
> > >>>
> > >>> _______________________________________________
> > >>> lng-odp mailing list
> > >>> lng-odp@lists.linaro.org
> > >>> http://lists.linaro.org/mailman/listinfo/lng-odp
> > >>>
> > >>>
> > >>
> > >
>
> > _______________________________________________
> > lng-odp mailing list
> > lng-odp@lists.linaro.org
> > http://lists.linaro.org/mailman/listinfo/lng-odp
>
Mike Holmes Sept. 26, 2014, 11:01 a.m. UTC | #5
+1

On 26 September 2014 06:57, Bill Fischofer <bill.fischofer@linaro.org>
wrote:

> That's a good suggestion.  +1
>
> On Fri, Sep 26, 2014 at 5:53 AM, Anders Roxell <anders.roxell@linaro.org>
> wrote:
>
>> On 2014-09-26 05:39, Bill Fischofer wrote:
>> > From the ODP Buffer Management API Design doc:
>> > odp_buffer_pool_destroy
>> >
>> > /**
>> > * Destroy a buffer pool previously created by odp_buffer_pool_create()
>> > *
>> > * @param pool[in]    Handle of the buffer pool to be destroyed
>> > *
>> > * @return Success or ODP_BUFFER_POOL_INVALID if pool is unknown
>> > *                 or ODP_BUFFER_POOL_PREDEFINED if pool is predefined
>> > */
>> > int odp_buffer_pool_destroy (odp_buffer_pool_t pool);
>> >
>> > This routine destroys a previously created buffer pool.  Note that
>> attempts
>> > to destroy a predefined buffer pool will be rejected since the
>> application
>> > did not create it.  Results are undefined if an attempt is made to
>> destroy
>> > a buffer pool that contains allocated or otherwise active buffers.
>>
>> I think this discussion is interesting and important. However, I
>> propose that we continue the general discussion in a new mail thread so
>> that
>> we can come to a conclusion about the patch.
>>
>> For the patch I propose that we only change s/delete/destroy/ . This
>> means that
>> we can get the patch applied + that we can get the unit tests applied. So
>> more
>> people can get involved and contribute to the unit tests.
>>
>>
>> Cheers,
>> Anders
>>
>> >
>> > --------
>> >
>> > Is this not sufficient for ODP v1.0?
>> >
>> > Bill
>> >
>> > On Fri, Sep 26, 2014 at 4:25 AM, Ola Liljedahl <
>> ola.liljedahl@linaro.org>
>> > wrote:
>> >
>> > > No, no specific platform in mind. But maybe I have more of the SW
>> > > perspective in my thinking.
>> > > I see a difference in maintaining actual counters of free/used
>> buffers and
>> > > maintaining "lists" (however they are implemented) of free buffers
>> that can
>> > > be used for allocation. Of course the buffer manager must know which
>> > > buffers are free and thus can be allocated.
>> > > Bill's proposal was to specifically maintain free/used counters which
>> I
>> > > object to.
>> > > By traversing its list of free buffers, a buffer manager could find
>> out
>> > > how many buffers are free. But such an operation could take time.
>> Imagine
>> > > SW traversing a linked list, lots of cache misses.
>> > > So a buffer manager should be able to report that buffer pool destroy
>> > > cannot succeed because buffers are in use (and you don't need to know
>> the
>> > > actual number of used buffers). But I don't expect a buffer manager
>> > > necessarily to have an API that reports the number of used and free
>> > > buffers. If we should define such an API, it has to be marked as a
>> > > potential performance killer. Forcing a (SW) buffer pool
>> implementation to
>> > > maintain SW-managed counters of free and user buffers is not a good
>> idea.
>> > > And besides, you want the check for used buffers and the destroy
>> operation
>> > > to be atomic, a separate API for returning the counters would not
>> maintain
>> > > the atomicity and we would return to the situation where we might
>> destroy a
>> > > buffer poll with buffers in use and corresponding undefined behavior.
>> > >
>> > > -- Ola
>> > >
>> > > On 25 September 2014 22:25, Rosenboim, Leonid <
>> > > Leonid.Rosenboim@caviumnetworks.com> wrote:
>> > >
>> > >>  Ola,
>> > >>
>> > >>
>> > >>  Do you have any specific platform in mind when writing this remark ?
>> > >>
>> > >>
>> > >>  I am having some difficulty wrapping my head around this remark,
>> > >>
>> > >> because it is hard for me to imagine how a buffer pool can keep
>> track of
>> > >> free buffers,
>> > >>
>> > >> yet unable to tell how many such free buffers currently available in
>> > >> storage.
>> > >>
>> > >> If it can not track the number of free buffers, how would it know to
>> tell
>> > >> that it is
>> > >>
>> > >> empty and refuse an allocation request.
>> > >>
>> > >>
>> > >>  Alternatively, if a pool implements a counter of free buffers rather
>> > >> than allocated (outstanding) buffers, all that is needed to
>> implement the
>> > >> functionality I proposed, os to save the buffer count that was
>> supplied at
>> > >> creation time, and compare that with free buffer count, presto.
>> > >>
>> > >>
>> > >>  In other words, what use is a buffer pool if it cant even track the
>> > >> number of buffers it has to offer for allocation ?
>> > >>
>> > >>
>> > >>  - Leo
>> > >>
>> > >>
>> > >>  ------------------------------
>> > >> *From:* Ola Liljedahl <ola.liljedahl@linaro.org>
>> > >> *Sent:* Thursday, September 25, 2014 10:53 AM
>> > >> *To:* Bill Fischofer
>> > >> *Cc:* Rosenboim, Leonid; lng-odp-forward
>> > >>
>> > >> *Subject:* Re: [lng-odp] [PATCH] Add API odp_buffer_pool_delete
>> > >>
>> > >>  Free/use counters updated on every alloc and free call might not be
>> > >> possible to support, e.g. for HW buffer pools which allow allocation
>> from
>> > >> HW blocks. Also SW implementations of (shared) counters are not
>> scalable.
>> > >> Better to attempt to check if any buffers are still in use when
>> buffer pool
>> > >> destruction is attempted. Perhaps buffers should be marked as free
>> or in
>> > >> use and odp_buffer_pool_destroy() could traverse the buffer pool and
>> check
>> > >> the flags.
>> > >>
>> > >> On 25 September 2014 17:55, Bill Fischofer <
>> bill.fischofer@linaro.org>
>> > >> wrote:
>> > >>
>> > >>> That's an interesting suggestion and something that should be
>> relatively
>> > >>> easy to implement since one of the pieces of buffer pool meta data
>> could be
>> > >>> an allocation count that is incremented on each buffer_alloc() and
>> > >>> decremented on each buffer_free().
>> > >>>
>> > >>>  Bill
>> > >>>
>> > >>>  On Thu, Sep 25, 2014 at 10:49 AM, Rosenboim, Leonid <
>> > >>> Leonid.Rosenboim@caviumnetworks.com> wrote:
>> > >>>
>> > >>>>   How about checking that all buffers have been freed to the pool
>> > >>>> prior to destroying the pool itself ?
>> > >>>>
>> > >>>>
>> > >>>>  Also, since odp_buffer_pool_create() gets memory from the caller,
>> the
>> > >>>> destroy description should spell out that the caller is also
>> responsable
>> > >>>> for freeing this memory for complete reverse of the action.
>> > >>>>
>> > >>>>
>> > >>>>  Then, if some buffers have not been returned, what to do ?
>> > >>>>
>> > >>>>
>> > >>>>  One option is to warn and proceed, another option is to bail and
>> > >>>> return an error, a third option is to mark the pool as destined for
>> > >>>> deletion, which prevents new buffers from being doled out, but
>> allows
>> > >>>> buffer frees, and trigger an automatic pool destruction when the
>> last
>> > >>>> buffer is returned.
>> > >>>>
>> > >>>>
>> > >>>>  The risk of proceeding with destruction while some buffers are
>> > >>>> outstanding is potential memory corruption that is very hard to
>> debug.
>> > >>>>
>> > >>>>
>> > >>>>  ------------------------------
>> > >>>> *From:* lng-odp-bounces@lists.linaro.org <
>> > >>>> lng-odp-bounces@lists.linaro.org> on behalf of Anders Roxell <
>> > >>>> anders.roxell@linaro.org>
>> > >>>> *Sent:* Thursday, September 25, 2014 5:48 AM
>> > >>>> *To:* lng-odp-forward
>> > >>>> *Subject:* Re: [lng-odp] [PATCH] Add API odp_buffer_pool_delete
>> > >>>>
>> > >>>>
>> > >>>>
>> > >>>> On 24 September 2014 22:12, Anders Roxell <
>> anders.roxell@linaro.org>
>> > >>>> wrote:
>> > >>>>
>> > >>>>> Signed-off-by: Anders Roxell <anders.roxell@linaro.org>
>> > >>>>> ---
>> > >>>>>  .../linux-generic/include/api/odp_buffer_pool.h    |  9 ++++++++
>> > >>>>>  platform/linux-generic/odp_buffer_pool.c           | 25
>> > >>>>> ++++++++++++++++++++++
>> > >>>>>  2 files changed, 34 insertions(+)
>> > >>>>>
>> > >>>>> diff --git a/platform/linux-generic/include/api/odp_buffer_pool.h
>> > >>>>> b/platform/linux-generic/include/api/odp_buffer_pool.h
>> > >>>>> index fe88898..fca2c62 100644
>> > >>>>> --- a/platform/linux-generic/include/api/odp_buffer_pool.h
>> > >>>>> +++ b/platform/linux-generic/include/api/odp_buffer_pool.h
>> > >>>>> @@ -34,6 +34,15 @@ typedef uint32_t odp_buffer_pool_t;
>> > >>>>>
>> > >>>>>
>> > >>>>>  /**
>> > >>>>> + * Delete a buffer pool
>> > >>>>> + *
>> > >>>>> + * @param pool_hdl Buffer pool handle
>> > >>>>> + *
>> > >>>>> + * @return 0 if successful else -1
>> > >>>>> + */
>> > >>>>> +int odp_buffer_pool_delete(odp_buffer_pool_t pool_hdl);
>> > >>>>> +
>> > >>>>> +/**
>> > >>>>>   * Create a buffer pool
>> > >>>>>   *
>> > >>>>>   * @param name      Name of the pool (max
>> ODP_BUFFER_POOL_NAME_LEN -
>> > >>>>> 1 chars)
>> > >>>>> diff --git a/platform/linux-generic/odp_buffer_pool.c
>> > >>>>> b/platform/linux-generic/odp_buffer_pool.c
>> > >>>>> index 4d9ff45..7ba793d 100644
>> > >>>>> --- a/platform/linux-generic/odp_buffer_pool.c
>> > >>>>> +++ b/platform/linux-generic/odp_buffer_pool.c
>> > >>>>> @@ -375,6 +375,31 @@ static void link_bufs(pool_entry_t *pool)
>> > >>>>>         }
>> > >>>>>  }
>> > >>>>>
>> > >>>>> +int odp_buffer_pool_delete(odp_buffer_pool_t pool_hdl)
>> > >>>>> +{
>> > >>>>> +       pool_entry_t *pool;
>> > >>>>> +       if (odp_unlikely(ODP_CONFIG_BUFFER_POOLS <= pool_hdl))
>> > >>>>> +               ODP_ERR("set_handle: Bad pool handle %u\n",
>> pool_hdl);
>> > >>>>> +
>> > >>>>> +       pool = get_pool_entry(pool_hdl);
>> > >>>>> +       pool->s.free_bufs = 0;
>> > >>>>> +       strcpy(pool->s.name, "");
>> > >>>>> +       /* Need to restore this because this is setup in
>> > >>>>> +        * odp_buffer_pool_init_global
>> > >>>>> +        * */
>> > >>>>> +       pool->s.pool_hdl = pool_hdl;
>> > >>>>> +       pool->s.buf_base = 0;
>> > >>>>> +       pool->s.buf_size = 0;
>> > >>>>> +       pool->s.buf_offset = 0;
>> > >>>>> +       pool->s.num_bufs = 0;
>> > >>>>> +       pool->s.pool_base_addr = NULL;
>> > >>>>> +       pool->s.pool_size = 0;
>> > >>>>> +       pool->s.user_size = 0;
>> > >>>>> +       pool->s.user_align = 0;
>> > >>>>> +       pool->s.buf_type = ODP_BUFFER_POOL_INVALID;
>> > >>>>> +       pool->s.hdr_size = 0;
>> > >>>>> +       return 0;
>> > >>>>> +}
>> > >>>>>
>> > >>>>>  odp_buffer_pool_t odp_buffer_pool_create(const char *name,
>> > >>>>>                                          void *base_addr, uint64_t
>> > >>>>> size,
>> > >>>>> --
>> > >>>>> 2.1.0
>> > >>>>>
>> > >>>>>
>> > >>>>
>> > >>>>  Hi all,
>> > >>>>
>> > >>>>  I had a chat with Ola and he proposed destroy instead of delete
>> > >>>>
>> > >>>>  I think I like odp_buffer_pool_destroy more than what I've done in
>> > >>>> the patch odp_buffer_pool_delete... =)
>> > >>>>
>> > >>>>  Comments?
>> > >>>>
>> > >>>>  Cheers,
>> > >>>> Anders
>> > >>>>
>> > >>>>  _______________________________________________
>> > >>>> lng-odp mailing list
>> > >>>> lng-odp@lists.linaro.org
>> > >>>> http://lists.linaro.org/mailman/listinfo/lng-odp
>> > >>>>
>> > >>>>
>> > >>>
>> > >>> _______________________________________________
>> > >>> lng-odp mailing list
>> > >>> lng-odp@lists.linaro.org
>> > >>> http://lists.linaro.org/mailman/listinfo/lng-odp
>> > >>>
>> > >>>
>> > >>
>> > >
>>
>> > _______________________________________________
>> > lng-odp mailing list
>> > lng-odp@lists.linaro.org
>> > http://lists.linaro.org/mailman/listinfo/lng-odp
>>
>
>
> _______________________________________________
> lng-odp mailing list
> lng-odp@lists.linaro.org
> http://lists.linaro.org/mailman/listinfo/lng-odp
>
>
vkamensky Sept. 26, 2014, 6 p.m. UTC | #6
On 26 September 2014 03:39, Bill Fischofer <bill.fischofer@linaro.org> wrote:
> From the ODP Buffer Management API Design doc:
>
> odp_buffer_pool_destroy
>
> /**
> * Destroy a buffer pool previously created by odp_buffer_pool_create()
> *
> * @param pool[in]    Handle of the buffer pool to be destroyed
> *
> * @return Success or ODP_BUFFER_POOL_INVALID if pool is unknown
> *                 or ODP_BUFFER_POOL_PREDEFINED if pool is predefined
> */
> int odp_buffer_pool_destroy (odp_buffer_pool_t pool);
>
>
> This routine destroys a previously created buffer pool.  Note that attempts
> to destroy a predefined buffer pool will be rejected since the application
> did not create it.  Results are undefined if an attempt is made to destroy a
> buffer pool that contains allocated or otherwise active buffers.

IMO the above note is sufficient as long as it is in the API description
itself, i.e. it should be inside the doxygen comment.

Also please note that the patch posted by Anders does not match the above
description, neither in the comment nor in the implementation (it returns -1,
for example). Anders, could you please address that in the next version of
the patch.
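To make this concrete, here is a rough sketch of what an implementation aligned with the design doc could look like. The return codes, the pool table, and the bookkeeping fields below are illustrative assumptions for discussion, not the actual linux-generic code:

```c
/* Hypothetical sketch only: ODP_BUFFER_POOL_INVALID/_PREDEFINED values
 * and the pool_entry bookkeeping are assumptions, not the real API. */
#include <string.h>

typedef unsigned int odp_buffer_pool_t;

#define ODP_CONFIG_BUFFER_POOLS    16
#define ODP_BUFFER_POOL_INVALID    (-1)  /* pool handle unknown */
#define ODP_BUFFER_POOL_PREDEFINED (-2)  /* pool not created by the app */

struct pool_entry {
	int predefined; /* set for pools created at init time */
	int in_use;     /* set while the pool exists */
};

static struct pool_entry pool_tbl[ODP_CONFIG_BUFFER_POOLS];

/**
 * Destroy a buffer pool previously created by odp_buffer_pool_create()
 *
 * @param pool_hdl  Handle of the buffer pool to be destroyed
 *
 * @return 0 on success, ODP_BUFFER_POOL_INVALID if the pool is unknown,
 *         or ODP_BUFFER_POOL_PREDEFINED if the pool is predefined
 */
int odp_buffer_pool_destroy(odp_buffer_pool_t pool_hdl)
{
	if (pool_hdl >= ODP_CONFIG_BUFFER_POOLS || !pool_tbl[pool_hdl].in_use)
		return ODP_BUFFER_POOL_INVALID;
	if (pool_tbl[pool_hdl].predefined)
		return ODP_BUFFER_POOL_PREDEFINED;
	memset(&pool_tbl[pool_hdl], 0, sizeof(pool_tbl[pool_hdl]));
	return 0;
}
```

The point is simply that the doxygen comment and the return values should tell the same story as the design doc, rather than a bare "0 if successful else -1".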

Thanks,
Victor

>
> --------
>
> Is this not sufficient for ODP v1.0?
>
> Bill
>
> On Fri, Sep 26, 2014 at 4:25 AM, Ola Liljedahl <ola.liljedahl@linaro.org>
> wrote:
>>
>> No, no specific platform in mind. But maybe I have more of the SW
>> perspective in my thinking.
>> I see a difference in maintaining actual counters of free/used buffers and
>> maintaining "lists" (however they are implemented) of free buffers that can
>> be used for allocation. Of course the buffer manager must know which buffers
>> are free and thus can be allocated.
>> Bill's proposal was to specifically maintain free/used counters which I
>> object to.
>> By traversing its list of free buffers, a buffer manager could find out
>> how many buffers are free. But such an operation could take time. Imagine SW
>> traversing a linked list, lots of cache misses.
>> So a buffer manager should be able to report that buffer pool destroy
>> cannot succeed because buffers are in use (and you don't need to know the
>> actual number of used buffers). But I don't expect a buffer manager
>> necessarily to have an API that reports the number of used and free buffers.
>> If we should define such an API, it has to be marked as a potential
>> performance killer. Forcing a (SW) buffer pool implementation to maintain
>> SW-managed counters of free and user buffers is not a good idea. And
>> besides, you want the check for used buffers and the destroy operation to be
>> atomic, a separate API for returning the counters would not maintain the
>> atomicity and we would return to the situation where we might destroy a
>> buffer poll with buffers in use and corresponding undefined behavior.
>>
>> -- Ola
>>
>> On 25 September 2014 22:25, Rosenboim, Leonid
>> <Leonid.Rosenboim@caviumnetworks.com> wrote:
>>>
>>> Ola,
>>>
>>>
>>> Do you have any specific platform in mind when writing this remark ?
>>>
>>>
>>> I am having some difficulty wrapping my head around this remark,
>>>
>>> because it is hard for me to imagine how a buffer pool can keep track of
>>> free buffers,
>>>
>>> yet unable to tell how many such free buffers currently available in
>>> storage.
>>>
>>> If it can not track the number of free buffers, how would it know to tell
>>> that it is
>>>
>>> empty and refuse an allocation request.
>>>
>>>
>>> Alternatively, if a pool implements a counter of free buffers rather than
>>> allocated (outstanding) buffers, all that is needed to implement the
>>> functionality I proposed, os to save the buffer count that was supplied at
>>> creation time, and compare that with free buffer count, presto.
>>>
>>>
>>> In other words, what use is a buffer pool if it cant even track the
>>> number of buffers it has to offer for allocation ?
>>>
>>>
>>> - Leo
>>>
>>>
>>> ________________________________
>>> From: Ola Liljedahl <ola.liljedahl@linaro.org>
>>> Sent: Thursday, September 25, 2014 10:53 AM
>>> To: Bill Fischofer
>>> Cc: Rosenboim, Leonid; lng-odp-forward
>>>
>>> Subject: Re: [lng-odp] [PATCH] Add API odp_buffer_pool_delete
>>>
>>> Free/use counters updated on every alloc and free call might not be
>>> possible to support, e.g. for HW buffer pools which allow allocation from HW
>>> blocks. Also SW implementations of (shared) counters are not scalable.
>>> Better to attempt to check if any buffers are still in use when buffer pool
>>> destruction is attempted. Perhaps buffers should be marked as free or in use
>>> and odp_buffer_pool_destroy() could traverse the buffer pool and check the
>>> flags.
>>>
>>> On 25 September 2014 17:55, Bill Fischofer <bill.fischofer@linaro.org>
>>> wrote:
>>>>
>>>> That's an interesting suggestion and something that should be relatively
>>>> easy to implement since one of the pieces of buffer pool meta data could be
>>>> an allocation count that is incremented on each buffer_alloc() and
>>>> decremented on each buffer_free().
>>>>
>>>> Bill
>>>>
>>>> On Thu, Sep 25, 2014 at 10:49 AM, Rosenboim, Leonid
>>>> <Leonid.Rosenboim@caviumnetworks.com> wrote:
>>>>>
>>>>> How about checking that all buffers have been freed to the pool prior
>>>>> to destroying the pool itself ?
>>>>>
>>>>>
>>>>> Also, since odp_buffer_pool_create() gets memory from the caller, the
>>>>> destroy description should spell out that the caller is also responsable for
>>>>> freeing this memory for complete reverse of the action.
>>>>>
>>>>>
>>>>> Then, if some buffers have not been returned, what to do ?
>>>>>
>>>>>
>>>>> One option is to warn and proceed, another option is to bail and return
>>>>> an error, a third option is to mark the pool as destined for deletion, which
>>>>> prevents new buffers from being doled out, but allows buffer frees, and
>>>>> trigger an automatic pool destruction when the last buffer is returned.
>>>>>
>>>>>
>>>>> The risk of proceeding with destruction while some buffers are
>>>>> outstanding is potential memory corruption that is very hard to debug.
>>>>>
>>>>>
>>>>> ________________________________
>>>>> From: lng-odp-bounces@lists.linaro.org
>>>>> <lng-odp-bounces@lists.linaro.org> on behalf of Anders Roxell
>>>>> <anders.roxell@linaro.org>
>>>>> Sent: Thursday, September 25, 2014 5:48 AM
>>>>> To: lng-odp-forward
>>>>> Subject: Re: [lng-odp] [PATCH] Add API odp_buffer_pool_delete
>>>>>
>>>>>
>>>>>
>>>>> On 24 September 2014 22:12, Anders Roxell <anders.roxell@linaro.org>
>>>>> wrote:
>>>>>>
>>>>>> Signed-off-by: Anders Roxell <anders.roxell@linaro.org>
>>>>>> ---
>>>>>>  .../linux-generic/include/api/odp_buffer_pool.h    |  9 ++++++++
>>>>>>  platform/linux-generic/odp_buffer_pool.c           | 25
>>>>>> ++++++++++++++++++++++
>>>>>>  2 files changed, 34 insertions(+)
>>>>>>
>>>>>> diff --git a/platform/linux-generic/include/api/odp_buffer_pool.h
>>>>>> b/platform/linux-generic/include/api/odp_buffer_pool.h
>>>>>> index fe88898..fca2c62 100644
>>>>>> --- a/platform/linux-generic/include/api/odp_buffer_pool.h
>>>>>> +++ b/platform/linux-generic/include/api/odp_buffer_pool.h
>>>>>> @@ -34,6 +34,15 @@ typedef uint32_t odp_buffer_pool_t;
>>>>>>
>>>>>>
>>>>>>  /**
>>>>>> + * Delete a buffer pool
>>>>>> + *
>>>>>> + * @param pool_hdl Buffer pool handle
>>>>>> + *
>>>>>> + * @return 0 if successful else -1
>>>>>> + */
>>>>>> +int odp_buffer_pool_delete(odp_buffer_pool_t pool_hdl);
>>>>>> +
>>>>>> +/**
>>>>>>   * Create a buffer pool
>>>>>>   *
>>>>>>   * @param name      Name of the pool (max ODP_BUFFER_POOL_NAME_LEN -
>>>>>> 1 chars)
>>>>>> diff --git a/platform/linux-generic/odp_buffer_pool.c
>>>>>> b/platform/linux-generic/odp_buffer_pool.c
>>>>>> index 4d9ff45..7ba793d 100644
>>>>>> --- a/platform/linux-generic/odp_buffer_pool.c
>>>>>> +++ b/platform/linux-generic/odp_buffer_pool.c
>>>>>> @@ -375,6 +375,31 @@ static void link_bufs(pool_entry_t *pool)
>>>>>>         }
>>>>>>  }
>>>>>>
>>>>>> +int odp_buffer_pool_delete(odp_buffer_pool_t pool_hdl)
>>>>>> +{
>>>>>> +       pool_entry_t *pool;
>>>>>> +       if (odp_unlikely(ODP_CONFIG_BUFFER_POOLS <= pool_hdl))
>>>>>> +               ODP_ERR("set_handle: Bad pool handle %u\n", pool_hdl);
>>>>>> +
>>>>>> +       pool = get_pool_entry(pool_hdl);
>>>>>> +       pool->s.free_bufs = 0;
>>>>>> +       strcpy(pool->s.name, "");
>>>>>> +       /* Need to restore this because this is setup in
>>>>>> +        * odp_buffer_pool_init_global
>>>>>> +        * */
>>>>>> +       pool->s.pool_hdl = pool_hdl;
>>>>>> +       pool->s.buf_base = 0;
>>>>>> +       pool->s.buf_size = 0;
>>>>>> +       pool->s.buf_offset = 0;
>>>>>> +       pool->s.num_bufs = 0;
>>>>>> +       pool->s.pool_base_addr = NULL;
>>>>>> +       pool->s.pool_size = 0;
>>>>>> +       pool->s.user_size = 0;
>>>>>> +       pool->s.user_align = 0;
>>>>>> +       pool->s.buf_type = ODP_BUFFER_POOL_INVALID;
>>>>>> +       pool->s.hdr_size = 0;
>>>>>> +       return 0;
>>>>>> +}
>>>>>>
>>>>>>  odp_buffer_pool_t odp_buffer_pool_create(const char *name,
>>>>>>                                          void *base_addr, uint64_t
>>>>>> size,
>>>>>> --
>>>>>> 2.1.0
>>>>>>
>>>>>
>>>>>
>>>>> Hi all,
>>>>>
>>>>> I had a chat with Ola and he proposed destroy instead of delete
>>>>>
>>>>> I think I like odp_buffer_pool_destroy more than what I've done in the
>>>>> patch odp_buffer_pool_delete... =)
>>>>>
>>>>> Comments?
>>>>>
>>>>> Cheers,
>>>>> Anders
>>>>>
>>>>> _______________________________________________
>>>>> lng-odp mailing list
>>>>> lng-odp@lists.linaro.org
>>>>> http://lists.linaro.org/mailman/listinfo/lng-odp
>>>>>
>>>>
>>>>
>>>> _______________________________________________
>>>> lng-odp mailing list
>>>> lng-odp@lists.linaro.org
>>>> http://lists.linaro.org/mailman/listinfo/lng-odp
>>>>
>>>
>>
>
>
> _______________________________________________
> lng-odp mailing list
> lng-odp@lists.linaro.org
> http://lists.linaro.org/mailman/listinfo/lng-odp
>
Rosenboim, Leonid Sept. 26, 2014, 6:14 p.m. UTC | #7
I agree. At this stage the description of each normative API is the most important element.

Please also add a clarification with respect to ownership of the memory chunk underlying the pool, e.g. since it was provided by the caller to pool_create, it will be owned by the caller after pool_destroy returns.

I am not entirely happy with the "undefined behaviour" assigned to the case of destroying a pool with outstanding buffers, as it basically means that each implementation can handle it differently, but I suppose it is better than not mentioning the potential issue at all.

The issue with "undefined behaviour" is that an application should do all that it can to avoid this scenario, but we do not provide an API for the application to query the number of free buffers, meaning that in order to avoid the undefined behaviour applications would need to implement such safeguards themselves.

If the behaviour were defined in one specific way, applications could instead rely on the odp_pool implementation to handle this situation consistently, and would not need to worry about it.
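For illustration, such an application-side safeguard could look roughly like the sketch below: wrap alloc/free in helpers that keep an outstanding-buffer count, and only destroy the pool once it drops to zero. The odp_* calls are stand-ins here (stubs), and all names are hypothetical:

```c
/* Sketch of an application-side safeguard, assuming no ODP API exists
 * to query outstanding buffers. The *_stub functions stand in for the
 * real ODP calls. */
#include <stdatomic.h>
#include <stddef.h>

typedef void *odp_buffer_t;

static atomic_int outstanding; /* buffers currently allocated by the app */

/* stand-ins for odp_buffer_alloc()/odp_buffer_free() */
static odp_buffer_t odp_buffer_alloc_stub(void)
{
	static int dummy;
	return &dummy;
}

static void odp_buffer_free_stub(odp_buffer_t buf)
{
	(void)buf;
}

odp_buffer_t app_buffer_alloc(void)
{
	odp_buffer_t buf = odp_buffer_alloc_stub();

	if (buf != NULL)
		atomic_fetch_add(&outstanding, 1);
	return buf;
}

void app_buffer_free(odp_buffer_t buf)
{
	odp_buffer_free_stub(buf);
	atomic_fetch_sub(&outstanding, 1);
}

/* Only call odp_buffer_pool_destroy() when this returns nonzero. */
int app_pool_can_destroy(void)
{
	return atomic_load(&outstanding) == 0;
}
```

Of course this only guards the application's own allocations, and it is exactly the kind of shared counter Ola objects to, which is why defining the behaviour in the API would be preferable.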

Happy New Year to all!
- Leo
Bill Fischofer Sept. 26, 2014, 7:04 p.m. UTC | #8
Following the LNG-SC's guideline that v1.0 is targeted for evaluation
(v2.0 for production), let's defer this to v2.0.  To coordinate properly,
we probably need an odp_buffer_pool_quiesce() routine that would
prevent further allocation from the pool, etc.
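As a sketch of what the quiesce idea could mean (combining it with Leo's third option from earlier in the thread: refuse new allocations, keep accepting frees, and destroy when the last buffer comes home). The state machine, names, and fields below are all hypothetical:

```c
/* Hypothetical odp_buffer_pool_quiesce() semantics: a quiesced pool
 * refuses allocations but accepts frees, and destruction completes
 * automatically when the last outstanding buffer is returned. */
enum pool_state { POOL_ACTIVE, POOL_QUIESCED, POOL_DESTROYED };

struct pool {
	enum pool_state state;
	int used; /* outstanding buffers */
};

int pool_alloc(struct pool *p)
{
	if (p->state != POOL_ACTIVE)
		return -1; /* allocation refused after quiesce */
	p->used++;
	return 0;
}

void pool_free(struct pool *p)
{
	p->used--;
	/* last buffer returned to a quiesced pool triggers destruction */
	if (p->state == POOL_QUIESCED && p->used == 0)
		p->state = POOL_DESTROYED;
}

void pool_quiesce(struct pool *p)
{
	p->state = (p->used == 0) ? POOL_DESTROYED : POOL_QUIESCED;
}
```

This keeps the check-and-destroy path atomic from the application's point of view without mandating free/used counters in the fast path.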

On Fri, Sep 26, 2014 at 1:14 PM, Rosenboim, Leonid <
Leonid.Rosenboim@caviumnetworks.com> wrote:

>
> I agree. At this stage the description of each normative API is the most
> important element.
>
> Please also add a clarification with respect to ownership of the memory
> chunk underlying the pool, e.g. since it was provided by the caller to
> pool_create, it will be owned by the caller after pool_destroy returns.
>
> I am not entirely happy with the "undefined behaviour" assigned to the
> case of destroying a pool with outstanding buffers, as it basically means
> that each implementation can handle it differently, but I suppose it is
> better than not mentioning the potential issue at all.
>
> The issue with "undefined behaviour" is that an application should do all
> that it can to avoid this scenario, but we do not provide an API for the
> application to inquire the number of free buffers, meaning that in order to
> avoid the undefined behaviour the applications would need to implement the
> safeguards somehow.
>
> If the behaviour was defined in any one specific way, the application
> could then have relied on the odp_pool implementation to handle this
> situation consistently, and would not need to worry about it.
>
> Happy New Year to all!
> - Leo
>
> ________________________________________
> From: lng-odp-bounces@lists.linaro.org <lng-odp-bounces@lists.linaro.org>
> on behalf of Victor Kamensky <victor.kamensky@linaro.org>
> Sent: Friday, September 26, 2014 11:00 AM
> To: Bill Fischofer; Anders Roxell
> Cc: lng-odp-forward
> Subject: Re: [lng-odp] [PATCH] Add API odp_buffer_pool_delete
>
> On 26 September 2014 03:39, Bill Fischofer <bill.fischofer@linaro.org>
> wrote:
> > From the ODP Buffer Management API Design doc:
> >
> > odp_buffer_pool_destroy
> >
> > /**
> > * Destroy a buffer pool previously created by odp_buffer_pool_create()
> > *
> > * @param pool[in]    Handle of the buffer pool to be destroyed
> > *
> > * @return Success or ODP_BUFFER_POOL_INVALID if pool is unknown
> > *                 or ODP_BUFFER_POOL_PREDEFINED if pool is predefined
> > */
> > int odp_buffer_pool_destroy (odp_buffer_pool_t pool);
> >
> >
> > This routine destroys a previously created buffer pool.  Note that
> attempts
> > to destroy a predefined buffer pool will be rejected since the
> application
> > did not create it.  Results are undefined if an attempt is made to
> destroy a
> > buffer pool that contains allocated or otherwise active buffers.
>
> IMO the above note is sufficient as long as it is in the API description
> itself, i.e. it should
> be inside the doxygen comment.
>
> Also please note patch posted by Anders does not match above description,
> neither in comment nor in implementation (return -1 for example).
> Anders, could you please address that in next version of the patch.
>
> Thanks,
> Victor
>
> >
> > --------
> >
> > Is this not sufficient for ODP v1.0?
> >
> > Bill
> >
> > On Fri, Sep 26, 2014 at 4:25 AM, Ola Liljedahl <ola.liljedahl@linaro.org
> >
> > wrote:
> >>
> >> No, no specific platform in mind. But maybe I have more of the SW
> >> perspective in my thinking.
> >> I see a difference in maintaining actual counters of free/used buffers
> and
> >> maintaining "lists" (however they are implemented) of free buffers that
> can
> >> be used for allocation. Of course the buffer manager must know which
> buffers
> >> are free and thus can be allocated.
> >> Bill's proposal was to specifically maintain free/used counters which I
> >> object to.
> >> By traversing its list of free buffers, a buffer manager could find out
> >> how many buffers are free. But such an operation could take time.
> Imagine SW
> >> traversing a linked list, lots of cache misses.
> >> So a buffer manager should be able to report that buffer pool destroy
> >> cannot succeed because buffers are in use (and you don't need to know
> the
> >> actual number of used buffers). But I don't expect a buffer manager
> >> necessarily to have an API that reports the number of used and free
> buffers.
> >> If we should define such an API, it has to be marked as a potential
> >> performance killer. Forcing a (SW) buffer pool implementation to
> maintain
> >> SW-managed counters of free and used buffers is not a good idea. And
> >> besides, you want the check for used buffers and the destroy operation
> to be
> >> atomic, a separate API for returning the counters would not maintain the
> >> atomicity and we would return to the situation where we might destroy a
> >> buffer pool with buffers in use and corresponding undefined behavior.
> >>
> >> -- Ola
> >>
> >> On 25 September 2014 22:25, Rosenboim, Leonid
> >> <Leonid.Rosenboim@caviumnetworks.com> wrote:
> >>>
> >>> Ola,
> >>>
> >>>
> >>> Do you have any specific platform in mind when writing this remark ?
> >>>
> >>>
> >>> I am having some difficulty wrapping my head around this remark,
> >>>
> >>> because it is hard for me to imagine how a buffer pool can keep track
> of
> >>> free buffers,
> >>>
> >>> yet unable to tell how many such free buffers currently available in
> >>> storage.
> >>>
> >>> If it cannot track the number of free buffers, how would it know to
> tell
> >>> that it is
> >>>
> >>> empty and refuse an allocation request?
> >>>
> >>>
> >>> Alternatively, if a pool implements a counter of free buffers rather
> than
> >>> allocated (outstanding) buffers, all that is needed to implement the
> >>> functionality I proposed, is to save the buffer count that was
> supplied at
> >>> creation time, and compare that with free buffer count, presto.
> >>>
> >>>
> >>> In other words, what use is a buffer pool if it can't even track the
> >>> number of buffers it has to offer for allocation ?
> >>>
> >>>
> >>> - Leo
> >>>
> >>>
> >>> ________________________________
> >>> From: Ola Liljedahl <ola.liljedahl@linaro.org>
> >>> Sent: Thursday, September 25, 2014 10:53 AM
> >>> To: Bill Fischofer
> >>> Cc: Rosenboim, Leonid; lng-odp-forward
> >>>
> >>> Subject: Re: [lng-odp] [PATCH] Add API odp_buffer_pool_delete
> >>>
> >>> Free/use counters updated on every alloc and free call might not be
> >>> possible to support, e.g. for HW buffer pools which allow allocation
> from HW
> >>> blocks. Also SW implementations of (shared) counters are not scalable.
> >>> Better to attempt to check if any buffers are still in use when buffer
> pool
> >>> destruction is attempted. Perhaps buffers should be marked as free or
> in use
> >>> and odp_buffer_pool_destroy() could traverse the buffer pool and check
> the
> >>> flags.
> >>>
> >>> On 25 September 2014 17:55, Bill Fischofer <bill.fischofer@linaro.org>
> >>> wrote:
> >>>>
> >>>> That's an interesting suggestion and something that should be
> relatively
> >>>> easy to implement since one of the pieces of buffer pool meta data
> could be
> >>>> an allocation count that is incremented on each buffer_alloc() and
> >>>> decremented on each buffer_free().
> >>>>
> >>>> Bill
> >>>>
> >>>> On Thu, Sep 25, 2014 at 10:49 AM, Rosenboim, Leonid
> >>>> <Leonid.Rosenboim@caviumnetworks.com> wrote:
> >>>>>
> >>>>> How about checking that all buffers have been freed to the pool prior
> >>>>> to destroying the pool itself ?
> >>>>>
> >>>>>
> >>>>> Also, since odp_buffer_pool_create() gets memory from the caller, the
> >>>>> destroy description should spell out that the caller is also
> responsible for
> >>>>> freeing this memory for a complete reversal of the action.
> >>>>>
> >>>>>
> >>>>> Then, if some buffers have not been returned, what to do ?
> >>>>>
> >>>>>
> >>>>> One option is to warn and proceed, another option is to bail and
> return
> >>>>> an error, a third option is to mark the pool as destined for
> deletion, which
> >>>>> prevents new buffers from being doled out, but allows buffer frees,
> and
> >>>>> trigger an automatic pool destruction when the last buffer is
> returned.
> >>>>>
> >>>>>
> >>>>> The risk of proceeding with destruction while some buffers are
> >>>>> outstanding is potential memory corruption that is very hard to
> debug.
> >>>>>
> >>>>>
> >>>>> ________________________________
> >>>>> From: lng-odp-bounces@lists.linaro.org
> >>>>> <lng-odp-bounces@lists.linaro.org> on behalf of Anders Roxell
> >>>>> <anders.roxell@linaro.org>
> >>>>> Sent: Thursday, September 25, 2014 5:48 AM
> >>>>> To: lng-odp-forward
> >>>>> Subject: Re: [lng-odp] [PATCH] Add API odp_buffer_pool_delete
> >>>>>
> >>>>>
> >>>>>
> >>>>> On 24 September 2014 22:12, Anders Roxell <anders.roxell@linaro.org>
> >>>>> wrote:
> >>>>>>
> >>>>>> Signed-off-by: Anders Roxell <anders.roxell@linaro.org>
> >>>>>> ---
> >>>>>>  .../linux-generic/include/api/odp_buffer_pool.h    |  9 ++++++++
> >>>>>>  platform/linux-generic/odp_buffer_pool.c           | 25
> >>>>>> ++++++++++++++++++++++
> >>>>>>  2 files changed, 34 insertions(+)
> >>>>>>
> >>>>>> diff --git a/platform/linux-generic/include/api/odp_buffer_pool.h
> >>>>>> b/platform/linux-generic/include/api/odp_buffer_pool.h
> >>>>>> index fe88898..fca2c62 100644
> >>>>>> --- a/platform/linux-generic/include/api/odp_buffer_pool.h
> >>>>>> +++ b/platform/linux-generic/include/api/odp_buffer_pool.h
> >>>>>> @@ -34,6 +34,15 @@ typedef uint32_t odp_buffer_pool_t;
> >>>>>>
> >>>>>>
> >>>>>>  /**
> >>>>>> + * Delete a buffer pool
> >>>>>> + *
> >>>>>> + * @param pool_hdl Buffer pool handle
> >>>>>> + *
> >>>>>> + * @return 0 if successful else -1
> >>>>>> + */
> >>>>>> +int odp_buffer_pool_delete(odp_buffer_pool_t pool_hdl);
> >>>>>> +
> >>>>>> +/**
> >>>>>>   * Create a buffer pool
> >>>>>>   *
> >>>>>>   * @param name      Name of the pool (max ODP_BUFFER_POOL_NAME_LEN
> -
> >>>>>> 1 chars)
> >>>>>> diff --git a/platform/linux-generic/odp_buffer_pool.c
> >>>>>> b/platform/linux-generic/odp_buffer_pool.c
> >>>>>> index 4d9ff45..7ba793d 100644
> >>>>>> --- a/platform/linux-generic/odp_buffer_pool.c
> >>>>>> +++ b/platform/linux-generic/odp_buffer_pool.c
> >>>>>> @@ -375,6 +375,31 @@ static void link_bufs(pool_entry_t *pool)
> >>>>>>         }
> >>>>>>  }
> >>>>>>
> >>>>>> +int odp_buffer_pool_delete(odp_buffer_pool_t pool_hdl)
> >>>>>> +{
> >>>>>> +       pool_entry_t *pool;
> >>>>>> +       if (odp_unlikely(ODP_CONFIG_BUFFER_POOLS <= pool_hdl))
> >>>>>> +               ODP_ERR("set_handle: Bad pool handle %u\n",
> pool_hdl);
> >>>>>> +
> >>>>>> +       pool = get_pool_entry(pool_hdl);
> >>>>>> +       pool->s.free_bufs = 0;
> >>>>>> +       strcpy(pool->s.name, "");
> >>>>>> +       /* Need to restore this because this is setup in
> >>>>>> +        * odp_buffer_pool_init_global
> >>>>>> +        * */
> >>>>>> +       pool->s.pool_hdl = pool_hdl;
> >>>>>> +       pool->s.buf_base = 0;
> >>>>>> +       pool->s.buf_size = 0;
> >>>>>> +       pool->s.buf_offset = 0;
> >>>>>> +       pool->s.num_bufs = 0;
> >>>>>> +       pool->s.pool_base_addr = NULL;
> >>>>>> +       pool->s.pool_size = 0;
> >>>>>> +       pool->s.user_size = 0;
> >>>>>> +       pool->s.user_align = 0;
> >>>>>> +       pool->s.buf_type = ODP_BUFFER_POOL_INVALID;
> >>>>>> +       pool->s.hdr_size = 0;
> >>>>>> +       return 0;
> >>>>>> +}
> >>>>>>
> >>>>>>  odp_buffer_pool_t odp_buffer_pool_create(const char *name,
> >>>>>>                                          void *base_addr, uint64_t
> >>>>>> size,
> >>>>>> --
> >>>>>> 2.1.0
> >>>>>>
> >>>>>
> >>>>>
> >>>>> Hi all,
> >>>>>
> >>>>> I had a chat with Ola and he proposed destroy instead of delete
> >>>>>
> >>>>> I think I like odp_buffer_pool_destroy more than what I've done in
> the
> >>>>> patch odp_buffer_pool_delete... =)
> >>>>>
> >>>>> Comments?
> >>>>>
> >>>>> Cheers,
> >>>>> Anders
> >>>>>
> >>>>> _______________________________________________
> >>>>> lng-odp mailing list
> >>>>> lng-odp@lists.linaro.org
> >>>>> http://lists.linaro.org/mailman/listinfo/lng-odp
> >>>>>
> >>>>
> >>>>
> >>>> _______________________________________________
> >>>> lng-odp mailing list
> >>>> lng-odp@lists.linaro.org
> >>>> http://lists.linaro.org/mailman/listinfo/lng-odp
> >>>>
> >>>
> >>
> >
> >
> > _______________________________________________
> > lng-odp mailing list
> > lng-odp@lists.linaro.org
> > http://lists.linaro.org/mailman/listinfo/lng-odp
> >
>
> _______________________________________________
> lng-odp mailing list
> lng-odp@lists.linaro.org
> http://lists.linaro.org/mailman/listinfo/lng-odp
>
Anders Roxell Sept. 29, 2014, 11:21 a.m. UTC | #9
On 2014-09-26 11:00, Victor Kamensky wrote:
> On 26 September 2014 03:39, Bill Fischofer <bill.fischofer@linaro.org> wrote:
> > From the ODP Buffer Management API Design doc:
> >
> > odp_buffer_pool_destroy
> >
> > /**
> > * Destroy a buffer pool previously created by odp_buffer_pool_create()
> > *
> > * @param pool[in]    Handle of the buffer pool to be destroyed
> > *
> > * @return Success or ODP_BUFFER_POOL_INVALID if pool is unknown
> > *                 or ODP_BUFFER_POOL_PREDEFINED if pool is predefined
> > */
> > int odp_buffer_pool_destroy (odp_buffer_pool_t pool);
> >
> >
> > This routine destroys a previously created buffer pool.  Note that attempts
> > to destroy a predefined buffer pool will be rejected since the application
> > did not create it.  Results are undefined if an attempt is made to destroy a
> > buffer pool that contains allocated or otherwise active buffers.
> 
> IMO the above note is sufficient as long as it is in the API description
> itself, i.e. it should
> be inside the doxygen comment.
> 
> Also please note patch posted by Anders does not match above description,
> neither in comment nor in implementation (return -1 for example).
> Anders, could you please address that in next version of the patch.

Yes I will fix that Victor.


Cheers,
Anders

> 
> Thanks,
> Victor
> 
> >
> > --------
> >
> > Is this not sufficient for ODP v1.0?
> >
> > Bill
> >
> > On Fri, Sep 26, 2014 at 4:25 AM, Ola Liljedahl <ola.liljedahl@linaro.org>
> > wrote:
> >>
> >> No, no specific platform in mind. But maybe I have more of the SW
> >> perspective in my thinking.
> >> I see a difference in maintaining actual counters of free/used buffers and
> >> maintaining "lists" (however they are implemented) of free buffers that can
> >> be used for allocation. Of course the buffer manager must know which buffers
> >> are free and thus can be allocated.
> >> Bill's proposal was to specifically maintain free/used counters which I
> >> object to.
> >> By traversing its list of free buffers, a buffer manager could find out
> >> how many buffers are free. But such an operation could take time. Imagine SW
> >> traversing a linked list, lots of cache misses.
> >> So a buffer manager should be able to report that buffer pool destroy
> >> cannot succeed because buffers are in use (and you don't need to know the
> >> actual number of used buffers). But I don't expect a buffer manager
> >> necessarily to have an API that reports the number of used and free buffers.
> >> If we should define such an API, it has to be marked as a potential
> >> performance killer. Forcing a (SW) buffer pool implementation to maintain
> >> SW-managed counters of free and used buffers is not a good idea. And
> >> besides, you want the check for used buffers and the destroy operation to be
> >> atomic, a separate API for returning the counters would not maintain the
> >> atomicity and we would return to the situation where we might destroy a
> >> buffer pool with buffers in use and corresponding undefined behavior.
> >>
> >> -- Ola
> >>
> >> On 25 September 2014 22:25, Rosenboim, Leonid
> >> <Leonid.Rosenboim@caviumnetworks.com> wrote:
> >>>
> >>> Ola,
> >>>
> >>>
> >>> Do you have any specific platform in mind when writing this remark ?
> >>>
> >>>
> >>> I am having some difficulty wrapping my head around this remark,
> >>>
> >>> because it is hard for me to imagine how a buffer pool can keep track of
> >>> free buffers,
> >>>
> >>> yet unable to tell how many such free buffers currently available in
> >>> storage.
> >>>
> >>> If it cannot track the number of free buffers, how would it know to tell
> >>> that it is
> >>>
> >>> empty and refuse an allocation request?
> >>>
> >>>
> >>> Alternatively, if a pool implements a counter of free buffers rather than
> >>> allocated (outstanding) buffers, all that is needed to implement the
> >>> functionality I proposed, is to save the buffer count that was supplied at
> >>> creation time, and compare that with free buffer count, presto.
> >>>
> >>>
> >>> In other words, what use is a buffer pool if it can't even track the
> >>> number of buffers it has to offer for allocation ?
> >>>
> >>>
> >>> - Leo
> >>>
> >>>
> >>> ________________________________
> >>> From: Ola Liljedahl <ola.liljedahl@linaro.org>
> >>> Sent: Thursday, September 25, 2014 10:53 AM
> >>> To: Bill Fischofer
> >>> Cc: Rosenboim, Leonid; lng-odp-forward
> >>>
> >>> Subject: Re: [lng-odp] [PATCH] Add API odp_buffer_pool_delete
> >>>
> >>> Free/use counters updated on every alloc and free call might not be
> >>> possible to support, e.g. for HW buffer pools which allow allocation from HW
> >>> blocks. Also SW implementations of (shared) counters are not scalable.
> >>> Better to attempt to check if any buffers are still in use when buffer pool
> >>> destruction is attempted. Perhaps buffers should be marked as free or in use
> >>> and odp_buffer_pool_destroy() could traverse the buffer pool and check the
> >>> flags.
> >>>
> >>> On 25 September 2014 17:55, Bill Fischofer <bill.fischofer@linaro.org>
> >>> wrote:
> >>>>
> >>>> That's an interesting suggestion and something that should be relatively
> >>>> easy to implement since one of the pieces of buffer pool meta data could be
> >>>> an allocation count that is incremented on each buffer_alloc() and
> >>>> decremented on each buffer_free().
> >>>>
> >>>> Bill
> >>>>
> >>>> On Thu, Sep 25, 2014 at 10:49 AM, Rosenboim, Leonid
> >>>> <Leonid.Rosenboim@caviumnetworks.com> wrote:
> >>>>>
> >>>>> How about checking that all buffers have been freed to the pool prior
> >>>>> to destroying the pool itself ?
> >>>>>
> >>>>>
> >>>>> Also, since odp_buffer_pool_create() gets memory from the caller, the
> >>>>> destroy description should spell out that the caller is also responsible for
> >>>>> freeing this memory for a complete reversal of the action.
> >>>>>
> >>>>>
> >>>>> Then, if some buffers have not been returned, what to do ?
> >>>>>
> >>>>>
> >>>>> One option is to warn and proceed, another option is to bail and return
> >>>>> an error, a third option is to mark the pool as destined for deletion, which
> >>>>> prevents new buffers from being doled out, but allows buffer frees, and
> >>>>> trigger an automatic pool destruction when the last buffer is returned.
> >>>>>
> >>>>>
> >>>>> The risk of proceeding with destruction while some buffers are
> >>>>> outstanding is potential memory corruption that is very hard to debug.
> >>>>>
> >>>>>
> >>>>> ________________________________
> >>>>> From: lng-odp-bounces@lists.linaro.org
> >>>>> <lng-odp-bounces@lists.linaro.org> on behalf of Anders Roxell
> >>>>> <anders.roxell@linaro.org>
> >>>>> Sent: Thursday, September 25, 2014 5:48 AM
> >>>>> To: lng-odp-forward
> >>>>> Subject: Re: [lng-odp] [PATCH] Add API odp_buffer_pool_delete
> >>>>>
> >>>>>
> >>>>>
> >>>>> On 24 September 2014 22:12, Anders Roxell <anders.roxell@linaro.org>
> >>>>> wrote:
> >>>>>>
> >>>>>> Signed-off-by: Anders Roxell <anders.roxell@linaro.org>
> >>>>>> ---
> >>>>>>  .../linux-generic/include/api/odp_buffer_pool.h    |  9 ++++++++
> >>>>>>  platform/linux-generic/odp_buffer_pool.c           | 25
> >>>>>> ++++++++++++++++++++++
> >>>>>>  2 files changed, 34 insertions(+)
> >>>>>>
> >>>>>> diff --git a/platform/linux-generic/include/api/odp_buffer_pool.h
> >>>>>> b/platform/linux-generic/include/api/odp_buffer_pool.h
> >>>>>> index fe88898..fca2c62 100644
> >>>>>> --- a/platform/linux-generic/include/api/odp_buffer_pool.h
> >>>>>> +++ b/platform/linux-generic/include/api/odp_buffer_pool.h
> >>>>>> @@ -34,6 +34,15 @@ typedef uint32_t odp_buffer_pool_t;
> >>>>>>
> >>>>>>
> >>>>>>  /**
> >>>>>> + * Delete a buffer pool
> >>>>>> + *
> >>>>>> + * @param pool_hdl Buffer pool handle
> >>>>>> + *
> >>>>>> + * @return 0 if successful else -1
> >>>>>> + */
> >>>>>> +int odp_buffer_pool_delete(odp_buffer_pool_t pool_hdl);
> >>>>>> +
> >>>>>> +/**
> >>>>>>   * Create a buffer pool
> >>>>>>   *
> >>>>>>   * @param name      Name of the pool (max ODP_BUFFER_POOL_NAME_LEN -
> >>>>>> 1 chars)
> >>>>>> diff --git a/platform/linux-generic/odp_buffer_pool.c
> >>>>>> b/platform/linux-generic/odp_buffer_pool.c
> >>>>>> index 4d9ff45..7ba793d 100644
> >>>>>> --- a/platform/linux-generic/odp_buffer_pool.c
> >>>>>> +++ b/platform/linux-generic/odp_buffer_pool.c
> >>>>>> @@ -375,6 +375,31 @@ static void link_bufs(pool_entry_t *pool)
> >>>>>>         }
> >>>>>>  }
> >>>>>>
> >>>>>> +int odp_buffer_pool_delete(odp_buffer_pool_t pool_hdl)
> >>>>>> +{
> >>>>>> +       pool_entry_t *pool;
> >>>>>> +       if (odp_unlikely(ODP_CONFIG_BUFFER_POOLS <= pool_hdl))
> >>>>>> +               ODP_ERR("set_handle: Bad pool handle %u\n", pool_hdl);
> >>>>>> +
> >>>>>> +       pool = get_pool_entry(pool_hdl);
> >>>>>> +       pool->s.free_bufs = 0;
> >>>>>> +       strcpy(pool->s.name, "");
> >>>>>> +       /* Need to restore this because this is setup in
> >>>>>> +        * odp_buffer_pool_init_global
> >>>>>> +        * */
> >>>>>> +       pool->s.pool_hdl = pool_hdl;
> >>>>>> +       pool->s.buf_base = 0;
> >>>>>> +       pool->s.buf_size = 0;
> >>>>>> +       pool->s.buf_offset = 0;
> >>>>>> +       pool->s.num_bufs = 0;
> >>>>>> +       pool->s.pool_base_addr = NULL;
> >>>>>> +       pool->s.pool_size = 0;
> >>>>>> +       pool->s.user_size = 0;
> >>>>>> +       pool->s.user_align = 0;
> >>>>>> +       pool->s.buf_type = ODP_BUFFER_POOL_INVALID;
> >>>>>> +       pool->s.hdr_size = 0;
> >>>>>> +       return 0;
> >>>>>> +}
> >>>>>>
> >>>>>>  odp_buffer_pool_t odp_buffer_pool_create(const char *name,
> >>>>>>                                          void *base_addr, uint64_t
> >>>>>> size,
> >>>>>> --
> >>>>>> 2.1.0
> >>>>>>
> >>>>>
> >>>>>
> >>>>> Hi all,
> >>>>>
> >>>>> I had a chat with Ola and he proposed destroy instead of delete
> >>>>>
> >>>>> I think I like odp_buffer_pool_destroy more than what I've done in the
> >>>>> patch odp_buffer_pool_delete... =)
> >>>>>
> >>>>> Comments?
> >>>>>
> >>>>> Cheers,
> >>>>> Anders
> >>>>>
> >>>>> _______________________________________________
> >>>>> lng-odp mailing list
> >>>>> lng-odp@lists.linaro.org
> >>>>> http://lists.linaro.org/mailman/listinfo/lng-odp
> >>>>>
> >>>>
> >>>>
> >>>> _______________________________________________
> >>>> lng-odp mailing list
> >>>> lng-odp@lists.linaro.org
> >>>> http://lists.linaro.org/mailman/listinfo/lng-odp
> >>>>
> >>>
> >>
> >
> >
> > _______________________________________________
> > lng-odp mailing list
> > lng-odp@lists.linaro.org
> > http://lists.linaro.org/mailman/listinfo/lng-odp
> >
diff mbox

Patch

diff --git a/platform/linux-generic/include/api/odp_buffer_pool.h b/platform/linux-generic/include/api/odp_buffer_pool.h
index fe88898..fca2c62 100644
--- a/platform/linux-generic/include/api/odp_buffer_pool.h
+++ b/platform/linux-generic/include/api/odp_buffer_pool.h
@@ -34,6 +34,15 @@  typedef uint32_t odp_buffer_pool_t;


 /**
+ * Delete a buffer pool
+ *
+ * @param pool_hdl Buffer pool handle
+ *
+ * @return 0 if successful else -1
+ */
+int odp_buffer_pool_delete(odp_buffer_pool_t pool_hdl);
+
+/**
  * Create a buffer pool
  *
  * @param name      Name of the pool (max ODP_BUFFER_POOL_NAME_LEN - 1 chars)
diff --git a/platform/linux-generic/odp_buffer_pool.c b/platform/linux-generic/odp_buffer_pool.c
index 4d9ff45..7ba793d 100644
--- a/platform/linux-generic/odp_buffer_pool.c
+++ b/platform/linux-generic/odp_buffer_pool.c
@@ -375,6 +375,31 @@  static void link_bufs(pool_entry_t *pool)
        }
 }

+int odp_buffer_pool_delete(odp_buffer_pool_t pool_hdl)
+{
+       pool_entry_t *pool;
+       if (odp_unlikely(ODP_CONFIG_BUFFER_POOLS <= pool_hdl))
+               ODP_ERR("set_handle: Bad pool handle %u\n", pool_hdl);
+
+       pool = get_pool_entry(pool_hdl);
+       pool->s.free_bufs = 0;
+       strcpy(pool->s.name, "");
+       /* Need to restore this because this is setup in
+        * odp_buffer_pool_init_global
+        * */
+       pool->s.pool_hdl = pool_hdl;
+       pool->s.buf_base = 0;
+       pool->s.buf_size = 0;
+       pool->s.buf_offset = 0;
+       pool->s.num_bufs = 0;
+       pool->s.pool_base_addr = NULL;
+       pool->s.pool_size = 0;
+       pool->s.user_size = 0;
+       pool->s.user_align = 0;
+       pool->s.buf_type = ODP_BUFFER_POOL_INVALID;
+       pool->s.hdr_size = 0;
+       return 0;
+}

 odp_buffer_pool_t odp_buffer_pool_create(const char *name,
                                         void *base_addr, uint64_t size,