
[API-NEXT] api: pool: add buffer constructor for pool creation parameters

Message ID 1439317883-9334-1-git-send-email-zoltan.kiss@linaro.org
State New

Commit Message

Zoltan Kiss Aug. 11, 2015, 6:31 p.m. UTC
Applications can preset certain parts of the buffer or user area, so that when
that memory is allocated it starts from a known state. If the platform
allocates the memory during pool creation, it's enough to run the constructor
after that. If it allocates memory on demand, it should call the
constructor each time.

Signed-off-by: Zoltan Kiss <zoltan.kiss@linaro.org>
---
 include/odp/api/pool.h | 20 ++++++++++++++++++++
 1 file changed, 20 insertions(+)

Comments

Balasubramanian Manoharan Aug. 12, 2015, 6:34 a.m. UTC | #1
Hi,

Comments inline...

On 12 August 2015 at 00:01, Zoltan Kiss <zoltan.kiss@linaro.org> wrote:

> Applications can preset certain parts of the buffer or user area, so when
> that
> memory will be allocated it starts from a known state. If the platform
> allocates the memory during pool creation, it's enough to run the
> constructor
> after that. If it's allocating memory on demand, it should call the
> constructor each time.
>

[Bala] Not sure if I understand the above description correctly: does it
mean that if the memory is allocated for an existing pool, this constructor
should be called only during pool creation and not during each and every
buffer allocation? Then it would be the application's responsibility to reset
the user area before freeing the buffer? Is this the use case this API is
trying to address?


>
> Signed-off-by: Zoltan Kiss <zoltan.kiss@linaro.org>
> ---
>  include/odp/api/pool.h | 20 ++++++++++++++++++++
>  1 file changed, 20 insertions(+)
>
> diff --git a/include/odp/api/pool.h b/include/odp/api/pool.h
> index 2e79a55..1bd19bf 100644
> --- a/include/odp/api/pool.h
> +++ b/include/odp/api/pool.h
> @@ -41,6 +41,20 @@ extern "C" {
>  #define ODP_POOL_NAME_LEN  32
>
>  /**
> + * Buffer constructor callback function for pools.
> + *
> + * @param pool          Handle of the pool which owns the buffer
> + * @param buf_ctor_arg  Opaque pointer defined in odp_pool_param_t
> + * @param buf           Pointer to the buffer
> + *
> + * @note If the application specifies this pointer, it expects that every
> buffer
> + * is initialized with it when the underlying memory is allocated.
> + */
> +typedef void (odp_pool_buf_ctor_t)(odp_pool_t pool,
> +                                  void *buf_ctor_arg,
> +                                  void *buf);
> +
>

[Bala] We have avoided callback functions in the ODP architecture, so if the
requirement can be addressed without a callback, maybe we can follow that
approach. Again, I am not clear whether this callback function should be called
only once during pool creation or every time during buffer alloc.


> +/**
>   * Pool parameters
>   * Used to communicate pool creation options.
>   */
> @@ -88,6 +102,12 @@ typedef struct odp_pool_param_t {
>                         uint32_t num;
>                 } tmo;
>         };
> +
> +       /** Buffer constructor to initialize every buffer. Use NULL if no
> +           initialization needed. */
> +       odp_pool_buf_ctor_t *buf_ctor;
> +       /** Opaque pointer passed to buffer constructor */
> +       void *buf_ctor_arg;
>  } odp_pool_param_t;
>
>  /** Packet pool*/
> --
> 1.9.1
>
> _______________________________________________
> lng-odp mailing list
> lng-odp@lists.linaro.org
> https://lists.linaro.org/mailman/listinfo/lng-odp
>

Regards,
Bala
Ivan Khoronzhuk Aug. 12, 2015, 7:01 a.m. UTC | #2
Hi, Zoltan

On 11.08.15 21:31, Zoltan Kiss wrote:
> Applications can preset certain parts of the buffer or user area, so when that
> memory will be allocated it starts from a known state. If the platform
> allocates the memory during pool creation, it's enough to run the constructor
> after that. If it's allocating memory on demand, it should call the
> constructor each time.

Could you please clarify what this is needed for.

The main questions after a first glance:
What use case is it intended for?
Why can't this be done implicitly by the implementation?
Why should an application know about some hash, buffer preallocation, etc.?
What is an application going to do with a hash?
What is the benefit of this?

Zoltan Kiss Aug. 12, 2015, 10:07 a.m. UTC | #3
On 12/08/15 07:34, Bala Manoharan wrote:
> Hi,
>
> Comments inline...
>
> On 12 August 2015 at 00:01, Zoltan Kiss <zoltan.kiss@linaro.org
> <mailto:zoltan.kiss@linaro.org>> wrote:
>
>     Applications can preset certain parts of the buffer or user area, so
>     when that
>     memory will be allocated it starts from a known state. If the platform
>     allocates the memory during pool creation, it's enough to run the
>     constructor
>     after that. If it's allocating memory on demand, it should call the
>     constructor each time.
>
>
> [Bala] Not sure if I understand the above description correctly does it
> mean that if the memory is allocated
> for an existing pool this call should be called only during the pool
> creation and not during each and every buffer allocation?
I'm also not sure I understand the question :) What do you mean by "if the
memory is allocated for an _existing_ pool"? You can't allocate memory for
a pool which doesn't exist.
The key point is "when that memory will be allocated". This callback
should be called whenever memory is allocated for the buffer. If that is
done during pool creation (which I guess most sensible implementations do),
then it happens then. If for some reason the platform allocates memory only
when someone allocates the buffer, it happens then.

> Then it will be applications responsibility to reset the user area
> before freeing the buffer? Is this the use-case this API is trying to
> address?
No, the application is not required to reset it at any time. It can do
that if that's what it wants. The only requirement is that the buffer
starts from a known state after its memory is allocated.
The use case is the following: OVS has its own layer to handle buffers from
different sources, e.g. it has to be able to differentiate between
packets coming from DPDK and ODP (the latter can run DPDK underneath as
well, but that's a different story ...). It stores a "struct dp_packet"
in the user area to hold data about the packet:

https://git.linaro.org/lng/odp-ovs.git/blob/HEAD:/lib/dp-packet.h#l41

Most of these fields will be set to their right value during processing,
however there are two things we need to set right after receive:
"source" to DPBUF_ODP and "odp_pkt" to point to the odp_packet_t itself.
The first value is the same for all ODP packets, every time. And
the second is known when the underlying memory is allocated.
Both of them could be set when the pool is created, and OVS-DPDK does that
already, but ODP-OVS has to reset these values after every receive. Even
when it's only a few cycles to set and maybe some unnecessary cache loads,
it still adds up to a lot of cycles when you receive at 10-40-100 Gbps.


Zoltan Kiss Aug. 12, 2015, 10:09 a.m. UTC | #4
On 12/08/15 07:34, Bala Manoharan wrote:
> Hi,
>
> Comments inline...
>
> On 12 August 2015 at 00:01, Zoltan Kiss <zoltan.kiss@linaro.org
> <mailto:zoltan.kiss@linaro.org>> wrote:
>
>     Applications can preset certain parts of the buffer or user area, so
>     when that
>     memory will be allocated it starts from a known state. If the platform
>     allocates the memory during pool creation, it's enough to run the
>     constructor
>     after that. If it's allocating memory on demand, it should call the
>     constructor each time.
>
>
> [Bala] Not sure if I understand the above description correctly does it
> mean that if the memory is allocated
> for an existing pool this call should be called only during the pool
> creation and not during each and every buffer allocation?
> Then it will be applications responsibility to reset the user area
> before freeing the buffer? Is this the use-case this API is trying to
> address?
>
>
>     Signed-off-by: Zoltan Kiss <zoltan.kiss@linaro.org
>     <mailto:zoltan.kiss@linaro.org>>
>     ---
>       include/odp/api/pool.h | 20 ++++++++++++++++++++
>       1 file changed, 20 insertions(+)
>
>     diff --git a/include/odp/api/pool.h b/include/odp/api/pool.h
>     index 2e79a55..1bd19bf 100644
>     --- a/include/odp/api/pool.h
>     +++ b/include/odp/api/pool.h
>     @@ -41,6 +41,20 @@ extern "C" {
>       #define ODP_POOL_NAME_LEN  32
>
>       /**
>     + * Buffer constructor callback function for pools.
>     + *
>     + * @param pool          Handle of the pool which owns the buffer
>     + * @param buf_ctor_arg  Opaque pointer defined in odp_pool_param_t
>     + * @param buf           Pointer to the buffer
>     + *
>     + * @note If the application specifies this pointer, it expects that
>     every buffer
>     + * is initialized with it when the underlying memory is allocated.
>     + */
>     +typedef void (odp_pool_buf_ctor_t)(odp_pool_t pool,
>     +                                  void *buf_ctor_arg,
>     +                                  void *buf);
>     +
>
>
> [Bala] We have avoided call back functions in ODP architecture so if the
> requirement can be addressed without a call-back maybe we can follow
> that approach. Again I am not clear if this call-back function should be
> called only once during pool creation or every time during buffer alloc.

Why this fear of callbacks? I know it breaks the call graph feature
of Eclipse, but that's not a huge issue.

Bill Fischofer Aug. 12, 2015, 10:40 a.m. UTC | #5
Some basic comments:

First, ODP buffers are never returned as addresses.  Buffers are of type
odp_buffer_t, not void *.

Second, the purpose here is to initialize user metadata, not the buffer
(that's the responsibility of the implementation), and ODP only defines
user metadata for packets, not general buffers, so this init is done for
pools of type ODP_POOL_PACKET.  So a better signature might be:

typedef void (odp_packet_uarea_init_t)(odp_packet_t pkt,
                                       void *uarea_init_arg,
                                       void *uarea);

and this would be activated via a new field in the odp_pool_param_t:

typedef struct odp_pool_param_t {
	/** Pool type */
	int type;

	union {
		struct {
			/** Number of buffers in the pool */
			uint32_t num;

			/** Buffer size in bytes. The maximum number of bytes
			    application will store in each buffer. */
			uint32_t size;

			/** Minimum buffer alignment in bytes. Valid values are
			    powers of two. Use 0 for default alignment.
			    Default will always be a multiple of 8. */
			uint32_t align;
		} buf;
		struct {
			/** The number of packets that the pool must provide
			    that are packet length 'len' bytes or smaller. */
			uint32_t num;

			/** Minimum packet length that the pool must provide
			    'num' packets. The number of packets may be less
			    than 'num' when packets are larger than 'len'.
			    Use 0 for default. */
			uint32_t len;

			/** Minimum number of packet data bytes that are stored
			    in the first segment of a packet. The maximum value
			    is defined by ODP_CONFIG_PACKET_SEG_LEN_MAX.
			    Use 0 for default. */
			uint32_t seg_len;

			/** User area size in bytes. Specify as 0 if no user
			    area is needed. */
			uint32_t uarea_size;

			/** User area initializer. Specify as NULL if
			    initialization calls are not needed. */
			odp_packet_uarea_init_t *uarea_init;

			/** User area initializer parameter. Ignored unless
			    user area initializer is non-NULL. */
			void *uarea_init_arg;
		} pkt;
		struct {
			/** Number of timeouts in the pool */
			uint32_t num;
		} tmo;
	};
} odp_pool_param_t;

The semantics would be that at odp_pool_create() time, if a uarea_init()
routine is specified for the pool, then it is called for each packet in the
pool and is passed the handle of that packet, the opaque uarea_init_arg
specified on the odp_pool_param_t, and the address of the user area
associated with that packet (the size of that area is fixed and presumably
known to the initializer since it was specified in the uarea_size field of
the odp_pool_param_t).

One of the objections to callbacks voiced last year is that different
implementations may not be able to iterate through pool members as implied
by this.  Since the intent here is to initialize the packet user area,
perhaps a more specific focus on that intent would be less problematic.

Balasubramanian Manoharan Aug. 12, 2015, 10:47 a.m. UTC | #6
On 12 August 2015 at 15:37, Zoltan Kiss <zoltan.kiss@linaro.org> wrote:

>
>
> On 12/08/15 07:34, Bala Manoharan wrote:
>
>> Hi,
>>
>> Comments inline...
>>
>> On 12 August 2015 at 00:01, Zoltan Kiss <zoltan.kiss@linaro.org
>> <mailto:zoltan.kiss@linaro.org>> wrote:
>>
>>     Applications can preset certain parts of the buffer or user area, so
>>     when that
>>     memory will be allocated it starts from a known state. If the platform
>>     allocates the memory during pool creation, it's enough to run the
>>     constructor
>>     after that. If it's allocating memory on demand, it should call the
>>     constructor each time.
>>
>>
>> [Bala] Not sure if I understand the above description correctly does it
>> mean that if the memory is allocated
>> for an existing pool this call should be called only during the pool
>> creation and not during each and every buffer allocation?
>>
> I'm also not sure I understand the question :) How do you mean "if the
> memory is allocated for an _existing_ pool"? I mean, you can't allocate
> memory for a pool which doesn't exist.
> The key point is "when that memory will be allocated". This callback
> should be called whenever the memory is allocated for the buffer. If it is
> done during pool creation (which I guess most sensible implementations do),
> then it happens then. If for some reason the platform allocates memory when
> someone allocates the buffer, it happens then.
>

[Bala] Let me try to phrase my query in a better way :-)

Usually when an application calls odp_pool_create(), most implementations
will allocate a memory region and map it as pointers based on a fixed segment
size, so that when the application calls odp_buffer_alloc() a segment of
this memory pool (segments may be combined depending on the required
size of the buffer) is returned to the user.

So my understanding of your requirement is that whenever the application
calls odp_buffer_alloc(), the user area should be reset to a
predefined value.

So in case an application does the following:
1. odp_buffer_alloc()
2. odp_buffer_free()
3. odp_buffer_alloc()

For simplicity let's assume the implementation returned the same buffer
during the second odp_buffer_alloc() call (basically the buffer gets
reused); then the implementation should have reset the user area of the
buffer before returning it to the application. Is this the requirement?

I have an additional query regarding the use case you have defined below,
but I would like to get clarity on the above first :)

Regards,
Bala


Balasubramanian Manoharan Aug. 12, 2015, 10:55 a.m. UTC | #7
On 12 August 2015 at 16:17, Bala Manoharan <bala.manoharan@linaro.org>
wrote:

>
>
> On 12 August 2015 at 15:37, Zoltan Kiss <zoltan.kiss@linaro.org> wrote:
>
>>
>>
>> On 12/08/15 07:34, Bala Manoharan wrote:
>>
>>> Hi,
>>>
>>> Comments inline...
>>>
>>> On 12 August 2015 at 00:01, Zoltan Kiss <zoltan.kiss@linaro.org
>>> <mailto:zoltan.kiss@linaro.org>> wrote:
>>>
>>>     Applications can preset certain parts of the buffer or user area, so
>>>     when that
>>>     memory will be allocated it starts from a known state. If the
>>> platform
>>>     allocates the memory during pool creation, it's enough to run the
>>>     constructor
>>>     after that. If it's allocating memory on demand, it should call the
>>>     constructor each time.
>>>
>>>
>>> [Bala] Not sure if I understand the above description correctly does it
>>> mean that if the memory is allocated
>>> for an existing pool this call should be called only during the pool
>>> creation and not during each and every buffer allocation?
>>>
>> I'm also not sure I understand the question :) How do you mean "if the
>> memory is allocated for an _existing_ pool"? I mean, you can't allocate
>> memory for a pool which doesn't exist.
>> The key point is "when that memory will be allocated". This callback
>> should be called whenever the memory is allocated for the buffer. If it is
>> done during pool creation (which I guess most sensible implementations do),
>> then it happens then. If for some reason the platform allocates memory when
>> someone allocates the buffer, it happens then.
>>
>
> [Bala] Let me try to decrypt my query in a better way :-)
>
> Usually when application calls odp_pool_create() most implementations will
> allocate a memory region and map them as pointers based on a fixed segment
> size so that when the application calls odp_buffer_allocate() the segment
> of this memory pool (segments may be combined depending on the required
> size of the buffer ) will be returned to the user.
>
> So my understanding of your requirement is that whenever the application
> calls  odp_buffer_alloc() then the user area should be reset to a
> predefined value.
>
> So in-case that an application does the following
> 1. odp_buffer_alloc()
> 2. odp_buffer_free()
> 3. odp_buffer_alloc()
>
> For simplicity lets assume the implementation returned the same buffer
> during the second odp_buffer_alloc() call ( Basically the buffer gets
> reused ) then the implementation should have reset the user-area of the
> buffer before returning it to the application. Is this the requirement?
>

[Bala] Missed a sentence here.
In the above scenario the implementation should have reset the user area
before returning from odp_buffer_alloc() in both step 1 and step 3.

Bill Fischofer Aug. 12, 2015, 10:59 a.m. UTC | #8
User metadata is entirely under the control of the application.  The only
things ODP implementations do with regard to it are:


   1. Reserve space for it at pool create time.
   2. Copy it if, as part of an ODP API call, a new packet is implicitly
   allocated.  For example, odp_packet_add_data() returns an
   odp_packet_t that may be the same as or different from the input
   odp_packet_t.  In the latter case, the user area of the input packet is
   copied to the output packet as part of the operation of the API.

Under discussion here is the option to permit the application to initialize
the contents of the user area for each element of a packet pool at pool
create time.

I've added an agenda item for this to today's ARCH call.
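[Editor's sketch] Point 1 above can be illustrated as follows. This is hedged: it assumes the odp_pool_param_t packet fields and odp_packet_user_area() of the API of this period, and the exact names or signatures may differ; "struct dp_packet" stands in for whatever metadata the application defines.

```c
/* Sketch only; assumes the ODP packet pool API of this era
 * (odp_pool_param_t with pkt.uarea_size, odp_packet_user_area()). */
odp_pool_param_t params;

memset(&params, 0, sizeof(params));
params.type           = ODP_POOL_PACKET;
params.pkt.num        = 1024;                      /* number of packets */
params.pkt.len        = 1518;                      /* packet data length */
params.pkt.uarea_size = sizeof(struct dp_packet);  /* space reserved, contents
                                                      left to the application */

odp_pool_t pool = odp_pool_create("pkt_pool", &params);

odp_packet_t pkt = odp_packet_alloc(pool, 1518);
struct dp_packet *md = odp_packet_user_area(pkt);  /* application-managed */
```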

On Wed, Aug 12, 2015 at 5:55 AM, Bala Manoharan <bala.manoharan@linaro.org>
wrote:

>
>
> On 12 August 2015 at 16:17, Bala Manoharan <bala.manoharan@linaro.org>
> wrote:
>
>>
>>
>> On 12 August 2015 at 15:37, Zoltan Kiss <zoltan.kiss@linaro.org> wrote:
>>
>>>
>>>
>>> On 12/08/15 07:34, Bala Manoharan wrote:
>>>
>>>> Hi,
>>>>
>>>> Comments inline...
>>>>
>>>> On 12 August 2015 at 00:01, Zoltan Kiss <zoltan.kiss@linaro.org
>>>> <mailto:zoltan.kiss@linaro.org>> wrote:
>>>>
>>>>     Applications can preset certain parts of the buffer or user area, so
>>>>     when that
>>>>     memory will be allocated it starts from a known state. If the
>>>> platform
>>>>     allocates the memory during pool creation, it's enough to run the
>>>>     constructor
>>>>     after that. If it's allocating memory on demand, it should call the
>>>>     constructor each time.
>>>>
>>>>
>>>> [Bala] Not sure if I understand the above description correctly does it
>>>> mean that if the memory is allocated
>>>> for an existing pool this call should be called only during the pool
>>>> creation and not during each and every buffer allocation?
>>>>
>>> I'm also not sure I understand the question :) How do you mean "if the
>>> memory is allocated for an _existing_ pool"? I mean, you can't allocate
>>> memory for a pool which doesn't exist.
>>> The key point is "when that memory will be allocated". This callback
>>> should be called whenever the memory is allocated for the buffer. If it is
>>> done during pool creation (which I guess most sensible implementations do),
>>> then it happens then. If for some reason the platform allocates memory when
>>> someone allocates the buffer, it happens then.
>>>
>>
>> [Bala] Let me try to decrypt my query in a better way :-)
>>
>> Usually when application calls odp_pool_create() most implementations
>> will allocate a memory region and map them as pointers based on a fixed
>> segment size so that when the application calls odp_buffer_allocate() the
>> segment of this memory pool (segments may be combined depending on the
>> required size of the buffer ) will be returned to the user.
>>
>> So my understanding of your requirement is that whenever the application
>> calls  odp_buffer_alloc() then the user area should be reset to a
>> predefined value.
>>
>> So in-case that an application does the following
>> 1. odp_buffer_alloc()
>> 2. odp_buffer_free()
>> 3. odp_buffer_alloc()
>>
>> For simplicity lets assume the implementation returned the same buffer
>> during the second odp_buffer_alloc() call ( Basically the buffer gets
>> reused ) then the implementation should have reset the user-area of the
>> buffer before returning it to the application. Is this the requirement?
>>
>
> [Bala] Missed a sentence here
> In the above scenario implementation should have reset the user-area
> before both odp_buffer_alloc() both in step 1 and step 3.
>
>>
>> I have additional query regarding the use-case you have defined below but
>> I would like to get clarity on the above first :)
>>
>> Regards,
>> Bala
>>
>>
>>> Then it will be applications responsibility to reset the user area
>>>> before freeing the buffer? Is this the use-case this API is trying to
>>>> address?
>>>>
>>> No, the application is not required to reset it at any time. It can do
>>> that, if it's what it wants. The only requirement is that the buffer starts
>>> from a known state after its memory was allocated.
>>> The use case is the following: OVS has its own layer to handle buffers from
>>> different sources, e.g. it has to be able to differentiate between packets
>>> coming from DPDK and ODP (the latter can run DPDK underneath as well, but
>>> that's a different story ...). It stores a "struct dp_packet" in the user
>>> area to store data about the packet:
>>>
>>> https://git.linaro.org/lng/odp-ovs.git/blob/HEAD:/lib/dp-packet.h#l41
>>>
>>> Most of these fields will be set during processing to their right value,
>>> however there are two things we need to set right after receive: "source"
>>> to DPBUF_ODP and "odp_pkt" to point to the odp_packet_t itself. The first
>>> value will be the same for all ODP packets, every time. And the second is
>>> known when the underlying memory was allocated.
>>> Both of them could be set when the pool is created, OVS-DPDK does that
>>> already, but ODP-OVS has to reset these values after every receive. Even
>>> when it's only a few cycles to set and maybe unnecessary cache loads, it's
>>> still a lot of cycles when you receive at 10-40-100 Gbps.
>>>
>>>
>>>
>>>>
>>>>     Signed-off-by: Zoltan Kiss <zoltan.kiss@linaro.org
>>>>     <mailto:zoltan.kiss@linaro.org>>
>>>>
>>>>     ---
>>>>       include/odp/api/pool.h | 20 ++++++++++++++++++++
>>>>       1 file changed, 20 insertions(+)
>>>>
>>>>     diff --git a/include/odp/api/pool.h b/include/odp/api/pool.h
>>>>     index 2e79a55..1bd19bf 100644
>>>>     --- a/include/odp/api/pool.h
>>>>     +++ b/include/odp/api/pool.h
>>>>     @@ -41,6 +41,20 @@ extern "C" {
>>>>       #define ODP_POOL_NAME_LEN  32
>>>>
>>>>       /**
>>>>     + * Buffer constructor callback function for pools.
>>>>     + *
>>>>     + * @param pool          Handle of the pool which owns the buffer
>>>>     + * @param buf_ctor_arg  Opaque pointer defined in odp_pool_param_t
>>>>     + * @param buf           Pointer to the buffer
>>>>     + *
>>>>     + * @note If the application specifies this pointer, it expects that
>>>>     every buffer
>>>>     + * is initialized with it when the underlying memory is allocated.
>>>>     + */
>>>>     +typedef void (odp_pool_buf_ctor_t)(odp_pool_t pool,
>>>>     +                                  void *buf_ctor_arg,
>>>>     +                                  void *buf);
>>>>     +
>>>>
>>>>
>>>> [Bala] We have avoided call back functions in ODP architecture so if the
>>>> requirement can be addressed without a call-back maybe we can follow
>>>> that approach. Again I am not clear if this call-back function should be
>>>> called only once during pool creation or every time during buffer alloc.
>>>>
>>>>     +/**
>>>>        * Pool parameters
>>>>        * Used to communicate pool creation options.
>>>>        */
>>>>     @@ -88,6 +102,12 @@ typedef struct odp_pool_param_t {
>>>>                              uint32_t num;
>>>>                      } tmo;
>>>>              };
>>>>     +
>>>>     +       /** Buffer constructor to initialize every buffer. Use NULL
>>>>     if no
>>>>     +           initialization needed. */
>>>>     +       odp_pool_buf_ctor_t *buf_ctor;
>>>>     +       /** Opaque pointer passed to buffer constructor */
>>>>     +       void *buf_ctor_arg;
>>>>       } odp_pool_param_t;
>>>>
>>>>       /** Packet pool*/
>>>>     --
>>>>     1.9.1
>>>>
>>>>
>>>>
>>>> Regards,
>>>> Bala
>>>>
>>>
>>
>
>
>
Balasubramanian Manoharan Aug. 12, 2015, 11:05 a.m. UTC | #9
Hi,

On 12 August 2015 at 16:29, Bill Fischofer <bill.fischofer@linaro.org>
wrote:

> User metadata is entirely under the control of the application.  The only
> thing ODP implementations do with regard to it are:
>
>
>    1. Reserve space for it at pool create time.
>    2. Copy it if as part of an ODP API a new packet is implicitly
>    allocated.  For example, the definition of odp_packet_add_data() returns an
>    odp_packet_t that may be the same or different than the input
>    odp_packet_t.  In the latter case, the user area of the input packet is
>    copied to the output packet as part of the operation of the API
>
> Under discussion here is the option to permit the application to
> initialize the contents of the user area for each element of a packet pool
> at pool create time.
>

Adding a minor point here. If the requirement is to call this
initialization only during pool create, then the application will have to
reset the user area before calling the odp_packet_free() API.

Regards,
Bala

>
> I've added an agenda item for this to today's ARCH call.
>
> On Wed, Aug 12, 2015 at 5:55 AM, Bala Manoharan <bala.manoharan@linaro.org
> > wrote:
>
>>
>>
>> On 12 August 2015 at 16:17, Bala Manoharan <bala.manoharan@linaro.org>
>> wrote:
>>
>>>
>>>
>>> On 12 August 2015 at 15:37, Zoltan Kiss <zoltan.kiss@linaro.org> wrote:
>>>
>>>>
>>>>
>>>> On 12/08/15 07:34, Bala Manoharan wrote:
>>>>
>>>>> Hi,
>>>>>
>>>>> Comments inline...
>>>>>
>>>>> On 12 August 2015 at 00:01, Zoltan Kiss <zoltan.kiss@linaro.org
>>>>> <mailto:zoltan.kiss@linaro.org>> wrote:
>>>>>
>>>>>     Applications can preset certain parts of the buffer or user area,
>>>>> so
>>>>>     when that
>>>>>     memory will be allocated it starts from a known state. If the
>>>>> platform
>>>>>     allocates the memory during pool creation, it's enough to run the
>>>>>     constructor
>>>>>     after that. If it's allocating memory on demand, it should call the
>>>>>     constructor each time.
>>>>>
>>>>>
>>>>> [Bala] Not sure if I understand the above description correctly does it
>>>>> mean that if the memory is allocated
>>>>> for an existing pool this call should be called only during the pool
>>>>> creation and not during each and every buffer allocation?
>>>>>
>>>> I'm also not sure I understand the question :) How do you mean "if the
>>>> memory is allocated for an _existing_ pool"? I mean, you can't allocate
>>>> memory for a pool which doesn't exist.
>>>> The key point is "when that memory will be allocated". This callback
>>>> should be called whenever the memory is allocated for the buffer. If it is
>>>> done during pool creation (which I guess most sensible implementations do),
>>>> then it happens then. If for some reason the platform allocates memory when
>>>> someone allocates the buffer, it happens then.
>>>>
>>>
>>> [Bala] Let me try to decrypt my query in a better way :-)
>>>
>>> Usually when application calls odp_pool_create() most implementations
>>> will allocate a memory region and map them as pointers based on a fixed
>>> segment size so that when the application calls odp_buffer_allocate() the
>>> segment of this memory pool (segments may be combined depending on the
>>> required size of the buffer ) will be returned to the user.
>>>
>>> So my understanding of your requirement is that whenever the application
>>> calls  odp_buffer_alloc() then the user area should be reset to a
>>> predefined value.
>>>
>>> So in-case that an application does the following
>>> 1. odp_buffer_alloc()
>>> 2. odp_buffer_free()
>>> 3. odp_buffer_alloc()
>>>
>>> For simplicity lets assume the implementation returned the same buffer
>>> during the second odp_buffer_alloc() call ( Basically the buffer gets
>>> reused ) then the implementation should have reset the user-area of the
>>> buffer before returning it to the application. Is this the requirement?
>>>
>>
>> [Bala] Missed a sentence here
>> In the above scenario implementation should have reset the user-area
>> before both odp_buffer_alloc() both in step 1 and step 3.
>>
>>>
>>> I have additional query regarding the use-case you have defined below
>>> but I would like to get clarity on the above first :)
>>>
>>> Regards,
>>> Bala
>>>
>>>
>>>> Then it will be applications responsibility to reset the user area
>>>>> before freeing the buffer? Is this the use-case this API is trying to
>>>>> address?
>>>>>
>>>> No, the application is not required to reset it at any time. It can do
>>>> that, if it's what it wants. The only requirement is that the buffer starts
>>>> from a known state after its memory was allocated.
>>>> The use case is the following: OVS has its own layer to handle buffers
>>>> from different sources, e.g. it has to be able to differentiate between
>>>> packets coming from DPDK and ODP (the latter can run DPDK underneath as
>>>> well, but that's a different story ...). It stores a "struct dp_packet" in
>>>> the user area to store data about the packet:
>>>>
>>>> https://git.linaro.org/lng/odp-ovs.git/blob/HEAD:/lib/dp-packet.h#l41
>>>>
>>>> Most of these fields will be set during processing to their right
>>>> value, however there are two things we need to set right after receive:
>>>> "source" to DPBUF_ODP and "odp_pkt" to point to the odp_packet_t itself.
>>>> The first value will be the same for all ODP packets, every time. And the
>>>> second is known when the underlying memory was allocated.
>>>> Both of them could be set when the pool is created, OVS-DPDK does that
>>>> already, but ODP-OVS has to reset these values after every receive. Even
>>>> when it's only a few cycles to set and maybe unnecessary cache loads, it's
>>>> still a lot of cycles when you receive at 10-40-100 Gbps.
>>>>
>>>>
>>>>
>>>>>
>>>>>     Signed-off-by: Zoltan Kiss <zoltan.kiss@linaro.org
>>>>>     <mailto:zoltan.kiss@linaro.org>>
>>>>>
>>>>>     ---
>>>>>       include/odp/api/pool.h | 20 ++++++++++++++++++++
>>>>>       1 file changed, 20 insertions(+)
>>>>>
>>>>>     diff --git a/include/odp/api/pool.h b/include/odp/api/pool.h
>>>>>     index 2e79a55..1bd19bf 100644
>>>>>     --- a/include/odp/api/pool.h
>>>>>     +++ b/include/odp/api/pool.h
>>>>>     @@ -41,6 +41,20 @@ extern "C" {
>>>>>       #define ODP_POOL_NAME_LEN  32
>>>>>
>>>>>       /**
>>>>>     + * Buffer constructor callback function for pools.
>>>>>     + *
>>>>>     + * @param pool          Handle of the pool which owns the buffer
>>>>>     + * @param buf_ctor_arg  Opaque pointer defined in odp_pool_param_t
>>>>>     + * @param buf           Pointer to the buffer
>>>>>     + *
>>>>>     + * @note If the application specifies this pointer, it expects
>>>>> that
>>>>>     every buffer
>>>>>     + * is initialized with it when the underlying memory is allocated.
>>>>>     + */
>>>>>     +typedef void (odp_pool_buf_ctor_t)(odp_pool_t pool,
>>>>>     +                                  void *buf_ctor_arg,
>>>>>     +                                  void *buf);
>>>>>     +
>>>>>
>>>>>
>>>>> [Bala] We have avoided call back functions in ODP architecture so if
>>>>> the
>>>>> requirement can be addressed without a call-back maybe we can follow
>>>>> that approach. Again I am not clear if this call-back function should
>>>>> be
>>>>> called only once during pool creation or every time during buffer
>>>>> alloc.
>>>>>
>>>>>     +/**
>>>>>        * Pool parameters
>>>>>        * Used to communicate pool creation options.
>>>>>        */
>>>>>     @@ -88,6 +102,12 @@ typedef struct odp_pool_param_t {
>>>>>                              uint32_t num;
>>>>>                      } tmo;
>>>>>              };
>>>>>     +
>>>>>     +       /** Buffer constructor to initialize every buffer. Use NULL
>>>>>     if no
>>>>>     +           initialization needed. */
>>>>>     +       odp_pool_buf_ctor_t *buf_ctor;
>>>>>     +       /** Opaque pointer passed to buffer constructor */
>>>>>     +       void *buf_ctor_arg;
>>>>>       } odp_pool_param_t;
>>>>>
>>>>>       /** Packet pool*/
>>>>>     --
>>>>>     1.9.1
>>>>>
>>>>>
>>>>>
>>>>> Regards,
>>>>> Bala
>>>>>
>>>>
>>>
>>
>>
>>
>
Bill Fischofer Aug. 12, 2015, 11:29 a.m. UTC | #10
Yes. Since user metadata is entirely under application control, if the
application needs to update any state information it is storing there prior
to freeing the packet, then that's an application responsibility.

On Wed, Aug 12, 2015 at 6:05 AM, Bala Manoharan <bala.manoharan@linaro.org>
wrote:

> Hi,
>
> On 12 August 2015 at 16:29, Bill Fischofer <bill.fischofer@linaro.org>
> wrote:
>
>> User metadata is entirely under the control of the application.  The only
>> thing ODP implementations do with regard to it are:
>>
>>
>>    1. Reserve space for it at pool create time.
>>    2. Copy it if as part of an ODP API a new packet is implicitly
>>    allocated.  For example, the definition of odp_packet_add_data() returns an
>>    odp_packet_t that may be the same or different than the input
>>    odp_packet_t.  In the latter case, the user area of the input packet is
>>    copied to the output packet as part of the operation of the API
>>
>> Under discussion here is the option to permit the application to
>> initialize the contents of the user area for each element of a packet pool
>> at pool create time.
>>
>
> Adding a minor point here. If the requirement is to call this
> initialization only during pool create then application will have to reset
> the user-area before calling odp_packet_free() API.
>
> Regards,
> Bala
>
>>
>> I've added an agenda item for this to today's ARCH call.
>>
>> On Wed, Aug 12, 2015 at 5:55 AM, Bala Manoharan <
>> bala.manoharan@linaro.org> wrote:
>>
>>>
>>>
>>> On 12 August 2015 at 16:17, Bala Manoharan <bala.manoharan@linaro.org>
>>> wrote:
>>>
>>>>
>>>>
>>>> On 12 August 2015 at 15:37, Zoltan Kiss <zoltan.kiss@linaro.org> wrote:
>>>>
>>>>>
>>>>>
>>>>> On 12/08/15 07:34, Bala Manoharan wrote:
>>>>>
>>>>>> Hi,
>>>>>>
>>>>>> Comments inline...
>>>>>>
>>>>>> On 12 August 2015 at 00:01, Zoltan Kiss <zoltan.kiss@linaro.org
>>>>>> <mailto:zoltan.kiss@linaro.org>> wrote:
>>>>>>
>>>>>>     Applications can preset certain parts of the buffer or user area,
>>>>>> so
>>>>>>     when that
>>>>>>     memory will be allocated it starts from a known state. If the
>>>>>> platform
>>>>>>     allocates the memory during pool creation, it's enough to run the
>>>>>>     constructor
>>>>>>     after that. If it's allocating memory on demand, it should call
>>>>>> the
>>>>>>     constructor each time.
>>>>>>
>>>>>>
>>>>>> [Bala] Not sure if I understand the above description correctly does
>>>>>> it
>>>>>> mean that if the memory is allocated
>>>>>> for an existing pool this call should be called only during the pool
>>>>>> creation and not during each and every buffer allocation?
>>>>>>
>>>>> I'm also not sure I understand the question :) How do you mean "if the
>>>>> memory is allocated for an _existing_ pool"? I mean, you can't allocate
>>>>> memory for a pool which doesn't exist.
>>>>> The key point is "when that memory will be allocated". This callback
>>>>> should be called whenever the memory is allocated for the buffer. If it is
>>>>> done during pool creation (which I guess most sensible implementations do),
>>>>> then it happens then. If for some reason the platform allocates memory when
>>>>> someone allocates the buffer, it happens then.
>>>>>
>>>>
>>>> [Bala] Let me try to decrypt my query in a better way :-)
>>>>
>>>> Usually when application calls odp_pool_create() most implementations
>>>> will allocate a memory region and map them as pointers based on a fixed
>>>> segment size so that when the application calls odp_buffer_allocate() the
>>>> segment of this memory pool (segments may be combined depending on the
>>>> required size of the buffer ) will be returned to the user.
>>>>
>>>> So my understanding of your requirement is that whenever the
>>>> application calls  odp_buffer_alloc() then the user area should be reset to
>>>> a predefined value.
>>>>
>>>> So in-case that an application does the following
>>>> 1. odp_buffer_alloc()
>>>> 2. odp_buffer_free()
>>>> 3. odp_buffer_alloc()
>>>>
>>>> For simplicity lets assume the implementation returned the same buffer
>>>> during the second odp_buffer_alloc() call ( Basically the buffer gets
>>>> reused ) then the implementation should have reset the user-area of the
>>>> buffer before returning it to the application. Is this the requirement?
>>>>
>>>
>>> [Bala] Missed a sentence here
>>> In the above scenario implementation should have reset the user-area
>>> before both odp_buffer_alloc() both in step 1 and step 3.
>>>
>>>>
>>>> I have additional query regarding the use-case you have defined below
>>>> but I would like to get clarity on the above first :)
>>>>
>>>> Regards,
>>>> Bala
>>>>
>>>>
>>>>> Then it will be applications responsibility to reset the user area
>>>>>> before freeing the buffer? Is this the use-case this API is trying to
>>>>>> address?
>>>>>>
>>>>> No, the application is not required to reset it at any time. It can do
>>>>> that, if it's what it wants. The only requirement is that the buffer starts
>>>>> from a known state after its memory was allocated.
>>>>> The use case is the following: OVS has its own layer to handle buffers
>>>>> from different sources, e.g. it has to be able to differentiate between
>>>>> packets coming from DPDK and ODP (the latter can run DPDK underneath as
>>>>> well, but that's a different story ...). It stores a "struct dp_packet" in
>>>>> the user area to store data about the packet:
>>>>>
>>>>> https://git.linaro.org/lng/odp-ovs.git/blob/HEAD:/lib/dp-packet.h#l41
>>>>>
>>>>> Most of these fields will be set during processing to their right
>>>>> value, however there are two things we need to set right after receive:
>>>>> "source" to DPBUF_ODP and "odp_pkt" to point to the odp_packet_t itself.
>>>>> The first value will be the same for all ODP packets, every time. And the
>>>>> second is known when the underlying memory was allocated.
>>>>> Both of them could be set when the pool is created, OVS-DPDK does that
>>>>> already, but ODP-OVS has to reset these values after every receive. Even
>>>>> when it's only a few cycles to set and maybe unnecessary cache loads, it's
>>>>> still a lot of cycles when you receive at 10-40-100 Gbps.
>>>>>
>>>>>
>>>>>
>>>>>>
>>>>>>     Signed-off-by: Zoltan Kiss <zoltan.kiss@linaro.org
>>>>>>     <mailto:zoltan.kiss@linaro.org>>
>>>>>>
>>>>>>     ---
>>>>>>       include/odp/api/pool.h | 20 ++++++++++++++++++++
>>>>>>       1 file changed, 20 insertions(+)
>>>>>>
>>>>>>     diff --git a/include/odp/api/pool.h b/include/odp/api/pool.h
>>>>>>     index 2e79a55..1bd19bf 100644
>>>>>>     --- a/include/odp/api/pool.h
>>>>>>     +++ b/include/odp/api/pool.h
>>>>>>     @@ -41,6 +41,20 @@ extern "C" {
>>>>>>       #define ODP_POOL_NAME_LEN  32
>>>>>>
>>>>>>       /**
>>>>>>     + * Buffer constructor callback function for pools.
>>>>>>     + *
>>>>>>     + * @param pool          Handle of the pool which owns the buffer
>>>>>>     + * @param buf_ctor_arg  Opaque pointer defined in
>>>>>> odp_pool_param_t
>>>>>>     + * @param buf           Pointer to the buffer
>>>>>>     + *
>>>>>>     + * @note If the application specifies this pointer, it expects
>>>>>> that
>>>>>>     every buffer
>>>>>>     + * is initialized with it when the underlying memory is
>>>>>> allocated.
>>>>>>     + */
>>>>>>     +typedef void (odp_pool_buf_ctor_t)(odp_pool_t pool,
>>>>>>     +                                  void *buf_ctor_arg,
>>>>>>     +                                  void *buf);
>>>>>>     +
>>>>>>
>>>>>>
>>>>>> [Bala] We have avoided call back functions in ODP architecture so if
>>>>>> the
>>>>>> requirement can be addressed without a call-back maybe we can follow
>>>>>> that approach. Again I am not clear if this call-back function should
>>>>>> be
>>>>>> called only once during pool creation or every time during buffer
>>>>>> alloc.
>>>>>>
>>>>>>     +/**
>>>>>>        * Pool parameters
>>>>>>        * Used to communicate pool creation options.
>>>>>>        */
>>>>>>     @@ -88,6 +102,12 @@ typedef struct odp_pool_param_t {
>>>>>>                              uint32_t num;
>>>>>>                      } tmo;
>>>>>>              };
>>>>>>     +
>>>>>>     +       /** Buffer constructor to initialize every buffer. Use
>>>>>> NULL
>>>>>>     if no
>>>>>>     +           initialization needed. */
>>>>>>     +       odp_pool_buf_ctor_t *buf_ctor;
>>>>>>     +       /** Opaque pointer passed to buffer constructor */
>>>>>>     +       void *buf_ctor_arg;
>>>>>>       } odp_pool_param_t;
>>>>>>
>>>>>>       /** Packet pool*/
>>>>>>     --
>>>>>>     1.9.1
>>>>>>
>>>>>>
>>>>>>
>>>>>> Regards,
>>>>>> Bala
>>>>>>
>>>>>
>>>>
>>>
>>>
>>>
>>
>
>
>
Zoltan Kiss Aug. 12, 2015, 1:04 p.m. UTC | #11
On 12/08/15 11:55, Bala Manoharan wrote:
>
>
> On 12 August 2015 at 16:17, Bala Manoharan <bala.manoharan@linaro.org 
> <mailto:bala.manoharan@linaro.org>> wrote:
>
>
>
>     On 12 August 2015 at 15:37, Zoltan Kiss <zoltan.kiss@linaro.org
>     <mailto:zoltan.kiss@linaro.org>> wrote:
>
>
>
>         On 12/08/15 07:34, Bala Manoharan wrote:
>
>             Hi,
>
>             Comments inline...
>
>             On 12 August 2015 at 00:01, Zoltan Kiss
>             <zoltan.kiss@linaro.org <mailto:zoltan.kiss@linaro.org>
>             <mailto:zoltan.kiss@linaro.org
>             <mailto:zoltan.kiss@linaro.org>>> wrote:
>
>                 Applications can preset certain parts of the buffer or
>             user area, so
>                 when that
>                 memory will be allocated it starts from a known state.
>             If the platform
>                 allocates the memory during pool creation, it's enough
>             to run the
>                 constructor
>                 after that. If it's allocating memory on demand, it
>             should call the
>                 constructor each time.
>
>
>             [Bala] Not sure if I understand the above description
>             correctly does it
>             mean that if the memory is allocated
>             for an existing pool this call should be called only
>             during the pool
>             creation and not during each and every buffer allocation?
>
>         I'm also not sure I understand the question :) How do you mean
>         "if the memory is allocated for an _existing_ pool"? I mean,
>         you can't allocate memory for a pool which doesn't exist.
>         The key point is "when that memory will be allocated". This
>         callback should be called whenever the memory is allocated for
>         the buffer. If it is done during pool creation (which I guess
>         most sensible implementations do), then it happens then. If
>         for some reason the platform allocates memory when someone
>         allocates the buffer, it happens then.
>
>
>     [Bala] Let me try to decrypt my query in a better way :-)
>
>     Usually when application calls odp_pool_create() most
>     implementations will allocate a memory region and map them as
>     pointers based on a fixed segment size so that when the
>     application calls odp_buffer_allocate() the segment of this memory
>     pool (segments may be combined depending on the required size of
>     the buffer )
>
No, odp_buffer_alloc() doesn't let you specify a size; the buffer size is 
set when you create the pool. I think you mean packet type pools. In that 
case there could be segmentation, so a packet could have been the Nth 
segment of another packet before.
>
>     will be returned to the user.
>
>     So my understanding of your requirement is that whenever the
>     application calls  odp_buffer_alloc() then the user area should be
>     reset to a predefined value.
>
No. "so when that memory will be allocated". The callback is only called 
when the memory underlying the buffer gets allocated. Beware: allocating 
a buffer and allocating the memory can be two different things in this 
context. At least linux-generic and ODP-DPDK allocate the memory when 
the pool is created, not when the buffer is allocated.
>
>
>     So in-case that an application does the following
>     1. odp_buffer_alloc()
>     2. odp_buffer_free()
>     3. odp_buffer_alloc()
>
>     For simplicity lets assume the implementation returned the same
>     buffer during the second odp_buffer_alloc() call ( Basically the
>     buffer gets reused ) then the implementation should have reset the
>     user-area of the buffer before returning it to the application. Is
>     this the requirement?
>
No. The only requirement is that "when that memory will be allocated it 
starts from a known state". The implementation should call the callback 
only when that memory is allocated. Therefore, if the memory underlying 
this buffer was not released and reallocated between an alloc and a free, 
this callback shouldn't be called.
>
> [Bala] Missed a sentence here
> In the above scenario implementation should have reset the user-area 
> before both odp_buffer_alloc() both in step 1 and step 3.
>
>
>     I have additional query regarding the use-case you have defined
>     below but I would like to get clarity on the above first :)
>
>     Regards,
>     Bala
>
>
>             Then it will be applications responsibility to reset the
>             user area
>             before freeing the buffer? Is this the use-case this API
>             is trying to
>             address?
>
>         No, the application is not required to reset it at any time.
>         It can do that, if it's what it wants. The only requirement is
>         that the buffer starts from a known state after its memory was
>         allocated.
>         The use case is the following: OVS has it's layer to handle
>         buffers from different sources, e.g. it has to be able to
>         differentiate between packets coming from DPDK and ODP (the
>         latter can run DPDK underneath as well, but that's a different
>         story ...). It stores a "struct dp_packet" in the user area to
>         store data about the packet:
>
>         https://git.linaro.org/lng/odp-ovs.git/blob/HEAD:/lib/dp-packet.h#l41
>
>         Most of these fields will be set during processing to their
>         right value, however there are two things we need to set right
>         after receive: "source" to DPBUF_ODP and "odp_pkt" to point to
>         the odp_packet_t itself. The first value will be the same for
>         all ODP packets, every time. And the second is known when the
>         underlying memory was allocated.
>         Both of them could be set when the pool is created, OVS-DPDK
>         does that already, but ODP-OVS has to reset these values after
>         every receive. Even when it's only a few cycles to set and
>         maybe unnecessary cache loads, it's still a lot of cycles when
>         you receive at 10-40-100 Gbps.
>
>
>
>
>                 Signed-off-by: Zoltan Kiss <zoltan.kiss@linaro.org
>             <mailto:zoltan.kiss@linaro.org>
>                 <mailto:zoltan.kiss@linaro.org
>             <mailto:zoltan.kiss@linaro.org>>>
>
>                 ---
>                   include/odp/api/pool.h | 20 ++++++++++++++++++++
>                   1 file changed, 20 insertions(+)
>
>                 diff --git a/include/odp/api/pool.h
>             b/include/odp/api/pool.h
>                 index 2e79a55..1bd19bf 100644
>                 --- a/include/odp/api/pool.h
>                 +++ b/include/odp/api/pool.h
>                 @@ -41,6 +41,20 @@ extern "C" {
>                   #define ODP_POOL_NAME_LEN  32
>
>                   /**
>                 + * Buffer constructor callback function for pools.
>                 + *
>                 + * @param pool          Handle of the pool which owns
>             the buffer
>                 + * @param buf_ctor_arg  Opaque pointer defined in
>             odp_pool_param_t
>                 + * @param buf           Pointer to the buffer
>                 + *
>                 + * @note If the application specifies this pointer,
>             it expects that
>                 every buffer
>                 + * is initialized with it when the underlying memory
>             is allocated.
>                 + */
>                 +typedef void (odp_pool_buf_ctor_t)(odp_pool_t pool,
>                 + void *buf_ctor_arg,
>                 + void *buf);
>                 +
>
>
>             [Bala] We have avoided call back functions in ODP
>             architecture so if the
>             requirement can be addressed without a call-back maybe we
>             can follow
>             that approach. Again I am not clear if this call-back
>             function should be
>             called only once during pool creation or every time during
>             buffer alloc.
>
>                 +/**
>                    * Pool parameters
>                    * Used to communicate pool creation options.
>                    */
>                 @@ -88,6 +102,12 @@ typedef struct odp_pool_param_t {
>                                          uint32_t num;
>                                  } tmo;
>                          };
>                 +
>                 +       /** Buffer constructor to initialize every
>             buffer. Use NULL
>                 if no
>                 +           initialization needed. */
>                 +       odp_pool_buf_ctor_t *buf_ctor;
>                 +       /** Opaque pointer passed to buffer constructor */
>                 +       void *buf_ctor_arg;
>                   } odp_pool_param_t;
>
>                   /** Packet pool*/
>                 --
>                 1.9.1
>
>             _______________________________________________
>                 lng-odp mailing list
>             lng-odp@lists.linaro.org <mailto:lng-odp@lists.linaro.org>
>             <mailto:lng-odp@lists.linaro.org
>             <mailto:lng-odp@lists.linaro.org>>
>             https://lists.linaro.org/mailman/listinfo/lng-odp
>
>
>             Regards,
>             Bala
>
>
>
Zoltan Kiss Aug. 12, 2015, 1:09 p.m. UTC | #12
On 12/08/15 12:05, Bala Manoharan wrote:
> Hi,
>
> On 12 August 2015 at 16:29, Bill Fischofer <bill.fischofer@linaro.org 
> <mailto:bill.fischofer@linaro.org>> wrote:
>
>     User metadata is entirely under the control of the application. 
>     The only thing ODP implementations do with regard to it are:
>
>      1. Reserve space for it at pool create time.
>      2. Copy it if as part of an ODP API a new packet is implicitly
>         allocated.  For example, the definition of
>         odp_packet_add_data() returns an odp_packet_t that may be the
>         same or different than the input odp_packet_t.  In the latter
>         case, the user area of the input packet is copied to the
>         output packet as part of the operation of the API
>
>     Under discussion here is the option to permit the application to
>     initialize the contents of the user area for each element of a
>     packet pool at pool create time.
>
>
> Adding a minor point here. If the requirement is to call this 
> initialization only during pool create then application will have to 
> reset the user-area before calling odp_packet_free() API.

No, it doesn't HAVE TO. It MIGHT do that, if that's what the application 
wants. But as the OVS use case shows, the two things it presets stay 
the same during the whole lifetime of the application.
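
The OVS preset can be sketched as follows (hedged: the struct is an 
approximation of the dp_packet layout referenced in the thread, and the 
constructor signature follows the proposed odp_pool_buf_ctor_t, neither 
is a literal excerpt):

```c
#include <assert.h>
#include <stddef.h>

/* Approximation of the OVS user-area layout discussed in the thread. */
enum dp_packet_source { DPBUF_DPDK, DPBUF_ODP };

struct dp_packet {
	enum dp_packet_source source; /* constant for all ODP packets */
	void *odp_pkt;                /* back-pointer to the packet handle */
	/* ... per-packet fields set later, during processing ... */
};

/* Constructor in the style of the proposed callback: both presets stay
 * valid across alloc/free cycles, so writing them once, when the
 * underlying memory is allocated, is enough. */
static void ovs_uarea_ctor(void *pkt_handle, void *ctor_arg, void *uarea)
{
	struct dp_packet *dp = uarea;

	(void)ctor_arg;
	dp->source  = DPBUF_ODP;
	dp->odp_pkt = pkt_handle;
}
```

This is exactly the saving claimed above: the per-receive stores (and the 
cache loads they imply) move out of the fast path into pool creation.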
>
> Regards,
> Bala
>
>
>     I've added an agenda item for this to today's ARCH call.
>
>     On Wed, Aug 12, 2015 at 5:55 AM, Bala Manoharan
>     <bala.manoharan@linaro.org <mailto:bala.manoharan@linaro.org>> wrote:
>
>
>
>         On 12 August 2015 at 16:17, Bala Manoharan
>         <bala.manoharan@linaro.org <mailto:bala.manoharan@linaro.org>>
>         wrote:
>
>
>
>             On 12 August 2015 at 15:37, Zoltan Kiss
>             <zoltan.kiss@linaro.org <mailto:zoltan.kiss@linaro.org>>
>             wrote:
>
>
>
>                 On 12/08/15 07:34, Bala Manoharan wrote:
>
>                     Hi,
>
>                     Comments inline...
>
>                     On 12 August 2015 at 00:01, Zoltan Kiss
>                     <zoltan.kiss@linaro.org
>                     <mailto:zoltan.kiss@linaro.org>
>                     <mailto:zoltan.kiss@linaro.org
>                     <mailto:zoltan.kiss@linaro.org>>> wrote:
>
>                         Applications can preset certain parts of the
>                     buffer or user area, so
>                         when that
>                         memory will be allocated it starts from a
>                     known state. If the platform
>                         allocates the memory during pool creation,
>                     it's enough to run the
>                         constructor
>                         after that. If it's allocating memory on
>                     demand, it should call the
>                         constructor each time.
>
>
>                     [Bala] Not sure if I understand the above
>                     description correctly does it
>                     mean that if the memory is allocated
>                     for an existing pool this call should be called
>                     only during the pool
>                     creation and not during each and every buffer
>                     allocation?
>
>                 I'm also not sure I understand the question :) How do
>                 you mean "if the memory is allocated for an _existing_
>                 pool"? I mean, you can't allocate memory for a pool
>                 which doesn't exist.
>                 The key point is "when that memory will be allocated".
>                 This callback should be called whenever the memory is
>                 allocated for the buffer. If it is done during pool
>                 creation (which I guess most sensible implementations
>                 do), then it happens then. If for some reason the
>                 platform allocates memory when someone allocates the
>                 buffer, it happens then.
>
>
>             [Bala] Let me try to decrypt my query in a better way :-)
>
>             Usually when application calls odp_pool_create() most
>             implementations will allocate a memory region and map them
>             as pointers based on a fixed segment size so that when the
>             application calls odp_buffer_allocate() the segment of
>             this memory pool (segments may be combined depending on
>             the required size of the buffer ) will be returned to the
>             user.
>
>             So my understanding of your requirement is that whenever
>             the application calls  odp_buffer_alloc() then the user
>             area should be reset to a predefined value.
>
>             So in-case that an application does the following
>             1. odp_buffer_alloc()
>             2. odp_buffer_free()
>             3. odp_buffer_alloc()
>
>             For simplicity lets assume the implementation returned the
>             same buffer during the second odp_buffer_alloc() call (
>             Basically the buffer gets reused ) then the implementation
>             should have reset the user-area of the buffer before
>             returning it to the application. Is this the requirement?
>
>
>         [Bala] Missed a sentence here
>         In the above scenario implementation should have reset the
>         user-area before both odp_buffer_alloc() both in step 1 and
>         step 3.
>
>
>             I have additional query regarding the use-case you have
>             defined below but I would like to get clarity on the above
>             first :)
>
>             Regards,
>             Bala
>
>
>                     Then it will be applications responsibility to
>                     reset the user area
>                     before freeing the buffer? Is this the use-case
>                     this API is trying to
>                     address?
>
>                 No, the application is not required to reset it at any
>                 time. It can do that, if it's what it wants. The only
>                 requirement is that the buffer starts from a known
>                 state after its memory was allocated.
>                 The use case is the following: OVS has it's layer to
>                 handle buffers from different sources, e.g. it has to
>                 be able to differentiate between packets coming from
>                 DPDK and ODP (the latter can run DPDK underneath as
>                 well, but that's a different story ...). It stores a
>                 "struct dp_packet" in the user area to store data
>                 about the packet:
>
>                 https://git.linaro.org/lng/odp-ovs.git/blob/HEAD:/lib/dp-packet.h#l41
>
>                 Most of these fields will be set during processing to
>                 their right value, however there are two things we
>                 need to set right after receive: "source" to DPBUF_ODP
>                 and "odp_pkt" to point to the odp_packet_t itself. The
>                 first value will be the same for all ODP packets,
>                 every time. And the second is known when the
>                 underlying memory was allocated.
>                 Both of them could be set when the pool is created,
>                 OVS-DPDK does that already, but ODP-OVS has to reset
>                 these values after every receive. Even when it's only
>                 a few cycles to set and maybe unnecessary cache loads,
>                 it's still a lot of cycles when you receive at
>                 10-40-100 Gbps.
>
>
>
>
>                         Signed-off-by: Zoltan Kiss
>                     <zoltan.kiss@linaro.org
>                     <mailto:zoltan.kiss@linaro.org>
>                         <mailto:zoltan.kiss@linaro.org
>                     <mailto:zoltan.kiss@linaro.org>>>
>
>                         ---
>                     include/odp/api/pool.h | 20 ++++++++++++++++++++
>                           1 file changed, 20 insertions(+)
>
>                         diff --git a/include/odp/api/pool.h
>                     b/include/odp/api/pool.h
>                         index 2e79a55..1bd19bf 100644
>                         --- a/include/odp/api/pool.h
>                         +++ b/include/odp/api/pool.h
>                         @@ -41,6 +41,20 @@ extern "C" {
>                           #define ODP_POOL_NAME_LEN 32
>
>                           /**
>                         + * Buffer constructor callback function for
>                     pools.
>                         + *
>                         + * @param pool Handle of the pool which owns
>                     the buffer
>                         + * @param buf_ctor_arg Opaque pointer defined
>                     in odp_pool_param_t
>                         + * @param buf  Pointer to the buffer
>                         + *
>                         + * @note If the application specifies this
>                     pointer, it expects that
>                         every buffer
>                         + * is initialized with it when the underlying
>                     memory is allocated.
>                         + */
>                         +typedef void (odp_pool_buf_ctor_t)(odp_pool_t
>                     pool,
>                         +       void *buf_ctor_arg,
>                         +       void *buf);
>                         +
>
>
>                     [Bala] We have avoided call back functions in ODP
>                     architecture so if the
>                     requirement can be addressed without a call-back
>                     maybe we can follow
>                     that approach. Again I am not clear if this
>                     call-back function should be
>                     called only once during pool creation or every
>                     time during buffer alloc.
>
>                         +/**
>                            * Pool parameters
>                            * Used to communicate pool creation options.
>                            */
>                         @@ -88,6 +102,12 @@ typedef struct
>                     odp_pool_param_t {
>                      uint32_t num;
>                          } tmo;
>                                  };
>                         +
>                         +       /** Buffer constructor to initialize
>                     every buffer. Use NULL
>                         if no
>                         +  initialization needed. */
>                         +  odp_pool_buf_ctor_t *buf_ctor;
>                         +       /** Opaque pointer passed to buffer
>                     constructor */
>                         +       void *buf_ctor_arg;
>                           } odp_pool_param_t;
>
>                           /** Packet pool*/
>                         --
>                         1.9.1
>
>                     _______________________________________________
>                         lng-odp mailing list
>                     lng-odp@lists.linaro.org
>                     <mailto:lng-odp@lists.linaro.org>
>                     <mailto:lng-odp@lists.linaro.org
>                     <mailto:lng-odp@lists.linaro.org>>
>                     https://lists.linaro.org/mailman/listinfo/lng-odp
>
>
>                     Regards,
>                     Bala
>
>
>
>
>         _______________________________________________
>         lng-odp mailing list
>         lng-odp@lists.linaro.org <mailto:lng-odp@lists.linaro.org>
>         https://lists.linaro.org/mailman/listinfo/lng-odp
>
>
>
Zoltan Kiss Aug. 12, 2015, 1:16 p.m. UTC | #13
On 12/08/15 11:40, Bill Fischofer wrote:
> Some basic comments:
>
> First, ODP buffers are never returned as addresses. Buffers are of 
> type odp_buffer_t, not void *.
>
> Second, the purpose here is to initialize user metadata, not the 
> buffer (that's the responsibility of the implementation), and ODP only 
> defines user metadata for packets, not general buffers, so this init 
> is done for pools of type ODP_POOL_PACKET.  So a better signature 
> might be:

Ok, I'm convinced, we don't need to support init of the buffer here.
>
> typedef void (odp_packet_uarea_init_t)(odp_packet_t pkt, void 
> *uarea_init_arg, void *uarea);
>
> and this would be activated via a new field in the odp_pool_param_t:
>
> typedef struct odp_pool_param_t {
> /** Pool type */
> int type;
>
> union {
> struct {
> /** Number of buffers in the pool */
> uint32_t num;
>
> /** Buffer size in bytes. The maximum number of bytes
>    application will store in each buffer. */
> uint32_t size;
>
> /** Minimum buffer alignment in bytes. Valid values are
>    powers of two. Use 0 for default alignment.
>    Default will always be a multiple of 8. */
> uint32_t align;
> } buf;
> struct {
> /** The number of packets that the pool must provide
>    that are packet length 'len' bytes or smaller. */
> uint32_t num;
>
> /** Minimum packet length that the pool must provide
>    'num' packets. The number of packets may be less
>    than 'num' when packets are larger than 'len'.
>    Use 0 for default. */
> uint32_t len;
>
> /** Minimum number of packet data bytes that are stored
>    in the first segment of a packet. The maximum value
>    is defined by ODP_CONFIG_PACKET_SEG_LEN_MAX.
>    Use 0 for default. */
> uint32_t seg_len;
>
> /** User area size in bytes. Specify as 0 if no user
>    area is needed. */
> uint32_t uarea_size;
>                        /** User area initializer. Specify as NULL if 
> initialization calls
>                             are not needed */
>                        odp_packet_uarea_init_t uarea_init;
>
>                        /** User area initializer parameter.  Ignored 
> unless user area
>                             initializer is non-NULL */
>                        void *uarea_init_arg;
> } pkt;
> struct {
> /** Number of timeouts in the pool */
> uint32_t num;
> } tmo;
> };
> } odp_pool_param_t;
>
> The semantics would be that at odp_pool_create() time, if a 
> uarea_init() routine is specified for the pool, then it is called for 
> each packet in the pool and is passed the handle of that packet, the 
> opaque uarea_init_arg specified on the odp_pool_param_t, and the 
> address of the user area associated with that packet (the size of that 
> area is fixed and presumably known to the initializer since it was 
> specified in the uarea_size field of the odp_pool_param_t).
>
> One of the objections to callbacks voiced last year is that different 
> implementations may not be able to iterate through pool members as 
> implied by this.  Since the intent here is to initialize the packet 
> user area, perhaps a more specific focus on that intent would be less 
> problematic.
I see. Does anyone know why a platform can't iterate through buffers at 
pool creation time? If it's because they are allocated in memory on 
demand, then the platform shouldn't call this at pool creation, but when 
the packet is allocated.
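
Application-side setup under Bill's proposal might look like the sketch 
below (hedged: uarea_init/uarea_init_arg are the fields proposed in this 
thread, not an accepted ODP API, and all types here are stand-ins):

```c
#include <assert.h>
#include <stddef.h>
#include <stdint.h>

/* Stand-ins mirroring the fields proposed in this thread (not real ODP). */
typedef void *odp_packet_t;
typedef void (odp_packet_uarea_init_t)(odp_packet_t pkt, void *arg,
				       void *uarea);

struct pkt_params {
	uint32_t num;        /* packets in the pool */
	uint32_t uarea_size; /* user area size per packet */
	odp_packet_uarea_init_t *uarea_init;
	void *uarea_init_arg;
};

struct my_uarea {
	uint32_t magic;
};

static void init_uarea(odp_packet_t pkt, void *arg, void *uarea)
{
	(void)pkt;
	((struct my_uarea *)uarea)->magic = *(uint32_t *)arg;
}

/* A pool-create stub that iterates once over all user areas, as the
 * proposed semantics would require of a preallocating implementation. */
static void create_pool(const struct pkt_params *prm, void *uareas)
{
	char *base = uareas;

	for (uint32_t i = 0; i < prm->num; i++)
		if (prm->uarea_init)
			prm->uarea_init(NULL, prm->uarea_init_arg,
					base + (size_t)i * prm->uarea_size);
}
```

An implementation that cannot iterate at create time would defer the same 
call to wherever it first maps in the packet's memory.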
>
> On Wed, Aug 12, 2015 at 5:09 AM, Zoltan Kiss <zoltan.kiss@linaro.org 
> <mailto:zoltan.kiss@linaro.org>> wrote:
>
>
>
>     On 12/08/15 07:34, Bala Manoharan wrote:
>
>         Hi,
>
>         Comments inline...
>
>         On 12 August 2015 at 00:01, Zoltan Kiss
>         <zoltan.kiss@linaro.org <mailto:zoltan.kiss@linaro.org>
>         <mailto:zoltan.kiss@linaro.org
>         <mailto:zoltan.kiss@linaro.org>>> wrote:
>
>             Applications can preset certain parts of the buffer or
>         user area, so
>             when that
>             memory will be allocated it starts from a known state. If
>         the platform
>             allocates the memory during pool creation, it's enough to
>         run the
>             constructor
>             after that. If it's allocating memory on demand, it should
>         call the
>             constructor each time.
>
>
>         [Bala] Not sure if I understand the above description
>         correctly does it
>         mean that if the memory is allocated
>         for an existing pool this call should be called only during
>         the pool
>         creation and not during each and every buffer allocation?
>         Then it will be applications responsibility to reset the user area
>         before freeing the buffer? Is this the use-case this API is
>         trying to
>         address?
>
>
>             Signed-off-by: Zoltan Kiss <zoltan.kiss@linaro.org
>         <mailto:zoltan.kiss@linaro.org>
>             <mailto:zoltan.kiss@linaro.org
>         <mailto:zoltan.kiss@linaro.org>>>
>
>             ---
>               include/odp/api/pool.h | 20 ++++++++++++++++++++
>               1 file changed, 20 insertions(+)
>
>             diff --git a/include/odp/api/pool.h b/include/odp/api/pool.h
>             index 2e79a55..1bd19bf 100644
>             --- a/include/odp/api/pool.h
>             +++ b/include/odp/api/pool.h
>             @@ -41,6 +41,20 @@ extern "C" {
>               #define ODP_POOL_NAME_LEN  32
>
>               /**
>             + * Buffer constructor callback function for pools.
>             + *
>             + * @param pool          Handle of the pool which owns the
>         buffer
>             + * @param buf_ctor_arg  Opaque pointer defined in
>         odp_pool_param_t
>             + * @param buf           Pointer to the buffer
>             + *
>             + * @note If the application specifies this pointer, it
>         expects that
>             every buffer
>             + * is initialized with it when the underlying memory is
>         allocated.
>             + */
>             +typedef void (odp_pool_buf_ctor_t)(odp_pool_t pool,
>             +                                  void *buf_ctor_arg,
>             +                                  void *buf);
>             +
>
>
>         [Bala] We have avoided call back functions in ODP architecture
>         so if the
>         requirement can be addressed without a call-back maybe we can
>         follow
>         that approach. Again I am not clear if this call-back function
>         should be
>         called only once during pool creation or every time during
>         buffer alloc.
>
>
>     Why is this fear from callbacks? I know it's breaking call graph
>     feature of Eclipse, but that's not a huge issue.
>
>
>             +/**
>                * Pool parameters
>                * Used to communicate pool creation options.
>                */
>             @@ -88,6 +102,12 @@ typedef struct odp_pool_param_t {
>                                      uint32_t num;
>                              } tmo;
>                      };
>             +
>             +       /** Buffer constructor to initialize every buffer.
>         Use NULL
>             if no
>             +           initialization needed. */
>             +       odp_pool_buf_ctor_t *buf_ctor;
>             +       /** Opaque pointer passed to buffer constructor */
>             +       void *buf_ctor_arg;
>               } odp_pool_param_t;
>
>               /** Packet pool*/
>             --
>             1.9.1
>
>             _______________________________________________
>             lng-odp mailing list
>         lng-odp@lists.linaro.org <mailto:lng-odp@lists.linaro.org>
>         <mailto:lng-odp@lists.linaro.org
>         <mailto:lng-odp@lists.linaro.org>>
>         https://lists.linaro.org/mailman/listinfo/lng-odp
>
>
>         Regards,
>         Bala
>
>     _______________________________________________
>     lng-odp mailing list
>     lng-odp@lists.linaro.org <mailto:lng-odp@lists.linaro.org>
>     https://lists.linaro.org/mailman/listinfo/lng-odp
>
>
Zoltan Kiss Aug. 12, 2015, 1:41 p.m. UTC | #14
On 12/08/15 08:01, Ivan Khoronzhuk wrote:
> Hi, Zoltan
>
> On 11.08.15 21:31, Zoltan Kiss wrote:
>> Applications can preset certain parts of the buffer or user area, so 
>> when that
>> memory will be allocated it starts from a known state. If the platform
>> allocates the memory during pool creation, it's enough to run the 
>> constructor
>> after that. If it's allocating memory on demand, it should call the
>> constructor each time.
>
> Could you please clarify for what it's needed for.
>
> The main questions after first glance:
> What use-case is it intended for?
> Why this cannot be done by implementation implicitly?
> Why should an application know about some hash, buffer preallocation, 
> etc?

I think I've covered these in the replies to Bala and Bill.
> What is an application going to do with a hash?
OVS uses the hash to look up the flow in its flow cache. I should better 
define what is hashed here and how; this is the hash of the header.

> What is some profit of this?
OVS doesn't have to calculate the hash by itself.


>
>>
>> Signed-off-by: Zoltan Kiss <zoltan.kiss@linaro.org>
>> ---
>>   include/odp/api/pool.h | 20 ++++++++++++++++++++
>>   1 file changed, 20 insertions(+)
>>
>> diff --git a/include/odp/api/pool.h b/include/odp/api/pool.h
>> index 2e79a55..1bd19bf 100644
>> --- a/include/odp/api/pool.h
>> +++ b/include/odp/api/pool.h
>> @@ -41,6 +41,20 @@ extern "C" {
>>   #define ODP_POOL_NAME_LEN  32
>>
>>   /**
>> + * Buffer constructor callback function for pools.
>> + *
>> + * @param pool          Handle of the pool which owns the buffer
>> + * @param buf_ctor_arg  Opaque pointer defined in odp_pool_param_t
>> + * @param buf           Pointer to the buffer
>> + *
>> + * @note If the application specifies this pointer, it expects that 
>> every buffer
>> + * is initialized with it when the underlying memory is allocated.
>> + */
>> +typedef void (odp_pool_buf_ctor_t)(odp_pool_t pool,
>> +                   void *buf_ctor_arg,
>> +                   void *buf);
>> +
>> +/**
>>    * Pool parameters
>>    * Used to communicate pool creation options.
>>    */
>> @@ -88,6 +102,12 @@ typedef struct odp_pool_param_t {
>>               uint32_t num;
>>           } tmo;
>>       };
>> +
>> +    /** Buffer constructor to initialize every buffer. Use NULL if no
>> +        initialization needed. */
>> +    odp_pool_buf_ctor_t *buf_ctor;
>> +    /** Opaque pointer passed to buffer constructor */
>> +    void *buf_ctor_arg;
>>   } odp_pool_param_t;
>>
>>   /** Packet pool*/
>>
>
Balasubramanian Manoharan Aug. 12, 2015, 1:51 p.m. UTC | #15
On 12 August 2015 at 18:34, Zoltan Kiss <zoltan.kiss@linaro.org> wrote:

> On 12/08/15 11:55, Bala Manoharan wrote:
>
>
>
> On 12 August 2015 at 16:17, Bala Manoharan <bala.manoharan@linaro.org>
> wrote:
>
>>
>>
>> On 12 August 2015 at 15:37, Zoltan Kiss <zoltan.kiss@linaro.org> wrote:
>>
>>>
>>>
>>> On 12/08/15 07:34, Bala Manoharan wrote:
>>>
>>>> Hi,
>>>>
>>>> Comments inline...
>>>>
>>>> On 12 August 2015 at 00:01, Zoltan Kiss <zoltan.kiss@linaro.org
>>>> <mailto:zoltan.kiss@linaro.org>> wrote:
>>>>
>>>>     Applications can preset certain parts of the buffer or user area, so
>>>>     when that
>>>>     memory will be allocated it starts from a known state. If the
>>>> platform
>>>>     allocates the memory during pool creation, it's enough to run the
>>>>     constructor
>>>>     after that. If it's allocating memory on demand, it should call the
>>>>     constructor each time.
>>>>
>>>>
>>>> [Bala] Not sure if I understand the above description correctly does it
>>>> mean that if the memory is allocated
>>>> for an existing pool this call should be called only during the pool
>>>> creation and not during each and every buffer allocation?
>>>>
>>> I'm also not sure I understand the question :) How do you mean "if the
>>> memory is allocated for an _existing_ pool"? I mean, you can't allocate
>>> memory for a pool which doesn't exist.
>>> The key point is "when that memory will be allocated". This callback
>>> should be called whenever the memory is allocated for the buffer. If it is
>>> done during pool creation (which I guess most sensible implementations do),
>>> then it happens then. If for some reason the platform allocates memory when
>>> someone allocates the buffer, it happens then.
>>>
>>
>> [Bala] Let me try to decrypt my query in a better way :-)
>>
>> Usually when application calls odp_pool_create() most implementations
>> will allocate a memory region and map them as pointers based on a fixed
>> segment size so that when the application calls odp_buffer_allocate() the
>> segment of this memory pool (segments may be combined depending on the
>> required size of the buffer )
>>
> No, odp_buffer_alloc() doesn't let you specify you sizes, the buffer size
> is set when you create the pool. I think you mean packet type pools. In
> that case there could be segmentation, so a packet could have been the Nth
> segment of an another packet before.
>

Yes, but this init callback will be called for each segment in a packet
pool, since each segment could become a potential packet. So, in general,
any packet allocated from this pool should have the default value set
before the very first alloc; resetting this value during subsequent
allocations is an application responsibility, and the implementation
should not worry about it.


>
> will be returned to the user.
>>
>> So my understanding of your requirement is that whenever the application
>> calls  odp_buffer_alloc() then the user area should be reset to a
>> predefined value.
>>
> No. "so when that memory will be allocated". The callback is only called
> when the memory underlying the buffer gets allocated. Beware, allocating a
> buffer and allocating the memory can be two different things in this
> context. At least linux-generic and ODP-DPDK allocates the memory when the
> pool is created, not when the buffer is allocated.
>
>
>> So in-case that an application does the following
>> 1. odp_buffer_alloc()
>> 2. odp_buffer_free()
>> 3. odp_buffer_alloc()
>>
>> For simplicity lets assume the implementation returned the same buffer
>> during the second odp_buffer_alloc() call ( Basically the buffer gets
>> reused ) then the implementation should have reset the user-area of the
>> buffer before returning it to the application. Is this the requirement?
>>
> No. The only requirement is that "when that memory will be allocated it
> starts from a known state". The implementation should call the callback
> only when that memory is allocated. Therefore, if the memory underlying
> this buffer were not released and reallocated between an alloc and free,
> this callback shouldn't be called
>

I understand your use-case now. Please add to the API definition a
description stating that this initialisation is not required when the
buffer gets reused after a free, and that it is only needed when the
memory is allocated the very first time.

>
> [Bala] Missed a sentence here
> In the above scenario implementation should have reset the user-area
> before both odp_buffer_alloc() both in step 1 and step 3.
>
>>
>> I have additional query regarding the use-case you have defined below but
>> I would like to get clarity on the above first :)
>>
>> Regards,
>> Bala
>>
>>
>>> Then it will be applications responsibility to reset the user area
>>>> before freeing the buffer? Is this the use-case this API is trying to
>>>> address?
>>>>
>>> No, the application is not required to reset it at any time. It can do
>>> that, if it's what it wants. The only requirement is that the buffer starts
>>> from a known state after its memory was allocated.
>>> The use case is the following: OVS has it's layer to handle buffers from
>>> different sources, e.g. it has to be able to differentiate between packets
>>> coming from DPDK and ODP (the latter can run DPDK underneath as well, but
>>> that's a different story ...). It stores a "struct dp_packet" in the user
>>> area to store data about the packet:
>>>
>>> https://git.linaro.org/lng/odp-ovs.git/blob/HEAD:/lib/dp-packet.h#l41
>>>
>>> Most of these fields will be set during processing to their right value,
>>> however there are two things we need to set right after receive: "source"
>>> to DPBUF_ODP and "odp_pkt" to point to the odp_packet_t itself. The first
>>> value will be the same for all ODP packets, every time. And the second is
>>> known when the underlying memory was allocated.
>>> Both of them could be set when the pool is created, OVS-DPDK does that
>>> already, but ODP-OVS has to reset these values after every receive. Even
>>> when it's only a few cycles to set and maybe unnecessary cache loads, it's
>>> still a lot of cycles when you receive at 10-40-100 Gbps.
>>>
>>>
>>>
>>>>
>>>>     Signed-off-by: Zoltan Kiss <zoltan.kiss@linaro.org
>>>>     <mailto:zoltan.kiss@linaro.org>>
>>>>
>>>>     ---
>>>>       include/odp/api/pool.h | 20 ++++++++++++++++++++
>>>>       1 file changed, 20 insertions(+)
>>>>
>>>>     diff --git a/include/odp/api/pool.h b/include/odp/api/pool.h
>>>>     index 2e79a55..1bd19bf 100644
>>>>     --- a/include/odp/api/pool.h
>>>>     +++ b/include/odp/api/pool.h
>>>>     @@ -41,6 +41,20 @@ extern "C" {
>>>>       #define ODP_POOL_NAME_LEN  32
>>>>
>>>>       /**
>>>>     + * Buffer constructor callback function for pools.
>>>>     + *
>>>>     + * @param pool          Handle of the pool which owns the buffer
>>>>     + * @param buf_ctor_arg  Opaque pointer defined in odp_pool_param_t
>>>>     + * @param buf           Pointer to the buffer
>>>>     + *
>>>>     + * @note If the application specifies this pointer, it expects that
>>>>     every buffer
>>>>     + * is initialized with it when the underlying memory is allocated.
>>>>     + */
>>>>     +typedef void (odp_pool_buf_ctor_t)(odp_pool_t pool,
>>>>     +                                  void *buf_ctor_arg,
>>>>     +                                  void *buf);
>>>>     +
>>>>
>>>>
>>>> [Bala] We have avoided call back functions in ODP architecture so if the
>>>> requirement can be addressed without a call-back maybe we can follow
>>>> that approach. Again I am not clear if this call-back function should be
>>>> called only once during pool creation or every time during buffer alloc.
>>>>
>>>>     +/**
>>>>        * Pool parameters
>>>>        * Used to communicate pool creation options.
>>>>        */
>>>>     @@ -88,6 +102,12 @@ typedef struct odp_pool_param_t {
>>>>                              uint32_t num;
>>>>                      } tmo;
>>>>              };
>>>>     +
>>>>     +       /** Buffer constructor to initialize every buffer. Use NULL
>>>>     if no
>>>>     +           initialization needed. */
>>>>     +       odp_pool_buf_ctor_t *buf_ctor;
>>>>     +       /** Opaque pointer passed to buffer constructor */
>>>>     +       void *buf_ctor_arg;
>>>>       } odp_pool_param_t;
>>>>
>>>>       /** Packet pool*/
>>>>     --
>>>>     1.9.1
>>>>
>>>>     _______________________________________________
>>>>     lng-odp mailing list
>>>>     lng-odp@lists.linaro.org <mailto:lng-odp@lists.linaro.org>
>>>>     https://lists.linaro.org/mailman/listinfo/lng-odp
>>>>
>>>>
>>>> Regards,
>>>> Bala
>>>>
>>>
>>
>
>
Regards,
Bala
Bill Fischofer Aug. 12, 2015, 2:03 p.m. UTC | #16
No, packets are first-class objects; segments are internal to packets.
That's why we have both odp_packet_t and odp_packet_seg_t handles. The user
area is associated with an odp_packet_t, not an odp_packet_seg_t.

On Wed, Aug 12, 2015 at 8:51 AM, Bala Manoharan <bala.manoharan@linaro.org>
wrote:

>
>
> On 12 August 2015 at 18:34, Zoltan Kiss <zoltan.kiss@linaro.org> wrote:
>
>> On 12/08/15 11:55, Bala Manoharan wrote:
>>
>>
>>
>> On 12 August 2015 at 16:17, Bala Manoharan <bala.manoharan@linaro.org>
>> wrote:
>>
>>>
>>>
>>> On 12 August 2015 at 15:37, Zoltan Kiss <zoltan.kiss@linaro.org> wrote:
>>>
>>>>
>>>>
>>>> On 12/08/15 07:34, Bala Manoharan wrote:
>>>>
>>>>> Hi,
>>>>>
>>>>> Comments inline...
>>>>>
>>>>> On 12 August 2015 at 00:01, Zoltan Kiss <zoltan.kiss@linaro.org
>>>>> <mailto:zoltan.kiss@linaro.org>> wrote:
>>>>>
>>>>>     Applications can preset certain parts of the buffer or user area,
>>>>> so
>>>>>     when that
>>>>>     memory will be allocated it starts from a known state. If the
>>>>> platform
>>>>>     allocates the memory during pool creation, it's enough to run the
>>>>>     constructor
>>>>>     after that. If it's allocating memory on demand, it should call the
>>>>>     constructor each time.
>>>>>
>>>>>
>>>>> [Bala] Not sure if I understand the above description correctly does it
>>>>> mean that if the memory is allocated
>>>>> for an existing pool this call should be called only during the pool
>>>>> creation and not during each and every buffer allocation?
>>>>>
>>>> I'm also not sure I understand the question :) How do you mean "if the
>>>> memory is allocated for an _existing_ pool"? I mean, you can't allocate
>>>> memory for a pool which doesn't exist.
>>>> The key point is "when that memory will be allocated". This callback
>>>> should be called whenever the memory is allocated for the buffer. If it is
>>>> done during pool creation (which I guess most sensible implementations do),
>>>> then it happens then. If for some reason the platform allocates memory when
>>>> someone allocates the buffer, it happens then.
>>>>
>>>
>>> [Bala] Let me try to decrypt my query in a better way :-)
>>>
>>> Usually when application calls odp_pool_create() most implementations
>>> will allocate a memory region and map them as pointers based on a fixed
>>> segment size so that when the application calls odp_buffer_allocate() the
>>> segment of this memory pool (segments may be combined depending on the
>>> required size of the buffer )
>>>
>> No, odp_buffer_alloc() doesn't let you specify sizes, the buffer size
>> is set when you create the pool. I think you mean packet type pools. In
>> that case there could be segmentation, so a packet could have been the Nth
>> segment of an another packet before.
>>
>
> Yes, but this init call back will be called for each segment in a packet
> pool since each segment could become a potential packet. So in general any
> packet allocated from this pool should have the default value set before
> the very first alloc and resetting this value during subsequent allocation
> is an application responsibility and implementation should not worry about
> the same.
>
>
>>
>> will be returned to the user.
>>>
>>> So my understanding of your requirement is that whenever the application
>>> calls  odp_buffer_alloc() then the user area should be reset to a
>>> predefined value.
>>>
>> No. "so when that memory will be allocated". The callback is only called
>> when the memory underlying the buffer gets allocated. Beware, allocating a
>> buffer and allocating the memory can be two different things in this
>> context. At least linux-generic and ODP-DPDK allocates the memory when the
>> pool is created, not when the buffer is allocated.
>>
>>
>>> So in-case that an application does the following
>>> 1. odp_buffer_alloc()
>>> 2. odp_buffer_free()
>>> 3. odp_buffer_alloc()
>>>
>>> For simplicity lets assume the implementation returned the same buffer
>>> during the second odp_buffer_alloc() call ( Basically the buffer gets
>>> reused ) then the implementation should have reset the user-area of the
>>> buffer before returning it to the application. Is this the requirement?
>>>
>> No. The only requirement is that "when that memory will be allocated it
>> starts from a known state". The implementation should call the callback
>> only when that memory is allocated. Therefore, if the memory underlying
>> this buffer were not released and reallocated between an alloc and free,
>> this callback shouldn't be called
>>
>
> I understand your use-case now. Please add a description to the API
> definition stating that this initialisation is not required when a buffer
> gets reused after a free; it is only needed when the underlying memory is
> allocated the very first time.
>
>>
>> [Bala] Missed a sentence here
>> In the above scenario implementation should have reset the user-area
>> before both odp_buffer_alloc() both in step 1 and step 3.
>>
>>>
>>> I have additional query regarding the use-case you have defined below
>>> but I would like to get clarity on the above first :)
>>>
>>> Regards,
>>> Bala
>>>
>>>
>>>> Then it will be applications responsibility to reset the user area
>>>>> before freeing the buffer? Is this the use-case this API is trying to
>>>>> address?
>>>>>
>>>> No, the application is not required to reset it at any time. It can do
>>>> that, if it's what it wants. The only requirement is that the buffer starts
>>>> from a known state after its memory was allocated.
>>>> The use case is the following: OVS has it's layer to handle buffers
>>>> from different sources, e.g. it has to be able to differentiate between
>>>> packets coming from DPDK and ODP (the latter can run DPDK underneath as
>>>> well, but that's a different story ...). It stores a "struct dp_packet" in
>>>> the user area to store data about the packet:
>>>>
>>>> https://git.linaro.org/lng/odp-ovs.git/blob/HEAD:/lib/dp-packet.h#l41
>>>>
>>>> Most of these fields will be set during processing to their right
>>>> value, however there are two things we need to set right after receive:
>>>> "source" to DPBUF_ODP and "odp_pkt" to point to the odp_packet_t itself.
>>>> The first value will be the same for all ODP packets, every time. And the
>>>> second is known when the underlying memory was allocated.
>>>> Both of them could be set when the pool is created, OVS-DPDK does that
>>>> already, but ODP-OVS has to reset these values after every receive. Even
>>>> when it's only a few cycles to set and maybe unnecessary cache loads, it's
>>>> still a lot of cycles when you receive at 10-40-100 Gbps.
>>>>
>>>>
>>>>
> Regards,
> Bala
>
Balasubramanian Manoharan Aug. 12, 2015, 5:13 p.m. UTC | #17
On 12 August 2015 at 19:33, Bill Fischofer <bill.fischofer@linaro.org>
wrote:

> No, packets are first-class objects; segments are internal to packets.
> That's why we have both odp_packet_t and odp_packet_seg_t handles. The user
> area is associated with an odp_packet_t, not an odp_packet_seg_t.
>

Yes, user_area is associated only with packets and not with segments.
I was using the term segment to say that each segment could become a
potential packet, in the sense that you can never guess which segment will
be the start of a packet; also, for performance reasons the user_area might
get stored in the first segment. This is just an implementation detail, and
we can leave this topic since I believe we are both on the same page.

Regards,
Bala


Patch

diff --git a/include/odp/api/pool.h b/include/odp/api/pool.h
index 2e79a55..1bd19bf 100644
--- a/include/odp/api/pool.h
+++ b/include/odp/api/pool.h
@@ -41,6 +41,20 @@  extern "C" {
 #define ODP_POOL_NAME_LEN  32
 
 /**
+ * Buffer constructor callback function for pools.
+ *
+ * @param pool          Handle of the pool which owns the buffer
+ * @param buf_ctor_arg  Opaque pointer defined in odp_pool_param_t
+ * @param buf           Pointer to the buffer
+ *
+ * @note If the application specifies this pointer, it expects that every buffer
+ * is initialized with it when the underlying memory is allocated.
+ */
+typedef void (odp_pool_buf_ctor_t)(odp_pool_t pool,
+				   void *buf_ctor_arg,
+				   void *buf);
+
+/**
  * Pool parameters
  * Used to communicate pool creation options.
  */
@@ -88,6 +102,12 @@  typedef struct odp_pool_param_t {
 			uint32_t num;
 		} tmo;
 	};
+
+	/** Buffer constructor to initialize every buffer. Use NULL if no
+	    initialization needed. */
+	odp_pool_buf_ctor_t *buf_ctor;
+	/** Opaque pointer passed to buffer constructor */
+	void *buf_ctor_arg;
 } odp_pool_param_t;
 
 /** Packet pool*/