From patchwork Wed Feb 11 20:35:02 2015
X-Patchwork-Submitter: Bill Fischofer
X-Patchwork-Id: 44574
From: Bill Fischofer <bill.fischofer@linaro.org>
To: lng-odp@lists.linaro.org
Date: Wed, 11 Feb 2015 12:35:02 -0800
Message-Id: <1423686902-10251-1-git-send-email-bill.fischofer@linaro.org>
Subject: [lng-odp] [RFC PATCHv2] api: pool: proposed pool parameters for packets

Signed-off-by: Bill Fischofer <bill.fischofer@linaro.org>
---
Based on the discussions we had this afternoon, here is a proposed patch
that I believe covers the concerns of all parties. The application is free
to request a minimum segment size, to leave segmentation to the
implementation, or to request that packets be unsegmented. The
implementation, in turn, controls the actual segment size used (if any)
and will reject pool create requests that it cannot satisfy. It is also
part of the API specification that any segment size used by an ODP
implementation will be at least 256 bytes and always a multiple of the
requested align bytes.

This patch also adds requested headroom and tailroom parameters to the
pool create parameter list for packets.

Note that this is an RFC because it is not a complete patch: the examples
and tests are not updated to reflect these changes. If this API is
acceptable I'll forward a complete patch conforming to it.

v2 clarifies that seg_size is a multiple of align and how it is used to
control unsegmented pools. If pkt.seg_size >= pkt.len the pool is
unsegmented and pkt.seg_size is the maximum packet length that the pool
will support.
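For illustration (not part of the patch), here is a minimal sketch of how an
application might fill the proposed pkt parameters. The helper name and the
numeric values are made up, and the type field plus the <odp.h> umbrella
include are assumed from the current linux-generic tree:

#include <string.h>
#include <odp.h>  /* umbrella header; the params live in include/odp/api/pool.h */

/* Hypothetical helper, for illustration only. */
static void fill_pkt_pool_params(odp_pool_param_t *params, int unsegmented)
{
	memset(params, 0, sizeof(*params));
	params->type         = ODP_POOL_PACKET; /* assumed to be carried in params */
	params->pkt.len      = 1518; /* expected average packet length          */
	params->pkt.align    = 0;    /* 0 = implementation default alignment    */
	params->pkt.num      = 1024; /* maximum packets the pool may contain    */
	params->pkt.headroom = 128;  /* minimum headroom requested              */
	params->pkt.tailroom = 0;    /* no extra tailroom requested             */

	if (unsegmented)
		/* seg_size >= len requests an unsegmented pool; seg_size is
		 * then the largest packet the pool must support. */
		params->pkt.seg_size = 9000;
	else
		/* 0 = no preference; the implementation picks a segment size
		 * that is >= 256 bytes and a multiple of align. */
		params->pkt.seg_size = 0;
}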
 include/odp/api/pool.h                   | 54 +++++++++++++++++++++++++++++---
 platform/linux-generic/odp_buffer_pool.c | 29 ++++++++++-------
 2 files changed, 66 insertions(+), 17 deletions(-)

diff --git a/include/odp/api/pool.h b/include/odp/api/pool.h
index b8c0f2e..3713674 100644
--- a/include/odp/api/pool.h
+++ b/include/odp/api/pool.h
@@ -60,13 +60,57 @@ typedef struct odp_pool_param_t {
 					    of 8. */
 		uint32_t num;      /**< Number of buffers in the pool */
 	} buf;
-/* Reserved for packet and timeout specific params
 	struct {
-		uint32_t seg_size;
-		uint32_t seg_align;
-		uint32_t num;
+		uint32_t len;      /**< Expected average packet length
+					in bytes. This is used by the
+					implementation to calculate the
+					amount of storage needed for the
+					pool. If this is less than the
+					actual average packet length,
+					the pool will hold fewer than
+					num packets. */
+		uint32_t align;    /**< Minimum pkt alignment in bytes.
+					Valid values are powers of two.
+					Use 0 for default alignment.
+					Default will always be a multiple
+					of 8. */
+		uint32_t num;      /**< Maximum number of packets
+					that this pool may contain. */
+		uint32_t seg_size; /**< Requested minimum segment size
+					to be used for this pool. The
+					implementation is free to use
+					a segment size larger than this
+					value. Any segment size used
+					will always be a multiple of
+					align bytes and will never be
+					< 256 bytes. If seg_size == 0,
+					the application has no
+					preference and the implementation
+					is free to use a segment size of
+					its choosing, subject to the
+					previously noted constraints.
+					If seg_size >= len, the
+					application is requesting that
+					the pool be unsegmented and
+					seg_size is the maximum packet
+					size that the pool will
+					support. */
+		uint32_t headroom; /**< Requested headroom for packets
+					allocated from this pool. This
+					is the minimum headroom that
+					will be used. The implementation
+					may increase this value by any
+					multiple of 8 for internal
+					reasons. */
+		uint32_t tailroom; /**< Requested tailroom for packets
+					allocated from this pool. This
+					is the minimum tailroom that
+					will be used. The implementation
+					may increase this by any byte
+					value for internal reasons.
+					*/
 	} pkt;
-*/
 	struct {
 		uint32_t __res1;   /* Keep struct identical to buf, */
 		uint32_t __res2;   /* until pool implementation is fixed*/
diff --git a/platform/linux-generic/odp_buffer_pool.c b/platform/linux-generic/odp_buffer_pool.c
index 69acf1b..ea764d9 100644
--- a/platform/linux-generic/odp_buffer_pool.c
+++ b/platform/linux-generic/odp_buffer_pool.c
@@ -138,6 +138,7 @@ odp_pool_t odp_pool_create(const char *name,
 	uint32_t blk_size, buf_stride;
 	uint32_t buf_align = params->buf.align;
+	uint32_t seg_size;
 
 	/* Validate requested buffer alignment */
 	if (buf_align > ODP_CONFIG_BUFFER_ALIGN_MAX ||
@@ -166,18 +167,22 @@ odp_pool_t odp_pool_create(const char *name,
 		break;
 
 	case ODP_POOL_PACKET:
-		headroom = ODP_CONFIG_PACKET_HEADROOM;
-		tailroom = ODP_CONFIG_PACKET_TAILROOM;
-		unsegmented = params->buf.size > ODP_CONFIG_PACKET_BUF_LEN_MAX;
-
-		if (unsegmented)
-			blk_size = ODP_ALIGN_ROUNDUP(
-				headroom + params->buf.size + tailroom,
-				buf_align);
+		headroom = params->pkt.headroom;
+		tailroom = params->pkt.tailroom;
+		unsegmented = params->pkt.seg_size >= params->pkt.len;
+
+		if (params->pkt.seg_size == 0)
+			seg_size = ODP_CONFIG_PACKET_BUF_LEN_MIN;
+		else if (params->pkt.seg_size < 256)
+			seg_size = 256;
 		else
-			blk_size = ODP_ALIGN_ROUNDUP(
-				headroom + params->buf.size + tailroom,
-				ODP_CONFIG_PACKET_BUF_LEN_MIN);
+			seg_size = params->pkt.seg_size;
+
+		seg_size = ODP_ALIGN_ROUNDUP(seg_size, buf_align);
+		blk_size = ODP_ALIGN_ROUNDUP(
+			headroom + params->pkt.len + tailroom,
+			seg_size);
 
 		buf_stride = params->type == ODP_POOL_PACKET ?
 			sizeof(odp_packet_hdr_stride) :
@@ -279,7 +284,7 @@ odp_pool_t odp_pool_create(const char *name,
 	pool->s.flags.unsegmented = unsegmented;
 	pool->s.flags.zeroized = zeroized;
 	pool->s.seg_size = unsegmented ?
-		blk_size : ODP_CONFIG_PACKET_BUF_LEN_MIN;
+		blk_size : seg_size;
 
 	uint8_t *block_base_addr = pool->s.pool_base_addr;
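Also for illustration (not part of the patch), the segment and block sizing
above can be sanity-checked with a small standalone program. ODP_ALIGN_ROUNDUP
is re-implemented locally, ODP_CONFIG_PACKET_BUF_LEN_MIN is assumed to be
256 bytes, and the input values are arbitrary:

#include <stdio.h>
#include <stdint.h>

/* Local stand-in for ODP_ALIGN_ROUNDUP(); the real macro lives in the ODP headers. */
#define ALIGN_ROUNDUP(x, a) ((a) * (((x) + (a) - 1) / (a)))

int main(void)
{
	uint32_t len       = 1518; /* params->pkt.len                     */
	uint32_t headroom  = 128;  /* params->pkt.headroom                */
	uint32_t tailroom  = 0;    /* params->pkt.tailroom                */
	uint32_t buf_align = 64;   /* resolved buffer alignment           */
	uint32_t req_seg   = 300;  /* params->pkt.seg_size (0 = default)  */

	/* Mirror the patched odp_pool_create() logic: apply the 256-byte
	 * floor (ODP_CONFIG_PACKET_BUF_LEN_MIN assumed to be 256), round
	 * the segment up to the alignment, then size blocks in whole
	 * segments. */
	uint32_t seg_size = (req_seg == 0 || req_seg < 256) ? 256 : req_seg;

	seg_size = ALIGN_ROUNDUP(seg_size, buf_align);
	uint32_t blk_size = ALIGN_ROUNDUP(headroom + len + tailroom, seg_size);

	/* With these inputs: seg_size = 320, blk_size = 1920. */
	printf("seg_size=%u blk_size=%u\n", seg_size, blk_size);
	return 0;
}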