From patchwork Sun Jan 11 17:50:38 2015
X-Patchwork-Submitter: Bill Fischofer
X-Patchwork-Id: 42955
From: Bill Fischofer <bill.fischofer@linaro.org>
To: lng-odp@lists.linaro.org
Date: Sun, 11 Jan 2015 11:50:38 -0600
Message-Id: <1420998638-23980-4-git-send-email-bill.fischofer@linaro.org>
In-Reply-To: <1420998638-23980-1-git-send-email-bill.fischofer@linaro.org>
References: <1420998638-23980-1-git-send-email-bill.fischofer@linaro.org>
Subject: [lng-odp] [PATCHv2 3/3] linux-generic: buffers: add lock-based allocator

To permit performance scalability testing, add the USE_BUFFER_POOL_LOCKS
compile-time switch that selects lock-based rather than lockless buffer
allocation. The default (0) uses lockless allocation; set it to 1 to use
locks.

Signed-off-by: Bill Fischofer <bill.fischofer@linaro.org>
---
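As a quick illustration of the locking pattern that get_blk() and ret_blk()
use below, here is a minimal standalone sketch (not part of the patch;
pthread spinlocks and the blk_t/mini_pool_t types are stand-ins for
POOL_LOCK/POOL_UNLOCK, odp_buf_blk_t, and struct pool_entry_s):

/* Illustrative sketch only -- lock-protected freelist pop/push. */
#include <pthread.h>
#include <stddef.h>
#include <stdio.h>

typedef struct blk_t {
	struct blk_t *next;          /* singly linked freelist */
} blk_t;

typedef struct {
	pthread_spinlock_t blk_lock; /* stands in for pool->blk_lock */
	blk_t *blk_freelist;         /* freelist head */
} mini_pool_t;

static void *mini_get_blk(mini_pool_t *pool)
{
	blk_t *block;

	pthread_spin_lock(&pool->blk_lock);
	block = pool->blk_freelist;
	if (block != NULL)
		pool->blk_freelist = block->next; /* pop head */
	pthread_spin_unlock(&pool->blk_lock);

	return block;
}

static void mini_ret_blk(mini_pool_t *pool, void *block)
{
	pthread_spin_lock(&pool->blk_lock);
	((blk_t *)block)->next = pool->blk_freelist; /* push head */
	pool->blk_freelist = block;
	pthread_spin_unlock(&pool->blk_lock);
}

int main(void)
{
	static blk_t blocks[2];
	mini_pool_t pool = { .blk_freelist = NULL };
	void *a, *b;

	pthread_spin_init(&pool.blk_lock, PTHREAD_PROCESS_PRIVATE);
	mini_ret_blk(&pool, &blocks[0]);
	mini_ret_blk(&pool, &blocks[1]);
	a = mini_get_blk(&pool); /* returns &blocks[1] (LIFO) */
	b = mini_get_blk(&pool); /* returns &blocks[0] */
	printf("popped %p then %p\n", a, b);
	return 0;
}

As in the patch itself, the lock is held only around the freelist-head
pop/push; the statistics counters remain atomic and stay outside the
critical section.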
 .../include/odp_buffer_pool_internal.h             | 120 +++++++++++++++++++++
 platform/linux-generic/odp_buffer_pool.c           |  12 +++
 2 files changed, 132 insertions(+)

diff --git a/platform/linux-generic/include/odp_buffer_pool_internal.h b/platform/linux-generic/include/odp_buffer_pool_internal.h
index 169e02d..a5a1864 100644
--- a/platform/linux-generic/include/odp_buffer_pool_internal.h
+++ b/platform/linux-generic/include/odp_buffer_pool_internal.h
@@ -64,6 +64,8 @@ typedef struct local_cache_t {
 /* Extra error checks */
 /* #define POOL_ERROR_CHECK */

+/* Control use of locks vs. lockless allocation */
+#define USE_BUFFER_POOL_LOCKS 0

 #ifdef POOL_USE_TICKETLOCK
 #include <odp_ticketlock.h>
@@ -80,7 +82,15 @@ typedef struct local_cache_t {

 struct pool_entry_s {
 #ifdef POOL_USE_TICKETLOCK
 	odp_ticketlock_t lock ODP_ALIGNED_CACHE;
+#if USE_BUFFER_POOL_LOCKS
+	odp_ticketlock_t buf_lock;
+	odp_ticketlock_t blk_lock;
+#endif
 #else
+#if USE_BUFFER_POOL_LOCKS
+	odp_spinlock_t buf_lock;
+	odp_spinlock_t blk_lock;
+#endif
 	odp_spinlock_t lock ODP_ALIGNED_CACHE;
 #endif
@@ -107,8 +117,13 @@ struct pool_entry_s {
 	size_t pool_size;
 	uint32_t buf_align;
 	uint32_t buf_stride;
+#if USE_BUFFER_POOL_LOCKS
+	odp_buffer_hdr_t *buf_freelist;
+	void *blk_freelist;
+#else
 	odp_atomic_u64_t buf_freelist;
 	odp_atomic_u64_t blk_freelist;
+#endif
 	odp_atomic_u32_t bufcount;
 	odp_atomic_u32_t blkcount;
 	odp_atomic_u64_t bufallocs;
@@ -142,6 +157,109 @@ extern void *pool_entry_ptr[];
 #define pool_is_secure(pool) 0
 #endif

+#if USE_BUFFER_POOL_LOCKS
+
+static inline void *get_blk(struct pool_entry_s *pool)
+{
+	void *block;
+
+	POOL_LOCK(&pool->blk_lock);
+
+	block = pool->blk_freelist;
+	if (block != NULL)
+		pool->blk_freelist = ((odp_buf_blk_t *)block)->next;
+
+	POOL_UNLOCK(&pool->blk_lock);
+
+	if (odp_unlikely(block == NULL)) {
+		odp_atomic_inc_u64(&pool->blkempty);
+	} else {
+		odp_atomic_inc_u64(&pool->blkallocs);
+		odp_atomic_dec_u32(&pool->blkcount);
+	}
+
+	return block;
+}
+
+static inline void ret_blk(struct pool_entry_s *pool, void *block)
+{
+	POOL_LOCK(&pool->blk_lock);
+
+	((odp_buf_blk_t *)block)->next = pool->blk_freelist;
+	pool->blk_freelist = block;
+
+	POOL_UNLOCK(&pool->blk_lock);
+
+	odp_atomic_inc_u32(&pool->blkcount);
+	odp_atomic_inc_u64(&pool->blkfrees);
+}
+
+static inline odp_buffer_hdr_t *get_buf(struct pool_entry_s *pool)
+{
+	odp_buffer_hdr_t *buf;
+
+	POOL_LOCK(&pool->buf_lock);
+
+	buf = pool->buf_freelist;
+	if (buf != NULL)
+		pool->buf_freelist = buf->next;
+
+	POOL_UNLOCK(&pool->buf_lock);
+
+	if (odp_unlikely(buf == NULL)) {
+		odp_atomic_inc_u64(&pool->bufempty);
+	} else {
+		uint64_t bufcount =
+			odp_atomic_fetch_sub_u32(&pool->bufcount, 1) - 1;
+
+		/* Check for low watermark condition */
+		if (bufcount == pool->low_wm && !pool->low_wm_assert) {
+			pool->low_wm_assert = 1;
+			odp_atomic_inc_u64(&pool->low_wm_count);
+		}
+
+		odp_atomic_inc_u64(&pool->bufallocs);
+		/* Mark buffer allocated */
+		buf->allocator = odp_thread_id();
+	}
+
+	return buf;
+}
+
+static inline void ret_buf(struct pool_entry_s *pool, odp_buffer_hdr_t *buf)
+{
+	buf->allocator = ODP_FREEBUF;  /* Mark buffer free */
+
+	if (!buf->flags.hdrdata && buf->type != ODP_BUFFER_TYPE_RAW) {
+		while (buf->segcount > 0) {
+			if (buffer_is_secure(buf) || pool_is_secure(pool))
+				memset(buf->addr[buf->segcount - 1],
+				       0, buf->segsize);
+			ret_blk(pool, buf->addr[--buf->segcount]);
+		}
+		buf->size = 0;
+	}
+
+	POOL_LOCK(&pool->buf_lock);
+
+	buf->next = pool->buf_freelist;
+	pool->buf_freelist = buf;
+
+	POOL_UNLOCK(&pool->buf_lock);
+
+	uint64_t bufcount = odp_atomic_fetch_add_u32(&pool->bufcount, 1) + 1;
+
+	/* Check if low watermark condition should be deasserted */
+	if (bufcount == pool->high_wm && pool->low_wm_assert) {
+		pool->low_wm_assert = 0;
+		odp_atomic_inc_u64(&pool->high_wm_count);
+	}
+
+	odp_atomic_inc_u64(&pool->buffrees);
+}
+
+#else
+
 #define OFFSET_BITS ODP_BITSIZE((uint64_t)ODP_BUFFER_MAX_BUFFERS * \
				(uint32_t)ODP_CONFIG_PACKET_BUF_LEN_MAX)
@@ -292,6 +410,8 @@ static inline void ret_buf(struct pool_entry_s *pool, odp_buffer_hdr_t *buf)
 	odp_atomic_inc_u64(&pool->buffrees);
 }

+#endif
+
 static inline void *get_local_buf(local_cache_t *buf_cache,
 				  struct pool_entry_s *pool,
 				  size_t totsize)

diff --git a/platform/linux-generic/odp_buffer_pool.c b/platform/linux-generic/odp_buffer_pool.c
index 5d181bb..b905cdd 100644
--- a/platform/linux-generic/odp_buffer_pool.c
+++ b/platform/linux-generic/odp_buffer_pool.c
@@ -82,6 +82,10 @@ int odp_buffer_pool_init_global(void)
 		/* init locks */
 		pool_entry_t *pool = &pool_tbl->pool[i];
 		POOL_LOCK_INIT(&pool->s.lock);
+#if USE_BUFFER_POOL_LOCKS
+		POOL_LOCK_INIT(&pool->s.buf_lock);
+		POOL_LOCK_INIT(&pool->s.blk_lock);
+#endif
 		pool->s.pool_hdl = pool_index_to_handle(i);
 		pool->s.pool_id = i;
 		pool_entry_ptr[i] = pool;
@@ -91,6 +95,9 @@ int odp_buffer_pool_init_global(void)
 	ODP_DBG(" pool_entry_s size %zu\n", sizeof(struct pool_entry_s));
 	ODP_DBG(" pool_entry_t size %zu\n", sizeof(pool_entry_t));
 	ODP_DBG(" odp_buffer_hdr_t size %zu\n", sizeof(odp_buffer_hdr_t));
+#if !USE_BUFFER_POOL_LOCKS
+	ODP_DBG(" offset_bits = %d, tag_bits = %d\n", OFFSET_BITS, TAG_BITS);
+#endif
 	ODP_DBG("\n");
 	return 0;
 }
@@ -287,8 +294,13 @@ odp_buffer_pool_t odp_buffer_pool_create(const char *name,

 	pool->s.buf_stride = buf_stride;

+#if USE_BUFFER_POOL_LOCKS
+	pool->s.buf_freelist = NULL;
+	pool->s.blk_freelist = NULL;
+#else
 	odp_atomic_store_u64(&pool->s.buf_freelist, NULL_OFFSET);
 	odp_atomic_store_u64(&pool->s.blk_freelist, NULL_OFFSET);
+#endif

 	/* Initialization will increment these to their target vals */
 	odp_atomic_store_u32(&pool->s.bufcount, 0);
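For comparison, the lockless default (USE_BUFFER_POOL_LOCKS == 0) keeps each
freelist head in a single odp_atomic_u64_t that packs a buffer offset together
with a generation tag (the OFFSET_BITS/TAG_BITS split logged above), so a
compare-and-swap on the head is not fooled by ABA reuse. A minimal standalone
sketch of that tagged-head idea, using C11 atomics and hypothetical widths and
names rather than ODP's actual encoding:

/* Illustrative sketch only -- tagged freelist head to avoid ABA. */
#include <stdatomic.h>
#include <stdint.h>
#include <stdio.h>

#define IDX_BITS 32u            /* low bits: index of the head node */
#define NIL UINT32_MAX          /* empty-list sentinel */

typedef struct { uint32_t next; } node_t;

static node_t nodes[4];
static _Atomic uint64_t head = NIL;   /* tag 0, index NIL */

static void push(uint32_t idx)
{
	uint64_t old = atomic_load(&head), tag, nval;

	do {
		nodes[idx].next = (uint32_t)old;  /* link to current head */
		tag = (old >> IDX_BITS) + 1;      /* bump generation tag */
		nval = (tag << IDX_BITS) | idx;
	} while (!atomic_compare_exchange_weak(&head, &old, nval));
}

static uint32_t pop(void)
{
	uint64_t old = atomic_load(&head), tag, nval;
	uint32_t idx;

	do {
		idx = (uint32_t)old;
		if (idx == NIL)
			return NIL;               /* list empty */
		tag = (old >> IDX_BITS) + 1;
		nval = (tag << IDX_BITS) | nodes[idx].next;
	} while (!atomic_compare_exchange_weak(&head, &old, nval));

	return idx;
}

int main(void)
{
	uint32_t a, b;

	push(0);
	push(1);
	a = pop();  /* 1 (LIFO) */
	b = pop();  /* 0 */
	printf("popped %u then %u\n", a, b);
	return 0;
}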