From patchwork Wed Dec 14 20:32:05 2016
X-Patchwork-Submitter: Bill Fischofer
X-Patchwork-Id: 88069
Delivered-To: patch@linaro.org
From: Bill Fischofer
To: lng-odp@lists.linaro.org
Date: Wed, 14 Dec 2016 14:32:05 -0600
Message-Id: <1481747525-32051-1-git-send-email-bill.fischofer@linaro.org>
X-Mailer: git-send-email 2.7.4
Subject: [lng-odp] [API-NEXT PATCHv3] linux-generic: pool: defer ring allocation until pool creation
List-Id: "The OpenDataPlane (ODP) List"

To avoid excessive memory overhead for pools, defer the allocation of
the pool ring until odp_pool_create() is called. This keeps pool memory
overhead proportional to the number of pools actually in use rather
than the architected maximum number of pools.
This patch addresses Bug https://bugs.linaro.org/show_bug.cgi?id=2765

Signed-off-by: Bill Fischofer
---
Changes for v3:
- Do not reference pool ring on buffer alloc/free if cache can satisfy request

Changes for v2:
- Reset reserved to 0 if ring allocation fails to recover properly

 platform/linux-generic/include/odp_pool_internal.h |  3 ++-
 platform/linux-generic/odp_pool.c                  | 31 +++++++++++++++++-----
 2 files changed, 26 insertions(+), 8 deletions(-)

-- 
2.7.4

diff --git a/platform/linux-generic/include/odp_pool_internal.h b/platform/linux-generic/include/odp_pool_internal.h
index 5d7b817..4915bda 100644
--- a/platform/linux-generic/include/odp_pool_internal.h
+++ b/platform/linux-generic/include/odp_pool_internal.h
@@ -69,7 +69,8 @@ typedef struct pool_t {
 
 	pool_cache_t local_cache[ODP_THREAD_COUNT_MAX];
 
-	pool_ring_t ring;
+	odp_shm_t ring_shm;
+	pool_ring_t *ring;
 
 } pool_t;
 
diff --git a/platform/linux-generic/odp_pool.c b/platform/linux-generic/odp_pool.c
index 4be3827..7113331 100644
--- a/platform/linux-generic/odp_pool.c
+++ b/platform/linux-generic/odp_pool.c
@@ -143,7 +143,7 @@ static void flush_cache(pool_cache_t *cache, pool_t *pool)
 	uint32_t mask;
 	uint32_t cache_num, i, data;
 
-	ring = &pool->ring.hdr;
+	ring = &pool->ring->hdr;
 	mask = pool->ring_mask;
 
 	cache_num = cache->num;
@@ -172,6 +172,7 @@ static pool_t *reserve_pool(void)
 {
 	int i;
 	pool_t *pool;
+	char ring_name[ODP_POOL_NAME_LEN];
 
 	for (i = 0; i < ODP_CONFIG_POOLS; i++) {
 		pool = pool_entry(i);
@@ -180,6 +181,17 @@ static pool_t *reserve_pool(void)
 		if (pool->reserved == 0) {
 			pool->reserved = 1;
 			UNLOCK(&pool->lock);
+			sprintf(ring_name, "_odp_pool_ring_%d", i);
+			pool->ring_shm =
+				odp_shm_reserve(ring_name,
+						sizeof(pool_ring_t),
+						ODP_CACHE_LINE_SIZE, 0);
+			if (pool->ring_shm == ODP_SHM_INVALID) {
+				ODP_ERR("Unable to alloc pool ring %d\n", i);
+				pool->reserved = 0;
+				break;
+			}
+			pool->ring = odp_shm_addr(pool->ring_shm);
 			return pool;
 		}
 		UNLOCK(&pool->lock);
@@ -214,7 +226,7 @@ static void init_buffers(pool_t *pool)
 	int type;
 	uint32_t seg_size;
 
-	ring = &pool->ring.hdr;
+	ring = &pool->ring->hdr;
 	mask = pool->ring_mask;
 	type = pool->params.type;
@@ -411,7 +423,7 @@ static odp_pool_t pool_create(const char *name, odp_pool_param_t *params,
 		pool->uarea_base_addr = odp_shm_addr(pool->uarea_shm);
 	}
 
-	ring_init(&pool->ring.hdr);
+	ring_init(&pool->ring->hdr);
 	init_buffers(pool);
 
 	return pool->pool_hdl;
@@ -533,6 +545,8 @@ int odp_pool_destroy(odp_pool_t pool_hdl)
 		odp_shm_free(pool->uarea_shm);
 
 	pool->reserved = 0;
+	odp_shm_free(pool->ring_shm);
+	pool->ring = NULL;
 	UNLOCK(&pool->lock);
 
 	return 0;
@@ -589,8 +603,6 @@ int buffer_alloc_multi(pool_t *pool, odp_buffer_t buf[],
 	pool_cache_t *cache;
 	uint32_t cache_num, num_ch, num_deq, burst;
 
-	ring  = &pool->ring.hdr;
-	mask  = pool->ring_mask;
 	cache = local.cache[pool->pool_idx];
 
 	cache_num = cache->num;
@@ -617,6 +629,8 @@ int buffer_alloc_multi(pool_t *pool, odp_buffer_t buf[],
 		 * and not uint32_t. */
 		uint32_t data[burst];
 
+		ring = &pool->ring->hdr;
+		mask = pool->ring_mask;
 		burst = ring_deq_multi(ring, mask, data, burst);
 		cache_num = burst - num_deq;
@@ -668,12 +682,12 @@ static inline void buffer_free_to_pool(uint32_t pool_id,
 	cache = local.cache[pool_id];
 	pool  = pool_entry(pool_id);
-	ring  = &pool->ring.hdr;
-	mask  = pool->ring_mask;
 
 	/* Special case of a very large free. Move directly to
 	 * the global pool. */
 	if (odp_unlikely(num > CONFIG_POOL_CACHE_SIZE)) {
+		ring = &pool->ring->hdr;
+		mask = pool->ring_mask;
 		for (i = 0; i < num; i++)
 			ring_enq(ring, mask, (uint32_t)(uintptr_t)buf[i]);
@@ -688,6 +702,9 @@ static inline void buffer_free_to_pool(uint32_t pool_id,
 		uint32_t index;
 		int burst = CACHE_BURST;
 
+		ring = &pool->ring->hdr;
+		mask = pool->ring_mask;
+
 		if (odp_unlikely(num > CACHE_BURST))
 			burst = num;