From patchwork Tue Sep 11 14:00:38 2018
X-Patchwork-Submitter: Github ODP bot
X-Patchwork-Id: 146459
From: Github ODP bot
To: lng-odp@lists.linaro.org
Date: Tue, 11 Sep 2018 14:00:38 +0000
Message-Id: <1536674439-8532-2-git-send-email-odpbot@yandex.ru>
In-Reply-To: <1536674439-8532-1-git-send-email-odpbot@yandex.ru>
References: <1536674439-8532-1-git-send-email-odpbot@yandex.ru>
Github-pr-num: 685
Subject: [lng-odp] [PATCH v5 1/2] linux-gen: ishm: implement huge page cache

From: Josep Puigdemont

With this patch, ODP will pre-allocate several huge pages at init time.
When memory is to be mapped into a huge page, one that was pre-allocated
will be used, if available; this way ODP won't have to trap into the
kernel to allocate huge pages.

The idea behind this implementation is to trick ishm into thinking that a
file descriptor to map the memory from was provided, so that it won't try
to allocate one itself. This file descriptor is one of those previously
allocated at init time. When the system is done with this file descriptor,
instead of closing it, it is put back into the list of available huge
pages, ready to be reused.

A side effect of this patch is that memory is not zeroed out when it is
reused.

WARNING: This patch will not work when using process mode threads. For
several reasons, it may not work when using ODP_ISHM_SINGLE_VA either, so
when this flag is set, the list of pre-allocated files is not used.
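To illustrate the idea outside of ishm, the following standalone sketch
(not part of this patch, all names hypothetical) shows the same pattern: a
small LIFO stack of file descriptors that is filled once at init time,
popped when a huge-page-backed mapping is needed, and pushed back instead
of closed when the mapping is freed:

#include <unistd.h>

#define FD_CACHE_SIZE 8

static int fd_cache[FD_CACHE_SIZE];
static int fd_cache_top = -1; /* index of the last usable entry */

/* Pop a pre-allocated descriptor, or -1 if the cache is empty. */
static int fd_cache_get(void)
{
	if (fd_cache_top < 0)
		return -1;
	return fd_cache[fd_cache_top--];
}

/* Push a descriptor back instead of closing it; close it on overflow. */
static void fd_cache_put(int fd)
{
	if (fd_cache_top + 1 >= FD_CACHE_SIZE) {
		close(fd);
		return;
	}
	fd_cache[++fd_cache_top] = fd;
}

The hp_get_cached()/hp_put_cached() pair added by the patch follows this
scheme, with the descriptors created against the hugetlbfs mount.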
By default ODP will not reserve any huge pages. To tell ODP to do so,
update the ODP configuration file with something like this:

shm: {
	num_cached_hp = 32
}

Example usage:

$ cat odp.conf
odp_implementation = "linux-generic"
config_file_version = "0.0.1"
shm: {
	num_cached_hp = 32
}
$ ODP_CONFIG_FILE=odp.conf ./test/validation/api/shmem/shmem_main

This patch solves bug #3774:
https://bugs.linaro.org/show_bug.cgi?id=3774

Signed-off-by: Josep Puigdemont
---
/** Email created from pull request 685 (joseppc:fix/cache_huge_pages)
 ** https://github.com/Linaro/odp/pull/685
 ** Patch: https://github.com/Linaro/odp/pull/685.patch
 ** Base sha: 33fbc04b6373960ec3f84de4e7e7b34c49d71508
 ** Merge commit sha: 9826130fb2849a5c4088572ca285b00e358be707
 **/
 config/odp-linux-generic.conf     |  11 ++
 platform/linux-generic/odp_ishm.c | 218 ++++++++++++++++++++++++++++--
 2 files changed, 215 insertions(+), 14 deletions(-)

diff --git a/config/odp-linux-generic.conf b/config/odp-linux-generic.conf
index 85d5414ba..0dd2a6c13 100644
--- a/config/odp-linux-generic.conf
+++ b/config/odp-linux-generic.conf
@@ -18,6 +18,17 @@
 odp_implementation = "linux-generic"
 
 config_file_version = "0.0.1"
 
+# Internal shared memory allocator
+shm: {
+	# ODP will try to reserve as many huge pages as the number indicated
+	# here, up to 64. A zero value means that no pages should be reserved.
+	# When using process mode threads, this value should be set to 0
+	# because the current implementation won't work properly otherwise.
+	# These pages will only be freed when the application calls
+	# odp_term_global().
+	num_cached_hp = 0
+}
+
 # DPDK pktio options
 pktio_dpdk: {
 	# Default options
diff --git a/platform/linux-generic/odp_ishm.c b/platform/linux-generic/odp_ishm.c
index 59d1fe534..aeda50bec 100644
--- a/platform/linux-generic/odp_ishm.c
+++ b/platform/linux-generic/odp_ishm.c
@@ -63,6 +63,7 @@
 #include
 #include
 #include
+#include
 #include
 #include
 #include
@@ -164,7 +165,7 @@ typedef struct ishm_fragment {
  * will allocate both a block and a fragment.
  * Blocks contain only global data common to all processes.
  */
-typedef enum {UNKNOWN, HUGE, NORMAL, EXTERNAL} huge_flag_t;
+typedef enum {UNKNOWN, HUGE, NORMAL, EXTERNAL, CACHED} huge_flag_t;
 typedef struct ishm_block {
	char name[ISHM_NAME_MAXLEN];    /* name for the ishm block (if any) */
	char filename[ISHM_FILENAME_MAXLEN]; /* name of the .../odp-* file  */
@@ -238,6 +239,16 @@ typedef struct {
 } ishm_ftable_t;
 static ishm_ftable_t *ishm_ftbl;
 
+#define HP_CACHE_SIZE 64
+struct huge_page_cache {
+	uint64_t len;
+	int total; /* amount of actually pre-allocated huge pages */
+	int idx;   /* retrieve fd[idx] to get a free file descriptor */
+	int fd[HP_CACHE_SIZE]; /* list of file descriptors */
+};
+
+static struct huge_page_cache hpc;
+
 #ifndef MAP_ANONYMOUS
 #define MAP_ANONYMOUS MAP_ANON
 #endif
@@ -245,6 +256,142 @@ static ishm_ftable_t *ishm_ftbl;
 /* prototypes: */
 static void procsync(void);
 
+static int hp_create_file(uint64_t len, const char *filename)
+{
+	int fd;
+	void *addr;
+
+	if (len <= 0) {
+		ODP_ERR("Length is wrong\n");
+		return -1;
+	}
+
+	fd = open(filename, O_RDWR | O_CREAT | O_TRUNC,
+		  S_IRUSR | S_IWUSR | S_IRGRP | S_IROTH);
+	if (fd < 0) {
+		ODP_ERR("Could not create cache file %s\n", filename);
+		return -1;
+	}
+
+	/* remove file from file system */
+	unlink(filename);
+
+	if (ftruncate(fd, len) == -1) {
+		ODP_ERR("Could not truncate file: %s\n", strerror(errno));
+		close(fd);
+		return -1;
+	}
+
+	/* commit huge page */
+	addr = _odp_ishmphy_map(fd, NULL, len, 0);
+	if (addr == NULL) {
+		/* no more pages available */
+		close(fd);
+		return -1;
+	}
+	_odp_ishmphy_unmap(addr, len, 0);
+
+	ODP_DBG("Created HP cache file %s, fd: %d\n", filename, fd);
+
+	return fd;
+}
+
+static void hp_init(void)
+{
+	char filename[ISHM_FILENAME_MAXLEN];
+	char dir[ISHM_FILENAME_MAXLEN];
+	int count;
+
+	hpc.total = 0;
+	hpc.idx = -1;
+	hpc.len = odp_sys_huge_page_size();
+
+	if (!_odp_libconfig_lookup_ext_int("shm", NULL, "num_cached_hp",
+					   &count)) {
+		return;
+	}
+
+	if (count > HP_CACHE_SIZE)
+		count = HP_CACHE_SIZE;
+	else if (count <= 0)
+		return;
+
+	ODP_DBG("Init HP cache with up to %d pages\n", count);
+
+	if (!odp_global_data.hugepage_info.default_huge_page_dir) {
+		ODP_ERR("No huge page dir\n");
+		return;
+	}
+
+	snprintf(dir, ISHM_FILENAME_MAXLEN, "%s/%s",
+		 odp_global_data.hugepage_info.default_huge_page_dir,
+		 odp_global_data.uid);
+
+	if (mkdir(dir, 0744) != 0) {
+		if (errno != EEXIST) {
+			ODP_ERR("Failed to create dir: %s\n", strerror(errno));
+			return;
+		}
+	}
+
+	snprintf(filename, ISHM_FILENAME_MAXLEN,
+		 "%s/odp-%d-ishm_cached",
+		 dir,
+		 odp_global_data.main_pid);
+
+	for (int i = 0; i < count; ++i) {
+		int fd;
+
+		fd = hp_create_file(hpc.len, filename);
+		if (fd == -1)
+			break;
+		hpc.total++;
+		hpc.fd[i] = fd;
+	}
+	hpc.idx = hpc.total - 1;
+
+	ODP_DBG("HP cache has %d huge pages of size 0x%08" PRIx64 "\n",
+		hpc.total, hpc.len);
+}
+
+static void hp_term(void)
+{
+	for (int i = 0; i < hpc.total; i++) {
+		if (hpc.fd[i] != -1)
+			close(hpc.fd[i]);
+	}
+
+	hpc.total = 0;
+	hpc.idx = -1;
+	hpc.len = 0;
+}
+
+static int hp_get_cached(uint64_t len)
+{
+	int fd;
+
+	if (hpc.idx < 0 || len != hpc.len)
+		return -1;
+
+	fd = hpc.fd[hpc.idx];
+	hpc.fd[hpc.idx--] = -1;
+
+	return fd;
+}
+
+static int hp_put_cached(int fd)
+{
+	if (odp_unlikely(++hpc.idx >= hpc.total)) {
+		hpc.idx--;
+		ODP_ERR("Trying to put more FD than allowed: %d\n", fd);
+		return -1;
+	}
+
+	hpc.fd[hpc.idx] = fd;
+
+	return 0;
+}
+
 /*
  * Take a piece of the preallocated virtual space to fit "size" bytes.
  * (best fit). Size must be rounded up to an integer number of pages size.
@@ -798,8 +945,14 @@ static int block_free_internal(int block_index, int close_fd, int deregister)
			  block_index);
 
	/* close the related fd */
-	if (close_fd)
-		close(ishm_proctable->entry[proc_index].fd);
+	if (close_fd) {
+		int fd = ishm_proctable->entry[proc_index].fd;
+
+		if (block->huge == CACHED)
+			hp_put_cached(fd);
+		else
+			close(fd);
+	}
 
	/* remove entry from process local table: */
	last = ishm_proctable->nb_entries - 1;
@@ -910,6 +1063,7 @@ int _odp_ishm_reserve(const char *name, uint64_t size, int fd,
		new_block->huge = EXTERNAL;
	} else {
		new_block->external_fd = 0;
+		new_block->huge = UNKNOWN;
	}
 
	/* Otherwise, Try first huge pages when possible and needed: */
@@ -927,17 +1081,38 @@ int _odp_ishm_reserve(const char *name, uint64_t size, int fd,
		/* roundup to page size */
		len = (size + (page_hp_size - 1)) & (-page_hp_size);
 
-		addr = do_map(new_index, len, hp_align, flags, HUGE, &fd);
-
-		if (addr == NULL) {
-			if (!huge_error_printed) {
-				ODP_ERR("No huge pages, fall back to normal "
-					"pages. "
-					"check: /proc/sys/vm/nr_hugepages.\n");
-				huge_error_printed = 1;
+		if (!(flags & _ODP_ISHM_SINGLE_VA)) {
+			/* try pre-allocated pages */
+			fd = hp_get_cached(len);
+			if (fd != -1) {
+				/* do as if user provided a fd */
+				new_block->external_fd = 1;
+				addr = do_map(new_index, len, hp_align, flags,
+					      CACHED, &fd);
+				if (addr == NULL) {
+					ODP_ERR("Could not use cached hp %d\n",
+						fd);
+					hp_put_cached(fd);
+					fd = -1;
+				} else {
+					new_block->huge = CACHED;
+				}
+			}
+		}
+		if (fd == -1) {
+			addr = do_map(new_index, len, hp_align, flags, HUGE,
+				      &fd);
+
+			if (addr == NULL) {
+				if (!huge_error_printed) {
+					ODP_ERR("No huge pages, fall back to "
+						"normal pages. Check: "
+						"/proc/sys/vm/nr_hugepages.\n");
+					huge_error_printed = 1;
+				}
+			} else {
+				new_block->huge = HUGE;
			}
-		} else {
-			new_block->huge = HUGE;
		}
	}
 
@@ -961,8 +1136,12 @@ int _odp_ishm_reserve(const char *name, uint64_t size, int fd,
 
	/* if neither huge pages or normal pages works, we cannot proceed: */
	if ((fd < 0) || (addr == NULL) || (len == 0)) {
-		if ((!new_block->external_fd) && (fd >= 0))
+		if (new_block->external_fd) {
+			if (new_block->huge == CACHED)
+				hp_put_cached(fd);
+		} else if (fd >= 0) {
			close(fd);
+		}
		delete_file(new_block);
		odp_spinlock_unlock(&ishm_tbl->lock);
		ODP_ERR("_ishm_reserve failed.\n");
@@ -1564,6 +1743,9 @@ int _odp_ishm_init_global(const odp_init_t *init)
	/* get ready to create pools: */
	_odp_ishm_pool_init();
 
+	/* init cache files */
+	hp_init();
+
	return 0;
 
 init_glob_err4:
@@ -1705,6 +1887,8 @@ int _odp_ishm_term_global(void)
	if (!odp_global_data.shm_dir_from_env)
		free(odp_global_data.shm_dir);
 
+	hp_term();
+
	return ret;
 }
 
@@ -1778,6 +1962,9 @@ int _odp_ishm_status(const char *title)
		case EXTERNAL:
			huge = 'E';
			break;
+		case CACHED:
+			huge = 'C';
+			break;
		default:
			huge = '?';
		}
@@ -1911,6 +2098,9 @@ void _odp_ishm_print(int block_index)
	case EXTERNAL:
		str = "external";
		break;
+	case CACHED:
+		str = "cached";
+		break;
	default:
		str = "??";
	}
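For reference, hp_create_file() above relies on a standard hugetlbfs
property: an unlinked file on a hugetlbfs mount keeps its huge page
reserved for as long as a file descriptor to it stays open. A minimal
standalone sketch of that trick (not part of the patch; it assumes a
hugetlbfs mount at /dev/hugepages and a demo file name) looks like this:

#include <fcntl.h>
#include <stddef.h>
#include <sys/mman.h>
#include <unistd.h>

/* Reserve one huge page and return an fd that keeps it alive, or -1. */
static int reserve_huge_page(size_t len)
{
	void *addr;
	int fd = open("/dev/hugepages/demo-cached-hp",
		      O_RDWR | O_CREAT | O_TRUNC, 0600);

	if (fd < 0)
		return -1;

	/* The name is no longer needed; the open fd keeps the file alive. */
	unlink("/dev/hugepages/demo-cached-hp");

	if (ftruncate(fd, len) != 0) {
		close(fd);
		return -1;
	}

	/* Map the file once so a huge page is reserved now; fail early
	 * if the kernel has none left. */
	addr = mmap(NULL, len, PROT_READ | PROT_WRITE, MAP_SHARED, fd, 0);
	if (addr == MAP_FAILED) {
		close(fd);
		return -1;
	}
	munmap(addr, len);

	return fd; /* hand this fd out later instead of allocating anew */
}

The cached descriptor can then be passed to a later mmap() call exactly as
if the user had supplied an external fd, which is how the patch wires it
into do_map().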
From patchwork Tue Sep 11 14:00:39 2018
X-Patchwork-Submitter: Github ODP bot
X-Patchwork-Id: 146461
From: Github ODP bot
To: lng-odp@lists.linaro.org
Date: Tue, 11 Sep 2018 14:00:39 +0000
Message-Id: <1536674439-8532-3-git-send-email-odpbot@yandex.ru>
In-Reply-To: <1536674439-8532-1-git-send-email-odpbot@yandex.ru>
References: <1536674439-8532-1-git-send-email-odpbot@yandex.ru>
Github-pr-num: 685
Subject: [lng-odp] [PATCH v5 2/2] linux-gen: ishm: make huge page cache size dynamic

From: Josep Puigdemont

Signed-off-by: Josep Puigdemont
---
/** Email created from pull request 685 (joseppc:fix/cache_huge_pages)
 ** https://github.com/Linaro/odp/pull/685
 ** Patch: https://github.com/Linaro/odp/pull/685.patch
 ** Base sha: 33fbc04b6373960ec3f84de4e7e7b34c49d71508
 ** Merge commit sha: 9826130fb2849a5c4088572ca285b00e358be707
 **/
 config/odp-linux-generic.conf     | 16 ++++---
 platform/linux-generic/odp_ishm.c | 73 +++++++++++++++++++------------
 2 files changed, 56 insertions(+), 33 deletions(-)

diff --git a/config/odp-linux-generic.conf b/config/odp-linux-generic.conf
index 0dd2a6c13..bddc92dd4 100644
--- a/config/odp-linux-generic.conf
+++ b/config/odp-linux-generic.conf
@@ -18,14 +18,20 @@
 odp_implementation = "linux-generic"
 
 config_file_version = "0.0.1"
 
-# Internal shared memory allocator
+# Shared memory options
 shm: {
-	# ODP will try to reserve as many huge pages as the number indicated
-	# here, up to 64. A zero value means that no pages should be reserved.
+	# Number of cached default size huge pages. These pages are allocated
+	# during odp_init_global() and freed back to the kernel in
+	# odp_term_global(). A value of zero means no pages are cached.
+	# No negative values should be used here, they are reserved for future
+	# implementations.
+	#
+	# ODP will reserve as many huge pages as possible, which may be less
+	# than requested here if the system does not have enough huge pages
+	# available.
+	#
	# When using process mode threads, this value should be set to 0
	# because the current implementation won't work properly otherwise.
-	# These pages will only be freed when the application calls
-	# odp_term_global().
	num_cached_hp = 0
 }
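The updated comment notes that ODP may end up caching fewer pages than
num_cached_hp asks for when the kernel's huge page pool is too small. A
quick way to inspect the pool before picking a value is to read
/proc/sys/vm/nr_hugepages, the same file the fallback error message in
patch 1/2 points at. A small standalone sketch (not part of the patch):

#include <stdio.h>

/* Return the number of default-size huge pages in the kernel pool, or -1. */
static long nr_hugepages(void)
{
	long n = -1;
	FILE *f = fopen("/proc/sys/vm/nr_hugepages", "r");

	if (!f)
		return -1;
	if (fscanf(f, "%ld", &n) != 1)
		n = -1;
	fclose(f);

	return n;
}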
diff --git a/platform/linux-generic/odp_ishm.c b/platform/linux-generic/odp_ishm.c
index aeda50bec..11fbe8ef0 100644
--- a/platform/linux-generic/odp_ishm.c
+++ b/platform/linux-generic/odp_ishm.c
@@ -239,15 +239,15 @@ typedef struct {
 } ishm_ftable_t;
 static ishm_ftable_t *ishm_ftbl;
 
-#define HP_CACHE_SIZE 64
 struct huge_page_cache {
	uint64_t len;
+	int max_fds; /* maximum amount requested of pre-allocated huge pages */
	int total;   /* amount of actually pre-allocated huge pages */
	int idx;     /* retrieve fd[idx] to get a free file descriptor */
-	int fd[HP_CACHE_SIZE]; /* list of file descriptors */
+	int fd[];    /* list of file descriptors */
 };
 
-static struct huge_page_cache hpc;
+static struct huge_page_cache *hpc;
 
 #ifndef MAP_ANONYMOUS
 #define MAP_ANONYMOUS MAP_ANON
@@ -301,19 +301,14 @@ static void hp_init(void)
	char filename[ISHM_FILENAME_MAXLEN];
	char dir[ISHM_FILENAME_MAXLEN];
	int count;
-
-	hpc.total = 0;
-	hpc.idx = -1;
-	hpc.len = odp_sys_huge_page_size();
+	void *addr;
 
	if (!_odp_libconfig_lookup_ext_int("shm", NULL, "num_cached_hp",
					   &count)) {
		return;
	}
 
-	if (count > HP_CACHE_SIZE)
-		count = HP_CACHE_SIZE;
-	else if (count <= 0)
+	if (count <= 0)
		return;
 
	ODP_DBG("Init HP cache with up to %d pages\n", count);
@@ -339,55 +334,77 @@ static void hp_init(void)
		 dir,
		 odp_global_data.main_pid);
 
+	addr = mmap(NULL,
+		    sizeof(struct huge_page_cache) + sizeof(int) * count,
+		    PROT_READ | PROT_WRITE, MAP_SHARED | MAP_ANONYMOUS, -1, 0);
+	if (addr == MAP_FAILED) {
+		ODP_ERR("Unable to mmap memory for huge page cache\n.");
+		return;
+	}
+
+	hpc = addr;
+
+	hpc->max_fds = count;
+	hpc->total = 0;
+	hpc->idx = -1;
+	hpc->len = odp_sys_huge_page_size();
+
	for (int i = 0; i < count; ++i) {
		int fd;
 
-		fd = hp_create_file(hpc.len, filename);
-		if (fd == -1)
+		fd = hp_create_file(hpc->len, filename);
+		if (fd == -1) {
+			do {
+				hpc->fd[i++] = -1;
+			} while (i < count);
			break;
-		hpc.total++;
-		hpc.fd[i] = fd;
+		}
+		hpc->total++;
+		hpc->fd[i] = fd;
	}
-	hpc.idx = hpc.total - 1;
+	hpc->idx = hpc->total - 1;
 
	ODP_DBG("HP cache has %d huge pages of size 0x%08" PRIx64 "\n",
-		hpc.total, hpc.len);
+		hpc->total, hpc->len);
 }
 
 static void hp_term(void)
 {
-	for (int i = 0; i < hpc.total; i++) {
-		if (hpc.fd[i] != -1)
-			close(hpc.fd[i]);
+	if (NULL == hpc)
+		return;
+
+	for (int i = 0; i < hpc->total; i++) {
+		if (hpc->fd[i] != -1)
+			close(hpc->fd[i]);
	}
 
-	hpc.total = 0;
-	hpc.idx = -1;
-	hpc.len = 0;
+	hpc->total = 0;
+	hpc->idx = -1;
+	hpc->len = 0;
 }
 
 static int hp_get_cached(uint64_t len)
 {
	int fd;
 
-	if (hpc.idx < 0 || len != hpc.len)
+	if (NULL == hpc || hpc->idx < 0 || len != hpc->len)
		return -1;
 
-	fd = hpc.fd[hpc.idx];
-	hpc.fd[hpc.idx--] = -1;
+	fd = hpc->fd[hpc->idx];
+	hpc->fd[hpc->idx--] = -1;
 
	return fd;
 }
 
 static int hp_put_cached(int fd)
 {
-	if (odp_unlikely(++hpc.idx >= hpc.total)) {
-		hpc.idx--;
+	if (NULL == hpc || odp_unlikely(++hpc->idx >= hpc->total)) {
+		hpc->idx--;
		ODP_ERR("Trying to put more FD than allowed: %d\n", fd);
		return -1;
	}
 
-	hpc.fd[hpc.idx] = fd;
+	hpc->fd[hpc->idx] = fd;
 
	return 0;
 }
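The core change above is to size the cache at run time: the fixed
fd[HP_CACHE_SIZE] array becomes a C99 flexible array member, and the whole
structure is allocated in one anonymous shared mapping whose size is
computed from the requested count. A standalone sketch of that allocation
pattern (not part of the patch, names hypothetical):

#include <stddef.h>
#include <stdint.h>
#include <sys/mman.h>

struct fd_cache {
	uint64_t len;  /* huge page size the cached fds were created with */
	int max_fds;   /* number of slots allocated after the header */
	int total;     /* number of slots actually filled */
	int idx;       /* top of the LIFO stack, -1 when empty */
	int fd[];      /* flexible array member, sized at allocation time */
};

/* Allocate a cache with 'count' fd slots in a single shared mapping. */
static struct fd_cache *fd_cache_create(int count)
{
	size_t sz = sizeof(struct fd_cache) + sizeof(int) * (size_t)count;
	void *addr = mmap(NULL, sz, PROT_READ | PROT_WRITE,
			  MAP_SHARED | MAP_ANONYMOUS, -1, 0);

	if (addr == MAP_FAILED)
		return NULL;

	struct fd_cache *c = addr;

	c->len = 0;
	c->max_fds = count;
	c->total = 0;
	c->idx = -1;

	return c;
}

Unlike a private heap allocation, MAP_SHARED | MAP_ANONYMOUS memory is
shared with any child processes created later by fork(), and all callers
must now check the pointer for NULL, which is why the accessor functions
in the patch gain the NULL == hpc guards.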