From patchwork Thu Oct 11 04:59:26 2018
X-Patchwork-Submitter: Honnappa Nagarahalli
X-Patchwork-Id: 148598
From: Honnappa Nagarahalli
To: bruce.richardson@intel.com, pablo.de.lara.guarch@intel.com
Cc: dev@dpdk.org, yipeng1.wang@intel.com, honnappa.nagarahalli@arm.com, Dharmik.Thakkar@arm.com, nd@arm.com
Date: Wed, 10 Oct 2018 23:59:26 -0500
Message-Id: <1539233972-49860-2-git-send-email-honnappa.nagarahalli@arm.com>
Subject: [dpdk-dev] [PATCH v2 1/7] hash: separate multi-writer from rw-concurrency

RW concurrency is required in the single-writer, multiple-reader use case as well.
Hence, multi-writer should not be enabled by default when RW concurrency
is enabled.

Fixes: f2e3001b53ec ("hash: support read/write concurrency")
Cc: yipeng1.wang@intel.com

Signed-off-by: Honnappa Nagarahalli
Reviewed-by: Gavin Hu
---
 lib/librte_hash/rte_cuckoo_hash.c | 27 ++++++++++++++++-----------
 lib/librte_hash/rte_cuckoo_hash.h |  2 ++
 test/test/test_hash_readwrite.c   |  6 ++++--
 3 files changed, 22 insertions(+), 13 deletions(-)

--
2.7.4

diff --git a/lib/librte_hash/rte_cuckoo_hash.c b/lib/librte_hash/rte_cuckoo_hash.c
index f7b86c8..e32b746 100644
--- a/lib/librte_hash/rte_cuckoo_hash.c
+++ b/lib/librte_hash/rte_cuckoo_hash.c
@@ -93,6 +93,7 @@ rte_hash_create(const struct rte_hash_parameters *params)
 	unsigned i;
 	unsigned int hw_trans_mem_support = 0, multi_writer_support = 0;
 	unsigned int readwrite_concur_support = 0;
+	unsigned int writer_takes_lock = 0;
 	rte_hash_function default_hash_func = (rte_hash_function)rte_jhash;
@@ -116,12 +117,14 @@ rte_hash_create(const struct rte_hash_parameters *params)
 	if (params->extra_flag & RTE_HASH_EXTRA_FLAGS_TRANS_MEM_SUPPORT)
 		hw_trans_mem_support = 1;

-	if (params->extra_flag & RTE_HASH_EXTRA_FLAGS_MULTI_WRITER_ADD)
+	if (params->extra_flag & RTE_HASH_EXTRA_FLAGS_MULTI_WRITER_ADD) {
 		multi_writer_support = 1;
+		writer_takes_lock = 1;
+	}

 	if (params->extra_flag & RTE_HASH_EXTRA_FLAGS_RW_CONCURRENCY) {
 		readwrite_concur_support = 1;
-		multi_writer_support = 1;
+		writer_takes_lock = 1;
 	}

 	/* Store all keys and leave the first entry as a dummy entry for lookup_bulk */
@@ -269,6 +272,7 @@ rte_hash_create(const struct rte_hash_parameters *params)
 	h->hw_trans_mem_support = hw_trans_mem_support;
 	h->multi_writer_support = multi_writer_support;
 	h->readwrite_concur_support = readwrite_concur_support;
+	h->writer_takes_lock = writer_takes_lock;

 #if defined(RTE_ARCH_X86)
 	if (rte_cpu_get_flag_enabled(RTE_CPUFLAG_AVX2))
@@ -279,10 +283,11 @@ rte_hash_create(const struct rte_hash_parameters *params)
 #endif
 		h->sig_cmp_fn = RTE_HASH_COMPARE_SCALAR;

-	/* Turn on multi-writer only with explicit flag from user and TM
-	 * support.
+	/* Writer threads need to take the lock when:
+	 * 1) RTE_HASH_EXTRA_FLAGS_RW_CONCURRENCY is enabled OR
+	 * 2) RTE_HASH_EXTRA_FLAGS_MULTI_WRITER_ADD is enabled
 	 */
-	if (h->multi_writer_support) {
+	if (h->writer_takes_lock) {
 		h->readwrite_lock = rte_malloc(NULL, sizeof(rte_rwlock_t),
 						RTE_CACHE_LINE_SIZE);
 		if (h->readwrite_lock == NULL)
@@ -339,10 +344,10 @@ rte_hash_free(struct rte_hash *h)
 	rte_rwlock_write_unlock(RTE_EAL_TAILQ_RWLOCK);

-	if (h->multi_writer_support) {
+	if (h->multi_writer_support)
 		rte_free(h->local_free_slots);
+	if (h->writer_takes_lock)
 		rte_free(h->readwrite_lock);
-	}
 	rte_ring_free(h->free_slots);
 	rte_free(h->key_store);
 	rte_free(h->buckets);
@@ -397,9 +402,9 @@ rte_hash_count(const struct rte_hash *h)
 static inline void
 __hash_rw_writer_lock(const struct rte_hash *h)
 {
-	if (h->multi_writer_support && h->hw_trans_mem_support)
+	if (h->writer_takes_lock && h->hw_trans_mem_support)
 		rte_rwlock_write_lock_tm(h->readwrite_lock);
-	else if (h->multi_writer_support)
+	else if (h->writer_takes_lock)
 		rte_rwlock_write_lock(h->readwrite_lock);
 }
@@ -416,9 +421,9 @@ __hash_rw_reader_lock(const struct rte_hash *h)
 static inline void
 __hash_rw_writer_unlock(const struct rte_hash *h)
 {
-	if (h->multi_writer_support && h->hw_trans_mem_support)
+	if (h->writer_takes_lock && h->hw_trans_mem_support)
 		rte_rwlock_write_unlock_tm(h->readwrite_lock);
-	else if (h->multi_writer_support)
+	else if (h->writer_takes_lock)
 		rte_rwlock_write_unlock(h->readwrite_lock);
 }
diff --git a/lib/librte_hash/rte_cuckoo_hash.h b/lib/librte_hash/rte_cuckoo_hash.h
index b43f467..6fa429e 100644
--- a/lib/librte_hash/rte_cuckoo_hash.h
+++ b/lib/librte_hash/rte_cuckoo_hash.h
@@ -168,6 +168,8 @@ struct rte_hash {
 	/**< If multi-writer support is enabled. */
 	uint8_t readwrite_concur_support;
 	/**< If read-write concurrency support is enabled */
+	uint8_t writer_takes_lock;
+	/**< Indicates if the writer threads need to take lock */
 	rte_hash_function hash_func;    /**< Function used to calculate hash. */
 	uint32_t hash_func_init_val;    /**< Init value used by hash_func. */
 	rte_hash_cmp_eq_t rte_hash_custom_cmp_eq;
diff --git a/test/test/test_hash_readwrite.c b/test/test/test_hash_readwrite.c
index 55ae33d..af57708 100644
--- a/test/test/test_hash_readwrite.c
+++ b/test/test/test_hash_readwrite.c
@@ -118,10 +118,12 @@ init_params(int use_htm, int use_jhash)
 	if (use_htm)
 		hash_params.extra_flag =
 			RTE_HASH_EXTRA_FLAGS_TRANS_MEM_SUPPORT |
-			RTE_HASH_EXTRA_FLAGS_RW_CONCURRENCY;
+			RTE_HASH_EXTRA_FLAGS_RW_CONCURRENCY |
+			RTE_HASH_EXTRA_FLAGS_MULTI_WRITER_ADD;
 	else
 		hash_params.extra_flag =
-			RTE_HASH_EXTRA_FLAGS_RW_CONCURRENCY;
+			RTE_HASH_EXTRA_FLAGS_RW_CONCURRENCY |
+			RTE_HASH_EXTRA_FLAGS_MULTI_WRITER_ADD;

 	hash_params.name = "tests";

From patchwork Thu Oct 11 04:59:27 2018
X-Patchwork-Submitter: Honnappa Nagarahalli
X-Patchwork-Id: 148599
From: Honnappa Nagarahalli
To: bruce.richardson@intel.com, pablo.de.lara.guarch@intel.com
Cc: dev@dpdk.org, yipeng1.wang@intel.com, honnappa.nagarahalli@arm.com, Dharmik.Thakkar@arm.com, nd@arm.com
Date: Wed, 10 Oct 2018 23:59:27 -0500
Message-Id: <1539233972-49860-3-git-send-email-honnappa.nagarahalli@arm.com>
Subject: [dpdk-dev] [PATCH v2 2/7] hash: support do not recycle on delete

rte_hash_lookup_xxx APIs return the index of the element in the key store.
The application (reader) can use that index to reference other data
structures in its scope. Because of this, the index should not be recycled
till the application completes using the index.
RTE_HASH_EXTRA_FLAGS_RECYCLE_ON_DEL is introduced to support this. When
this flag is enabled, rte_hash_del_xxx APIs do not free the key-store
index/internal memory associated with the deleted entry. The new API
rte_hash_free_key_with_position should be called to free the key-store
index/internal memory after calling rte_hash_del_xxx APIs.
Suggested-by: Yipeng Wang
Signed-off-by: Honnappa Nagarahalli
Reviewed-by: Gavin Hu
---
 lib/librte_hash/rte_cuckoo_hash.c |  52 +++++++++++++-
 lib/librte_hash/rte_cuckoo_hash.h |   8 +++
 lib/librte_hash/rte_hash.h        |  40 +++++++++++
 test/test/test_hash.c             | 140 +++++++++++++++++++++++++++++++++++++-
 4 files changed, 235 insertions(+), 5 deletions(-)

--
2.7.4

diff --git a/lib/librte_hash/rte_cuckoo_hash.c b/lib/librte_hash/rte_cuckoo_hash.c
index e32b746..50d632e 100644
--- a/lib/librte_hash/rte_cuckoo_hash.c
+++ b/lib/librte_hash/rte_cuckoo_hash.c
@@ -94,6 +94,7 @@ rte_hash_create(const struct rte_hash_parameters *params)
 	unsigned int hw_trans_mem_support = 0, multi_writer_support = 0;
 	unsigned int readwrite_concur_support = 0;
 	unsigned int writer_takes_lock = 0;
+	unsigned int recycle_on_del = 1;
 	rte_hash_function default_hash_func = (rte_hash_function)rte_jhash;
@@ -127,6 +128,9 @@ rte_hash_create(const struct rte_hash_parameters *params)
 		writer_takes_lock = 1;
 	}

+	if (params->extra_flag & RTE_HASH_EXTRA_FLAGS_RECYCLE_ON_DEL)
+		recycle_on_del = 0;
+
 	/* Store all keys and leave the first entry as a dummy entry for lookup_bulk */
 	if (multi_writer_support)
 		/*
@@ -273,6 +277,7 @@ rte_hash_create(const struct rte_hash_parameters *params)
 	h->multi_writer_support = multi_writer_support;
 	h->readwrite_concur_support = readwrite_concur_support;
 	h->writer_takes_lock = writer_takes_lock;
+	h->recycle_on_del = recycle_on_del;

 #if defined(RTE_ARCH_X86)
 	if (rte_cpu_get_flag_enabled(RTE_CPUFLAG_AVX2))
@@ -960,8 +965,6 @@ remove_entry(const struct rte_hash *h, struct rte_hash_bucket *bkt, unsigned i)
 	unsigned lcore_id, n_slots;
 	struct lcore_cache *cached_free_slots;

-	bkt->sig_current[i] = NULL_SIGNATURE;
-	bkt->sig_alt[i] = NULL_SIGNATURE;
 	if (h->multi_writer_support) {
 		lcore_id = rte_lcore_id();
 		cached_free_slots = &h->local_free_slots[lcore_id];
@@ -999,7 +1002,13 @@ search_and_remove(const struct rte_hash *h, const void *key,
 			k = (struct rte_hash_key *) ((char *)keys +
					bkt->key_idx[i] * h->key_entry_size);
 			if (rte_hash_cmp_eq(key, k->key, h) == 0) {
-				remove_entry(h, bkt, i);
+				bkt->sig_current[i] = NULL_SIGNATURE;
+				bkt->sig_alt[i] = NULL_SIGNATURE;
+				/* Do not free the key store element if
+				 * recycle_on_del is disabled.
+				 */
+				if (h->recycle_on_del)
+					remove_entry(h, bkt, i);

				/*
				 * Return index where key is stored,
@@ -1085,6 +1094,43 @@ rte_hash_get_key_with_position(const struct rte_hash *h, const int32_t position,
 	return 0;
 }

+int __rte_experimental
+rte_hash_free_key_with_position(const struct rte_hash *h,
+				const int32_t position)
+{
+	RETURN_IF_TRUE(((h == NULL) || (position == EMPTY_SLOT)), -EINVAL);
+
+	unsigned int lcore_id, n_slots;
+	struct lcore_cache *cached_free_slots;
+	const int32_t total_entries = h->num_buckets * RTE_HASH_BUCKET_ENTRIES;
+
+	/* Out of bounds */
+	if (position >= total_entries)
+		return -EINVAL;
+
+	if (h->multi_writer_support) {
+		lcore_id = rte_lcore_id();
+		cached_free_slots = &h->local_free_slots[lcore_id];
+		/* Cache full, need to free it. */
+		if (cached_free_slots->len == LCORE_CACHE_SIZE) {
+			/* Need to enqueue the free slots in global ring. */
+			n_slots = rte_ring_mp_enqueue_burst(h->free_slots,
+						cached_free_slots->objs,
+						LCORE_CACHE_SIZE, NULL);
+			cached_free_slots->len -= n_slots;
+		}
+		/* Put index of new free slot in cache. */
+		cached_free_slots->objs[cached_free_slots->len] =
+					(void *)((uintptr_t)position);
+		cached_free_slots->len++;
+	} else {
+		rte_ring_sp_enqueue(h->free_slots,
+				(void *)((uintptr_t)position));
+	}
+
+	return 0;
+}
+
 static inline void
 compare_signatures(uint32_t *prim_hash_matches, uint32_t *sec_hash_matches,
			const struct rte_hash_bucket *prim_bkt,
diff --git a/lib/librte_hash/rte_cuckoo_hash.h b/lib/librte_hash/rte_cuckoo_hash.h
index 6fa429e..8627c80 100644
--- a/lib/librte_hash/rte_cuckoo_hash.h
+++ b/lib/librte_hash/rte_cuckoo_hash.h
@@ -168,6 +168,14 @@ struct rte_hash {
 	/**< If multi-writer support is enabled. */
 	uint8_t readwrite_concur_support;
 	/**< If read-write concurrency support is enabled */
+	uint8_t recycle_on_del;
+	/**< If internal memory/key-store entry should be
+	 * freed on calling the rte_hash_del_xxx APIs.
+	 * If this is set, rte_hash_free_key_with_position must be
+	 * called to free the internal memory associated with
+	 * the deleted entry.
+	 * This flag is enabled by default.
+	 */
 	uint8_t writer_takes_lock;
 	/**< Indicates if the writer threads need to take lock */
 	rte_hash_function hash_func;    /**< Function used to calculate hash. */
diff --git a/lib/librte_hash/rte_hash.h b/lib/librte_hash/rte_hash.h
index 9e7d931..dd59cb0 100644
--- a/lib/librte_hash/rte_hash.h
+++ b/lib/librte_hash/rte_hash.h
@@ -14,6 +14,8 @@
 #include
 #include

+#include
+
 #ifdef __cplusplus
 extern "C" {
 #endif
@@ -37,6 +39,11 @@ extern "C" {
 /** Flag to support reader writer concurrency */
 #define RTE_HASH_EXTRA_FLAGS_RW_CONCURRENCY 0x04

+/** Flag to disable freeing of internal memory/indices on hash delete.
+ * Refer to rte_hash_del_xxx APIs for more details.
+ */
+#define RTE_HASH_EXTRA_FLAGS_RECYCLE_ON_DEL 0x08
+
 /** Signature of key that is stored internally. */
 typedef uint32_t hash_sig_t;
@@ -230,6 +237,10 @@ rte_hash_add_key_with_hash(const struct rte_hash *h, const void *key, hash_sig_t
  * and should only be called from one thread by default.
  * Thread safety can be enabled by setting flag during
  * table creation.
+ * If RTE_HASH_EXTRA_FLAGS_RECYCLE_ON_DEL is enabled,
+ * the hash library's internal memory/index will not be freed by this
+ * API. rte_hash_free_key_with_position API must be called additionally
+ * to free the internal memory/index associated with the key.
  *
  * @param h
  *   Hash table to remove the key from.
@@ -251,6 +262,10 @@ rte_hash_del_key(const struct rte_hash *h, const void *key);
  * and should only be called from one thread by default.
  * Thread safety can be enabled by setting flag during
  * table creation.
+ * If RTE_HASH_EXTRA_FLAGS_RECYCLE_ON_DEL is enabled,
+ * the hash library's internal memory/index will not be freed by this
+ * API. rte_hash_free_key_with_position API must be called additionally
+ * to free the internal memory/index associated with the key.
  *
  * @param h
  *   Hash table to remove the key from.
@@ -290,6 +305,31 @@ rte_hash_get_key_with_position(const struct rte_hash *h, const int32_t position,
			       void **key);

 /**
+ * @warning
+ * @b EXPERIMENTAL: this API may change without prior notice
+ *
+ * Free hash library's internal memory/index given the position
+ * of the key. This operation is not multi-thread safe and should
+ * only be called from one thread by default. Thread safety
+ * can be enabled by setting flag during table creation.
+ * If RTE_HASH_EXTRA_FLAGS_RECYCLE_ON_DEL is enabled,
+ * the hash library's internal memory/index must be freed using this API
+ * after the key is deleted using rte_hash_del_key_xxx APIs.
+ * This API does not validate if the key is already freed.
+ *
+ * @param h
+ *   Hash table to free the key from.
+ * @param position
+ *   Position returned when the key was deleted.
+ * @return
+ *   - 0 if freed successfully
+ *   - -EINVAL if the parameters are invalid.
+ */
+int __rte_experimental
+rte_hash_free_key_with_position(const struct rte_hash *h,
+				const int32_t position);
+
+/**
  * Find a key-value pair in the hash table.
  * This operation is multi-thread safe with regarding to other lookup threads.
  * Read-write concurrency can be enabled by setting flag during
diff --git a/test/test/test_hash.c b/test/test/test_hash.c
index b3db9fd..82f4c03 100644
--- a/test/test/test_hash.c
+++ b/test/test/test_hash.c
@@ -260,6 +260,13 @@ static void run_hash_func_tests(void)
  * - lookup (hit)
  * - delete
  * - lookup (miss)
+ *
+ * Repeat the test case when 'free on delete' is disabled.
+ * - add
+ * - lookup (hit)
+ * - delete
+ * - lookup (miss)
+ * - free
  */
 static int test_add_delete(void)
 {
@@ -295,10 +302,12 @@ static int test_add_delete(void)
 	/* repeat test with precomputed hash functions */
 	hash_sig_t hash_value;
-	int pos1, expectedPos1;
+	int pos1, expectedPos1, delPos1;

+	ut_params.extra_flag = RTE_HASH_EXTRA_FLAGS_RECYCLE_ON_DEL;
 	handle = rte_hash_create(&ut_params);
 	RETURN_IF_ERROR(handle == NULL, "hash creation failed");
+	ut_params.extra_flag = 0;

 	hash_value = rte_hash_hash(handle, &keys[0]);
 	pos1 = rte_hash_add_key_with_hash(handle, &keys[0], hash_value);
@@ -315,12 +324,18 @@ static int test_add_delete(void)
 	print_key_info("Del", &keys[0], pos1);
 	RETURN_IF_ERROR(pos1 != expectedPos1,
			"failed to delete key (pos1=%d)", pos1);
+	delPos1 = pos1;

 	pos1 = rte_hash_lookup_with_hash(handle, &keys[0], hash_value);
 	print_key_info("Lkp", &keys[0], pos1);
 	RETURN_IF_ERROR(pos1 != -ENOENT,
			"fail: found key after deleting! (pos1=%d)", pos1);

+	pos1 = rte_hash_free_key_with_position(handle, delPos1);
+	print_key_info("Free", &keys[0], delPos1);
+	RETURN_IF_ERROR(pos1 != 0,
+			"failed to free key (pos1=%d)", delPos1);
+
 	rte_hash_free(handle);

 	return 0;
@@ -391,6 +406,84 @@ static int test_add_update_delete(void)
 }

 /*
+ * Sequence of operations for a single key with 'disable free on del' set:
+ * - delete: miss
+ * - add
+ * - lookup: hit
+ * - add: update
+ * - lookup: hit (updated data)
+ * - delete: hit
+ * - delete: miss
+ * - lookup: miss
+ * - free: hit
+ * - lookup: miss
+ */
+static int test_add_update_delete_free(void)
+{
+	struct rte_hash *handle;
+	int pos0, expectedPos0, delPos0, result;
+
+	ut_params.name = "test2";
+	ut_params.extra_flag = RTE_HASH_EXTRA_FLAGS_RECYCLE_ON_DEL;
+	handle = rte_hash_create(&ut_params);
+	RETURN_IF_ERROR(handle == NULL, "hash creation failed");
+	ut_params.extra_flag = 0;
+
+	pos0 = rte_hash_del_key(handle, &keys[0]);
+	print_key_info("Del", &keys[0], pos0);
+	RETURN_IF_ERROR(pos0 != -ENOENT,
+			"fail: found non-existent key (pos0=%d)", pos0);
+
+	pos0 = rte_hash_add_key(handle, &keys[0]);
+	print_key_info("Add", &keys[0], pos0);
+	RETURN_IF_ERROR(pos0 < 0, "failed to add key (pos0=%d)", pos0);
+	expectedPos0 = pos0;
+
+	pos0 = rte_hash_lookup(handle, &keys[0]);
+	print_key_info("Lkp", &keys[0], pos0);
+	RETURN_IF_ERROR(pos0 != expectedPos0,
+			"failed to find key (pos0=%d)", pos0);
+
+	pos0 = rte_hash_add_key(handle, &keys[0]);
+	print_key_info("Add", &keys[0], pos0);
+	RETURN_IF_ERROR(pos0 != expectedPos0,
+			"failed to re-add key (pos0=%d)", pos0);
+
+	pos0 = rte_hash_lookup(handle, &keys[0]);
+	print_key_info("Lkp", &keys[0], pos0);
+	RETURN_IF_ERROR(pos0 != expectedPos0,
+			"failed to find key (pos0=%d)", pos0);
+
+	delPos0 = rte_hash_del_key(handle, &keys[0]);
+	print_key_info("Del", &keys[0], delPos0);
+	RETURN_IF_ERROR(delPos0 != expectedPos0,
+			"failed to delete key (pos0=%d)", delPos0);
+
+	pos0 = rte_hash_del_key(handle, &keys[0]);
+	print_key_info("Del", &keys[0], pos0);
+	RETURN_IF_ERROR(pos0 != -ENOENT,
+			"fail: deleted already deleted key (pos0=%d)", pos0);
+
+	pos0 = rte_hash_lookup(handle, &keys[0]);
+	print_key_info("Lkp", &keys[0], pos0);
+	RETURN_IF_ERROR(pos0 != -ENOENT,
+			"fail: found key after deleting! (pos0=%d)", pos0);
+
+	result = rte_hash_free_key_with_position(handle, delPos0);
+	print_key_info("Free", &keys[0], delPos0);
+	RETURN_IF_ERROR(result != 0,
+			"failed to free key (pos1=%d)", delPos0);
+
+	pos0 = rte_hash_lookup(handle, &keys[0]);
+	print_key_info("Lkp", &keys[0], pos0);
+	RETURN_IF_ERROR(pos0 != -ENOENT,
+			"fail: found key after deleting! (pos0=%d)", pos0);
+
+	rte_hash_free(handle);
+	return 0;
+}
+
+/*
  * Sequence of operations for retrieving a key with its position
  *
  * - create table
@@ -399,11 +492,20 @@
  * - delete key
  * - try to get the deleted key: miss
  *
+ * Repeat the test case when 'free on delete' is disabled.
+ * - create table
+ * - add key
+ * - get the key with its position: hit
+ * - delete key
+ * - try to get the deleted key: hit
+ * - free key
+ * - try to get the deleted key: miss
+ *
  */
 static int test_hash_get_key_with_position(void)
 {
 	struct rte_hash *handle = NULL;
-	int pos, expectedPos, result;
+	int pos, expectedPos, delPos, result;
 	void *key;

 	ut_params.name = "hash_get_key_w_pos";
@@ -427,6 +529,38 @@ static int test_hash_get_key_with_position(void)
 	RETURN_IF_ERROR(result != -ENOENT, "non valid key retrieved");

 	rte_hash_free(handle);
+
+	ut_params.name = "hash_get_key_w_pos";
+	ut_params.extra_flag = RTE_HASH_EXTRA_FLAGS_RECYCLE_ON_DEL;
+	handle = rte_hash_create(&ut_params);
+	RETURN_IF_ERROR(handle == NULL, "hash creation failed");
+	ut_params.extra_flag = 0;
+
+	pos = rte_hash_add_key(handle, &keys[0]);
+	print_key_info("Add", &keys[0], pos);
+	RETURN_IF_ERROR(pos < 0, "failed to add key (pos0=%d)", pos);
+	expectedPos = pos;
+
+	result = rte_hash_get_key_with_position(handle, pos, &key);
+	RETURN_IF_ERROR(result != 0, "error retrieving a key");
+
+	delPos = rte_hash_del_key(handle, &keys[0]);
+	print_key_info("Del", &keys[0], delPos);
+	RETURN_IF_ERROR(delPos != expectedPos,
+			"failed to delete key (pos0=%d)", delPos);
+
+	result = rte_hash_get_key_with_position(handle, delPos, &key);
+	RETURN_IF_ERROR(result != -ENOENT, "non valid key retrieved");
+
+	result = rte_hash_free_key_with_position(handle, delPos);
+	print_key_info("Free", &keys[0], delPos);
+	RETURN_IF_ERROR(result != 0,
+			"failed to free key (pos1=%d)", delPos);
+
+	result = rte_hash_get_key_with_position(handle, delPos, &key);
+	RETURN_IF_ERROR(result != -ENOENT, "non valid key retrieved");
+
 	rte_hash_free(handle);
 	return 0;
 }
@@ -1470,6 +1604,8 @@ test_hash(void)
 		return -1;
 	if (test_add_update_delete() < 0)
 		return -1;
+	if (test_add_update_delete_free() < 0)
+		return -1;
 	if (test_five_keys() < 0)
 		return -1;
 	if (test_full_bucket() < 0)

From patchwork Thu Oct 11 04:59:28 2018
X-Patchwork-Submitter: Honnappa Nagarahalli
X-Patchwork-Id: 148600
From: Honnappa Nagarahalli
To: bruce.richardson@intel.com, pablo.de.lara.guarch@intel.com
Cc: dev@dpdk.org, yipeng1.wang@intel.com, honnappa.nagarahalli@arm.com, Dharmik.Thakkar@arm.com, nd@arm.com
Date: Wed, 10 Oct 2018 23:59:28 -0500
Message-Id: <1539233972-49860-4-git-send-email-honnappa.nagarahalli@arm.com>
Subject: [dpdk-dev] [PATCH v2 3/7] hash: correct key store element alignment

Correct the key store array element alignment.
This is required to make 'pdata' in 'struct rte_hash_key' align on the
correct boundary.

Signed-off-by: Honnappa Nagarahalli
Reviewed-by: Gavin Hu
Reviewed-by: Ola Liljedahl
Reviewed-by: Steve Capper
---
 lib/librte_hash/rte_cuckoo_hash.c | 4 +++-
 lib/librte_hash/rte_cuckoo_hash.h | 2 +-
 2 files changed, 4 insertions(+), 2 deletions(-)

--
2.7.4

diff --git a/lib/librte_hash/rte_cuckoo_hash.c b/lib/librte_hash/rte_cuckoo_hash.c
index 50d632e..f3e95f2 100644
--- a/lib/librte_hash/rte_cuckoo_hash.c
+++ b/lib/librte_hash/rte_cuckoo_hash.c
@@ -196,7 +196,9 @@ rte_hash_create(const struct rte_hash_parameters *params)
 		goto err_unlock;
 	}

-	const uint32_t key_entry_size = sizeof(struct rte_hash_key) + params->key_len;
+	const uint32_t key_entry_size =
+		RTE_ALIGN(sizeof(struct rte_hash_key) + params->key_len,
+			  KEY_ALIGNMENT);
 	const uint64_t key_tbl_size = (uint64_t) key_entry_size * num_key_slots;

 	k = rte_zmalloc_socket(NULL, key_tbl_size,
diff --git a/lib/librte_hash/rte_cuckoo_hash.h b/lib/librte_hash/rte_cuckoo_hash.h
index 8627c80..a44c6be 100644
--- a/lib/librte_hash/rte_cuckoo_hash.h
+++ b/lib/librte_hash/rte_cuckoo_hash.h
@@ -125,7 +125,7 @@ struct rte_hash_key {
 	};
 	/* Variable key size */
 	char key[0];
-} __attribute__((aligned(KEY_ALIGNMENT)));
+};

 /* All different signature compare functions */
 enum rte_hash_sig_compare_function {

From patchwork Thu Oct 11 04:59:29 2018
X-Patchwork-Submitter: Honnappa Nagarahalli
X-Patchwork-Id: 148601
From: Honnappa Nagarahalli
To: bruce.richardson@intel.com, pablo.de.lara.guarch@intel.com
Cc: dev@dpdk.org, yipeng1.wang@intel.com, honnappa.nagarahalli@arm.com, Dharmik.Thakkar@arm.com, nd@arm.com
Date: Wed, 10 Oct 2018 23:59:29 -0500
Message-Id: <1539233972-49860-5-git-send-email-honnappa.nagarahalli@arm.com>
Subject: [dpdk-dev] [PATCH v2 4/7] hash: add memory ordering to avoid race conditions

The only race condition that can occur is using the key store element before the key
write is completed. Hence, the release memory order is used while inserting the element. Any other race condition is caught by the key comparison. Memory orderings are added only where needed. For example, reads in the writer's context do not need memory ordering, as there is a single writer.

key_idx in the bucket entry and pdata in the key store element are used for synchronisation. key_idx is used to release an inserted entry in the bucket to the reader. Use of pdata for synchronisation is required to cover the update of an existing entry, wherein only pdata is updated without updating key_idx.

Signed-off-by: Honnappa Nagarahalli
Reviewed-by: Gavin Hu
Reviewed-by: Ola Liljedahl
Reviewed-by: Steve Capper
Reviewed-by: Yipeng Wang
---
 lib/librte_hash/rte_cuckoo_hash.c | 112 ++++++++++++++++++++++++++++----------
 1 file changed, 83 insertions(+), 29 deletions(-)

-- 
2.7.4

diff --git a/lib/librte_hash/rte_cuckoo_hash.c b/lib/librte_hash/rte_cuckoo_hash.c index f3e95f2..e2b0260 100644 --- a/lib/librte_hash/rte_cuckoo_hash.c +++ b/lib/librte_hash/rte_cuckoo_hash.c @@ -1,5 +1,6 @@ /* SPDX-License-Identifier: BSD-3-Clause * Copyright(c) 2010-2016 Intel Corporation + * Copyright(c) 2018 Arm Limited */ #include @@ -495,7 +496,9 @@ enqueue_slot_back(const struct rte_hash *h, rte_ring_sp_enqueue(h->free_slots, slot_id); } -/* Search a key from bucket and update its data */ +/* Search a key from bucket and update its data. + * Writer holds the lock before calling this. + */ static inline int32_t search_and_update(const struct rte_hash *h, void *data, const void *key, struct rte_hash_bucket *bkt, hash_sig_t sig, hash_sig_t alt_hash) @@ -509,8 +512,13 @@ search_and_update(const struct rte_hash *h, void *data, const void *key, k = (struct rte_hash_key *) ((char *)keys + bkt->key_idx[i] * h->key_entry_size); if (rte_hash_cmp_eq(key, k->key, h) == 0) { - /* Update data */ - k->pdata = data; + /* 'pdata' acts as the synchronization point + * when an existing hash entry is updated.
+ * Key is not updated in this case. + */ + __atomic_store_n(&k->pdata, + data, + __ATOMIC_RELEASE); /* * Return index where key is stored, * subtracting the first dummy index @@ -564,7 +572,15 @@ rte_hash_cuckoo_insert_mw(const struct rte_hash *h, if (likely(prim_bkt->key_idx[i] == EMPTY_SLOT)) { prim_bkt->sig_current[i] = sig; prim_bkt->sig_alt[i] = alt_hash; - prim_bkt->key_idx[i] = new_idx; + /* Key can be of arbitrary length, so it is + * not possible to store it atomically. + * Hence the new key element's memory stores + * (key as well as data) should be complete + * before it is referenced. + */ + __atomic_store_n(&prim_bkt->key_idx[i], + new_idx, + __ATOMIC_RELEASE); break; } } @@ -647,8 +663,10 @@ rte_hash_cuckoo_move_insert_mw(const struct rte_hash *h, prev_bkt->sig_current[prev_slot]; curr_bkt->sig_current[curr_slot] = prev_bkt->sig_alt[prev_slot]; - curr_bkt->key_idx[curr_slot] = - prev_bkt->key_idx[prev_slot]; + /* Release the updated bucket entry */ + __atomic_store_n(&curr_bkt->key_idx[curr_slot], + prev_bkt->key_idx[prev_slot], + __ATOMIC_RELEASE); curr_slot = prev_slot; curr_node = prev_node; @@ -657,7 +675,10 @@ rte_hash_cuckoo_move_insert_mw(const struct rte_hash *h, curr_bkt->sig_current[curr_slot] = sig; curr_bkt->sig_alt[curr_slot] = alt_hash; - curr_bkt->key_idx[curr_slot] = new_idx; + /* Release the new bucket entry */ + __atomic_store_n(&curr_bkt->key_idx[curr_slot], + new_idx, + __ATOMIC_RELEASE); __hash_rw_writer_unlock(h); @@ -788,8 +809,15 @@ __rte_hash_add_key_with_hash(const struct rte_hash *h, const void *key, new_idx = (uint32_t)((uintptr_t) slot_id); /* Copy key */ rte_memcpy(new_k->key, key, h->key_len); - new_k->pdata = data; - + /* Key can be of arbitrary length, so it is not possible to store + * it atomically. Hence the new key element's memory stores + * (key as well as data) should be complete before it is referenced. + * 'pdata' acts as the synchronization point when an existing hash + * entry is updated. 
+ */ + __atomic_store_n(&new_k->pdata, + data, + __ATOMIC_RELEASE); /* Find an empty slot and insert */ ret = rte_hash_cuckoo_insert_mw(h, prim_bkt, sec_bkt, key, data, @@ -875,21 +903,27 @@ search_one_bucket(const struct rte_hash *h, const void *key, hash_sig_t sig, void **data, const struct rte_hash_bucket *bkt) { int i; + uint32_t key_idx; + void *pdata; struct rte_hash_key *k, *keys = h->key_store; for (i = 0; i < RTE_HASH_BUCKET_ENTRIES; i++) { - if (bkt->sig_current[i] == sig && - bkt->key_idx[i] != EMPTY_SLOT) { + key_idx = __atomic_load_n(&bkt->key_idx[i], + __ATOMIC_ACQUIRE); + if (bkt->sig_current[i] == sig && key_idx != EMPTY_SLOT) { k = (struct rte_hash_key *) ((char *)keys + - bkt->key_idx[i] * h->key_entry_size); + key_idx * h->key_entry_size); + pdata = __atomic_load_n(&k->pdata, + __ATOMIC_ACQUIRE); + if (rte_hash_cmp_eq(key, k->key, h) == 0) { if (data != NULL) - *data = k->pdata; + *data = pdata; /* * Return index where key is stored, * subtracting the first dummy index */ - return bkt->key_idx[i] - 1; + return key_idx - 1; } } } @@ -988,21 +1022,25 @@ remove_entry(const struct rte_hash *h, struct rte_hash_bucket *bkt, unsigned i) } } -/* Search one bucket and remove the matched key */ +/* Search one bucket and remove the matched key. + * Writer is expected to hold the lock while calling this + * function. 
+ */ static inline int32_t search_and_remove(const struct rte_hash *h, const void *key, struct rte_hash_bucket *bkt, hash_sig_t sig) { struct rte_hash_key *k, *keys = h->key_store; unsigned int i; - int32_t ret; + uint32_t key_idx; /* Check if key is in primary location */ for (i = 0; i < RTE_HASH_BUCKET_ENTRIES; i++) { - if (bkt->sig_current[i] == sig && - bkt->key_idx[i] != EMPTY_SLOT) { + key_idx = __atomic_load_n(&bkt->key_idx[i], + __ATOMIC_ACQUIRE); + if (bkt->sig_current[i] == sig && key_idx != EMPTY_SLOT) { k = (struct rte_hash_key *) ((char *)keys + - bkt->key_idx[i] * h->key_entry_size); + key_idx * h->key_entry_size); if (rte_hash_cmp_eq(key, k->key, h) == 0) { bkt->sig_current[i] = NULL_SIGNATURE; bkt->sig_alt[i] = NULL_SIGNATURE; @@ -1012,13 +1050,14 @@ search_and_remove(const struct rte_hash *h, const void *key, if (h->recycle_on_del) remove_entry(h, bkt, i); + __atomic_store_n(&bkt->key_idx[i], + EMPTY_SLOT, + __ATOMIC_RELEASE); /* * Return index where key is stored, * subtracting the first dummy index */ - ret = bkt->key_idx[i] - 1; - bkt->key_idx[i] = EMPTY_SLOT; - return ret; + return key_idx - 1; } } } @@ -1202,6 +1241,7 @@ __rte_hash_lookup_bulk(const struct rte_hash *h, const void **keys, const struct rte_hash_bucket *secondary_bkt[RTE_HASH_LOOKUP_BULK_MAX]; uint32_t prim_hitmask[RTE_HASH_LOOKUP_BULK_MAX] = {0}; uint32_t sec_hitmask[RTE_HASH_LOOKUP_BULK_MAX] = {0}; + void *pdata[RTE_HASH_LOOKUP_BULK_MAX]; /* Prefetch first keys */ for (i = 0; i < PREFETCH_OFFSET && i < num_keys; i++) @@ -1271,18 +1311,25 @@ __rte_hash_lookup_bulk(const struct rte_hash *h, const void **keys, while (prim_hitmask[i]) { uint32_t hit_index = __builtin_ctzl(prim_hitmask[i]); - uint32_t key_idx = primary_bkt[i]->key_idx[hit_index]; + uint32_t key_idx = + __atomic_load_n( + &primary_bkt[i]->key_idx[hit_index], + __ATOMIC_ACQUIRE); const struct rte_hash_key *key_slot = (const struct rte_hash_key *)( (const char *)h->key_store + key_idx * h->key_entry_size); + + if 
(key_idx != EMPTY_SLOT) + pdata[i] = __atomic_load_n(&key_slot->pdata, + __ATOMIC_ACQUIRE); /* * If key index is 0, do not compare key, * as it is checking the dummy slot */ if (!!key_idx & !rte_hash_cmp_eq(key_slot->key, keys[i], h)) { if (data != NULL) - data[i] = key_slot->pdata; + data[i] = pdata[i]; hits |= 1ULL << i; positions[i] = key_idx - 1; @@ -1294,11 +1341,19 @@ __rte_hash_lookup_bulk(const struct rte_hash *h, const void **keys, while (sec_hitmask[i]) { uint32_t hit_index = __builtin_ctzl(sec_hitmask[i]); - uint32_t key_idx = secondary_bkt[i]->key_idx[hit_index]; + uint32_t key_idx = + __atomic_load_n( + &secondary_bkt[i]->key_idx[hit_index], + __ATOMIC_ACQUIRE); const struct rte_hash_key *key_slot = (const struct rte_hash_key *)( (const char *)h->key_store + key_idx * h->key_entry_size); + + if (key_idx != EMPTY_SLOT) + pdata[i] = __atomic_load_n(&key_slot->pdata, + __ATOMIC_ACQUIRE); + /* * If key index is 0, do not compare key, * as it is checking the dummy slot @@ -1306,7 +1361,7 @@ __rte_hash_lookup_bulk(const struct rte_hash *h, const void **keys, if (!!key_idx & !rte_hash_cmp_eq(key_slot->key, keys[i], h)) { if (data != NULL) - data[i] = key_slot->pdata; + data[i] = pdata[i]; hits |= 1ULL << i; positions[i] = key_idx - 1; @@ -1371,7 +1426,8 @@ rte_hash_iterate(const struct rte_hash *h, const void **key, void **data, uint32 idx = *next % RTE_HASH_BUCKET_ENTRIES; /* If current position is empty, go to the next one */ - while (h->buckets[bucket_idx].key_idx[idx] == EMPTY_SLOT) { + while ((position = __atomic_load_n(&h->buckets[bucket_idx].key_idx[idx], + __ATOMIC_ACQUIRE)) == EMPTY_SLOT) { (*next)++; /* End of table */ if (*next == total_entries) @@ -1380,8 +1436,6 @@ rte_hash_iterate(const struct rte_hash *h, const void **key, void **data, uint32 idx = *next % RTE_HASH_BUCKET_ENTRIES; } __hash_rw_reader_lock(h); - /* Get position of entry in key table */ - position = h->buckets[bucket_idx].key_idx[idx]; next_key = (struct rte_hash_key *) ((char 
*)h->key_store + position * h->key_entry_size); /* Return key and data */

From patchwork Thu Oct 11 04:59:30 2018
X-Patchwork-Submitter: Honnappa Nagarahalli
X-Patchwork-Id: 148602
From: Honnappa Nagarahalli
To: bruce.richardson@intel.com, pablo.de.lara.guarch@intel.com
Cc: dev@dpdk.org, yipeng1.wang@intel.com, honnappa.nagarahalli@arm.com, Dharmik.Thakkar@arm.com, nd@arm.com
Date: Wed, 10 Oct 2018 23:59:30 -0500
Message-Id: <1539233972-49860-6-git-send-email-honnappa.nagarahalli@arm.com>
Subject: [dpdk-dev] [PATCH v2 5/7] hash: fix rw concurrency while moving keys

The reader-writer concurrency issue, caused by moving the keys to their alternative locations
during key insert, is solved by introducing a global counter(tbl_chng_cnt) indicating a change in table. Signed-off-by: Honnappa Nagarahalli Reviewed-by: Gavin Hu Reviewed-by: Ola Liljedahl Reviewed-by: Steve Capper Reviewed-by: Yipeng Wang --- lib/librte_hash/rte_cuckoo_hash.c | 306 +++++++++++++++++++++++++------------- lib/librte_hash/rte_cuckoo_hash.h | 3 + 2 files changed, 209 insertions(+), 100 deletions(-) -- 2.7.4 diff --git a/lib/librte_hash/rte_cuckoo_hash.c b/lib/librte_hash/rte_cuckoo_hash.c index e2b0260..dfd5f2a 100644 --- a/lib/librte_hash/rte_cuckoo_hash.c +++ b/lib/librte_hash/rte_cuckoo_hash.c @@ -96,6 +96,7 @@ rte_hash_create(const struct rte_hash_parameters *params) unsigned int readwrite_concur_support = 0; unsigned int writer_takes_lock = 0; unsigned int recycle_on_del = 1; + uint32_t *tbl_chng_cnt = NULL; rte_hash_function default_hash_func = (rte_hash_function)rte_jhash; @@ -210,6 +211,14 @@ rte_hash_create(const struct rte_hash_parameters *params) goto err_unlock; } + tbl_chng_cnt = rte_zmalloc_socket(NULL, sizeof(uint32_t), + RTE_CACHE_LINE_SIZE, params->socket_id); + + if (tbl_chng_cnt == NULL) { + RTE_LOG(ERR, HASH, "memory allocation failed\n"); + goto err_unlock; + } + /* * If x86 architecture is used, select appropriate compare function, * which may use x86 intrinsics, otherwise use memcmp @@ -276,6 +285,8 @@ rte_hash_create(const struct rte_hash_parameters *params) default_hash_func : params->hash_func; h->key_store = k; h->free_slots = r; + h->tbl_chng_cnt = tbl_chng_cnt; + *h->tbl_chng_cnt = 0; h->hw_trans_mem_support = hw_trans_mem_support; h->multi_writer_support = multi_writer_support; h->readwrite_concur_support = readwrite_concur_support; @@ -321,6 +332,7 @@ rte_hash_create(const struct rte_hash_parameters *params) rte_free(h); rte_free(buckets); rte_free(k); + rte_free(tbl_chng_cnt); return NULL; } @@ -359,6 +371,7 @@ rte_hash_free(struct rte_hash *h) rte_ring_free(h->free_slots); rte_free(h->key_store); rte_free(h->buckets); 
+ rte_free(h->tbl_chng_cnt); rte_free(h); rte_free(te); } @@ -456,6 +469,7 @@ rte_hash_reset(struct rte_hash *h) __hash_rw_writer_lock(h); memset(h->buckets, 0, h->num_buckets * sizeof(struct rte_hash_bucket)); memset(h->key_store, 0, h->key_entry_size * (h->entries + 1)); + *h->tbl_chng_cnt = 0; /* clear the free ring */ while (rte_ring_dequeue(h->free_slots, &ptr) == 0) @@ -650,11 +664,27 @@ rte_hash_cuckoo_move_insert_mw(const struct rte_hash *h, if (unlikely(&h->buckets[prev_alt_bkt_idx] != curr_bkt)) { /* revert it to empty, otherwise duplicated keys */ - curr_bkt->key_idx[curr_slot] = EMPTY_SLOT; + __atomic_store_n(&curr_bkt->key_idx[curr_slot], + EMPTY_SLOT, + __ATOMIC_RELEASE); __hash_rw_writer_unlock(h); return -1; } + /* Inform the previous move. The current move need + * not be informed now as the current bucket entry + * is present in both primary and secondary. + * Since there is one writer, load acquires on + * tbl_chng_cnt are not required. + */ + __atomic_store_n(h->tbl_chng_cnt, + *h->tbl_chng_cnt + 1, + __ATOMIC_RELEASE); + /* The stores to sig_alt and sig_current should not + * move above the store to tbl_chng_cnt. + */ + __atomic_thread_fence(__ATOMIC_RELEASE); + /* Need to swap current/alt sig to allow later * Cuckoo insert to move elements back to its * primary bucket if available @@ -673,6 +703,20 @@ rte_hash_cuckoo_move_insert_mw(const struct rte_hash *h, curr_bkt = curr_node->bkt; } + /* Inform the previous move. The current move need + * not be informed now as the current bucket entry + * is present in both primary and secondary. + * Since there is one writer, load acquires on + * tbl_chng_cnt are not required. + */ + __atomic_store_n(h->tbl_chng_cnt, + *h->tbl_chng_cnt + 1, + __ATOMIC_RELEASE); + /* The stores to sig_alt and sig_current should not + * move above the store to tbl_chng_cnt. 
+ */ + __atomic_thread_fence(__ATOMIC_RELEASE); + curr_bkt->sig_current[curr_slot] = sig; curr_bkt->sig_alt[curr_slot] = alt_hash; /* Release the new bucket entry */ @@ -937,30 +981,56 @@ __rte_hash_lookup_with_hash(const struct rte_hash *h, const void *key, uint32_t bucket_idx; hash_sig_t alt_hash; struct rte_hash_bucket *bkt; + uint32_t cnt_b, cnt_a; int ret; - bucket_idx = sig & h->bucket_bitmask; - bkt = &h->buckets[bucket_idx]; - __hash_rw_reader_lock(h); - /* Check if key is in primary location */ - ret = search_one_bucket(h, key, sig, data, bkt); - if (ret != -1) { - __hash_rw_reader_unlock(h); - return ret; - } - /* Calculate secondary hash */ - alt_hash = rte_hash_secondary_hash(sig); - bucket_idx = alt_hash & h->bucket_bitmask; - bkt = &h->buckets[bucket_idx]; + do { + /* Load the table change counter before the lookup + * starts. Acquire semantics will make sure that + * loads in search_one_bucket are not hoisted. + */ + cnt_b = __atomic_load_n(h->tbl_chng_cnt, + __ATOMIC_ACQUIRE); + + bucket_idx = sig & h->bucket_bitmask; + bkt = &h->buckets[bucket_idx]; + + /* Check if key is in primary location */ + ret = search_one_bucket(h, key, sig, data, bkt); + if (ret != -1) { + __hash_rw_reader_unlock(h); + return ret; + } + /* Calculate secondary hash */ + alt_hash = rte_hash_secondary_hash(sig); + bucket_idx = alt_hash & h->bucket_bitmask; + bkt = &h->buckets[bucket_idx]; + + /* Check if key is in secondary location */ + ret = search_one_bucket(h, key, alt_hash, data, bkt); + if (ret != -1) { + __hash_rw_reader_unlock(h); + return ret; + } + + /* The loads of sig_current in search_one_bucket + * should not move below the load from tbl_chng_cnt. + */ + __atomic_thread_fence(__ATOMIC_ACQUIRE); + /* Re-read the table change counter to check if the + * table has changed during search. If yes, re-do + * the search. + * This load should not get hoisted. 
The load + * acquires on cnt_b, key index in primary bucket + * and key index in secondary bucket will make sure + * that it does not get hoisted. + */ + cnt_a = __atomic_load_n(h->tbl_chng_cnt, + __ATOMIC_ACQUIRE); + } while (cnt_b != cnt_a); - /* Check if key is in secondary location */ - ret = search_one_bucket(h, key, alt_hash, data, bkt); - if (ret != -1) { - __hash_rw_reader_unlock(h); - return ret; - } __hash_rw_reader_unlock(h); return -ENOENT; } @@ -1242,6 +1312,7 @@ __rte_hash_lookup_bulk(const struct rte_hash *h, const void **keys, uint32_t prim_hitmask[RTE_HASH_LOOKUP_BULK_MAX] = {0}; uint32_t sec_hitmask[RTE_HASH_LOOKUP_BULK_MAX] = {0}; void *pdata[RTE_HASH_LOOKUP_BULK_MAX]; + uint32_t cnt_b, cnt_a; /* Prefetch first keys */ for (i = 0; i < PREFETCH_OFFSET && i < num_keys; i++) @@ -1277,102 +1348,137 @@ __rte_hash_lookup_bulk(const struct rte_hash *h, const void **keys, } __hash_rw_reader_lock(h); - /* Compare signatures and prefetch key slot of first hit */ - for (i = 0; i < num_keys; i++) { - compare_signatures(&prim_hitmask[i], &sec_hitmask[i], + do { + /* Load the table change counter before the lookup + * starts. Acquire semantics will make sure that + * loads in compare_signatures are not hoisted. 
+ */ + cnt_b = __atomic_load_n(h->tbl_chng_cnt, + __ATOMIC_ACQUIRE); + + /* Compare signatures and prefetch key slot of first hit */ + for (i = 0; i < num_keys; i++) { + compare_signatures(&prim_hitmask[i], &sec_hitmask[i], primary_bkt[i], secondary_bkt[i], prim_hash[i], sec_hash[i], h->sig_cmp_fn); - if (prim_hitmask[i]) { - uint32_t first_hit = __builtin_ctzl(prim_hitmask[i]); - uint32_t key_idx = primary_bkt[i]->key_idx[first_hit]; - const struct rte_hash_key *key_slot = - (const struct rte_hash_key *)( - (const char *)h->key_store + - key_idx * h->key_entry_size); - rte_prefetch0(key_slot); - continue; - } + if (prim_hitmask[i]) { + uint32_t first_hit = + __builtin_ctzl(prim_hitmask[i]); + uint32_t key_idx = + primary_bkt[i]->key_idx[first_hit]; + const struct rte_hash_key *key_slot = + (const struct rte_hash_key *)( + (const char *)h->key_store + + key_idx * h->key_entry_size); + rte_prefetch0(key_slot); + continue; + } - if (sec_hitmask[i]) { - uint32_t first_hit = __builtin_ctzl(sec_hitmask[i]); - uint32_t key_idx = secondary_bkt[i]->key_idx[first_hit]; - const struct rte_hash_key *key_slot = - (const struct rte_hash_key *)( - (const char *)h->key_store + - key_idx * h->key_entry_size); - rte_prefetch0(key_slot); + if (sec_hitmask[i]) { + uint32_t first_hit = + __builtin_ctzl(sec_hitmask[i]); + uint32_t key_idx = + secondary_bkt[i]->key_idx[first_hit]; + const struct rte_hash_key *key_slot = + (const struct rte_hash_key *)( + (const char *)h->key_store + + key_idx * h->key_entry_size); + rte_prefetch0(key_slot); + } } - } - /* Compare keys, first hits in primary first */ - for (i = 0; i < num_keys; i++) { - positions[i] = -ENOENT; - while (prim_hitmask[i]) { - uint32_t hit_index = __builtin_ctzl(prim_hitmask[i]); + /* Compare keys, first hits in primary first */ + for (i = 0; i < num_keys; i++) { + positions[i] = -ENOENT; + while (prim_hitmask[i]) { + uint32_t hit_index = + __builtin_ctzl(prim_hitmask[i]); - uint32_t key_idx = - __atomic_load_n( - 
&primary_bkt[i]->key_idx[hit_index], - __ATOMIC_ACQUIRE); - const struct rte_hash_key *key_slot = - (const struct rte_hash_key *)( - (const char *)h->key_store + - key_idx * h->key_entry_size); - - if (key_idx != EMPTY_SLOT) - pdata[i] = __atomic_load_n(&key_slot->pdata, - __ATOMIC_ACQUIRE); - /* - * If key index is 0, do not compare key, - * as it is checking the dummy slot - */ - if (!!key_idx & !rte_hash_cmp_eq(key_slot->key, keys[i], h)) { - if (data != NULL) - data[i] = pdata[i]; + uint32_t key_idx = + __atomic_load_n( + &primary_bkt[i]->key_idx[hit_index], + __ATOMIC_ACQUIRE); + const struct rte_hash_key *key_slot = + (const struct rte_hash_key *)( + (const char *)h->key_store + + key_idx * h->key_entry_size); - hits |= 1ULL << i; - positions[i] = key_idx - 1; - goto next_key; + if (key_idx != EMPTY_SLOT) + pdata[i] = __atomic_load_n( + &key_slot->pdata, + __ATOMIC_ACQUIRE); + /* + * If key index is 0, do not compare key, + * as it is checking the dummy slot + */ + if (!!key_idx & + !rte_hash_cmp_eq( + key_slot->key, keys[i], h)) { + if (data != NULL) + data[i] = pdata[i]; + + hits |= 1ULL << i; + positions[i] = key_idx - 1; + goto next_key; + } + prim_hitmask[i] &= ~(1 << (hit_index)); } - prim_hitmask[i] &= ~(1 << (hit_index)); - } - while (sec_hitmask[i]) { - uint32_t hit_index = __builtin_ctzl(sec_hitmask[i]); + while (sec_hitmask[i]) { + uint32_t hit_index = + __builtin_ctzl(sec_hitmask[i]); - uint32_t key_idx = - __atomic_load_n( - &secondary_bkt[i]->key_idx[hit_index], - __ATOMIC_ACQUIRE); - const struct rte_hash_key *key_slot = - (const struct rte_hash_key *)( - (const char *)h->key_store + - key_idx * h->key_entry_size); - - if (key_idx != EMPTY_SLOT) - pdata[i] = __atomic_load_n(&key_slot->pdata, - __ATOMIC_ACQUIRE); - - /* - * If key index is 0, do not compare key, - * as it is checking the dummy slot - */ + uint32_t key_idx = + __atomic_load_n( + &secondary_bkt[i]->key_idx[hit_index], + __ATOMIC_ACQUIRE); + const struct rte_hash_key *key_slot = + 
(const struct rte_hash_key *)( + (const char *)h->key_store + + key_idx * h->key_entry_size); - if (!!key_idx & !rte_hash_cmp_eq(key_slot->key, keys[i], h)) { - if (data != NULL) - data[i] = pdata[i]; + if (key_idx != EMPTY_SLOT) + pdata[i] = __atomic_load_n( + &key_slot->pdata, + __ATOMIC_ACQUIRE); + /* + * If key index is 0, do not compare key, + * as it is checking the dummy slot + */ - hits |= 1ULL << i; - positions[i] = key_idx - 1; - goto next_key; + if (!!key_idx & + !rte_hash_cmp_eq( + key_slot->key, keys[i], h)) { + if (data != NULL) + data[i] = pdata[i]; + + hits |= 1ULL << i; + positions[i] = key_idx - 1; + goto next_key; + } + sec_hitmask[i] &= ~(1 << (hit_index)); } - sec_hitmask[i] &= ~(1 << (hit_index)); - } next_key: - continue; - } + continue; + } + + /* The loads of sig_current in compare_signatures + * should not move below the load from tbl_chng_cnt. + */ + __atomic_thread_fence(__ATOMIC_ACQUIRE); + /* Re-read the table change counter to check if the + * table has changed during search. If yes, re-do + * the search. + * This load should not get hoisted. The load + * acquires on cnt_b, primary key index and secondary + * key index will make sure that it does not get + * hoisted. + */ + cnt_a = __atomic_load_n(h->tbl_chng_cnt, + __ATOMIC_ACQUIRE); + } while (cnt_b != cnt_a); __hash_rw_reader_unlock(h); diff --git a/lib/librte_hash/rte_cuckoo_hash.h b/lib/librte_hash/rte_cuckoo_hash.h index a44c6be..cf50ada 100644 --- a/lib/librte_hash/rte_cuckoo_hash.h +++ b/lib/librte_hash/rte_cuckoo_hash.h @@ -1,5 +1,6 @@ /* SPDX-License-Identifier: BSD-3-Clause * Copyright(c) 2016 Intel Corporation + * Copyright(c) 2018 Arm Limited */ /* rte_cuckoo_hash.h @@ -196,6 +197,8 @@ struct rte_hash { * to the key table. */ rte_rwlock_t *readwrite_lock; /**< Read-write lock thread-safety. */ + uint32_t *tbl_chng_cnt; + /**< Indicates if the hash table changed from last read. 
*/ } __rte_cache_aligned; struct queue_node {

From patchwork Thu Oct 11 04:59:31 2018
X-Patchwork-Submitter: Honnappa Nagarahalli
X-Patchwork-Id: 148603
From: Honnappa Nagarahalli
To: bruce.richardson@intel.com, pablo.de.lara.guarch@intel.com
Cc: dev@dpdk.org, yipeng1.wang@intel.com, honnappa.nagarahalli@arm.com, Dharmik.Thakkar@arm.com, nd@arm.com
Date: Wed, 10 Oct 2018 23:59:31 -0500
Message-Id: <1539233972-49860-7-git-send-email-honnappa.nagarahalli@arm.com>
Subject: [dpdk-dev] [PATCH v2 6/7] hash: enable lock-free reader-writer concurrency

Add the flag to enable reader-writer concurrency during run time.
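The publish pattern this series relies on for lock-free readers can be sketched as follows (a minimal stand-alone model with hypothetical types, not the DPDK structures; index 0 plays the role of EMPTY_SLOT): the writer fully initialises the key-store element, then makes it visible with a release store to the bucket's key index, and readers load that index with acquire.

```c
#include <stdint.h>
#include <string.h>

struct key_slot {
	void *pdata;
	char key[16];
};

static struct key_slot key_store[8];
static uint32_t bucket_key_idx[4];	/* 0 == empty, like EMPTY_SLOT */

static void writer_insert(uint32_t bkt, uint32_t slot,
			  const char *key, void *data)
{
	/* Complete all stores to the key-store element first... */
	key_store[slot].pdata = data;
	strcpy(key_store[slot].key, key);
	/* ...then publish it: the release store keeps the stores
	 * above from being reordered past this point. */
	__atomic_store_n(&bucket_key_idx[bkt], slot, __ATOMIC_RELEASE);
}

static void *reader_lookup(uint32_t bkt, const char *key)
{
	/* Acquire pairs with the writer's release store. */
	uint32_t idx = __atomic_load_n(&bucket_key_idx[bkt],
				       __ATOMIC_ACQUIRE);
	if (idx == 0 || strcmp(key_store[idx].key, key) != 0)
		return NULL;
	return __atomic_load_n(&key_store[idx].pdata, __ATOMIC_ACQUIRE);
}

static void *demo_publish(void)
{
	writer_insert(1, 3, "abc", (void *)0x1);
	return reader_lookup(1, "abc");
}
```

A reader that observes the new index is thereby guaranteed to observe the completed key and pdata stores, which is why the delete path cannot immediately reuse the slot while lock-free readers may still hold it.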
The rte_hash_del_xxx APIs do not free the key store element when this flag is enabled. Hence a new API, rte_hash_free_key_with_position, is added to free the key store element. Signed-off-by: Honnappa Nagarahalli Reviewed-by: Gavin Hu Reviewed-by: Ola Liljedahl Reviewed-by: Steve Capper Reviewed-by: Yipeng Wang --- lib/librte_hash/rte_cuckoo_hash.c | 64 +++++++++++++++++++++--------------- lib/librte_hash/rte_cuckoo_hash.h | 2 ++ lib/librte_hash/rte_hash.h | 58 +++++++++++++++++++++++++++----- lib/librte_hash/rte_hash_version.map | 7 ++++ 4 files changed, 96 insertions(+), 35 deletions(-) -- 2.7.4 diff --git a/lib/librte_hash/rte_cuckoo_hash.c b/lib/librte_hash/rte_cuckoo_hash.c index dfd5f2a..1b13dd0 100644 --- a/lib/librte_hash/rte_cuckoo_hash.c +++ b/lib/librte_hash/rte_cuckoo_hash.c @@ -97,6 +97,7 @@ rte_hash_create(const struct rte_hash_parameters *params) unsigned int writer_takes_lock = 0; unsigned int recycle_on_del = 1; uint32_t *tbl_chng_cnt = NULL; + unsigned int readwrite_concur_lf_support = 0; rte_hash_function default_hash_func = (rte_hash_function)rte_jhash; @@ -133,6 +134,12 @@ rte_hash_create(const struct rte_hash_parameters *params) if (params->extra_flag & RTE_HASH_EXTRA_FLAGS_RECYCLE_ON_DEL) recycle_on_del = 0; + if (params->extra_flag & RTE_HASH_EXTRA_FLAGS_RW_CONCURRENCY_LF) { + readwrite_concur_lf_support = 1; + /* Disable freeing internal memory/index on delete */ + recycle_on_del = 0; + } + /* Store all keys and leave the first entry as a dummy entry for lookup_bulk */ if (multi_writer_support) /* @@ -292,6 +299,7 @@ rte_hash_create(const struct rte_hash_parameters *params) h->readwrite_concur_support = readwrite_concur_support; h->writer_takes_lock = writer_takes_lock; h->recycle_on_del = recycle_on_del; + h->readwrite_concur_lf_support = readwrite_concur_lf_support; #if defined(RTE_ARCH_X86) if (rte_cpu_get_flag_enabled(RTE_CPUFLAG_AVX2)) @@ -671,19 +679,21 @@ rte_hash_cuckoo_move_insert_mw(const struct rte_hash *h, return -1; } - /* 
Inform the previous move. The current move need - * not be informed now as the current bucket entry - * is present in both primary and secondary. - * Since there is one writer, load acquires on - * tbl_chng_cnt are not required. - */ - __atomic_store_n(h->tbl_chng_cnt, - *h->tbl_chng_cnt + 1, - __ATOMIC_RELEASE); - /* The stores to sig_alt and sig_current should not - * move above the store to tbl_chng_cnt. - */ - __atomic_thread_fence(__ATOMIC_RELEASE); + if (h->readwrite_concur_lf_support) { + /* Inform the previous move. The current move need + * not be informed now as the current bucket entry + * is present in both primary and secondary. + * Since there is one writer, load acquires on + * tbl_chng_cnt are not required. + */ + __atomic_store_n(h->tbl_chng_cnt, + *h->tbl_chng_cnt + 1, + __ATOMIC_RELEASE); + /* The stores to sig_alt and sig_current should not + * move above the store to tbl_chng_cnt. + */ + __atomic_thread_fence(__ATOMIC_RELEASE); + } /* Need to swap current/alt sig to allow later * Cuckoo insert to move elements back to its @@ -703,19 +713,21 @@ rte_hash_cuckoo_move_insert_mw(const struct rte_hash *h, curr_bkt = curr_node->bkt; } - /* Inform the previous move. The current move need - * not be informed now as the current bucket entry - * is present in both primary and secondary. - * Since there is one writer, load acquires on - * tbl_chng_cnt are not required. - */ - __atomic_store_n(h->tbl_chng_cnt, - *h->tbl_chng_cnt + 1, - __ATOMIC_RELEASE); - /* The stores to sig_alt and sig_current should not - * move above the store to tbl_chng_cnt. - */ - __atomic_thread_fence(__ATOMIC_RELEASE); + if (h->readwrite_concur_lf_support) { + /* Inform the previous move. The current move need + * not be informed now as the current bucket entry + * is present in both primary and secondary. + * Since there is one writer, load acquires on + * tbl_chng_cnt are not required. 
+ */ + __atomic_store_n(h->tbl_chng_cnt, + *h->tbl_chng_cnt + 1, + __ATOMIC_RELEASE); + /* The stores to sig_alt and sig_current should not + * move above the store to tbl_chng_cnt. + */ + __atomic_thread_fence(__ATOMIC_RELEASE); + } curr_bkt->sig_current[curr_slot] = sig; curr_bkt->sig_alt[curr_slot] = alt_hash; diff --git a/lib/librte_hash/rte_cuckoo_hash.h b/lib/librte_hash/rte_cuckoo_hash.h index cf50ada..2e05d08 100644 --- a/lib/librte_hash/rte_cuckoo_hash.h +++ b/lib/librte_hash/rte_cuckoo_hash.h @@ -177,6 +177,8 @@ struct rte_hash { * the deleted entry. * This flag is enabled by default. */ + uint8_t readwrite_concur_lf_support; + /**< If read-write concurrency lock free support is enabled */ uint8_t writer_takes_lock; /**< Indicates if the writer threads need to take lock */ rte_hash_function hash_func; /**< Function used to calculate hash. */ diff --git a/lib/librte_hash/rte_hash.h b/lib/librte_hash/rte_hash.h index dd59cb0..fb88510 100644 --- a/lib/librte_hash/rte_hash.h +++ b/lib/librte_hash/rte_hash.h @@ -41,9 +41,14 @@ extern "C" { /** Flag to disable freeing of internal memory/indices on hash delete. * Refer to rte_hash_del_xxx APIs for more details. + * This is enabled by default when RTE_HASH_EXTRA_FLAGS_RW_CONCURRENCY_LF + * is enabled. */ #define RTE_HASH_EXTRA_FLAGS_RECYCLE_ON_DEL 0x08 +/** Flag to support lock free reader writer concurrency */ +#define RTE_HASH_EXTRA_FLAGS_RW_CONCURRENCY_LF 0x10 + /** Signature of key that is stored internally. */ typedef uint32_t hash_sig_t; @@ -126,7 +131,11 @@ void rte_hash_free(struct rte_hash *h); /** - * Reset all hash structure, by zeroing all entries + * Reset all hash structure, by zeroing all entries. + * When RTE_HASH_EXTRA_FLAGS_RW_CONCURRENCY_LF is enabled, + * it is application's responsibility to make sure that + * none of the readers are referencing the hash table. 
+ * * @param h * Hash table to reset */ @@ -150,6 +159,12 @@ rte_hash_count(const struct rte_hash *h); * and should only be called from one thread by default. * Thread safety can be enabled by setting flag during * table creation. + * When RTE_HASH_EXTRA_FLAGS_RW_CONCURRENCY_LF is enabled, + * the writer needs to be aware if this API is called to update + * an existing entry. The application should free any memory + * allocated for the existing 'data' only after all the readers + * have stopped referencing it. RCU mechanisms can be used to + * determine such a state. * * @param h * Hash table to add the key to. @@ -172,6 +187,12 @@ rte_hash_add_key_data(const struct rte_hash *h, const void *key, void *data); * and should only be called from one thread by default. * Thread safety can be enabled by setting flag during * table creation. + * When RTE_HASH_EXTRA_FLAGS_RW_CONCURRENCY_LF is enabled, + * the writer needs to be aware if this API is called to update + * an existing entry. The application should free any memory + * allocated for the existing 'data' only after all the readers + * have stopped referencing it. RCU mechanisms can be used to + * determine such a state. * * @param h * Hash table to add the key to. @@ -237,10 +258,15 @@ rte_hash_add_key_with_hash(const struct rte_hash *h, const void *key, hash_sig_t * and should only be called from one thread by default. * Thread safety can be enabled by setting flag during * table creation. - * If RTE_HASH_EXTRA_FLAGS_RECYCLE_ON_DEL is enabled, - * the hash library's internal memory/index will not be freed by this + * If RTE_HASH_EXTRA_FLAGS_RECYCLE_ON_DEL or + * RTE_HASH_EXTRA_FLAGS_RW_CONCURRENCY_LF is enabled, + * the hash library's internal memory will not be freed by this * API. rte_hash_free_key_with_position API must be called additionally - * to free the internal memory/index associated with the key. + * to free any internal memory associated with the key. 
+ * If RTE_HASH_EXTRA_FLAGS_RW_CONCURRENCY_LF is enabled, + * rte_hash_free_key_with_position API should be called after all + * the readers have stopped referencing the entry corresponding to + * this key. RCU mechanisms can be used to determine such a state. * * @param h * Hash table to remove the key from. @@ -252,6 +278,8 @@ rte_hash_add_key_with_hash(const struct rte_hash *h, const void *key, hash_sig_t * - A positive value that can be used by the caller as an offset into an * array of user data. This value is unique for this key, and is the same * value that was returned when the key was added. + * When lock free concurrency is enabled, this value should be used + * while calling the rte_hash_free_key_with_position API. */ int32_t rte_hash_del_key(const struct rte_hash *h, const void *key); @@ -262,10 +290,15 @@ rte_hash_del_key(const struct rte_hash *h, const void *key); * and should only be called from one thread by default. * Thread safety can be enabled by setting flag during * table creation. - * If RTE_HASH_EXTRA_FLAGS_RECYCLE_ON_DEL is enabled, - * the hash library's internal memory/index will not be freed by this + * If RTE_HASH_EXTRA_FLAGS_RECYCLE_ON_DEL or + * RTE_HASH_EXTRA_FLAGS_RW_CONCURRENCY_LF is enabled, + * the hash library's internal memory will not be freed by this * API. rte_hash_free_key_with_position API must be called additionally - * to free the internal memory/index associated with the key. + * to free any internal memory associated with the key. + * If RTE_HASH_EXTRA_FLAGS_RW_CONCURRENCY_LF is enabled, + * rte_hash_free_key_with_position API should be called after all + * the readers have stopped referencing the entry corresponding to + * this key. RCU mechanisms can be used to determine such a state. * * @param h * Hash table to remove the key from. @@ -279,6 +312,8 @@ rte_hash_del_key(const struct rte_hash *h, const void *key); * - A positive value that can be used by the caller as an offset into an * array of user data. 
This value is unique for this key, and is the same * value that was returned when the key was added. + * When lock free concurrency is enabled, this value should be used + * while calling the rte_hash_free_key_with_position API. */ int32_t rte_hash_del_key_with_hash(const struct rte_hash *h, const void *key, hash_sig_t sig); @@ -312,10 +347,15 @@ rte_hash_get_key_with_position(const struct rte_hash *h, const int32_t position, * of the key. This operation is not multi-thread safe and should * only be called from one thread by default. Thread safety * can be enabled by setting flag during table creation. - * If RTE_HASH_EXTRA_FLAGS_RECYCLE_ON_DEL is enabled, - * the hash library's internal memory/index must be freed using this API + * If RTE_HASH_EXTRA_FLAGS_RECYCLE_ON_DEL or + * RTE_HASH_EXTRA_FLAGS_RW_CONCURRENCY_LF is enabled, + * the hash library's internal memory must be freed using this API * after the key is deleted using rte_hash_del_key_xxx APIs. * This API does not validate if the key is already freed. + * If RTE_HASH_EXTRA_FLAGS_RW_CONCURRENCY_LF is enabled, + * this API should be called only after all the readers have stopped + * referencing the entry corresponding to this key. RCU mechanisms can + * be used to determine such a state. * * @param h * Hash table to free the key from. 
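The doc-comment changes above describe how readers coexist with a writer that moves keys between buckets: the writer bumps a table change counter before relocating a key, and a reader that misses re-checks the counter and retries. The following stand-alone C11 sketch models that change-counter handshake. It is an illustration under stated assumptions, not the DPDK implementation: tbl_chng_cnt and slot are simplified stand-ins for the table change counter and a bucket signature entry, and writer_move_key/reader_lookup are hypothetical names.

```c
/* Stand-alone model of the change-counter protocol used with
 * RTE_HASH_EXTRA_FLAGS_RW_CONCURRENCY_LF. Illustrative only: tbl_chng_cnt
 * and slot stand in for the table change counter and a bucket signature
 * entry of the real library. */
#include <stdatomic.h>
#include <stdint.h>

static _Atomic uint32_t tbl_chng_cnt; /* bumped once per key move */
static _Atomic uint32_t slot;         /* stands in for sig_current/sig_alt */

/* Writer: publish the pending move first, then move the key. The release
 * fence keeps the slot store from being observed before the counter bump,
 * mirroring the fence added in the patch. */
static void writer_move_key(uint32_t new_sig)
{
	atomic_store_explicit(&tbl_chng_cnt,
	    atomic_load_explicit(&tbl_chng_cnt, memory_order_relaxed) + 1,
	    memory_order_release);
	atomic_thread_fence(memory_order_release);
	atomic_store_explicit(&slot, new_sig, memory_order_relaxed);
}

/* Reader: snapshot the counter, probe, then re-check the counter. A
 * changed counter means a key may have been in flight between buckets,
 * so probe again instead of reporting a miss. */
static uint32_t reader_lookup(void)
{
	uint32_t cnt_before, cnt_after, sig;

	do {
		cnt_before = atomic_load_explicit(&tbl_chng_cnt,
						  memory_order_acquire);
		sig = atomic_load_explicit(&slot, memory_order_acquire);
		cnt_after = atomic_load_explicit(&tbl_chng_cnt,
						  memory_order_acquire);
	} while (cnt_before != cnt_after);
	return sig;
}
```

In the real library, the probe between the two counter loads walks both the primary and secondary buckets; only when both miss and the counter is unchanged can the reader safely report that the key is absent.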
diff --git a/lib/librte_hash/rte_hash_version.map b/lib/librte_hash/rte_hash_version.map index e216ac8..734ae28 100644 --- a/lib/librte_hash/rte_hash_version.map +++ b/lib/librte_hash/rte_hash_version.map @@ -53,3 +53,10 @@ DPDK_18.08 { rte_hash_count; } DPDK_16.07; + +EXPERIMENTAL { + global: + + rte_hash_free_key_with_position; + +}; From patchwork Thu Oct 11 04:59:32 2018 X-Patchwork-Submitter: Honnappa Nagarahalli X-Patchwork-Id: 148604 From: Honnappa Nagarahalli To: bruce.richardson@intel.com, pablo.de.lara.guarch@intel.com Cc: dev@dpdk.org, yipeng1.wang@intel.com, honnappa.nagarahalli@arm.com, Dharmik.Thakkar@arm.com, nd@arm.com, Dharmik Thakkar Date: Wed, 10 Oct 2018 23:59:32 -0500 Message-Id: <1539233972-49860-8-git-send-email-honnappa.nagarahalli@arm.com> In-Reply-To: <1539233972-49860-1-git-send-email-honnappa.nagarahalli@arm.com> References: <1539233972-49860-1-git-send-email-honnappa.nagarahalli@arm.com> Subject: [dpdk-dev] [PATCH v2 7/7] test/hash: read-write lock-free concurrency test From: Dharmik Thakkar Unit tests to check for hash lookup perf with lock-free enabled and with lock-free disabled. Unit tests performed with readers running in parallel with writers. Tests include: - hash lookup on existing keys with: - hash add causing NO key-shifts of existing keys in the table - hash lookup on existing keys likely to be on shift-path with: - hash add causing key-shifts of existing keys in the table - hash lookup on existing keys NOT likely to be on shift-path with: - hash add causing key-shifts of existing keys in the table - hash lookup on non-existing keys with: - hash add causing NO key-shifts of existing keys in the table - hash add causing key-shifts of existing keys in the table - hash lookup on keys likely to be on shift-path with: - multiple writers causing key-shifts of existing keys in the table Signed-off-by: Dharmik Thakkar Reviewed-by: Honnappa Nagarahalli Reviewed-by: Gavin Hu --- test/test/Makefile | 1 + test/test/meson.build | 1 + test/test/test_hash_readwrite_lf.c | 1084 ++++++++++++++++++++++++++++++++++++ 3 files changed, 1086 insertions(+) create mode 100644 test/test/test_hash_readwrite_lf.c -- 2.7.4 diff --git a/test/test/Makefile b/test/test/Makefile index e6967ba..068ed72 100644 --- a/test/test/Makefile +++ b/test/test/Makefile @@ -115,6 +115,7 @@ SRCS-$(CONFIG_RTE_LIBRTE_HASH) += test_hash_functions.c SRCS-$(CONFIG_RTE_LIBRTE_HASH) += test_hash_scaling.c SRCS-$(CONFIG_RTE_LIBRTE_HASH) += test_hash_multiwriter.c SRCS-$(CONFIG_RTE_LIBRTE_HASH) += test_hash_readwrite.c +SRCS-$(CONFIG_RTE_LIBRTE_HASH) += test_hash_readwrite_lf.c SRCS-$(CONFIG_RTE_LIBRTE_LPM) += test_lpm.c SRCS-$(CONFIG_RTE_LIBRTE_LPM) += test_lpm_perf.c diff --git a/test/test/meson.build b/test/test/meson.build index b1dd6ec..366d9a7 100644 --- a/test/test/meson.build +++ 
b/test/test/meson.build @@ -41,6 +41,7 @@ test_sources = files('commands.c', 'test_hash_functions.c', 'test_hash_multiwriter.c', 'test_hash_perf.c', + 'test_hash_readwrite_lf.c', 'test_hash_scaling.c', 'test_interrupts.c', 'test_kni.c', diff --git a/test/test/test_hash_readwrite_lf.c b/test/test/test_hash_readwrite_lf.c new file mode 100644 index 0000000..841e989 --- /dev/null +++ b/test/test/test_hash_readwrite_lf.c @@ -0,0 +1,1084 @@ +/* SPDX-License-Identifier: BSD-3-Clause + * Copyright(c) 2018 Arm Limited + */ + +#include <inttypes.h> +#include <locale.h> + +#include <rte_cycles.h> +#include <rte_hash.h> +#include <rte_hash_crc.h> +#include <rte_jhash.h> +#include <rte_launch.h> +#include <rte_malloc.h> +#include <rte_random.h> +#include <rte_spinlock.h> + +#include "test.h" + +#ifndef RTE_HASH_EXTRA_FLAGS_RW_CONCURRENCY_LF +#define RTE_HASH_EXTRA_FLAGS_RW_CONCURRENCY_LF 0 +#endif + +#define RUN_WITH_HTM_DISABLED 0 + +#if (RUN_WITH_HTM_DISABLED) + +#define TOTAL_ENTRY (5*1024) +#define TOTAL_INSERT (5*1024) + +#else + +#define TOTAL_ENTRY (16*1024*1024) +#define TOTAL_INSERT (16*1024*1024) + +#endif + +#define READ_FAIL 0 +#define READ_PASS_NO_KEY_SHIFTS 1 +#define READ_PASS_SHIFT_PATH 2 +#define READ_PASS_NON_SHIFT_PATH 3 + +#define NUM_TEST 3 +unsigned int rwc_core_cnt[NUM_TEST] = {1, 2, 4}; + +struct rwc_perf { + uint32_t w_no_ks_r_pass[NUM_TEST]; + uint32_t w_no_ks_r_fail[NUM_TEST]; + uint32_t w_ks_r_pass_nsp[NUM_TEST]; + uint32_t w_ks_r_pass_sp[NUM_TEST]; + uint32_t w_ks_r_fail[NUM_TEST]; + uint32_t multi_rw[NUM_TEST - 1][NUM_TEST]; +}; + +static struct rwc_perf rwc_lf_results, rwc_non_lf_results; + +struct { + uint32_t *keys; + uint32_t *keys_no_ks; + uint32_t *keys_ks; + uint32_t *keys_absent; + uint32_t *keys_shift_path; + uint32_t *keys_non_shift_path; + uint32_t count_keys_no_ks; + uint32_t count_keys_ks; + uint32_t count_keys_absent; + uint32_t count_keys_shift_path; + uint32_t count_keys_non_shift_path; + uint32_t single_insert; + struct rte_hash *h; +} tbl_rwc_test_param; + +static rte_atomic64_t gread_cycles; +static rte_atomic64_t greads; + +static volatile uint8_t writer_done; +static 
volatile uint8_t multi_writer_done[4]; +uint8_t num_test; +uint8_t htm; + +uint16_t enabled_core_ids[RTE_MAX_LCORE]; + +uint8_t *scanned_bkts; + +static inline int +get_enabled_cores_list(void) +{ + uint32_t i = 0; + uint16_t core_id; + uint32_t max_cores = rte_lcore_count(); + for (core_id = 0; core_id < RTE_MAX_LCORE && i < max_cores; core_id++) { + if (rte_lcore_is_enabled(core_id)) { + enabled_core_ids[i] = core_id; + i++; + } + } + + if (i != rte_lcore_count()) { + printf("Number of enabled cores in list is different from " + "number given by rte_lcore_count()\n"); + return -1; + } + return 0; +} + +static inline int +check_bucket(uint32_t bkt_idx, uint32_t key) +{ + uint32_t iter; + uint32_t prev_iter; + uint32_t diff; + uint32_t count = 0; + const void *next_key; + void *next_data; + + /* Temporary bucket to hold the keys */ + uint32_t keys_in_bkt[8]; + + iter = bkt_idx * 8; + prev_iter = iter; + while (rte_hash_iterate(tbl_rwc_test_param.h, + &next_key, &next_data, &iter) >= 0) { + + /* Check for duplicate entries */ + if (*(const uint32_t *)next_key == key) + return 1; + + /* Identify if there is any free entry in the bucket */ + diff = iter - prev_iter; + if (diff > 1) + break; + + prev_iter = iter; + keys_in_bkt[count] = *(const uint32_t *)next_key; + count++; + + /* All entries in the bucket are occupied */ + if (count == 8) { + + /* + * Check if bucket was not scanned before, to avoid + * duplicate keys. + */ + if (scanned_bkts[bkt_idx] == 0) { + /* + * Since this bucket (pointed to by bkt_idx) is + * full, it is likely that key(s) in this + * bucket will be on the shift path, when + * collision occurs. Thus, add it to + * keys_shift_path. 
+ */ + memcpy(tbl_rwc_test_param.keys_shift_path + + tbl_rwc_test_param.count_keys_shift_path + , keys_in_bkt, 32); + tbl_rwc_test_param.count_keys_shift_path += 8; + scanned_bkts[bkt_idx] = 1; + } + return -1; + } + } + return 0; +} + +static int +generate_keys(void) +{ + uint32_t *keys = NULL; + uint32_t *keys_no_ks = NULL; + uint32_t *keys_ks = NULL; + uint32_t *keys_absent = NULL; + uint32_t *keys_non_shift_path = NULL; + uint32_t *found = NULL; + uint32_t count_keys_no_ks = 0; + uint32_t count_keys_ks = 0; + uint32_t i; + + /* + * keys will consist of a) keys whose addition to the hash table + * will result in shifting of the existing keys to their alternate + * locations b) keys whose addition to the hash table will not result + * in shifting of the existing keys. + */ + keys = rte_malloc(NULL, sizeof(uint32_t) * TOTAL_INSERT, 0); + if (keys == NULL) { + printf("RTE_MALLOC failed\n"); + goto err; + } + + /* + * keys_no_ks (no key-shifts): Subset of 'keys' - consists of keys that + * will NOT result in shifting of the existing keys to their alternate + * locations. Roughly around 900K keys. + */ + keys_no_ks = rte_malloc(NULL, sizeof(uint32_t) * TOTAL_INSERT, 0); + if (keys_no_ks == NULL) { + printf("RTE_MALLOC failed\n"); + goto err; + } + + /* + * keys_ks (key-shifts): Subset of 'keys' - consists of keys that will + * result in shifting of the existing keys to their alternate locations. + * Roughly around 146K keys. There might be repeating keys. More code is + * required to filter out these keys which will complicate the test case + */ + keys_ks = rte_malloc(NULL, sizeof(uint32_t) * TOTAL_INSERT, 0); + if (keys_ks == NULL) { + printf("RTE_MALLOC failed\n"); + goto err; + } + + /* Used to identify keys not inserted in the hash table */ + found = rte_zmalloc(NULL, sizeof(uint32_t) * TOTAL_INSERT, 0); + if (found == NULL) { + printf("RTE_MALLOC failed\n"); + goto err; + } + + /* + * This consists of keys not inserted into the hash table. + * Used to test perf of lookup on keys that do not exist in the table. + */ + keys_absent = rte_malloc(NULL, sizeof(uint32_t) * TOTAL_INSERT, 0); + if (keys_absent == NULL) { + printf("RTE_MALLOC failed\n"); + goto err; + } + + /* + * This consists of keys which are likely to be on the shift + * path (i.e. being moved to alternate location), when collision occurs + * on addition of a key to an already full primary bucket. + * Used to test perf of lookup on keys that are on the shift path. + */ + tbl_rwc_test_param.keys_shift_path = rte_malloc(NULL, sizeof(uint32_t) * + TOTAL_INSERT, 0); + if (tbl_rwc_test_param.keys_shift_path == NULL) { + printf("RTE_MALLOC failed\n"); + goto err; + } + + /* + * This consists of keys which are never on the shift + * path (i.e. being moved to alternate location), when collision occurs + * on addition of a key to an already full primary bucket. + * Used to test perf of lookup on keys that are not on the shift path. + */ + keys_non_shift_path = rte_malloc(NULL, sizeof(uint32_t) * TOTAL_INSERT, + 0); + if (keys_non_shift_path == NULL) { + printf("RTE_MALLOC failed\n"); + goto err; + } + + /* + * Used to mark bkts in which at least one key was shifted to its + * alternate location + */ + scanned_bkts = rte_malloc(NULL, sizeof(uint8_t) * TOTAL_INSERT / 8, 0); + if (scanned_bkts == NULL) { + printf("RTE_MALLOC failed\n"); + goto err; + } + + tbl_rwc_test_param.keys = keys; + tbl_rwc_test_param.keys_no_ks = keys_no_ks; + tbl_rwc_test_param.keys_ks = keys_ks; + tbl_rwc_test_param.keys_absent = keys_absent; + tbl_rwc_test_param.keys_non_shift_path = keys_non_shift_path; + + hash_sig_t sig; + uint32_t prim_bucket_idx; + int ret; + uint32_t num_buckets; + uint32_t bucket_bitmask; + num_buckets = TOTAL_ENTRY/8; + bucket_bitmask = num_buckets - 1; + + /* Generate keys by adding previous two keys, neglect overflow */ + keys[0] = 0; + keys[1] = 1; + for (i = 2; i < TOTAL_INSERT; i++) + keys[i] = keys[i-1] + keys[i-2]; + + /* Segregate keys into 
keys_no_ks and keys_ks */ + for (i = 0; i < TOTAL_INSERT; i++) { + /* Check if primary bucket has space.*/ + sig = rte_hash_hash(tbl_rwc_test_param.h, + tbl_rwc_test_param.keys+i); + prim_bucket_idx = sig & bucket_bitmask; + ret = check_bucket(prim_bucket_idx, keys[i]); + if (ret < 0) { + /* + * Primary bucket is full, this key will result in + * shifting of the keys to their alternate locations. + */ + keys_ks[count_keys_ks] = keys[i]; + count_keys_ks++; + } else if (ret == 0) { + /* + * Primary bucket has space, this key will not result in + * shifting of the keys. Hence, add key to the table. + */ + ret = rte_hash_add_key_data(tbl_rwc_test_param.h, + keys+i, + (void *)((uintptr_t)i)); + if (ret < 0) { + printf("writer failed %"PRIu32"\n", i); + break; + } + keys_no_ks[count_keys_no_ks] = keys[i]; + count_keys_no_ks++; + } + } + + for (i = 0; i < count_keys_no_ks; i++) { + /* Identify keys in keys_no_ks with value less than 1M */ + if (keys_no_ks[i] < TOTAL_INSERT) + found[keys_no_ks[i]]++; + } + + for (i = 0; i < count_keys_ks; i++) { + /* Identify keys in keys_ks with value less than 1M */ + if (keys_ks[i] < TOTAL_INSERT) + found[keys_ks[i]]++; + } + + uint32_t count_keys_absent = 0; + for (i = 0; i < TOTAL_INSERT; i++) { + /* Identify missing keys between 0 and 1M */ + if (found[i] == 0) + keys_absent[count_keys_absent++] = i; + } + + /* Find keys that will not be on the shift path */ + uint32_t iter; + const void *next_key; + void *next_data; + uint32_t count = 0; + for (i = 0; i < TOTAL_INSERT / 8; i++) { + /* Check bucket for no keys shifted to alternate locations */ + if (scanned_bkts[i] == 0) { + iter = i * 8; + while (rte_hash_iterate(tbl_rwc_test_param.h, + &next_key, &next_data, &iter) >= 0) { + + /* Check if key belongs to the current bucket */ + if (i >= (iter-1)/8) + keys_non_shift_path[count++] + = *(const uint32_t *)next_key; + else + break; + } + } + } + + tbl_rwc_test_param.count_keys_no_ks = count_keys_no_ks; + tbl_rwc_test_param.count_keys_ks 
= count_keys_ks; + tbl_rwc_test_param.count_keys_absent = count_keys_absent; + tbl_rwc_test_param.count_keys_non_shift_path = count; + + printf("\nCount of keys NOT causing shifting of existing keys to " + "alternate location: %d\n", tbl_rwc_test_param.count_keys_no_ks); + printf("\nCount of keys causing shifting of existing keys to alternate " + "locations: %d\n\n", tbl_rwc_test_param.count_keys_ks); + printf("Count of absent keys that will never be added to the hash " + "table: %d\n\n", tbl_rwc_test_param.count_keys_absent); + printf("Count of keys likely to be on the shift path: %d\n\n", + tbl_rwc_test_param.count_keys_shift_path); + printf("Count of keys not likely to be on the shift path: %d\n\n", + tbl_rwc_test_param.count_keys_non_shift_path); + + rte_free(found); + rte_hash_free(tbl_rwc_test_param.h); + return 0; + +err: + rte_free(keys); + rte_free(keys_no_ks); + rte_free(keys_ks); + rte_free(keys_absent); + rte_free(found); + rte_free(tbl_rwc_test_param.keys_shift_path); + rte_free(scanned_bkts); + return -1; +} + +static int +init_params(int rwc_lf, int use_jhash) +{ + struct rte_hash *handle; + + struct rte_hash_parameters hash_params = { + .entries = TOTAL_ENTRY, + .key_len = sizeof(uint32_t), + .hash_func_init_val = 0, + .socket_id = rte_socket_id(), + }; + + if (use_jhash) + hash_params.hash_func = rte_jhash; + else + hash_params.hash_func = rte_hash_crc; + + if (rwc_lf) + hash_params.extra_flag = + RTE_HASH_EXTRA_FLAGS_RW_CONCURRENCY_LF | + RTE_HASH_EXTRA_FLAGS_MULTI_WRITER_ADD; + else if (htm) + hash_params.extra_flag = + RTE_HASH_EXTRA_FLAGS_TRANS_MEM_SUPPORT | + RTE_HASH_EXTRA_FLAGS_RW_CONCURRENCY | + RTE_HASH_EXTRA_FLAGS_MULTI_WRITER_ADD; + else + hash_params.extra_flag = + RTE_HASH_EXTRA_FLAGS_RW_CONCURRENCY | + RTE_HASH_EXTRA_FLAGS_MULTI_WRITER_ADD; + + hash_params.name = "tests"; + + handle = rte_hash_create(&hash_params); + if (handle == NULL) { + printf("hash creation failed"); + return -1; + } + + tbl_rwc_test_param.h = handle; + return 0; 
+} + +static int +test_rwc_reader(__attribute__((unused)) void *arg) +{ + uint32_t i; + int ret; + uint64_t begin, cycles; + uint32_t loop_cnt = 0; + uint8_t read_type = (uint8_t)((uintptr_t)arg); + uint32_t read_cnt; + uint32_t *keys; + + if (read_type == READ_FAIL) { + keys = tbl_rwc_test_param.keys_absent; + read_cnt = tbl_rwc_test_param.count_keys_absent; + } else if (read_type == READ_PASS_NO_KEY_SHIFTS) { + keys = tbl_rwc_test_param.keys_no_ks; + read_cnt = tbl_rwc_test_param.count_keys_no_ks; + } else if (read_type == READ_PASS_SHIFT_PATH) { + keys = tbl_rwc_test_param.keys_shift_path; + read_cnt = tbl_rwc_test_param.count_keys_shift_path; + } else { + keys = tbl_rwc_test_param.keys_non_shift_path; + read_cnt = tbl_rwc_test_param.count_keys_non_shift_path; + } + + begin = rte_rdtsc_precise(); + do { + for (i = 0; i < read_cnt; i++) { + ret = rte_hash_lookup(tbl_rwc_test_param.h, keys + i); + if ((read_type == READ_FAIL && ret != -ENOENT) + || (read_type != READ_FAIL && ret == -ENOENT)) { + printf("lookup failed! 
%"PRIu32"\n", keys[i]); + return -1; + } + } + loop_cnt++; + } while (!writer_done); + + cycles = rte_rdtsc_precise() - begin; + rte_atomic64_add(&gread_cycles, cycles); + rte_atomic64_add(&greads, i*loop_cnt); + return 0; +} + +static int +write_keys(uint8_t key_shift) +{ + uint32_t i; + int ret; + uint32_t key_cnt; + uint32_t *keys; + if (key_shift) { + key_cnt = tbl_rwc_test_param.count_keys_ks; + keys = tbl_rwc_test_param.keys_ks; + } else { + key_cnt = tbl_rwc_test_param.count_keys_no_ks; + keys = tbl_rwc_test_param.keys_no_ks; + } + for (i = 0; i < key_cnt; i++) { + ret = rte_hash_add_key(tbl_rwc_test_param.h, keys + i); + if (!key_shift && ret < 0) { + printf("writer failed %"PRIu32"\n", i); + return -1; + } + } + return 0; +} + +static int +test_rwc_multi_writer(__attribute__((unused)) void *arg) +{ + uint32_t i, offset; + uint32_t pos_core = (uint32_t)((uintptr_t)arg); + offset = pos_core * tbl_rwc_test_param.single_insert; + for (i = offset; i < offset + tbl_rwc_test_param.single_insert; i++) + rte_hash_add_key(tbl_rwc_test_param.h, + tbl_rwc_test_param.keys_ks + i); + multi_writer_done[pos_core] = 1; + return 0; +} + +/* + * Test lookup perf: + * Reader(s) lookup keys present in the table. 
+ */ +static int +test_hash_add_no_ks_lookup_pass(struct rwc_perf *rwc_perf_results, int rwc_lf) +{ + unsigned int n; + uint64_t i; + int use_jhash = 0; + uint8_t key_shift = 0; + uint8_t read_type = READ_PASS_NO_KEY_SHIFTS; + + rte_atomic64_init(&greads); + rte_atomic64_init(&gread_cycles); + + if (init_params(rwc_lf, use_jhash) != 0) + goto err; + printf("\nTest: Hash add - no key-shifts, read - pass\n"); + for (n = 0; n < num_test; n++) { + unsigned int tot_lcore = rte_lcore_count(); + if (tot_lcore < rwc_core_cnt[n] + 1) + goto finish; + + printf("\nNumber of readers: %u\n", rwc_core_cnt[n]); + + rte_atomic64_clear(&greads); + rte_atomic64_clear(&gread_cycles); + + rte_hash_reset(tbl_rwc_test_param.h); + writer_done = 0; + if (write_keys(key_shift) < 0) + goto err; + writer_done = 1; + for (i = 1; i <= rwc_core_cnt[n]; i++) + rte_eal_remote_launch(test_rwc_reader, + (void *)(uintptr_t)read_type, + enabled_core_ids[i]); + rte_eal_mp_wait_lcore(); + + for (i = 1; i <= rwc_core_cnt[n]; i++) + if (lcore_config[i].ret < 0) + goto err; + + unsigned long long cycles_per_lookup = + rte_atomic64_read(&gread_cycles) / + rte_atomic64_read(&greads); + rwc_perf_results->w_no_ks_r_pass[n] = cycles_per_lookup; + printf("Cycles per lookup: %llu\n", cycles_per_lookup); + } + +finish: + rte_hash_free(tbl_rwc_test_param.h); + return 0; + +err: + rte_hash_free(tbl_rwc_test_param.h); + return -1; +} + +/* + * Test lookup perf: + * Reader(s) lookup keys absent in the table while + * 'Main' thread adds with no key-shifts. 
+ */
+static int
+test_hash_add_no_ks_lookup_fail(struct rwc_perf *rwc_perf_results, int rwc_lf)
+{
+	unsigned int n;
+	uint64_t i;
+	int use_jhash = 0;
+	uint8_t key_shift = 0;
+	uint8_t read_type = READ_FAIL;
+	int ret;
+
+	rte_atomic64_init(&greads);
+	rte_atomic64_init(&gread_cycles);
+
+	if (init_params(rwc_lf, use_jhash) != 0)
+		goto err;
+	printf("\nTest: Hash add - no key-shifts, Hash lookup - fail\n");
+	for (n = 0; n < num_test; n++) {
+		unsigned int tot_lcore = rte_lcore_count();
+		if (tot_lcore < rwc_core_cnt[n] + 1)
+			goto finish;
+
+		printf("\nNumber of readers: %u\n", rwc_core_cnt[n]);
+
+		rte_atomic64_clear(&greads);
+		rte_atomic64_clear(&gread_cycles);
+
+		rte_hash_reset(tbl_rwc_test_param.h);
+		writer_done = 0;
+
+		for (i = 1; i <= rwc_core_cnt[n]; i++)
+			rte_eal_remote_launch(test_rwc_reader,
+					(void *)(uintptr_t)read_type,
+					enabled_core_ids[i]);
+		ret = write_keys(key_shift);
+		writer_done = 1;
+		rte_eal_mp_wait_lcore();
+
+		if (ret < 0)
+			goto err;
+		for (i = 1; i <= rwc_core_cnt[n]; i++)
+			if (lcore_config[i].ret < 0)
+				goto err;
+
+		unsigned long long cycles_per_lookup =
+			rte_atomic64_read(&gread_cycles) /
+			rte_atomic64_read(&greads);
+		rwc_perf_results->w_no_ks_r_fail[n] = cycles_per_lookup;
+		printf("Cycles per lookup: %llu\n", cycles_per_lookup);
+	}
+
+finish:
+	rte_hash_free(tbl_rwc_test_param.h);
+	return 0;
+
+err:
+	rte_hash_free(tbl_rwc_test_param.h);
+	return -1;
+}
+
+/*
+ * Test lookup perf:
+ * Reader(s) lookup keys present in the table and not likely to be on the
+ * shift path while 'Main' thread adds keys causing key-shifts.
+ */
+static int
+test_hash_add_ks_lookup_pass_non_sp(struct rwc_perf *rwc_perf_results,
+				    int rwc_lf)
+{
+	unsigned int n;
+	uint64_t i;
+	int use_jhash = 0;
+	int ret;
+	uint8_t key_shift;
+	uint8_t read_type = READ_PASS_NON_SHIFT_PATH;
+
+	rte_atomic64_init(&greads);
+	rte_atomic64_init(&gread_cycles);
+
+	if (init_params(rwc_lf, use_jhash) != 0)
+		goto err;
+	printf("\nTest: Hash add - key shift, Hash lookup - pass"
+	       " (non-shift-path)\n");
+	for (n = 0; n < num_test; n++) {
+		unsigned int tot_lcore = rte_lcore_count();
+		if (tot_lcore < rwc_core_cnt[n] + 1)
+			goto finish;
+
+		printf("\nNumber of readers: %u\n", rwc_core_cnt[n]);
+
+		rte_atomic64_clear(&greads);
+		rte_atomic64_clear(&gread_cycles);
+
+		rte_hash_reset(tbl_rwc_test_param.h);
+		writer_done = 0;
+		key_shift = 0;
+		if (write_keys(key_shift) < 0)
+			goto err;
+		for (i = 1; i <= rwc_core_cnt[n]; i++)
+			rte_eal_remote_launch(test_rwc_reader,
+					(void *)(uintptr_t)read_type,
+					enabled_core_ids[i]);
+		key_shift = 1;
+		ret = write_keys(key_shift);
+		writer_done = 1;
+		rte_eal_mp_wait_lcore();
+
+		if (ret < 0)
+			goto err;
+		for (i = 1; i <= rwc_core_cnt[n]; i++)
+			if (lcore_config[i].ret < 0)
+				goto err;
+
+		unsigned long long cycles_per_lookup =
+			rte_atomic64_read(&gread_cycles) /
+			rte_atomic64_read(&greads);
+		rwc_perf_results->w_ks_r_pass_nsp[n] = cycles_per_lookup;
+		printf("Cycles per lookup: %llu\n", cycles_per_lookup);
+	}
+
+finish:
+	rte_hash_free(tbl_rwc_test_param.h);
+	return 0;
+
+err:
+	rte_hash_free(tbl_rwc_test_param.h);
+	return -1;
+}
+
+/*
+ * Test lookup perf:
+ * Reader(s) lookup keys present in the table and likely on the shift-path while
+ * 'Main' thread adds keys causing key-shifts.
+ */
+static int
+test_hash_add_ks_lookup_pass_sp(struct rwc_perf *rwc_perf_results, int rwc_lf)
+{
+	unsigned int n;
+	uint64_t i;
+	int use_jhash = 0;
+	int ret;
+	uint8_t key_shift;
+	uint8_t read_type = READ_PASS_SHIFT_PATH;
+
+	rte_atomic64_init(&greads);
+	rte_atomic64_init(&gread_cycles);
+
+	if (init_params(rwc_lf, use_jhash) != 0)
+		goto err;
+	printf("\nTest: Hash add - key shift, Hash lookup - pass (shift-path)"
+	       "\n");
+
+	for (n = 0; n < num_test; n++) {
+		unsigned int tot_lcore = rte_lcore_count();
+		if (tot_lcore < rwc_core_cnt[n])
+			goto finish;
+
+		printf("\nNumber of readers: %u\n", rwc_core_cnt[n]);
+		rte_atomic64_clear(&greads);
+		rte_atomic64_clear(&gread_cycles);
+
+		rte_hash_reset(tbl_rwc_test_param.h);
+		writer_done = 0;
+		key_shift = 0;
+		if (write_keys(key_shift) < 0)
+			goto err;
+		for (i = 1; i <= rwc_core_cnt[n]; i++)
+			rte_eal_remote_launch(test_rwc_reader,
+					(void *)(uintptr_t)read_type,
+					enabled_core_ids[i]);
+		key_shift = 1;
+		ret = write_keys(key_shift);
+		writer_done = 1;
+		rte_eal_mp_wait_lcore();
+
+		if (ret < 0)
+			goto err;
+		for (i = 1; i <= rwc_core_cnt[n]; i++)
+			if (lcore_config[i].ret < 0)
+				goto err;
+
+		unsigned long long cycles_per_lookup =
+			rte_atomic64_read(&gread_cycles) /
+			rte_atomic64_read(&greads);
+		rwc_perf_results->w_ks_r_pass_sp[n] = cycles_per_lookup;
+		printf("Cycles per lookup: %llu\n", cycles_per_lookup);
+	}
+
+finish:
+	rte_hash_free(tbl_rwc_test_param.h);
+	return 0;
+
+err:
+	rte_hash_free(tbl_rwc_test_param.h);
+	return -1;
+}
+
+/*
+ * Test lookup perf:
+ * Reader(s) lookup keys absent in the table while
+ * 'Main' thread adds keys causing key-shifts.
+ */
+static int
+test_hash_add_ks_lookup_fail(struct rwc_perf *rwc_perf_results, int rwc_lf)
+{
+	unsigned int n;
+	uint64_t i;
+	int use_jhash = 0;
+	int ret;
+	uint8_t key_shift;
+	uint8_t read_type = READ_FAIL;
+
+	rte_atomic64_init(&greads);
+	rte_atomic64_init(&gread_cycles);
+
+	if (init_params(rwc_lf, use_jhash) != 0)
+		goto err;
+	printf("\nTest: Hash add - key shift, Hash lookup - fail\n");
+	for (n = 0; n < num_test; n++) {
+		unsigned int tot_lcore = rte_lcore_count();
+		if (tot_lcore < rwc_core_cnt[n] + 1)
+			goto finish;
+
+		printf("\nNumber of readers: %u\n", rwc_core_cnt[n]);
+
+		rte_atomic64_clear(&greads);
+		rte_atomic64_clear(&gread_cycles);
+
+		rte_hash_reset(tbl_rwc_test_param.h);
+		writer_done = 0;
+		key_shift = 0;
+		if (write_keys(key_shift) < 0)
+			goto err;
+		for (i = 1; i <= rwc_core_cnt[n]; i++)
+			rte_eal_remote_launch(test_rwc_reader,
+					(void *)(uintptr_t)read_type,
+					enabled_core_ids[i]);
+		key_shift = 1;
+		ret = write_keys(key_shift);
+		writer_done = 1;
+		rte_eal_mp_wait_lcore();
+
+		if (ret < 0)
+			goto err;
+		for (i = 1; i <= rwc_core_cnt[n]; i++)
+			if (lcore_config[i].ret < 0)
+				goto err;
+
+		unsigned long long cycles_per_lookup =
+			rte_atomic64_read(&gread_cycles) /
+			rte_atomic64_read(&greads);
+		rwc_perf_results->w_ks_r_fail[n] = cycles_per_lookup;
+		printf("Cycles per lookup: %llu\n", cycles_per_lookup);
+	}
+
+finish:
+	rte_hash_free(tbl_rwc_test_param.h);
+	return 0;
+
+err:
+	rte_hash_free(tbl_rwc_test_param.h);
+	return -1;
+}
+
+/*
+ * Test lookup perf:
+ * Reader(s) lookup keys present in the table and likely on the shift-path while
+ * Writers add keys causing key-shifts.
+ */
+static int
+test_hash_multi_add_lookup(struct rwc_perf *rwc_perf_results, int rwc_lf)
+{
+	unsigned int n, m;
+	uint64_t i;
+	int use_jhash = 0;
+	uint8_t key_shift;
+	uint8_t read_type = READ_PASS_SHIFT_PATH;
+
+	rte_atomic64_init(&greads);
+	rte_atomic64_init(&gread_cycles);
+
+	if (init_params(rwc_lf, use_jhash) != 0)
+		goto err;
+	printf("\nTest: Multi-add-lookup\n");
+	uint8_t pos_core;
+	for (m = 1; m < num_test; m++) {
+		/* Calculate keys added by each writer */
+		tbl_rwc_test_param.single_insert =
+			tbl_rwc_test_param.count_keys_ks / rwc_core_cnt[m];
+
+		for (n = 0; n < num_test; n++) {
+			unsigned int tot_lcore = rte_lcore_count();
+			if (tot_lcore < rwc_core_cnt[n] + rwc_core_cnt[m] + 1)
+				goto finish;
+
+			printf("\nNumber of writers: %u", rwc_core_cnt[m]);
+			printf("\nNumber of readers: %u\n", rwc_core_cnt[n]);
+
+			rte_atomic64_clear(&greads);
+			rte_atomic64_clear(&gread_cycles);
+
+			rte_hash_reset(tbl_rwc_test_param.h);
+			writer_done = 0;
+			for (i = 0; i < 4; i++)
+				multi_writer_done[i] = 0;
+			key_shift = 0;
+			if (write_keys(key_shift) < 0)
+				goto err;
+
+			/* Launch reader(s) */
+			for (i = 1; i <= rwc_core_cnt[n]; i++)
+				rte_eal_remote_launch(test_rwc_reader,
+						(void *)(uintptr_t)read_type,
+						enabled_core_ids[i]);
+			key_shift = 1;
+			pos_core = 0;
+
+			/* Launch writers */
+			for (; i <= rwc_core_cnt[m] + rwc_core_cnt[n]; i++) {
+				rte_eal_remote_launch(test_rwc_multi_writer,
+						(void *)(uintptr_t)pos_core,
+						enabled_core_ids[i]);
+				pos_core++;
+			}
+
+			/* Wait for writers to complete */
+			for (i = 0; i < rwc_core_cnt[m]; i++)
+				while (multi_writer_done[i] == 0)
+					;
+			writer_done = 1;
+
+			rte_eal_mp_wait_lcore();
+
+			for (i = 1; i <= rwc_core_cnt[n]; i++)
+				if (lcore_config[i].ret < 0)
+					goto err;
+
+			unsigned long long cycles_per_lookup =
+				rte_atomic64_read(&gread_cycles) /
+				rte_atomic64_read(&greads);
+			rwc_perf_results->multi_rw[m][n] = cycles_per_lookup;
+			printf("Cycles per lookup: %llu\n", cycles_per_lookup);
+		}
+	}
+
+finish:
+	rte_hash_free(tbl_rwc_test_param.h);
+	return 0;
+
+err:
+	rte_hash_free(tbl_rwc_test_param.h);
+	return -1;
+}
+
+static int
+test_hash_readwrite_lf_main(void)
+{
+	/*
+	 * Variables used to choose different tests.
+	 * rwc_lf indicates if read-write concurrency lock-free support is
+	 * enabled.
+	 * htm indicates if Hardware transactional memory support is enabled.
+	 */
+	int rwc_lf = 0;
+	int use_jhash = 0;
+	num_test = NUM_TEST;
+	if (rte_lcore_count() == 1) {
+		printf("More than one lcore is required "
+		       "to do read write lock-free concurrency test\n");
+		return -1;
+	}
+
+	setlocale(LC_NUMERIC, "");
+
+	if (rte_tm_supported())
+		htm = 1;
+	else
+		htm = 0;
+
+	if (init_params(rwc_lf, use_jhash) != 0)
+		return -1;
+	if (generate_keys() != 0)
+		return -1;
+	if (get_enabled_cores_list() != 0)
+		return -1;
+
+	if (RTE_HASH_EXTRA_FLAGS_RW_CONCURRENCY_LF) {
+		rwc_lf = 1;
+		printf("Test lookup with read-write concurrency lock free support"
+		       " enabled\n");
+		if (test_hash_add_no_ks_lookup_pass(&rwc_lf_results, rwc_lf)
+			< 0)
+			return -1;
+		if (test_hash_add_no_ks_lookup_fail(&rwc_lf_results, rwc_lf)
+			< 0)
+			return -1;
+		if (test_hash_add_ks_lookup_pass_non_sp(&rwc_lf_results, rwc_lf)
+			< 0)
+			return -1;
+		if (test_hash_add_ks_lookup_pass_sp(&rwc_lf_results, rwc_lf)
+			< 0)
+			return -1;
+		if (test_hash_add_ks_lookup_fail(&rwc_lf_results, rwc_lf) < 0)
+			return -1;
+		if (test_hash_multi_add_lookup(&rwc_lf_results, rwc_lf) < 0)
+			return -1;
+	}
+	printf("\nTest lookup with read-write concurrency lock free support"
+	       " disabled\n");
+	rwc_lf = 0;
+	if (!htm) {
+		printf("With HTM Disabled\n");
+		if (!RUN_WITH_HTM_DISABLED) {
+			printf("Enable RUN_WITH_HTM_DISABLED to test with"
+			       " lock-free disabled");
+			goto results;
+		}
+	} else
+		printf("With HTM Enabled\n");
+	if (test_hash_add_no_ks_lookup_pass(&rwc_non_lf_results, rwc_lf) < 0)
+		return -1;
+	if (test_hash_add_no_ks_lookup_fail(&rwc_non_lf_results, rwc_lf) < 0)
+		return -1;
+	if (test_hash_add_ks_lookup_pass_non_sp(&rwc_non_lf_results, rwc_lf)
+		< 0)
+		return -1;
+	if (test_hash_add_ks_lookup_pass_sp(&rwc_non_lf_results, rwc_lf) < 0)
+		return -1;
+	if (test_hash_add_ks_lookup_fail(&rwc_non_lf_results, rwc_lf) < 0)
+		return -1;
+	if (test_hash_multi_add_lookup(&rwc_non_lf_results, rwc_lf) < 0)
+		return -1;
+results:
+	printf("\n\t\t\t\t\t\t********** Results summary **********\n\n");
+	printf("_______\t\t_______\t\t_________\t___\t\t_________\t\t\t\t\t\t"
+	       "_________________\n");
+	int i, j;
+	printf("Writers\t\tReaders\t\tLock-free\tHTM\t\tTest-case\t\t\t\t\t\t"
+	       "Cycles per lookup\n");
+	printf("_______\t\t_______\t\t_________\t___\t\t_________\t\t\t\t\t\t"
+	       "_________________\n");
+	for (i = 0; i < NUM_TEST; i++) {
+		printf("%u\t\t%u\t\t", 1, rwc_core_cnt[i]);
+		printf("Enabled\t\t");
+		printf("N/A\t\t");
+		printf("Hash add - no key-shifts, lookup - pass\t\t\t\t%u\n\t\t"
+		       "\t\t\t\t\t\t", rwc_lf_results.w_no_ks_r_pass[i]);
+		printf("Hash add - no key-shifts, lookup - fail\t\t\t\t%u\n\t\t"
+		       "\t\t\t\t\t\t", rwc_lf_results.w_no_ks_r_fail[i]);
+		printf("Hash add - key-shifts, lookup - pass (non-shift-path)\t"
+		       "\t%u\n\t\t\t\t\t\t\t\t",
+		       rwc_lf_results.w_ks_r_pass_nsp[i]);
+		printf("Hash add - key-shifts, lookup - pass (shift-path)\t\t%u"
+		       "\n\t\t\t\t\t\t\t\t", rwc_lf_results.w_ks_r_pass_sp[i]);
+		printf("Hash add - key-shifts, Hash lookup fail\t\t\t\t%u\n\n"
+		       "\t\t\t\t", rwc_lf_results.w_ks_r_fail[i]);
+
+		printf("Disabled\t");
+		if (htm)
+			printf("Enabled\t\t");
+		else
+			printf("Disabled\t");
+		printf("Hash add - no key-shifts, lookup - pass\t\t\t\t%u\n\t\t"
+		       "\t\t\t\t\t\t", rwc_non_lf_results.w_no_ks_r_pass[i]);
+		printf("Hash add - no key-shifts, lookup - fail\t\t\t\t%u\n\t\t"
+		       "\t\t\t\t\t\t", rwc_non_lf_results.w_no_ks_r_fail[i]);
+		printf("Hash add - key-shifts, lookup - pass (non-shift-path)\t"
+		       "\t%u\n\t\t\t\t\t\t\t\t",
+		       rwc_non_lf_results.w_ks_r_pass_nsp[i]);
+		printf("Hash add - key-shifts, lookup - pass (shift-path)\t\t%u"
+		       "\n\t\t\t\t\t\t\t\t",
+		       rwc_non_lf_results.w_ks_r_pass_sp[i]);
+		printf("Hash add - key-shifts, Hash lookup fail\t\t\t\t%u\n",
+		       rwc_non_lf_results.w_ks_r_fail[i]);
+
+		printf("_______\t\t_______\t\t_________\t___\t\t_________\t\t\t\t"
+		       "\t\t_________________\n");
+	}
+
+	for (i = 1; i < NUM_TEST; i++) {
+		for (j = 0; j < NUM_TEST; j++) {
+			printf("%u", rwc_core_cnt[i]);
+			printf("\t\t%u\t\t", rwc_core_cnt[j]);
+			printf("Enabled\t\t");
+			printf("N/A\t\t");
+			printf("Multi-add-lookup\t\t\t\t\t\t%u\n\n\t\t\t\t",
+			       rwc_lf_results.multi_rw[i][j]);
+			printf("Disabled\t");
+			if (htm)
+				printf("Enabled\t\t");
+			else
+				printf("Disabled\t");
+			printf("Multi-add-lookup\t\t\t\t\t\t%u\n",
+			       rwc_non_lf_results.multi_rw[i][j]);
+
+			printf("_______\t\t_______\t\t_________\t___\t\t"
+			       "_________\t\t\t\t\t\t_________________\n");
+		}
+	}
+
+	rte_free(tbl_rwc_test_param.keys);
+	rte_free(tbl_rwc_test_param.keys_no_ks);
+	rte_free(tbl_rwc_test_param.keys_ks);
+	rte_free(tbl_rwc_test_param.keys_absent);
+	rte_free(tbl_rwc_test_param.keys_shift_path);
+	rte_free(scanned_bkts);
+	return 0;
+}
+
+REGISTER_TEST_COMMAND(hash_readwrite_lf_autotest, test_hash_readwrite_lf_main);