From patchwork Thu May 16 01:14:17 2019
X-Patchwork-Submitter: Honnappa Nagarahalli
X-Patchwork-Id: 164312
From: Honnappa Nagarahalli
To: honnappa.nagarahalli@arm.com, david.marchand@redhat.com, dev@dpdk.org
Cc: dharmik.thakkar@arm.com, nd@arm.com
Date: Wed, 15 May 2019 20:14:17 -0500
Message-Id: <20190516011417.24752-1-honnappa.nagarahalli@arm.com>
X-Mailer: git-send-email 2.17.1
In-Reply-To: <20190515204255.4667-1-honnappa.nagarahalli@arm.com>
References: <20190515204255.4667-1-honnappa.nagarahalli@arm.com>
Subject: [dpdk-dev] [PATCH v2] rcu/test: make global variable per core

Each hash entry has a pointer to one uint32_t memory location. However,
all the readers increment the same location, causing race conditions.
Allocate memory for each thread so that each thread will increment
its own memory location.

Fixes: b87089b0bb19 ("test/rcu: add API and functional tests")

Reported-by: David Marchand
Signed-off-by: Honnappa Nagarahalli
Reviewed-by: Dharmik Thakkar
Tested-by: David Marchand
---
This patch is dependent on http://patchwork.dpdk.org/patch/53421

V2: added 'Fixes'

 app/test/test_rcu_qsbr.c      | 163 ++++++++++++++++++++++------------
 app/test/test_rcu_qsbr_perf.c | 105 +++++++++++-----------
 2 files changed, 160 insertions(+), 108 deletions(-)

-- 
2.17.1

diff --git a/app/test/test_rcu_qsbr.c b/app/test/test_rcu_qsbr.c
index 92ab0c20a..5f0b1383e 100644
--- a/app/test/test_rcu_qsbr.c
+++ b/app/test/test_rcu_qsbr.c
@@ -40,6 +40,16 @@ static struct rte_rcu_qsbr *t[TEST_RCU_MAX_LCORE];
 struct rte_hash *h[TEST_RCU_MAX_LCORE];
 char hash_name[TEST_RCU_MAX_LCORE][8];
 
+struct test_rcu_thread_info {
+	/* Index in RCU array */
+	int ir;
+	/* Index in hash array */
+	int ih;
+	/* lcore IDs registered on the RCU variable */
+	uint16_t r_core_ids[2];
+};
+struct test_rcu_thread_info thread_info[TEST_RCU_MAX_LCORE/4];
+
 static inline int
 get_enabled_cores_mask(void)
 {
@@ -629,11 +639,12 @@ test_rcu_qsbr_reader(void *arg)
 	struct rte_hash *hash = NULL;
 	int i;
 	uint32_t lcore_id = rte_lcore_id();
-	uint8_t read_type = (uint8_t)((uintptr_t)arg);
+	struct test_rcu_thread_info *ti;
 	uint32_t *pdata;
 
-	temp = t[read_type];
-	hash = h[read_type];
+	ti = (struct test_rcu_thread_info *)arg;
+	temp = t[ti->ir];
+	hash = h[ti->ih];
 
 	do {
 		rte_rcu_qsbr_thread_register(temp, lcore_id);
@@ -642,9 +653,9 @@
 			rte_rcu_qsbr_lock(temp, lcore_id);
 			if (rte_hash_lookup_data(hash, keys+i,
 					(void **)&pdata) != -ENOENT) {
-				*pdata = 0;
-				while (*pdata < COUNTER_VALUE)
-					++*pdata;
+				pdata[lcore_id] = 0;
+				while (pdata[lcore_id] < COUNTER_VALUE)
+					pdata[lcore_id]++;
 			}
 			rte_rcu_qsbr_unlock(temp, lcore_id);
 		}
@@ -661,44 +672,42 @@ static int
 test_rcu_qsbr_writer(void *arg)
 {
 	uint64_t token;
-	int32_t pos;
+	int32_t i, pos, del;
+	uint32_t c;
 	struct rte_rcu_qsbr *temp;
 	struct rte_hash *hash = NULL;
-	uint8_t writer_type = (uint8_t)((uintptr_t)arg);
+	struct test_rcu_thread_info *ti;
 
-	temp = t[(writer_type/2) % TEST_RCU_MAX_LCORE];
-	hash = h[(writer_type/2) % TEST_RCU_MAX_LCORE];
+	ti = (struct test_rcu_thread_info *)arg;
+	temp = t[ti->ir];
+	hash = h[ti->ih];
 
 	/* Delete element from the shared data structure */
-	pos = rte_hash_del_key(hash, keys + (writer_type % TOTAL_ENTRY));
+	del = rte_lcore_id() % TOTAL_ENTRY;
+	pos = rte_hash_del_key(hash, keys + del);
 	if (pos < 0) {
-		printf("Delete key failed #%d\n",
-			keys[writer_type % TOTAL_ENTRY]);
+		printf("Delete key failed #%d\n", keys[del]);
 		return -1;
 	}
 	/* Start the quiescent state query process */
 	token = rte_rcu_qsbr_start(temp);
 	/* Check the quiescent state status */
 	rte_rcu_qsbr_check(temp, token, true);
-	if (*hash_data[(writer_type/2) % TEST_RCU_MAX_LCORE]
-		      [writer_type % TOTAL_ENTRY] != COUNTER_VALUE &&
-	    *hash_data[(writer_type/2) % TEST_RCU_MAX_LCORE]
-		      [writer_type % TOTAL_ENTRY] != 0) {
-		printf("Reader did not complete #%d = %d\t", writer_type,
-			*hash_data[(writer_type/2) % TEST_RCU_MAX_LCORE]
-				  [writer_type % TOTAL_ENTRY]);
-		return -1;
+	for (i = 0; i < 2; i++) {
+		c = hash_data[ti->ih][del][ti->r_core_ids[i]];
+		if (c != COUNTER_VALUE && c != 0) {
+			printf("Reader lcore id %u did not complete = %u\t",
+				rte_lcore_id(), c);
+			return -1;
+		}
 	}
 
 	if (rte_hash_free_key_with_position(hash, pos) < 0) {
-		printf("Failed to free the key #%d\n",
-			keys[writer_type % TOTAL_ENTRY]);
+		printf("Failed to free the key #%d\n", keys[del]);
 		return -1;
 	}
-	rte_free(hash_data[(writer_type/2) % TEST_RCU_MAX_LCORE]
-			  [writer_type % TOTAL_ENTRY]);
-	hash_data[(writer_type/2) % TEST_RCU_MAX_LCORE]
-		 [writer_type % TOTAL_ENTRY] = NULL;
+	rte_free(hash_data[ti->ih][del]);
+	hash_data[ti->ih][del] = NULL;
 
 	return 0;
 }
@@ -728,7 +737,9 @@ init_hash(int hash_id)
 	}
 
 	for (i = 0; i < TOTAL_ENTRY; i++) {
-		hash_data[hash_id][i] = rte_zmalloc(NULL, sizeof(uint32_t), 0);
+		hash_data[hash_id][i] =
+			rte_zmalloc(NULL,
+				sizeof(uint32_t) * TEST_RCU_MAX_LCORE, 0);
 		if (hash_data[hash_id][i] == NULL) {
 			printf("No memory\n");
 			return NULL;
@@ -762,6 +773,7 @@ static int
 test_rcu_qsbr_sw_sv_3qs(void)
 {
 	uint64_t token[3];
+	uint32_t c;
 	int i;
 	int32_t pos[3];
 
@@ -778,9 +790,15 @@ test_rcu_qsbr_sw_sv_3qs(void)
 		goto error;
 	}
 
+	/* No need to fill the registered core IDs as the writer
+	 * thread is not launched.
+	 */
+	thread_info[0].ir = 0;
+	thread_info[0].ih = 0;
+
 	/* Reader threads are launched */
 	for (i = 0; i < 4; i++)
-		rte_eal_remote_launch(test_rcu_qsbr_reader, NULL,
+		rte_eal_remote_launch(test_rcu_qsbr_reader, &thread_info[0],
 					enabled_core_ids[i]);
 
 	/* Delete element from the shared data structure */
@@ -812,9 +830,13 @@
 	/* Check the quiescent state status */
 	rte_rcu_qsbr_check(t[0], token[0], true);
-	if (*hash_data[0][0] != COUNTER_VALUE && *hash_data[0][0] != 0) {
-		printf("Reader did not complete #0 = %d\n", *hash_data[0][0]);
-		goto error;
+	for (i = 0; i < 4; i++) {
+		c = hash_data[0][0][enabled_core_ids[i]];
+		if (c != COUNTER_VALUE && c != 0) {
+			printf("Reader lcore %d did not complete #0 = %d\n",
+				enabled_core_ids[i], c);
+			goto error;
+		}
 	}
 
 	if (rte_hash_free_key_with_position(h[0], pos[0]) < 0) {
@@ -826,9 +848,13 @@
 	/* Check the quiescent state status */
 	rte_rcu_qsbr_check(t[0], token[1], true);
-	if (*hash_data[0][3] != COUNTER_VALUE && *hash_data[0][3] != 0) {
-		printf("Reader did not complete #3 = %d\n", *hash_data[0][3]);
-		goto error;
+	for (i = 0; i < 4; i++) {
+		c = hash_data[0][3][enabled_core_ids[i]];
+		if (c != COUNTER_VALUE && c != 0) {
+			printf("Reader lcore %d did not complete #3 = %d\n",
+				enabled_core_ids[i], c);
+			goto error;
+		}
 	}
 
 	if (rte_hash_free_key_with_position(h[0], pos[1]) < 0) {
@@ -840,9 +866,13 @@
 	/* Check the quiescent state status */
 	rte_rcu_qsbr_check(t[0], token[2], true);
-	if (*hash_data[0][6] != COUNTER_VALUE && *hash_data[0][6] != 0) {
-		printf("Reader did not complete #6 = %d\n", *hash_data[0][6]);
-		goto error;
+	for (i = 0; i < 4; i++) {
+		c = hash_data[0][6][enabled_core_ids[i]];
+		if (c != COUNTER_VALUE && c != 0) {
+			printf("Reader lcore %d did not complete #6 = %d\n",
+				enabled_core_ids[i], c);
+			goto error;
+		}
 	}
 
 	if (rte_hash_free_key_with_position(h[0], pos[2]) < 0) {
@@ -889,42 +919,61 @@ test_rcu_qsbr_mw_mv_mqs(void)
 	test_cores = num_cores / 4;
 	test_cores = test_cores * 4;
 
-	printf("Test: %d writers, %d QSBR variable, simultaneous QSBR queries\n"
-	       , test_cores / 2, test_cores / 4);
+	printf("Test: %d writers, %d QSBR variable, simultaneous QSBR queries\n",
+		test_cores / 2, test_cores / 4);
 
-	for (i = 0; i < num_cores / 4; i++) {
+	for (i = 0; i < test_cores / 4; i++) {
+		j = i * 4;
 		rte_rcu_qsbr_init(t[i], TEST_RCU_MAX_LCORE);
 		h[i] = init_hash(i);
 		if (h[i] == NULL) {
 			printf("Hash init failed\n");
 			goto error;
 		}
-	}
+		thread_info[i].ir = i;
+		thread_info[i].ih = i;
+		thread_info[i].r_core_ids[0] = enabled_core_ids[j];
+		thread_info[i].r_core_ids[1] = enabled_core_ids[j + 1];
 
-	/* Reader threads are launched */
-	for (i = 0; i < test_cores / 2; i++)
+		/* Reader threads are launched */
 		rte_eal_remote_launch(test_rcu_qsbr_reader,
-				      (void *)(uintptr_t)(i / 2),
-				      enabled_core_ids[i]);
+				(void *)&thread_info[i],
+				enabled_core_ids[j]);
+		rte_eal_remote_launch(test_rcu_qsbr_reader,
+				(void *)&thread_info[i],
+				enabled_core_ids[j + 1]);
 
-	/* Writer threads are launched */
-	for (; i < test_cores; i++)
+		/* Writer threads are launched */
 		rte_eal_remote_launch(test_rcu_qsbr_writer,
-				      (void *)(uintptr_t)(i - (test_cores / 2)),
-				      enabled_core_ids[i]);
+				(void *)&thread_info[i],
+				enabled_core_ids[j + 2]);
+		rte_eal_remote_launch(test_rcu_qsbr_writer,
+				(void *)&thread_info[i],
+				enabled_core_ids[j + 3]);
+	}
+
 	/* Wait and check return value from writer threads */
-	for (i = test_cores / 2; i < test_cores; i++)
-		if (rte_eal_wait_lcore(enabled_core_ids[i]) < 0)
+	for (i = 0; i < test_cores / 4; i++) {
+		j = i * 4;
+		if (rte_eal_wait_lcore(enabled_core_ids[j + 2]) < 0)
 			goto error;
+		if (rte_eal_wait_lcore(enabled_core_ids[j + 3]) < 0)
+			goto error;
+	}
 	writer_done = 1;
 
 	/* Wait and check return value from reader threads */
-	for (i = 0; i < test_cores / 2; i++)
-		if (rte_eal_wait_lcore(enabled_core_ids[i]) < 0)
+	for (i = 0; i < test_cores / 4; i++) {
+		j = i * 4;
+		if (rte_eal_wait_lcore(enabled_core_ids[j]) < 0)
 			goto error;
 
-	for (i = 0; i < num_cores / 4; i++)
+		if (rte_eal_wait_lcore(enabled_core_ids[j + 1]) < 0)
+			goto error;
+	}
+
+	for (i = 0; i < test_cores / 4; i++)
 		rte_hash_free(h[i]);
 	rte_free(keys);
 
@@ -936,10 +985,10 @@
 	/* Wait until all readers and writers have exited */
 	rte_eal_mp_wait_lcore();
 
-	for (i = 0; i < num_cores / 4; i++)
+	for (i = 0; i < test_cores / 4; i++)
 		rte_hash_free(h[i]);
 	rte_free(keys);
-	for (j = 0; j < TEST_RCU_MAX_LCORE; j++)
+	for (j = 0; j < test_cores / 4; j++)
 		for (i = 0; i < TOTAL_ENTRY; i++)
 			rte_free(hash_data[j][i]);
 
diff --git a/app/test/test_rcu_qsbr_perf.c b/app/test/test_rcu_qsbr_perf.c
index 6b1912c0c..33ca36c63 100644
--- a/app/test/test_rcu_qsbr_perf.c
+++ b/app/test/test_rcu_qsbr_perf.c
@@ -23,14 +23,14 @@ static uint8_t num_cores;
 static uint32_t *keys;
 #define TOTAL_ENTRY (1024 * 8)
 #define COUNTER_VALUE 4096
-static uint32_t *hash_data[TEST_RCU_MAX_LCORE][TOTAL_ENTRY];
+static uint32_t *hash_data[TOTAL_ENTRY];
 static volatile uint8_t writer_done;
 static volatile uint8_t all_registered;
 static volatile uint32_t thr_id;
 
 static struct rte_rcu_qsbr *t[TEST_RCU_MAX_LCORE];
-static struct rte_hash *h[TEST_RCU_MAX_LCORE];
-static char hash_name[TEST_RCU_MAX_LCORE][8];
+static struct rte_hash *h;
+static char hash_name[8];
 static rte_atomic64_t updates, checks;
 static rte_atomic64_t update_cycles, check_cycles;
 
@@ -309,7 +309,7 @@ test_rcu_qsbr_hash_reader(void *arg)
 	uint32_t *pdata;
 
 	temp = t[read_type];
-	hash = h[read_type];
+	hash = h;
 
 	rte_rcu_qsbr_thread_register(temp, thread_id);
 
@@ -319,11 +319,11 @@ test_rcu_qsbr_hash_reader(void *arg)
 		rte_rcu_qsbr_thread_online(temp, thread_id);
 		for (i = 0; i < TOTAL_ENTRY; i++) {
 			rte_rcu_qsbr_lock(temp, thread_id);
-			if (rte_hash_lookup_data(hash, keys+i,
+			if (rte_hash_lookup_data(hash, keys + i,
					(void **)&pdata) != -ENOENT) {
-				*pdata = 0;
-				while (*pdata < COUNTER_VALUE)
-					++*pdata;
+				pdata[thread_id] = 0;
+				while (pdata[thread_id] < COUNTER_VALUE)
+					pdata[thread_id]++;
 			}
 			rte_rcu_qsbr_unlock(temp, thread_id);
 		}
@@ -342,13 +342,12 @@ test_rcu_qsbr_hash_reader(void *arg)
 	return 0;
 }
 
-static struct rte_hash *
-init_hash(int hash_id)
+static struct rte_hash *init_hash(void)
 {
 	int i;
-	struct rte_hash *h = NULL;
+	struct rte_hash *hash = NULL;
 
-	sprintf(hash_name[hash_id], "hash%d", hash_id);
+	snprintf(hash_name, 8, "hash");
 	struct rte_hash_parameters hash_params = {
 		.entries = TOTAL_ENTRY,
 		.key_len = sizeof(uint32_t),
@@ -357,18 +356,19 @@
 		.hash_func = rte_hash_crc,
 		.extra_flag = RTE_HASH_EXTRA_FLAGS_RW_CONCURRENCY_LF,
-		.name = hash_name[hash_id],
+		.name = hash_name,
 	};
 
-	h = rte_hash_create(&hash_params);
-	if (h == NULL) {
+	hash = rte_hash_create(&hash_params);
+	if (hash == NULL) {
 		printf("Hash create Failed\n");
 		return NULL;
 	}
 
 	for (i = 0; i < TOTAL_ENTRY; i++) {
-		hash_data[hash_id][i] = rte_zmalloc(NULL, sizeof(uint32_t), 0);
-		if (hash_data[hash_id][i] == NULL) {
+		hash_data[i] = rte_zmalloc(NULL,
+				sizeof(uint32_t) * TEST_RCU_MAX_LCORE, 0);
+		if (hash_data[i] == NULL) {
 			printf("No memory\n");
 			return NULL;
 		}
@@ -383,14 +383,13 @@
 		keys[i] = i;
 
 	for (i = 0; i < TOTAL_ENTRY; i++) {
-		if (rte_hash_add_key_data(h, keys + i,
-				(void *)((uintptr_t)hash_data[hash_id][i]))
-				< 0) {
+		if (rte_hash_add_key_data(hash, keys + i,
+				(void *)((uintptr_t)hash_data[i])) < 0) {
 			printf("Hash key add Failed #%d\n", i);
 			return NULL;
 		}
 	}
-	return h;
+	return hash;
 }
 
 /*
@@ -401,7 +400,7 @@ static int
 test_rcu_qsbr_sw_sv_1qs(void)
 {
 	uint64_t token, begin, cycles;
-	int i, tmp_num_cores, sz;
+	int i, j, tmp_num_cores, sz;
 	int32_t pos;
 
 	writer_done = 0;
@@ -427,8 +426,8 @@ test_rcu_qsbr_sw_sv_1qs(void)
 	rte_rcu_qsbr_init(t[0], tmp_num_cores);
 
 	/* Shared data structure created */
-	h[0] = init_hash(0);
-	if (h[0] == NULL) {
+	h = init_hash();
+	if (h == NULL) {
 		printf("Hash init failed\n");
 		goto error;
 	}
@@ -442,7 +441,7 @@ test_rcu_qsbr_sw_sv_1qs(void)
 	for (i = 0; i < TOTAL_ENTRY; i++) {
 		/* Delete elements from the shared data structure */
-		pos = rte_hash_del_key(h[0], keys + i);
+		pos = rte_hash_del_key(h, keys + i);
 		if (pos < 0) {
 			printf("Delete key failed #%d\n", keys[i]);
 			goto error;
@@ -452,19 +451,21 @@ test_rcu_qsbr_sw_sv_1qs(void)
 		/* Check the quiescent state status */
 		rte_rcu_qsbr_check(t[0], token, true);
-		if (*hash_data[0][i] != COUNTER_VALUE &&
-		    *hash_data[0][i] != 0) {
-			printf("Reader did not complete #%d = %d\n", i,
-				*hash_data[0][i]);
-			goto error;
+		for (j = 0; j < tmp_num_cores; j++) {
+			if (hash_data[i][j] != COUNTER_VALUE &&
+				hash_data[i][j] != 0) {
+				printf("Reader thread ID %u did not complete #%d = %d\n",
+					j, i, hash_data[i][j]);
+				goto error;
+			}
 		}
 
-		if (rte_hash_free_key_with_position(h[0], pos) < 0) {
+		if (rte_hash_free_key_with_position(h, pos) < 0) {
 			printf("Failed to free the key #%d\n", keys[i]);
 			goto error;
 		}
-		rte_free(hash_data[0][i]);
-		hash_data[0][i] = NULL;
+		rte_free(hash_data[i]);
+		hash_data[i] = NULL;
 	}
 
 	cycles = rte_rdtsc_precise() - begin;
@@ -477,7 +478,7 @@ test_rcu_qsbr_sw_sv_1qs(void)
 	for (i = 0; i < num_cores; i++)
 		if (rte_eal_wait_lcore(enabled_core_ids[i]) < 0)
 			goto error;
-	rte_hash_free(h[0]);
+	rte_hash_free(h);
 	rte_free(keys);
 
 	printf("Following numbers include calls to rte_hash functions\n");
@@ -498,10 +499,10 @@ test_rcu_qsbr_sw_sv_1qs(void)
 	/* Wait until all readers have exited */
 	rte_eal_mp_wait_lcore();
 
-	rte_hash_free(h[0]);
+	rte_hash_free(h);
 	rte_free(keys);
 	for (i = 0; i < TOTAL_ENTRY; i++)
-		rte_free(hash_data[0][i]);
+		rte_free(hash_data[i]);
 	rte_free(t[0]);
 
@@ -517,7 +518,7 @@ static int
 test_rcu_qsbr_sw_sv_1qs_non_blocking(void)
 {
 	uint64_t token, begin, cycles;
-	int i, ret, tmp_num_cores, sz;
+	int i, j, ret, tmp_num_cores, sz;
 	int32_t pos;
 
 	writer_done = 0;
@@ -538,8 +539,8 @@ test_rcu_qsbr_sw_sv_1qs_non_blocking(void)
 	rte_rcu_qsbr_init(t[0], tmp_num_cores);
 
 	/* Shared data structure created */
-	h[0] = init_hash(0);
-	if (h[0] == NULL) {
+	h = init_hash();
+	if (h == NULL) {
 		printf("Hash init failed\n");
 		goto error;
 	}
@@ -553,7 +554,7 @@ test_rcu_qsbr_sw_sv_1qs_non_blocking(void)
 	for (i = 0; i < TOTAL_ENTRY; i++) {
 		/* Delete elements from the shared data structure */
-		pos = rte_hash_del_key(h[0], keys + i);
+		pos = rte_hash_del_key(h, keys + i);
 		if (pos < 0) {
 			printf("Delete key failed #%d\n", keys[i]);
 			goto error;
@@ -565,19 +566,21 @@ test_rcu_qsbr_sw_sv_1qs_non_blocking(void)
 		do {
 			ret = rte_rcu_qsbr_check(t[0], token, false);
 		} while (ret == 0);
-		if (*hash_data[0][i] != COUNTER_VALUE &&
-		    *hash_data[0][i] != 0) {
-			printf("Reader did not complete #%d = %d\n", i,
-				*hash_data[0][i]);
-			goto error;
+		for (j = 0; j < tmp_num_cores; j++) {
+			if (hash_data[i][j] != COUNTER_VALUE &&
+				hash_data[i][j] != 0) {
+				printf("Reader thread ID %u did not complete #%d = %d\n",
+					j, i, hash_data[i][j]);
+				goto error;
+			}
 		}
 
-		if (rte_hash_free_key_with_position(h[0], pos) < 0) {
+		if (rte_hash_free_key_with_position(h, pos) < 0) {
 			printf("Failed to free the key #%d\n", keys[i]);
 			goto error;
 		}
-		rte_free(hash_data[0][i]);
-		hash_data[0][i] = NULL;
+		rte_free(hash_data[i]);
+		hash_data[i] = NULL;
 	}
 
 	cycles = rte_rdtsc_precise() - begin;
@@ -589,7 +592,7 @@ test_rcu_qsbr_sw_sv_1qs_non_blocking(void)
 	for (i = 0; i < num_cores; i++)
 		if (rte_eal_wait_lcore(enabled_core_ids[i]) < 0)
 			goto error;
-	rte_hash_free(h[0]);
+	rte_hash_free(h);
 	rte_free(keys);
 
 	printf("Following numbers include calls to rte_hash functions\n");
@@ -610,10 +613,10 @@ test_rcu_qsbr_sw_sv_1qs_non_blocking(void)
 	/* Wait until all readers have exited */
 	rte_eal_mp_wait_lcore();
 
-	rte_hash_free(h[0]);
+	rte_hash_free(h);
 	rte_free(keys);
 	for (i = 0; i < TOTAL_ENTRY; i++)
-		rte_free(hash_data[i]);
+		rte_free(hash_data[i]);
 	rte_free(t[0]);