From patchwork Mon Mar 23 13:04:46 2015
X-Patchwork-Submitter: Maxim Uvarov
X-Patchwork-Id: 46194
From: Maxim Uvarov <maxim.uvarov@linaro.org>
To: lng-odp@lists.linaro.org
Date: Mon, 23 Mar 2015 16:04:46 +0300
Message-Id: <1427115886-23775-1-git-send-email-maxim.uvarov@linaro.org>
X-Mailer: git-send-email 1.9.1
Subject: [lng-odp] [PATCH] validation: synchronizers: rename global_mem to mem

global_mem is a global static variable; using a variable with the same
name inside a function is confusing. Rename the local variables from
global_mem to mem.
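(Editorial aside, not part of the patch: a minimal, self-contained sketch of
the shadowing pattern this rename removes. The int variable here is only
illustrative; the real test file shadows a file-scope
"static global_shared_mem_t *global_mem" pointer.)

/* Illustrative sketch only -- simplified stand-in for odp_synchronizers.c */
#include <stdio.h>

static int global_mem = 42;		/* file-scope variable */

static void before_rename(void)
{
	int global_mem = 7;		/* local shadows the file-scope name */

	/* Prints 7; easy for a reader to mistake this for the global. */
	printf("%d\n", global_mem);
}

static void after_rename(void)
{
	int mem = 7;			/* distinct name, nothing shadowed */

	/* Unambiguous: 7 is the local, 42 is the file-scope variable. */
	printf("%d %d\n", mem, global_mem);
}

int main(void)
{
	before_rename();
	after_rename();
	return 0;
}

With distinct names, a reader (and compilers with -Wshadow) can no longer
confuse the per-function pointer with the file-scope variable.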
Signed-off-by: Maxim Uvarov
Reviewed-by: Bill Fischofer
---
 test/validation/odp_synchronizers.c | 158 ++++++++++++++++++------------------
 1 file changed, 79 insertions(+), 79 deletions(-)

diff --git a/test/validation/odp_synchronizers.c b/test/validation/odp_synchronizers.c
index ab9164f..b8f4e6a 100644
--- a/test/validation/odp_synchronizers.c
+++ b/test/validation/odp_synchronizers.c
@@ -107,7 +107,7 @@ static void thread_delay(per_thread_mem_t *per_thread_mem, uint32_t iterations)
 /* Initialise per-thread memory */
 static per_thread_mem_t *thread_init(void)
 {
-	global_shared_mem_t *global_mem;
+	global_shared_mem_t *mem;
 	per_thread_mem_t *per_thread_mem;
 	odp_shm_t global_shm;
 	uint32_t per_thread_mem_len;
@@ -122,10 +122,10 @@ static per_thread_mem_t *thread_init(void)
 	per_thread_mem->thread_core = odp_cpu_id();
 
 	global_shm = odp_shm_lookup(GLOBAL_SHM_NAME);
-	global_mem = odp_shm_addr(global_shm);
+	mem = odp_shm_addr(global_shm);
 	CU_ASSERT(global_mem != NULL);
 
-	per_thread_mem->global_mem = global_mem;
+	per_thread_mem->global_mem = mem;
 
 	return per_thread_mem;
 }
@@ -160,13 +160,13 @@ static void custom_barrier_wait(custom_barrier_t *custom_barrier)
 static uint32_t barrier_test(per_thread_mem_t *per_thread_mem,
 			     odp_bool_t no_barrier_test)
 {
-	global_shared_mem_t *global_mem;
+	global_shared_mem_t *mem;
 	uint32_t barrier_errs, iterations, cnt, i_am_slow_thread;
 	uint32_t thread_num, slow_thread_num, next_slow_thread, num_threads;
 	uint32_t lock_owner_delay, barrier_cnt1, barrier_cnt2;
 
 	thread_num = odp_cpu_id() + 1;
-	global_mem = per_thread_mem->global_mem;
+	mem = per_thread_mem->global_mem;
 	num_threads = global_mem->g_num_threads;
 
 	iterations = BARRIER_ITERATIONS;
@@ -175,10 +175,10 @@ static uint32_t barrier_test(per_thread_mem_t *per_thread_mem,
 
 	for (cnt = 1; cnt < iterations; cnt++) {
 		/* Wait here until all of the threads reach this point */
-		custom_barrier_wait(&global_mem->custom_barrier1[cnt]);
+		custom_barrier_wait(&mem->custom_barrier1[cnt]);
 
-		barrier_cnt1 = global_mem->barrier_cnt1;
-		barrier_cnt2 = global_mem->barrier_cnt2;
+		barrier_cnt1 = mem->barrier_cnt1;
+		barrier_cnt2 = mem->barrier_cnt2;
 
 		if ((barrier_cnt1 != cnt) || (barrier_cnt2 != cnt)) {
 			printf("thread_num=%u barrier_cnts of %u %u cnt=%u\n",
@@ -187,9 +187,9 @@ static uint32_t barrier_test(per_thread_mem_t *per_thread_mem,
 		}
 
 		/* Wait here until all of the threads reach this point */
-		custom_barrier_wait(&global_mem->custom_barrier2[cnt]);
+		custom_barrier_wait(&mem->custom_barrier2[cnt]);
 
-		slow_thread_num = global_mem->slow_thread_num;
+		slow_thread_num = mem->slow_thread_num;
 		i_am_slow_thread = thread_num == slow_thread_num;
 		next_slow_thread = slow_thread_num + 1;
 		if (num_threads < next_slow_thread)
@@ -206,30 +206,30 @@ static uint32_t barrier_test(per_thread_mem_t *per_thread_mem,
 		if (i_am_slow_thread) {
 			thread_delay(per_thread_mem, lock_owner_delay);
 			lock_owner_delay += BASE_DELAY;
-			if ((global_mem->barrier_cnt1 != cnt) ||
-			    (global_mem->barrier_cnt2 != cnt) ||
-			    (global_mem->slow_thread_num
+			if ((mem->barrier_cnt1 != cnt) ||
+			    (mem->barrier_cnt2 != cnt) ||
+			    (mem->slow_thread_num
 					!= slow_thread_num))
 				barrier_errs++;
 		}
 
 		if (no_barrier_test == 0)
-			odp_barrier_wait(&global_mem->test_barriers[cnt]);
+			odp_barrier_wait(&mem->test_barriers[cnt]);
 
-		global_mem->barrier_cnt1 = cnt + 1;
+		mem->barrier_cnt1 = cnt + 1;
 		odp_sync_stores();
 
 		if (i_am_slow_thread) {
-			global_mem->slow_thread_num = next_slow_thread;
-			global_mem->barrier_cnt2 = cnt + 1;
+			mem->slow_thread_num = next_slow_thread;
+			mem->barrier_cnt2 = cnt + 1;
 			odp_sync_stores();
 		} else {
-			while (global_mem->barrier_cnt2 != (cnt + 1))
+			while (mem->barrier_cnt2 != (cnt + 1))
 				thread_delay(per_thread_mem, BASE_DELAY);
 		}
 	}
 
-	if ((global_mem->g_verbose) && (barrier_errs != 0))
+	if ((mem->g_verbose) && (barrier_errs != 0))
 		printf("\nThread %u (id=%d core=%d) had %u barrier_errs"
 		       " in %u iterations\n", thread_num,
 		       per_thread_mem->thread_id,
@@ -293,14 +293,14 @@ static void spinlock_api_test(odp_spinlock_t *spinlock)
 
 static void *spinlock_api_tests(void *arg UNUSED)
 {
-	global_shared_mem_t *global_mem;
+	global_shared_mem_t *mem;
 	per_thread_mem_t *per_thread_mem;
 	odp_spinlock_t local_spin_lock;
 
 	per_thread_mem = thread_init();
-	global_mem = per_thread_mem->global_mem;
+	mem = per_thread_mem->global_mem;
 
-	odp_barrier_wait(&global_mem->global_barrier);
+	odp_barrier_wait(&mem->global_barrier);
 
 	spinlock_api_test(&local_spin_lock);
 	spinlock_api_test(&per_thread_mem->per_thread_spinlock);
@@ -331,14 +331,14 @@ static void ticketlock_api_test(odp_ticketlock_t *ticketlock)
 
 static void *ticketlock_api_tests(void *arg UNUSED)
 {
-	global_shared_mem_t *global_mem;
+	global_shared_mem_t *mem;
 	per_thread_mem_t *per_thread_mem;
 	odp_ticketlock_t local_ticket_lock;
 
 	per_thread_mem = thread_init();
-	global_mem = per_thread_mem->global_mem;
+	mem = per_thread_mem->global_mem;
 
-	odp_barrier_wait(&global_mem->global_barrier);
+	odp_barrier_wait(&mem->global_barrier);
 
 	ticketlock_api_test(&local_ticket_lock);
 	ticketlock_api_test(&per_thread_mem->per_thread_ticketlock);
@@ -365,14 +365,14 @@ static void rwlock_api_test(odp_rwlock_t *rw_lock)
 
 static void *rwlock_api_tests(void *arg UNUSED)
 {
-	global_shared_mem_t *global_mem;
+	global_shared_mem_t *mem;
 	per_thread_mem_t *per_thread_mem;
 	odp_rwlock_t local_rwlock;
 
 	per_thread_mem = thread_init();
-	global_mem = per_thread_mem->global_mem;
+	mem = per_thread_mem->global_mem;
 
-	odp_barrier_wait(&global_mem->global_barrier);
+	odp_barrier_wait(&mem->global_barrier);
 
 	rwlock_api_test(&local_rwlock);
 	rwlock_api_test(&per_thread_mem->per_thread_rwlock);
@@ -384,17 +384,17 @@ static void *rwlock_api_tests(void *arg UNUSED)
 
 static void *no_lock_functional_test(void *arg UNUSED)
 {
-	global_shared_mem_t *global_mem;
+	global_shared_mem_t *mem;
 	per_thread_mem_t *per_thread_mem;
 	uint32_t thread_num, resync_cnt, rs_idx, iterations, cnt;
 	uint32_t sync_failures, current_errs, lock_owner_delay;
 
 	thread_num = odp_cpu_id() + 1;
 	per_thread_mem = thread_init();
-	global_mem = per_thread_mem->global_mem;
-	iterations = global_mem->g_iterations;
+	mem = per_thread_mem->global_mem;
+	iterations = mem->g_iterations;
 
-	odp_barrier_wait(&global_mem->global_barrier);
+	odp_barrier_wait(&mem->global_barrier);
 
 	sync_failures = 0;
 	current_errs = 0;
@@ -403,20 +403,20 @@ static void *no_lock_functional_test(void *arg UNUSED)
 	lock_owner_delay = BASE_DELAY;
 
 	for (cnt = 1; cnt <= iterations; cnt++) {
-		global_mem->global_lock_owner = thread_num;
+		mem->global_lock_owner = thread_num;
 		odp_sync_stores();
 		thread_delay(per_thread_mem, lock_owner_delay);
 
-		if (global_mem->global_lock_owner != thread_num) {
+		if (mem->global_lock_owner != thread_num) {
 			current_errs++;
 			sync_failures++;
 		}
 
-		global_mem->global_lock_owner = 0;
+		mem->global_lock_owner = 0;
 		odp_sync_stores();
 		thread_delay(per_thread_mem, MIN_DELAY);
 
-		if (global_mem->global_lock_owner == thread_num) {
+		if (mem->global_lock_owner == thread_num) {
 			current_errs++;
 			sync_failures++;
 		}
@@ -430,10 +430,10 @@ static void *no_lock_functional_test(void *arg UNUSED)
 		/* Try to resync all of the threads to increase contention */
 		if ((rs_idx < NUM_RESYNC_BARRIERS) &&
 		    ((cnt % resync_cnt) == (resync_cnt - 1)))
-			odp_barrier_wait(&global_mem->barrier_array[rs_idx++]);
+			odp_barrier_wait(&mem->barrier_array[rs_idx++]);
 	}
 
-	if (global_mem->g_verbose)
+	if (mem->g_verbose)
 		printf("\nThread %u (id=%d core=%d) had %u sync_failures"
 		       " in %u iterations\n", thread_num,
 		       per_thread_mem->thread_id,
@@ -454,7 +454,7 @@ static void *no_lock_functional_test(void *arg UNUSED)
 
 static void *spinlock_functional_test(void *arg UNUSED)
 {
-	global_shared_mem_t *global_mem;
+	global_shared_mem_t *mem;
 	per_thread_mem_t *per_thread_mem;
 	uint32_t thread_num, resync_cnt, rs_idx, iterations, cnt;
 	uint32_t sync_failures, is_locked_errs, current_errs;
@@ -462,10 +462,10 @@ static void *spinlock_functional_test(void *arg UNUSED)
 
 	thread_num = odp_cpu_id() + 1;
 	per_thread_mem = thread_init();
-	global_mem = per_thread_mem->global_mem;
-	iterations = global_mem->g_iterations;
+	mem = per_thread_mem->global_mem;
+	iterations = mem->g_iterations;
 
-	odp_barrier_wait(&global_mem->global_barrier);
+	odp_barrier_wait(&mem->global_barrier);
 
 	sync_failures = 0;
 	is_locked_errs = 0;
@@ -476,13 +476,13 @@ static void *spinlock_functional_test(void *arg UNUSED)
 
 	for (cnt = 1; cnt <= iterations; cnt++) {
 		/* Acquire the shared global lock */
-		odp_spinlock_lock(&global_mem->global_spinlock);
+		odp_spinlock_lock(&mem->global_spinlock);
 
 		/* Make sure we have the lock AND didn't previously own it */
-		if (odp_spinlock_is_locked(&global_mem->global_spinlock) != 1)
+		if (odp_spinlock_is_locked(&mem->global_spinlock) != 1)
 			is_locked_errs++;
 
-		if (global_mem->global_lock_owner != 0) {
+		if (mem->global_lock_owner != 0) {
 			current_errs++;
 			sync_failures++;
 		}
@@ -491,19 +491,19 @@ static void *spinlock_functional_test(void *arg UNUSED)
 		 * then we see if anyone else has snuck in and changed the
 		 * global_lock_owner to be themselves
 		 */
-		global_mem->global_lock_owner = thread_num;
+		mem->global_lock_owner = thread_num;
 		odp_sync_stores();
 		thread_delay(per_thread_mem, lock_owner_delay);
-		if (global_mem->global_lock_owner != thread_num) {
+		if (mem->global_lock_owner != thread_num) {
 			current_errs++;
 			sync_failures++;
 		}
 
 		/* Release shared lock, and make sure we no longer have it */
-		global_mem->global_lock_owner = 0;
+		mem->global_lock_owner = 0;
 		odp_sync_stores();
-		odp_spinlock_unlock(&global_mem->global_spinlock);
-		if (global_mem->global_lock_owner == thread_num) {
+		odp_spinlock_unlock(&mem->global_spinlock);
+		if (mem->global_lock_owner == thread_num) {
 			current_errs++;
 			sync_failures++;
 		}
@@ -517,10 +517,10 @@ static void *spinlock_functional_test(void *arg UNUSED)
 		/* Try to resync all of the threads to increase contention */
 		if ((rs_idx < NUM_RESYNC_BARRIERS) &&
 		    ((cnt % resync_cnt) == (resync_cnt - 1)))
-			odp_barrier_wait(&global_mem->barrier_array[rs_idx++]);
+			odp_barrier_wait(&mem->barrier_array[rs_idx++]);
 	}
 
-	if ((global_mem->g_verbose) &&
+	if ((mem->g_verbose) &&
 	    ((sync_failures != 0) || (is_locked_errs != 0)))
 		printf("\nThread %u (id=%d core=%d) had %u sync_failures"
 		       " and %u is_locked_errs in %u iterations\n", thread_num,
@@ -537,7 +537,7 @@ static void *spinlock_functional_test(void *arg UNUSED)
 
 static void *ticketlock_functional_test(void *arg UNUSED)
 {
-	global_shared_mem_t *global_mem;
+	global_shared_mem_t *mem;
 	per_thread_mem_t *per_thread_mem;
 	uint32_t thread_num, resync_cnt, rs_idx, iterations, cnt;
 	uint32_t sync_failures, is_locked_errs, current_errs;
@@ -545,11 +545,11 @@ static void *ticketlock_functional_test(void *arg UNUSED)
 
 	thread_num = odp_cpu_id() + 1;
 	per_thread_mem = thread_init();
-	global_mem = per_thread_mem->global_mem;
-	iterations = global_mem->g_iterations;
+	mem = per_thread_mem->global_mem;
+	iterations = mem->g_iterations;
 
 	/* Wait here until all of the threads have also reached this point */
-	odp_barrier_wait(&global_mem->global_barrier);
+	odp_barrier_wait(&mem->global_barrier);
 
 	sync_failures = 0;
 	is_locked_errs = 0;
@@ -560,14 +560,14 @@ static void *ticketlock_functional_test(void *arg UNUSED)
 
 	for (cnt = 1; cnt <= iterations; cnt++) {
 		/* Acquire the shared global lock */
-		odp_ticketlock_lock(&global_mem->global_ticketlock);
+		odp_ticketlock_lock(&mem->global_ticketlock);
 
 		/* Make sure we have the lock AND didn't previously own it */
-		if (odp_ticketlock_is_locked(&global_mem->global_ticketlock)
+		if (odp_ticketlock_is_locked(&mem->global_ticketlock)
 				!= 1)
 			is_locked_errs++;
 
-		if (global_mem->global_lock_owner != 0) {
+		if (mem->global_lock_owner != 0) {
 			current_errs++;
 			sync_failures++;
 		}
@@ -576,19 +576,19 @@ static void *ticketlock_functional_test(void *arg UNUSED)
 		 * then we see if anyone else has snuck in and changed the
 		 * global_lock_owner to be themselves
 		 */
-		global_mem->global_lock_owner = thread_num;
+		mem->global_lock_owner = thread_num;
 		odp_sync_stores();
 		thread_delay(per_thread_mem, lock_owner_delay);
-		if (global_mem->global_lock_owner != thread_num) {
+		if (mem->global_lock_owner != thread_num) {
 			current_errs++;
 			sync_failures++;
 		}
 
 		/* Release shared lock, and make sure we no longer have it */
-		global_mem->global_lock_owner = 0;
+		mem->global_lock_owner = 0;
 		odp_sync_stores();
-		odp_ticketlock_unlock(&global_mem->global_ticketlock);
-		if (global_mem->global_lock_owner == thread_num) {
+		odp_ticketlock_unlock(&mem->global_ticketlock);
+		if (mem->global_lock_owner == thread_num) {
 			current_errs++;
 			sync_failures++;
 		}
@@ -602,10 +602,10 @@ static void *ticketlock_functional_test(void *arg UNUSED)
 		/* Try to resync all of the threads to increase contention */
 		if ((rs_idx < NUM_RESYNC_BARRIERS) &&
 		    ((cnt % resync_cnt) == (resync_cnt - 1)))
-			odp_barrier_wait(&global_mem->barrier_array[rs_idx++]);
+			odp_barrier_wait(&mem->barrier_array[rs_idx++]);
 	}
 
-	if ((global_mem->g_verbose) &&
+	if ((mem->g_verbose) &&
 	    ((sync_failures != 0) || (is_locked_errs != 0)))
 		printf("\nThread %u (id=%d core=%d) had %u sync_failures"
 		       " and %u is_locked_errs in %u iterations\n", thread_num,
@@ -622,18 +622,18 @@ static void *ticketlock_functional_test(void *arg UNUSED)
 
 static void *rwlock_functional_test(void *arg UNUSED)
 {
-	global_shared_mem_t *global_mem;
+	global_shared_mem_t *mem;
 	per_thread_mem_t *per_thread_mem;
 	uint32_t thread_num, resync_cnt, rs_idx, iterations, cnt;
 	uint32_t sync_failures, current_errs, lock_owner_delay;
 
 	thread_num = odp_cpu_id() + 1;
 	per_thread_mem = thread_init();
-	global_mem = per_thread_mem->global_mem;
-	iterations = global_mem->g_iterations;
+	mem = per_thread_mem->global_mem;
+	iterations = mem->g_iterations;
 
 	/* Wait here until all of the threads have also reached this point */
-	odp_barrier_wait(&global_mem->global_barrier);
+	odp_barrier_wait(&mem->global_barrier);
 
 	sync_failures = 0;
 	current_errs = 0;
@@ -643,10 +643,10 @@ static void *rwlock_functional_test(void *arg UNUSED)
 
 	for (cnt = 1; cnt <= iterations; cnt++) {
 		/* Acquire the shared global lock */
-		odp_rwlock_write_lock(&global_mem->global_rwlock);
+		odp_rwlock_write_lock(&mem->global_rwlock);
 
 		/* Make sure we have lock now AND didn't previously own it */
-		if (global_mem->global_lock_owner != 0) {
+		if (mem->global_lock_owner != 0) {
 			current_errs++;
 			sync_failures++;
 		}
@@ -655,19 +655,19 @@ static void *rwlock_functional_test(void *arg UNUSED)
 		 * then we see if anyone else has snuck in and changed the
 		 * global_lock_owner to be themselves
 		 */
-		global_mem->global_lock_owner = thread_num;
+		mem->global_lock_owner = thread_num;
 		odp_sync_stores();
 		thread_delay(per_thread_mem, lock_owner_delay);
-		if (global_mem->global_lock_owner != thread_num) {
+		if (mem->global_lock_owner != thread_num) {
 			current_errs++;
 			sync_failures++;
 		}
 
 		/* Release shared lock, and make sure we no longer have it */
-		global_mem->global_lock_owner = 0;
+		mem->global_lock_owner = 0;
 		odp_sync_stores();
-		odp_rwlock_write_unlock(&global_mem->global_rwlock);
-		if (global_mem->global_lock_owner == thread_num) {
+		odp_rwlock_write_unlock(&mem->global_rwlock);
+		if (mem->global_lock_owner == thread_num) {
 			current_errs++;
 			sync_failures++;
 		}
@@ -681,10 +681,10 @@ static void *rwlock_functional_test(void *arg UNUSED)
 		/* Try to resync all of the threads to increase contention */
 		if ((rs_idx < NUM_RESYNC_BARRIERS) &&
 		    ((cnt % resync_cnt) == (resync_cnt - 1)))
-			odp_barrier_wait(&global_mem->barrier_array[rs_idx++]);
+			odp_barrier_wait(&mem->barrier_array[rs_idx++]);
 	}
 
-	if ((global_mem->g_verbose) && (sync_failures != 0))
+	if ((mem->g_verbose) && (sync_failures != 0))
 		printf("\nThread %u (id=%d core=%d) had %u sync_failures"
 		       " in %u iterations\n", thread_num,
 		       per_thread_mem->thread_id,