From patchwork Sat Aug 20 07:46:03 2016
X-Patchwork-Submitter: Christophe Milard
X-Patchwork-Id: 74370
From: Christophe Milard <christophe.milard@linaro.org>
To: bill.fischofer@linaro.org, mike.holmes@linaro.org, lng-odp@lists.linaro.org
Date: Sat, 20 Aug 2016 09:46:03 +0200
Message-Id: <1471679163-17240-14-git-send-email-christophe.milard@linaro.org>
In-Reply-To: <1471679163-17240-1-git-send-email-christophe.milard@linaro.org>
References: <1471679163-17240-1-git-send-email-christophe.milard@linaro.org>
Subject: [lng-odp] [API-NEXT PATCHv3 13/13] test: validation: drv: shmem: stress tests

Stress tests that randomly allocate memory are added: the test is based on a group of odp threads allocating, mapping and freeing each other's memory.
Signed-off-by: Christophe Milard <christophe.milard@linaro.org>
---
 .../common_plat/validation/drv/drvshmem/drvshmem.c | 222 +++++++++++++++++++++
 .../common_plat/validation/drv/drvshmem/drvshmem.h |   1 +
 2 files changed, 223 insertions(+)

-- 
2.7.4

diff --git a/test/common_plat/validation/drv/drvshmem/drvshmem.c b/test/common_plat/validation/drv/drvshmem/drvshmem.c
index 9ca81fc..5e6d2e5 100644
--- a/test/common_plat/validation/drv/drvshmem/drvshmem.c
+++ b/test/common_plat/validation/drv/drvshmem/drvshmem.c
@@ -17,6 +17,25 @@
 #define SMALL_MEM 10
 #define MEDIUM_MEM 4096
 #define BIG_MEM 16777216
+#define STRESS_SIZE 32		/* power of 2 and <=256 */
+#define STRESS_RANDOM_SZ 5
+#define STRESS_ITERATION 5000
+
+typedef enum {
+	STRESS_FREE,	/* entry is free and can be allocated */
+	STRESS_BUSY,	/* entry is being processed: don't touch */
+	STRESS_ALLOC	/* entry is allocated and can be freed */
+} stress_state_t;
+
+typedef struct {
+	stress_state_t state;
+	odpdrv_shm_t shm;
+	void *address;
+	uint32_t flags;
+	uint32_t size;
+	uint64_t align;
+	uint8_t data_val;
+} stress_data_t;
 
 typedef struct {
 	odpdrv_barrier_t test_barrier1;
@@ -29,6 +48,8 @@ typedef struct {
 	uint32_t nb_threads;
 	odpdrv_shm_t shm[MAX_WORKERS];
 	void *address[MAX_WORKERS];
+	odp_spinlock_t stress_lock;
+	stress_data_t stress[STRESS_SIZE];
 } shared_test_data_t;
 
 /* memory stuff expected to fit in a single page */
@@ -543,10 +564,211 @@ void drvshmem_test_singleva_after_fork(void)
 	CU_ASSERT(odpdrv_shm_print_all("Test completion") == base);
 }
 
+/*
+ * thread part for the drvshmem_test_stress
+ */
+static int run_test_stress(void *arg ODP_UNUSED)
+{
+	odpdrv_shm_t shm;
+	uint8_t *address;
+	shared_test_data_t *glob_data;
+	uint8_t random_bytes[STRESS_RANDOM_SZ];
+	uint32_t index;
+	uint32_t size;
+	uint64_t align;
+	uint32_t flags;
+	uint8_t data;
+	uint32_t iter;
+	uint32_t i;
+
+	shm = odpdrv_shm_lookup_by_name(MEM_NAME);
+	glob_data = odpdrv_shm_addr(shm);
+	CU_ASSERT_PTR_NOT_NULL(glob_data);
+
+	/* wait for general GO! */
+	odpdrv_barrier_wait(&glob_data->test_barrier1);
+
+	/*
+	 * at each iteration: pick up a random index for
+	 * glob_data->stress[index]: If the entry is free, allocate mem
+	 * randomly. If it is already allocated, make checks and free it:
+	 * Note that different threads can allocate or free a given block
+	 */
+	for (iter = 0; iter < STRESS_ITERATION; iter++) {
+		/* get 5 random bytes from which index, size, align, flags
+		 * and data will be derived:
+		 */
+		odp_random_data(random_bytes, STRESS_RANDOM_SZ, 0);
+		index = random_bytes[0] & (STRESS_SIZE - 1);
+
+		odp_spinlock_lock(&glob_data->stress_lock);
+
+		switch (glob_data->stress[index].state) {
+		case STRESS_FREE:
+			/* allocate a new block for this entry */
+
+			glob_data->stress[index].state = STRESS_BUSY;
+			odp_spinlock_unlock(&glob_data->stress_lock);
+
+			size = (random_bytes[1] + 1) << 6; /* up to 16Kb */
+			/* we just play with the VA flag. randomly setting
+			 * the mlock flag may exceed user ulimit -l
+			 */
+			flags = random_bytes[2] & ODPDRV_SHM_SINGLE_VA;
+			align = (random_bytes[3] + 1) << 6;/* up to 16Kb */
+			data = random_bytes[4];
+
+			shm = odpdrv_shm_reserve(NULL, size, align, flags);
+			glob_data->stress[index].shm = shm;
+			if (shm == ODPDRV_SHM_INVALID) { /* out of mem ? */
+				odp_spinlock_lock(&glob_data->stress_lock);
+				glob_data->stress[index].state = STRESS_ALLOC;
+				odp_spinlock_unlock(&glob_data->stress_lock);
+				continue;
+			}
+
+			address = odpdrv_shm_addr(shm);
+			CU_ASSERT_PTR_NOT_NULL(address);
+			glob_data->stress[index].address = address;
+			glob_data->stress[index].flags = flags;
+			glob_data->stress[index].size = size;
+			glob_data->stress[index].align = align;
+			glob_data->stress[index].data_val = data;
+
+			/* write some data: writing each byte would be a
+			 * waste of time: just make sure each page is reached */
+			for (i = 0; i < size; i += 256)
+				address[i] = (data++) & 0xFF;
+			odp_spinlock_lock(&glob_data->stress_lock);
+			glob_data->stress[index].state = STRESS_ALLOC;
+			odp_spinlock_unlock(&glob_data->stress_lock);
+
+			break;
+
+		case STRESS_ALLOC:
+			/* free the block for this entry */
+
+			glob_data->stress[index].state = STRESS_BUSY;
+			odp_spinlock_unlock(&glob_data->stress_lock);
+			shm = glob_data->stress[index].shm;
+
+			if (shm == ODPDRV_SHM_INVALID) { /* out of mem ? */
+				odp_spinlock_lock(&glob_data->stress_lock);
+				glob_data->stress[index].state = STRESS_FREE;
+				odp_spinlock_unlock(&glob_data->stress_lock);
+				continue;
+			}
+
+			CU_ASSERT(odpdrv_shm_lookup_by_handle(shm) != 0);
+
+			address = odpdrv_shm_addr(shm);
+			CU_ASSERT_PTR_NOT_NULL(address);
+
+			align = glob_data->stress[index].align;
+			if (align) {
+				CU_ASSERT(((uintptr_t)address & (align - 1))
+					  == 0);
+			}
+
+			flags = glob_data->stress[index].flags;
+			if (flags & ODPDRV_SHM_SINGLE_VA)
+				CU_ASSERT(glob_data->stress[index].address ==
+					  address);
+
+			/* check that data is reachable and correct: */
+			data = glob_data->stress[index].data_val;
+			size = glob_data->stress[index].size;
+			for (i = 0; i < size; i += 256) {
+				CU_ASSERT(address[i] == (data & 0xFF));
+				data++;
+			}
+
+			if (flags & ODPDRV_SHM_SINGLE_VA) {
+				CU_ASSERT(!odpdrv_shm_free_by_address(address));
+			} else {
+				CU_ASSERT(!odpdrv_shm_free_by_handle(shm));
+			}
+
+			odp_spinlock_lock(&glob_data->stress_lock);
+			glob_data->stress[index].state = STRESS_FREE;
+			odp_spinlock_unlock(&glob_data->stress_lock);
+
+			break;
+
+		case STRESS_BUSY:
+		default:
+			odp_spinlock_unlock(&glob_data->stress_lock);
+			break;
+		}
+	}
+
+	fflush(stdout);
+	return CU_get_number_of_failures();
+}
+
+/*
+ * stress tests
+ */
+void drvshmem_test_stress(void)
+{
+	pthrd_arg thrdarg;
+	odpdrv_shm_t shm;
+	shared_test_data_t *glob_data;
+	odp_cpumask_t unused;
+	int base; /* number of blocks already allocated at start of test */
+	uint32_t i;
+
+	base = odpdrv_shm_print_all("Before thread tests");
+
+	shm = odpdrv_shm_reserve(MEM_NAME, sizeof(shared_test_data_t),
+				 0, ODPDRV_SHM_LOCK);
+	CU_ASSERT(ODPDRV_SHM_INVALID != shm);
+	glob_data = odpdrv_shm_addr(shm);
+	CU_ASSERT_PTR_NOT_NULL(glob_data);
+
+	thrdarg.numthrds = odp_cpumask_default_worker(&unused, 0);
+	if (thrdarg.numthrds > MAX_WORKERS)
+		thrdarg.numthrds = MAX_WORKERS;
+
+	glob_data->nb_threads = thrdarg.numthrds;
+	odpdrv_barrier_init(&glob_data->test_barrier1, thrdarg.numthrds);
+	odp_spinlock_init(&glob_data->stress_lock);
+
+	/* before starting the threads, mark all entries as free: */
+	for (i = 0; i < STRESS_SIZE; i++)
+		glob_data->stress[i].state = STRESS_FREE;
+
+	/* create threads */
+	odp_cunit_thread_create(run_test_stress, &thrdarg);
+
+	/* wait for all threads to end: */
+	CU_ASSERT(odp_cunit_thread_exit(&thrdarg) >= 0);
+
+	odpdrv_shm_print_all("Middle");
+
+	/* release leftovers: */
+	for (i = 0; i < STRESS_SIZE; i++) {
+		shm = glob_data->stress[i].shm;
+		if ((glob_data->stress[i].state == STRESS_ALLOC) &&
+		    (glob_data->stress[i].shm != ODPDRV_SHM_INVALID)) {
+			CU_ASSERT(odpdrv_shm_lookup_by_handle(shm) !=
+				  NULL);
+			CU_ASSERT(!odpdrv_shm_free_by_handle(shm));
+		}
+	}
+
+	CU_ASSERT(0 == odpdrv_shm_free_by_name(MEM_NAME));
+
+	/* check that no memory is left over: */
+	CU_ASSERT(odpdrv_shm_print_all("After stress tests") == base);
+}
+
 odp_testinfo_t drvshmem_suite[] = {
 	ODP_TEST_INFO(drvshmem_test_basic),
 	ODP_TEST_INFO(drvshmem_test_reserve_after_fork),
 	ODP_TEST_INFO(drvshmem_test_singleva_after_fork),
+	ODP_TEST_INFO(drvshmem_test_stress),
 	ODP_TEST_INFO_NULL,
 };
diff --git a/test/common_plat/validation/drv/drvshmem/drvshmem.h b/test/common_plat/validation/drv/drvshmem/drvshmem.h
index 3f9f96e..f4c26a1 100644
--- a/test/common_plat/validation/drv/drvshmem/drvshmem.h
+++ b/test/common_plat/validation/drv/drvshmem/drvshmem.h
@@ -13,6 +13,7 @@
 void drvshmem_test_basic(void);
 void drvshmem_test_reserve_after_fork(void);
 void drvshmem_test_singleva_after_fork(void);
+void drvshmem_test_stress(void);
 
 /* test arrays: */
 extern odp_testinfo_t drvshmem_suite[];