From patchwork Thu Dec 29 07:57:17 2016
X-Patchwork-Submitter: Christophe Milard
X-Patchwork-Id: 89249
Delivered-To: patch@linaro.org
From: Christophe Milard <christophe.milard@linaro.org>
To: mike.holmes@linaro.org, bill.fischofer@linaro.org, yi.he@linaro.org,
 forrest.shi@linaro.org, lng-odp@lists.linaro.org
Date: Thu, 29 Dec 2016 08:57:17 +0100
Message-Id: <1482998237-36552-7-git-send-email-christophe.milard@linaro.org>
X-Mailer: git-send-email 2.7.4
In-Reply-To: <1482998237-36552-1-git-send-email-christophe.milard@linaro.org>
References: <1482998237-36552-1-git-send-email-christophe.milard@linaro.org>
Subject: [lng-odp] [API-NEXT PATCHv2 6/6] test: drv: shm: adding buddy
 allocation stress tests
List-Id: "The OpenDataPlane \(ODP\) List"

Stress tests for the random-size allocator (the buddy allocator in
linux-generic) are added here.
Signed-off-by: Christophe Milard <christophe.milard@linaro.org>
---
 .../common_plat/validation/drv/drvshmem/drvshmem.c | 177 +++++++++++++++++++++
 .../common_plat/validation/drv/drvshmem/drvshmem.h |   1 +
 2 files changed, 178 insertions(+)

-- 
2.7.4

diff --git a/test/common_plat/validation/drv/drvshmem/drvshmem.c b/test/common_plat/validation/drv/drvshmem/drvshmem.c
index d4dedea..0f882ae 100644
--- a/test/common_plat/validation/drv/drvshmem/drvshmem.c
+++ b/test/common_plat/validation/drv/drvshmem/drvshmem.c
@@ -938,6 +938,182 @@ void drvshmem_test_slab_basic(void)
 	odpdrv_shm_pool_destroy(pool);
 }
 
+/*
+ * thread part for the drvshmem_test_buddy_stress test
+ */
+static int run_test_buddy_stress(void *arg ODP_UNUSED)
+{
+	odpdrv_shm_t shm;
+	odpdrv_shm_pool_t pool;
+	uint8_t *address;
+	shared_test_data_t *glob_data;
+	uint8_t random_bytes[STRESS_RANDOM_SZ];
+	uint32_t index;
+	uint32_t size;
+	uint8_t data;
+	uint32_t iter;
+	uint32_t i;
+
+	shm = odpdrv_shm_lookup_by_name(MEM_NAME);
+	glob_data = odpdrv_shm_addr(shm);
+	CU_ASSERT_PTR_NOT_NULL(glob_data);
+
+	/* get the pool to test */
+	pool = odpdrv_shm_pool_lookup(POOL_NAME);
+
+	/* wait for general GO! */
+	odpdrv_barrier_wait(&glob_data->test_barrier1);
+
+	/*
+	 * at each iteration: pick a random index for
+	 * glob_data->stress[index]: if the entry is free, allocate a small
+	 * block of random size. If it is already allocated, check its
+	 * contents and free it.
+	 * Note that different threads may allocate or free a given block.
+	 */
+	for (iter = 0; iter < STRESS_ITERATION; iter++) {
+		/* get 4 random bytes from which index, size, align, flags
+		 * and data will be derived:
+		 */
+		odp_random_data(random_bytes, STRESS_RANDOM_SZ, 0);
+		index = random_bytes[0] & (STRESS_SIZE - 1);
+
+		odp_spinlock_lock(&glob_data->stress_lock);
+
+		switch (glob_data->stress[index].state) {
+		case STRESS_FREE:
+			/* allocate a new block for this entry */
+
+			glob_data->stress[index].state = STRESS_BUSY;
+			odp_spinlock_unlock(&glob_data->stress_lock);
+
+			size = (random_bytes[1] + 1) << 4; /* up to 4KB */
+			data = random_bytes[2];
+
+			address = odpdrv_shm_pool_alloc(pool, size);
+			glob_data->stress[index].address = address;
+			if (address == NULL) { /* out of mem ? */
+				odp_spinlock_lock(&glob_data->stress_lock);
+				glob_data->stress[index].state = STRESS_ALLOC;
+				odp_spinlock_unlock(&glob_data->stress_lock);
+				continue;
+			}
+
+			glob_data->stress[index].size = size;
+			glob_data->stress[index].data_val = data;
+
+			/* write some data: */
+			for (i = 0; i < size; i++)
+				address[i] = (data++) & 0xFF;
+			odp_spinlock_lock(&glob_data->stress_lock);
+			glob_data->stress[index].state = STRESS_ALLOC;
+			odp_spinlock_unlock(&glob_data->stress_lock);
+
+			break;
+
+		case STRESS_ALLOC:
+			/* free the block for this entry */
+
+			glob_data->stress[index].state = STRESS_BUSY;
+			odp_spinlock_unlock(&glob_data->stress_lock);
+			address = glob_data->stress[index].address;
+
+			if (address == NULL) { /* allocation failed earlier? */
+				odp_spinlock_lock(&glob_data->stress_lock);
+				glob_data->stress[index].state = STRESS_FREE;
+				odp_spinlock_unlock(&glob_data->stress_lock);
+				continue;
+			}
+
+			/* check that data is reachable and correct: */
+			data = glob_data->stress[index].data_val;
+			size = glob_data->stress[index].size;
+			for (i = 0; i < size; i++) {
+				CU_ASSERT(address[i] == (data & 0xFF));
+				data++;
+			}
+
+			odpdrv_shm_pool_free(pool, address);
+
+			odp_spinlock_lock(&glob_data->stress_lock);
+			glob_data->stress[index].state = STRESS_FREE;
+			odp_spinlock_unlock(&glob_data->stress_lock);
+
+			break;
+
+		case STRESS_BUSY:
+		default:
+			odp_spinlock_unlock(&glob_data->stress_lock);
+			break;
+		}
+	}
+
+	fflush(stdout);
+	return CU_get_number_of_failures();
+}
+
+/*
+ * stress tests
+ */
+void drvshmem_test_buddy_stress(void)
+{
+	odpdrv_shm_pool_param_t pool_params;
+	odpdrv_shm_pool_t pool;
+	pthrd_arg thrdarg;
+	odpdrv_shm_t shm;
+	shared_test_data_t *glob_data;
+	odp_cpumask_t unused;
+	uint32_t i;
+	uint8_t *address;
+
+	/* create a pool and check that it can be looked up */
+	pool_params.pool_size = POOL_SZ;
+	pool_params.min_alloc = 0;
+	pool_params.max_alloc = POOL_SZ;
+	pool = odpdrv_shm_pool_create(POOL_NAME, &pool_params);
+	odpdrv_shm_pool_print("Stress test start", pool);
+
+	shm = odpdrv_shm_reserve(MEM_NAME, sizeof(shared_test_data_t),
+				 0, ODPDRV_SHM_LOCK);
+	CU_ASSERT(ODPDRV_SHM_INVALID != shm);
+	glob_data = odpdrv_shm_addr(shm);
+	CU_ASSERT_PTR_NOT_NULL(glob_data);
+
+	thrdarg.numthrds = odp_cpumask_default_worker(&unused, 0);
+	if (thrdarg.numthrds > MAX_WORKERS)
+		thrdarg.numthrds = MAX_WORKERS;
+
+	glob_data->nb_threads = thrdarg.numthrds;
+	odpdrv_barrier_init(&glob_data->test_barrier1, thrdarg.numthrds);
+	odp_spinlock_init(&glob_data->stress_lock);
+
+	/* before starting the threads, mark all entries as free: */
+	for (i = 0; i < STRESS_SIZE; i++)
+		glob_data->stress[i].state = STRESS_FREE;
+
+	/* create threads */
+	odp_cunit_thread_create(run_test_buddy_stress, &thrdarg);
+
+	/* wait for all threads to end: */
+	CU_ASSERT(odp_cunit_thread_exit(&thrdarg) >= 0);
+
+	odpdrv_shm_pool_print("Stress test all threads finished", pool);
+
+	/* release leftovers: */
+	for (i = 0; i < STRESS_SIZE; i++) {
+		address = glob_data->stress[i].address;
+		if (glob_data->stress[i].state == STRESS_ALLOC)
+			odpdrv_shm_pool_free(pool, address);
+	}
+
+	CU_ASSERT(0 == odpdrv_shm_free_by_name(MEM_NAME));
+
+	/* check that no memory is left over: */
+	odpdrv_shm_pool_print("Stress test all released", pool);
+
+	/* destroy pool: */
+	odpdrv_shm_pool_destroy(pool);
+}
+
 odp_testinfo_t drvshmem_suite[] = {
 	ODP_TEST_INFO(drvshmem_test_basic),
 	ODP_TEST_INFO(drvshmem_test_reserve_after_fork),
@@ -945,6 +1121,7 @@ odp_testinfo_t drvshmem_suite[] = {
 	ODP_TEST_INFO(drvshmem_test_stress),
 	ODP_TEST_INFO(drvshmem_test_buddy_basic),
 	ODP_TEST_INFO(drvshmem_test_slab_basic),
+	ODP_TEST_INFO(drvshmem_test_buddy_stress),
 	ODP_TEST_INFO_NULL,
 };

diff --git a/test/common_plat/validation/drv/drvshmem/drvshmem.h b/test/common_plat/validation/drv/drvshmem/drvshmem.h
index fdc1080..817b3d5 100644
--- a/test/common_plat/validation/drv/drvshmem/drvshmem.h
+++ b/test/common_plat/validation/drv/drvshmem/drvshmem.h
@@ -16,6 +16,7 @@ void drvshmem_test_singleva_after_fork(void);
 void drvshmem_test_stress(void);
 void drvshmem_test_buddy_basic(void);
 void drvshmem_test_slab_basic(void);
+void drvshmem_test_buddy_stress(void);
 
 /* test arrays: */
 extern odp_testinfo_t drvshmem_suite[];