From patchwork Thu Sep 3 00:07:21 2015
X-Patchwork-Submitter: Bill Fischofer <bill.fischofer@linaro.org>
X-Patchwork-Id: 53003
From: Bill Fischofer <bill.fischofer@linaro.org>
To: lng-odp@lists.linaro.org, maxim.uvarov@linaro.org
Date: Wed, 2 Sep 2015 19:07:21 -0500
Message-Id: <1441238841-25105-8-git-send-email-bill.fischofer@linaro.org>
X-Mailer: git-send-email 2.1.4
In-Reply-To: <1441238841-25105-1-git-send-email-bill.fischofer@linaro.org>
References: <1441238841-25105-1-git-send-email-bill.fischofer@linaro.org>
Subject: [lng-odp] [PATCH 7/7] validation: schedule: add coverage for new scheduler APIs

Add validation test coverage for the following APIs:

odp_schedule_prefetch()
odp_schedule_group_destroy()
odp_schedule_order_lock_init()
odp_schedule_order_lock()
odp_schedule_order_unlock()

Signed-off-by: Bill Fischofer <bill.fischofer@linaro.org>
---
 test/validation/scheduler/scheduler.c | 138 +++++++++++++++++++++++++++++++++-
 1 file changed, 136 insertions(+), 2 deletions(-)

diff --git a/test/validation/scheduler/scheduler.c b/test/validation/scheduler/scheduler.c
index 2ca8b21..1874889 100644
--- a/test/validation/scheduler/scheduler.c
+++ b/test/validation/scheduler/scheduler.c
@@ -20,6 +20,7 @@
 
 #define GLOBALS_SHM_NAME	"test_globals"
 #define MSG_POOL_NAME		"msg_pool"
+#define QUEUE_CTX_POOL_NAME	"queue_ctx_pool"
 #define SHM_MSG_POOL_NAME	"shm_msg_pool"
 #define SHM_THR_ARGS_NAME	"shm_thr_args"
 
@@ -59,7 +60,19 @@
 	int enable_excl_atomic;
 } thread_args_t;
 
+typedef struct {
+	uint64_t sequence;
+} buf_contents;
+
+typedef struct {
+	odp_buffer_t ctx_handle;
+	uint64_t sequence;
+	uint64_t lock_sequence;
+	odp_schedule_order_lock_t order_lock;
+} queue_context;
+
 odp_pool_t pool;
+odp_pool_t queue_ctx_pool;
 
 static int exit_schedule_loop(void)
 {
@@ -268,7 +281,7 @@
 	qp.sched.group = mygrp1;
 
 	/* Create and populate a group in group 1 */
-	queue_grp1 = odp_queue_create("sched_group_test_queue 1",
+	queue_grp1 = odp_queue_create("sched_group_test_queue_1",
 				      ODP_QUEUE_TYPE_SCHED, &qp);
 	CU_ASSERT_FATAL(queue_grp1 != ODP_QUEUE_INVALID);
 	CU_ASSERT_FATAL(odp_queue_sched_group(queue_grp1) == mygrp1);
@@ -327,6 +340,12 @@
 	rc = odp_schedule_group_join(mygrp1, &mymask);
 	CU_ASSERT_FATAL(rc == 0);
 
+	/* Tell scheduler we're about to request an event.
+	 * Not needed, but a convenient place to test this API.
+	 */
+	odp_schedule_prefetch(1);
+
+	/* Now get the event from Queue 1 */
 	ev = odp_schedule(&from, ODP_SCHED_WAIT);
 	CU_ASSERT_FATAL(ev != ODP_EVENT_INVALID);
 	CU_ASSERT_FATAL(from == queue_grp1);
@@ -348,8 +367,16 @@
 		/* Done with queues for this round */
 		CU_ASSERT_FATAL(odp_queue_destroy(queue_grp1) == 0);
 		CU_ASSERT_FATAL(odp_queue_destroy(queue_grp2) == 0);
+
+		/* Verify we can no longer find our queues */
+		CU_ASSERT_FATAL(odp_queue_lookup("sched_group_test_queue_1") ==
+				ODP_QUEUE_INVALID);
+		CU_ASSERT_FATAL(odp_queue_lookup("sched_group_test_queue_2") ==
+				ODP_QUEUE_INVALID);
 	}
 
+	CU_ASSERT_FATAL(odp_schedule_group_destroy(mygrp1) == 0);
+	CU_ASSERT_FATAL(odp_schedule_group_destroy(mygrp2) == 0);
 	CU_ASSERT_FATAL(odp_pool_destroy(p) == 0);
 }
 
@@ -358,6 +385,8 @@ static void *schedule_common_(void *arg)
 	thread_args_t *args = (thread_args_t *)arg;
 	odp_schedule_sync_t sync;
 	test_globals_t *globals;
+	queue_context *qctx;
+	buf_contents *bctx;
 
 	globals = args->globals;
 	sync = args->sync;
@@ -389,6 +418,17 @@
 			if (num == 0)
 				continue;
 
+			if (sync == ODP_SCHED_SYNC_ORDERED) {
+				qctx = odp_queue_context(from);
+				bctx = odp_buffer_addr(
+					odp_buffer_from_event(events[0]));
+				odp_schedule_order_lock(&qctx->order_lock);
+				CU_ASSERT(bctx->sequence ==
+					  qctx->lock_sequence);
+				qctx->lock_sequence += num;
+				odp_schedule_order_unlock(&qctx->order_lock);
+			}
+
 			for (j = 0; j < num; j++)
 				odp_event_free(events[j]);
 		} else {
@@ -397,6 +437,15 @@
 			if (buf == ODP_BUFFER_INVALID)
 				continue;
 			num = 1;
+			if (sync == ODP_SCHED_SYNC_ORDERED) {
+				qctx = odp_queue_context(from);
+				bctx = odp_buffer_addr(buf);
+				odp_schedule_order_lock(&qctx->order_lock);
+				CU_ASSERT(bctx->sequence ==
+					  qctx->lock_sequence);
+				qctx->lock_sequence += num;
+				odp_schedule_order_unlock(&qctx->order_lock);
+			}
 			odp_buffer_free(buf);
 		}
 
@@ -484,6 +533,13 @@
 				buf = odp_buffer_alloc(pool);
 				CU_ASSERT_FATAL(buf != ODP_BUFFER_INVALID);
 				ev = odp_buffer_to_event(buf);
+				if (sync == ODP_SCHED_SYNC_ORDERED) {
+					queue_context *qctx =
+						odp_queue_context(queue);
+					buf_contents *bctx =
+						odp_buffer_addr(buf);
+					bctx->sequence = qctx->sequence++;
+				}
 				if (!(CU_ASSERT(odp_queue_enq(queue, ev) == 0)))
 					odp_buffer_free(buf);
 				else
@@ -495,6 +551,32 @@
 	globals->buf_count = buf_count;
 }
 
+static void reset_queues(thread_args_t *args)
+{
+	int i, j, k;
+	int num_prio = args->num_prio;
+	int num_queues = args->num_queues;
+	char name[32];
+
+	for (i = 0; i < num_prio; i++) {
+		for (j = 0; j < num_queues; j++) {
+			odp_queue_t queue;
+
+			snprintf(name, sizeof(name),
+				 "sched_%d_%d_o", i, j);
+			queue = odp_queue_lookup(name);
+			CU_ASSERT_FATAL(queue != ODP_QUEUE_INVALID);
+
+			for (k = 0; k < args->num_bufs; k++) {
+				queue_context *qctx =
+					odp_queue_context(queue);
+				qctx->sequence = 0;
+				qctx->lock_sequence = 0;
+			}
+		}
+	}
+}
+
 static void schedule_common(odp_schedule_sync_t sync, int num_queues,
 			    int num_prio, int enable_schd_multi)
 {
@@ -519,6 +601,8 @@
 
 	fill_queues(&args);
 	schedule_common_(&args);
+	if (sync == ODP_SCHED_SYNC_ORDERED)
+		reset_queues(&args);
 }
 
 static void parallel_execute(odp_schedule_sync_t sync, int num_queues,
@@ -559,6 +643,10 @@
 
 	/* Wait for worker threads to terminate */
 	odp_cunit_thread_exit(&args->cu_thr);
+
+	/* Cleanup ordered queues for next pass */
+	if (sync == ODP_SCHED_SYNC_ORDERED)
+		reset_queues(args);
 }
 
 /* 1 queue 1 thread ODP_SCHED_SYNC_NONE */
@@ -810,9 +898,23 @@ void scheduler_test_pause_resume(void)
 
 static int create_queues(void)
 {
-	int i, j, prios;
+	int i, j, prios, rc;
+	odp_pool_param_t params;
+	odp_buffer_t queue_ctx_buf;
+	queue_context *qctx;
 
 	prios = odp_schedule_num_prio();
+	odp_pool_param_init(&params);
+	params.buf.size = sizeof(queue_context);
+	params.buf.num = prios * QUEUES_PER_PRIO;
+	params.type = ODP_POOL_BUFFER;
+
+	queue_ctx_pool = odp_pool_create(QUEUE_CTX_POOL_NAME, &params);
+
+	if (queue_ctx_pool == ODP_POOL_INVALID) {
+		printf("Pool creation failed (queue ctx).\n");
+		return -1;
+	}
 
 	for (i = 0; i < prios; i++) {
 		odp_queue_param_t p;
@@ -850,6 +952,31 @@
 				printf("Schedule queue create failed.\n");
 				return -1;
 			}
+
+			queue_ctx_buf = odp_buffer_alloc(queue_ctx_pool);
+
+			if (queue_ctx_buf == ODP_BUFFER_INVALID) {
+				printf("Cannot allocate queue ctx buf\n");
+				return -1;
+			}
+
+			qctx = odp_buffer_addr(queue_ctx_buf);
+			qctx->ctx_handle = queue_ctx_buf;
+			qctx->sequence = 0;
+			qctx->lock_sequence = 0;
+			rc = odp_schedule_order_lock_init(&qctx->order_lock, q);
+
+			if (rc != 0) {
+				printf("Ordered lock init failed\n");
+				return -1;
+			}
+
+			rc = odp_queue_context_set(q, qctx);
+
+			if (rc != 0) {
+				printf("Cannot set queue context\n");
+				return -1;
+			}
 		}
 	}
 
@@ -919,11 +1046,15 @@ int scheduler_suite_init(void)
 static int destroy_queue(const char *name)
 {
 	odp_queue_t q;
+	queue_context *qctx;
 
 	q = odp_queue_lookup(name);
 
 	if (q == ODP_QUEUE_INVALID)
 		return -1;
+	qctx = odp_queue_context(q);
+	if (qctx)
+		odp_buffer_free(qctx->ctx_handle);
 
 	return odp_queue_destroy(q);
 }
@@ -952,6 +1083,9 @@
 		}
 	}
 
+	if (odp_pool_destroy(queue_ctx_pool) != 0)
+		return -1;
+
 	return 0;
 }
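
---

For readers following the patch, the runtime pattern these tests exercise can be condensed into the sketch below. It uses only calls that appear in the diff above (odp_schedule_order_lock_init(), odp_schedule_order_lock()/odp_schedule_order_unlock(), odp_schedule_prefetch(), odp_queue_context_set(), odp_schedule()); the type my_queue_ctx_t and the functions init_queue_ctx() and worker_iteration() are illustrative names invented here, not part of the patch or the ODP API:

#include <odp.h>

typedef struct {
	uint64_t lock_sequence;                /* next sequence expected in order */
	odp_schedule_order_lock_t order_lock;  /* bound to one ordered queue */
} my_queue_ctx_t;

/* Attach an ordered lock to a newly created ordered queue */
static int init_queue_ctx(odp_queue_t q, my_queue_ctx_t *ctx)
{
	ctx->lock_sequence = 0;

	/* The lock is initialized against the queue whose order it protects */
	if (odp_schedule_order_lock_init(&ctx->order_lock, q) != 0)
		return -1;

	return odp_queue_context_set(q, ctx);
}

/* Worker: events from an ordered queue may be processed concurrently,
 * but the region inside the order lock executes in enqueue order.
 */
static void worker_iteration(void)
{
	odp_queue_t from;
	odp_event_t ev;
	my_queue_ctx_t *ctx;

	/* Optional hint to the scheduler that an event request is imminent */
	odp_schedule_prefetch(1);

	ev = odp_schedule(&from, ODP_SCHED_WAIT);
	ctx = odp_queue_context(from);

	odp_schedule_order_lock(&ctx->order_lock);
	/* In-order critical section, e.g. the sequence checks in the test */
	ctx->lock_sequence++;
	odp_schedule_order_unlock(&ctx->order_lock);

	odp_event_free(ev);
}

This mirrors schedule_common_() in the patch: fill_queues() stamps each buffer with the queue's enqueue-time sequence number, and the CU_ASSERT inside the locked region only holds if the order lock serializes that section in the original enqueue order.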