From patchwork Tue Jun 30 16:14:53 2015
X-Patchwork-Submitter: Christophe Milard
X-Patchwork-Id: 50486
From: Christophe Milard <christophe.milard@linaro.org>
To: anders.roxell@linaro.org, mike.holmes@linaro.org, stuart.haslam@linaro.org,
 maxim.uvarov@linaro.org
Cc: lng-odp@lists.linaro.org
Date: Tue, 30 Jun 2015 18:14:53 +0200
Message-Id: <1435680896-11924-3-git-send-email-christophe.milard@linaro.org>
In-Reply-To: <1435680896-11924-1-git-send-email-christophe.milard@linaro.org>
References: <1435680896-11924-1-git-send-email-christophe.milard@linaro.org>
X-Mailer: git-send-email 1.9.1
Subject: [lng-odp] [PATCH 2/5] validation: renaming in odp_scheduler.c

Renaming of things which may, one day, be exported in a lib. This renaming
is important, as it creates consistency between test symbols, which is
needed if things eventually get exported in the lib. Also, tests are often
created from other tests: fixing the first examples will help make future
tests better.

Things that are candidates to be exported in the lib in the future have
been named as follows:
- Tests, i.e. functions which are used in CUnit test suites, are named:
  <module>_test_*
- Test arrays, i.e. arrays of CU_TestInfo listing the test functions
  belonging to a suite, are called:
  <module>_suite[_*]
  where the optional suffix can be used if many suites are declared.
- CUnit suite init and termination functions are called:
  <module>_suite[_*]_init() and <module>_suite[_*]_term() respectively.
- Suite arrays, i.e. arrays of CU_SuiteInfo used in executables, are called:
  <module>_suites[_*]
  where the optional suffix identifies the executable using it, if there
  are many.
- Main function(s) are called:
  <module>_main[_*]
  where the optional suffix identifies the executable using it.
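As an illustration (an editor's sketch, not part of this patch), a minimal
hypothetical "foo" module laid out under this convention; the "foo" names and
the trivial assertion are invented, and the six-field CU_SuiteInfo initializer
mirrors the form used in the diff below:

/* Hypothetical "foo" module -- illustration of the naming convention only */
#include "CUnit/CUnit.h"
#include "CUnit/Basic.h"

static void foo_test_basic(void)	/* <module>_test_* */
{
	CU_ASSERT(1 + 1 == 2);
}

static int foo_suite_init(void)		/* <module>_suite[_*]_init() */
{
	return 0;
}

static int foo_suite_term(void)		/* <module>_suite[_*]_term() */
{
	return 0;
}

static CU_TestInfo foo_suite[] = {	/* <module>_suite[_*] */
	{"foo_basic", foo_test_basic},
	CU_TEST_INFO_NULL,
};

static CU_SuiteInfo foo_suites[] = {	/* <module>_suites[_*] */
	{"Foo", foo_suite_init, foo_suite_term, NULL, NULL, foo_suite},
	CU_SUITE_INFO_NULL,
};

int foo_main(void)			/* <module>_main[_*] */
{
	if (CU_initialize_registry() != CUE_SUCCESS)
		return -1;
	if (CU_register_suites(foo_suites) != CUE_SUCCESS) {
		CU_cleanup_registry();
		return -1;
	}
	CU_basic_run_tests();
	CU_cleanup_registry();
	return (int)CU_get_number_of_failures();
}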
Signed-off-by: Christophe Milard <christophe.milard@linaro.org>
---
 test/validation/odp_scheduler.c | 129 ++++++++++++++++++++--------------------
 1 file changed, 65 insertions(+), 64 deletions(-)

diff --git a/test/validation/odp_scheduler.c b/test/validation/odp_scheduler.c
index 3b98202..e0aa8e6 100644
--- a/test/validation/odp_scheduler.c
+++ b/test/validation/odp_scheduler.c
@@ -77,7 +77,7 @@ static int exit_schedule_loop(void)
 	return ret;
 }
 
-static void test_schedule_wait_time(void)
+static void scheduler_test_wait_time(void)
 {
 	uint64_t wait_time;
 
@@ -90,7 +90,7 @@ static void test_schedule_wait_time(void)
 	CU_ASSERT(wait_time > 0);
 }
 
-static void test_schedule_num_prio(void)
+static void scheduler_test_num_prio(void)
 {
 	int prio;
 
@@ -100,7 +100,7 @@ static void test_schedule_num_prio(void)
 	CU_ASSERT(prio == odp_schedule_num_prio());
 }
 
-static void test_schedule_queue_destroy(void)
+static void scheduler_test_queue_destroy(void)
 {
 	odp_pool_t p;
 	odp_pool_param_t params;
@@ -369,25 +369,25 @@ static void parallel_execute(odp_schedule_sync_t sync, int num_queues,
 }
 
 /* 1 queue 1 thread ODP_SCHED_SYNC_NONE */
-static void test_schedule_1q_1t_n(void)
+static void scheduler_test_1q_1t_n(void)
 {
 	schedule_common(ODP_SCHED_SYNC_NONE, ONE_Q, ONE_PRIO, SCHD_ONE);
 }
 
 /* 1 queue 1 thread ODP_SCHED_SYNC_ATOMIC */
-static void test_schedule_1q_1t_a(void)
+static void scheduler_test_1q_1t_a(void)
 {
 	schedule_common(ODP_SCHED_SYNC_ATOMIC, ONE_Q, ONE_PRIO, SCHD_ONE);
 }
 
 /* 1 queue 1 thread ODP_SCHED_SYNC_ORDERED */
-static void test_schedule_1q_1t_o(void)
+static void scheduler_test_1q_1t_o(void)
 {
 	schedule_common(ODP_SCHED_SYNC_ORDERED, ONE_Q, ONE_PRIO, SCHD_ONE);
 }
 
 /* Many queues 1 thread ODP_SCHED_SYNC_NONE */
-static void test_schedule_mq_1t_n(void)
+static void scheduler_test_mq_1t_n(void)
 {
 	/* Only one priority involved in these tests, but use
 	   the same number of queues the more general case uses */
@@ -395,40 +395,40 @@ static void test_schedule_mq_1t_n(void)
 }
 
 /* Many queues 1 thread ODP_SCHED_SYNC_ATOMIC */
-static void test_schedule_mq_1t_a(void)
+static void scheduler_test_mq_1t_a(void)
 {
 	schedule_common(ODP_SCHED_SYNC_ATOMIC, MANY_QS, ONE_PRIO, SCHD_ONE);
 }
 
 /* Many queues 1 thread ODP_SCHED_SYNC_ORDERED */
-static void test_schedule_mq_1t_o(void)
+static void scheduler_test_mq_1t_o(void)
 {
 	schedule_common(ODP_SCHED_SYNC_ORDERED, MANY_QS, ONE_PRIO, SCHD_ONE);
 }
 
 /* Many queues 1 thread check priority ODP_SCHED_SYNC_NONE */
-static void test_schedule_mq_1t_prio_n(void)
+static void scheduler_test_mq_1t_prio_n(void)
 {
 	int prio = odp_schedule_num_prio();
 	schedule_common(ODP_SCHED_SYNC_NONE, MANY_QS, prio, SCHD_ONE);
 }
 
 /* Many queues 1 thread check priority ODP_SCHED_SYNC_ATOMIC */
-static void test_schedule_mq_1t_prio_a(void)
+static void scheduler_test_mq_1t_prio_a(void)
 {
 	int prio = odp_schedule_num_prio();
 	schedule_common(ODP_SCHED_SYNC_ATOMIC, MANY_QS, prio, SCHD_ONE);
 }
 
 /* Many queues 1 thread check priority ODP_SCHED_SYNC_ORDERED */
-static void test_schedule_mq_1t_prio_o(void)
+static void scheduler_test_mq_1t_prio_o(void)
 {
 	int prio = odp_schedule_num_prio();
 	schedule_common(ODP_SCHED_SYNC_ORDERED, MANY_QS, prio, SCHD_ONE);
 }
 
 /* Many queues many threads check priority ODP_SCHED_SYNC_NONE */
-static void test_schedule_mq_mt_prio_n(void)
+static void scheduler_test_mq_mt_prio_n(void)
 {
 	int prio = odp_schedule_num_prio();
 	parallel_execute(ODP_SCHED_SYNC_NONE, MANY_QS, prio, SCHD_ONE,
@@ -436,7 +436,7 @@ static void test_schedule_mq_mt_prio_n(void)
 }
 
 /* Many queues many threads check priority ODP_SCHED_SYNC_ATOMIC */
-static void test_schedule_mq_mt_prio_a(void)
+static void scheduler_test_mq_mt_prio_a(void)
 {
 	int prio = odp_schedule_num_prio();
 	parallel_execute(ODP_SCHED_SYNC_ATOMIC, MANY_QS, prio, SCHD_ONE,
@@ -444,7 +444,7 @@ static void test_schedule_mq_mt_prio_a(void)
 }
 
 /* Many queues many threads check priority ODP_SCHED_SYNC_ORDERED */
-static void test_schedule_mq_mt_prio_o(void)
+static void scheduler_test_mq_mt_prio_o(void)
 {
 	int prio = odp_schedule_num_prio();
 	parallel_execute(ODP_SCHED_SYNC_ORDERED, MANY_QS, prio, SCHD_ONE,
@@ -452,32 +452,32 @@ static void test_schedule_mq_mt_prio_o(void)
 }
 
 /* 1 queue many threads check exclusive access on ATOMIC queues */
-static void test_schedule_1q_mt_a_excl(void)
+static void scheduler_test_1q_mt_a_excl(void)
 {
 	parallel_execute(ODP_SCHED_SYNC_ATOMIC, ONE_Q, ONE_PRIO, SCHD_ONE,
 			 ENABLE_EXCL_ATOMIC);
 }
 
 /* 1 queue 1 thread ODP_SCHED_SYNC_NONE multi */
-static void test_schedule_multi_1q_1t_n(void)
+static void scheduler_test_multi_1q_1t_n(void)
 {
 	schedule_common(ODP_SCHED_SYNC_NONE, ONE_Q, ONE_PRIO, SCHD_MULTI);
 }
 
 /* 1 queue 1 thread ODP_SCHED_SYNC_ATOMIC multi */
-static void test_schedule_multi_1q_1t_a(void)
+static void scheduler_test_multi_1q_1t_a(void)
 {
 	schedule_common(ODP_SCHED_SYNC_ATOMIC, ONE_Q, ONE_PRIO, SCHD_MULTI);
 }
 
 /* 1 queue 1 thread ODP_SCHED_SYNC_ORDERED multi */
-static void test_schedule_multi_1q_1t_o(void)
+static void scheduler_test_multi_1q_1t_o(void)
 {
 	schedule_common(ODP_SCHED_SYNC_ORDERED, ONE_Q, ONE_PRIO, SCHD_MULTI);
 }
 
 /* Many queues 1 thread ODP_SCHED_SYNC_NONE multi */
-static void test_schedule_multi_mq_1t_n(void)
+static void scheduler_test_multi_mq_1t_n(void)
 {
 	/* Only one priority involved in these tests, but use
 	   the same number of queues the more general case uses */
@@ -485,67 +485,67 @@ static void test_schedule_multi_mq_1t_n(void)
 }
 
 /* Many queues 1 thread ODP_SCHED_SYNC_ATOMIC multi */
-static void test_schedule_multi_mq_1t_a(void)
+static void scheduler_test_multi_mq_1t_a(void)
 {
 	schedule_common(ODP_SCHED_SYNC_ATOMIC, MANY_QS, ONE_PRIO, SCHD_MULTI);
 }
 
 /* Many queues 1 thread ODP_SCHED_SYNC_ORDERED multi */
-static void test_schedule_multi_mq_1t_o(void)
+static void scheduler_test_multi_mq_1t_o(void)
 {
 	schedule_common(ODP_SCHED_SYNC_ORDERED, MANY_QS, ONE_PRIO, SCHD_MULTI);
 }
 
 /* Many queues 1 thread check priority ODP_SCHED_SYNC_NONE multi */
-static void test_schedule_multi_mq_1t_prio_n(void)
+static void scheduler_test_multi_mq_1t_prio_n(void)
 {
 	int prio = odp_schedule_num_prio();
 	schedule_common(ODP_SCHED_SYNC_NONE, MANY_QS, prio, SCHD_MULTI);
 }
 
 /* Many queues 1 thread check priority ODP_SCHED_SYNC_ATOMIC multi */
-static void test_schedule_multi_mq_1t_prio_a(void)
+static void scheduler_test_multi_mq_1t_prio_a(void)
 {
 	int prio = odp_schedule_num_prio();
 	schedule_common(ODP_SCHED_SYNC_ATOMIC, MANY_QS, prio, SCHD_MULTI);
 }
 
 /* Many queues 1 thread check priority ODP_SCHED_SYNC_ORDERED multi */
-static void test_schedule_multi_mq_1t_prio_o(void)
+static void scheduler_test_multi_mq_1t_prio_o(void)
 {
 	int prio = odp_schedule_num_prio();
 	schedule_common(ODP_SCHED_SYNC_ORDERED, MANY_QS, prio, SCHD_MULTI);
 }
 
 /* Many queues many threads check priority ODP_SCHED_SYNC_NONE multi */
-static void test_schedule_multi_mq_mt_prio_n(void)
+static void scheduler_test_multi_mq_mt_prio_n(void)
 {
 	int prio = odp_schedule_num_prio();
 	parallel_execute(ODP_SCHED_SYNC_NONE, MANY_QS, prio, SCHD_MULTI, 0);
 }
 
 /* Many queues many threads check priority ODP_SCHED_SYNC_ATOMIC multi */
-static void test_schedule_multi_mq_mt_prio_a(void)
+static void scheduler_test_multi_mq_mt_prio_a(void)
 {
 	int prio = odp_schedule_num_prio();
 	parallel_execute(ODP_SCHED_SYNC_ATOMIC, MANY_QS, prio, SCHD_MULTI, 0);
 }
 
 /* Many queues many threads check priority ODP_SCHED_SYNC_ORDERED multi */
-static void test_schedule_multi_mq_mt_prio_o(void)
+static void scheduler_test_multi_mq_mt_prio_o(void)
 {
 	int prio = odp_schedule_num_prio();
 	parallel_execute(ODP_SCHED_SYNC_ORDERED, MANY_QS, prio, SCHD_MULTI, 0);
 }
 
 /* 1 queue many threads check exclusive access on ATOMIC queues multi */
-static void test_schedule_multi_1q_mt_a_excl(void)
+static void scheduler_test_multi_1q_mt_a_excl(void)
 {
 	parallel_execute(ODP_SCHED_SYNC_ATOMIC, ONE_Q, ONE_PRIO, SCHD_MULTI,
 			 ENABLE_EXCL_ATOMIC);
 }
 
-static void test_schedule_pause_resume(void)
+static void scheduler_test_pause_resume(void)
 {
 	odp_queue_t queue;
 	odp_buffer_t buf;
@@ -651,7 +651,7 @@ static int create_queues(void)
 	return 0;
 }
 
-static int schd_suite_init(void)
+static int scheduler_suite_init(void)
 {
 	odp_shm_t shm;
 	odp_pool_t pool;
@@ -748,7 +748,7 @@ static int destroy_queues(void)
 	return 0;
 }
 
-static int schd_suite_term(void)
+static int scheduler_suite_term(void)
 {
 	odp_pool_t pool;
 
@@ -764,43 +764,44 @@ static int schd_suite_term(void)
 	return 0;
 }
 
-static struct CU_TestInfo schd_tests[] = {
-	{"schedule_wait_time", test_schedule_wait_time},
-	{"schedule_num_prio", test_schedule_num_prio},
-	{"schedule_queue_destroy", test_schedule_queue_destroy},
-	{"schedule_1q_1t_n", test_schedule_1q_1t_n},
-	{"schedule_1q_1t_a", test_schedule_1q_1t_a},
-	{"schedule_1q_1t_o", test_schedule_1q_1t_o},
-	{"schedule_mq_1t_n", test_schedule_mq_1t_n},
-	{"schedule_mq_1t_a", test_schedule_mq_1t_a},
-	{"schedule_mq_1t_o", test_schedule_mq_1t_o},
-	{"schedule_mq_1t_prio_n", test_schedule_mq_1t_prio_n},
-	{"schedule_mq_1t_prio_a", test_schedule_mq_1t_prio_a},
-	{"schedule_mq_1t_prio_o", test_schedule_mq_1t_prio_o},
-	{"schedule_mq_mt_prio_n", test_schedule_mq_mt_prio_n},
-	{"schedule_mq_mt_prio_a", test_schedule_mq_mt_prio_a},
-	{"schedule_mq_mt_prio_o", test_schedule_mq_mt_prio_o},
-	{"schedule_1q_mt_a_excl", test_schedule_1q_mt_a_excl},
-	{"schedule_multi_1q_1t_n", test_schedule_multi_1q_1t_n},
-	{"schedule_multi_1q_1t_a", test_schedule_multi_1q_1t_a},
-	{"schedule_multi_1q_1t_o", test_schedule_multi_1q_1t_o},
-	{"schedule_multi_mq_1t_n", test_schedule_multi_mq_1t_n},
-	{"schedule_multi_mq_1t_a", test_schedule_multi_mq_1t_a},
-	{"schedule_multi_mq_1t_o", test_schedule_multi_mq_1t_o},
-	{"schedule_multi_mq_1t_prio_n", test_schedule_multi_mq_1t_prio_n},
-	{"schedule_multi_mq_1t_prio_a", test_schedule_multi_mq_1t_prio_a},
-	{"schedule_multi_mq_1t_prio_o", test_schedule_multi_mq_1t_prio_o},
-	{"schedule_multi_mq_mt_prio_n", test_schedule_multi_mq_mt_prio_n},
-	{"schedule_multi_mq_mt_prio_a", test_schedule_multi_mq_mt_prio_a},
-	{"schedule_multi_mq_mt_prio_o", test_schedule_multi_mq_mt_prio_o},
-	{"schedule_multi_1q_mt_a_excl", test_schedule_multi_1q_mt_a_excl},
{"schedule_pause_resume", test_schedule_pause_resume}, +static struct CU_TestInfo scheduler_suite[] = { + {"schedule_wait_time", scheduler_test_wait_time}, + {"schedule_num_prio", scheduler_test_num_prio}, + {"schedule_queue_destroy", scheduler_test_queue_destroy}, + {"schedule_1q_1t_n", scheduler_test_1q_1t_n}, + {"schedule_1q_1t_a", scheduler_test_1q_1t_a}, + {"schedule_1q_1t_o", scheduler_test_1q_1t_o}, + {"schedule_mq_1t_n", scheduler_test_mq_1t_n}, + {"schedule_mq_1t_a", scheduler_test_mq_1t_a}, + {"schedule_mq_1t_o", scheduler_test_mq_1t_o}, + {"schedule_mq_1t_prio_n", scheduler_test_mq_1t_prio_n}, + {"schedule_mq_1t_prio_a", scheduler_test_mq_1t_prio_a}, + {"schedule_mq_1t_prio_o", scheduler_test_mq_1t_prio_o}, + {"schedule_mq_mt_prio_n", scheduler_test_mq_mt_prio_n}, + {"schedule_mq_mt_prio_a", scheduler_test_mq_mt_prio_a}, + {"schedule_mq_mt_prio_o", scheduler_test_mq_mt_prio_o}, + {"schedule_1q_mt_a_excl", scheduler_test_1q_mt_a_excl}, + {"schedule_multi_1q_1t_n", scheduler_test_multi_1q_1t_n}, + {"schedule_multi_1q_1t_a", scheduler_test_multi_1q_1t_a}, + {"schedule_multi_1q_1t_o", scheduler_test_multi_1q_1t_o}, + {"schedule_multi_mq_1t_n", scheduler_test_multi_mq_1t_n}, + {"schedule_multi_mq_1t_a", scheduler_test_multi_mq_1t_a}, + {"schedule_multi_mq_1t_o", scheduler_test_multi_mq_1t_o}, + {"schedule_multi_mq_1t_prio_n", scheduler_test_multi_mq_1t_prio_n}, + {"schedule_multi_mq_1t_prio_a", scheduler_test_multi_mq_1t_prio_a}, + {"schedule_multi_mq_1t_prio_o", scheduler_test_multi_mq_1t_prio_o}, + {"schedule_multi_mq_mt_prio_n", scheduler_test_multi_mq_mt_prio_n}, + {"schedule_multi_mq_mt_prio_a", scheduler_test_multi_mq_mt_prio_a}, + {"schedule_multi_mq_mt_prio_o", scheduler_test_multi_mq_mt_prio_o}, + {"schedule_multi_1q_mt_a_excl", scheduler_test_multi_1q_mt_a_excl}, + {"schedule_pause_resume", scheduler_test_pause_resume}, CU_TEST_INFO_NULL, }; static CU_SuiteInfo scheduler_suites[] = { {"Scheduler", - schd_suite_init, schd_suite_term, NULL, NULL, schd_tests}, + scheduler_suite_init, scheduler_suite_term, NULL, NULL, scheduler_suite + }, CU_SUITE_INFO_NULL, };