From patchwork Sun Aug 2 23:21:44 2015
X-Patchwork-Submitter: Bill Fischofer <bill.fischofer@linaro.org>
X-Patchwork-Id: 51810
From: Bill Fischofer <bill.fischofer@linaro.org>
To: lng-odp@lists.linaro.org
Date: Sun, 2 Aug 2015 16:21:44 -0700
Message-Id: <1438557709-30355-4-git-send-email-bill.fischofer@linaro.org>
X-Mailer: git-send-email 2.1.4
In-Reply-To: <1438557709-30355-1-git-send-email-bill.fischofer@linaro.org>
References: <1438557709-30355-1-git-send-email-bill.fischofer@linaro.org>
Subject: [lng-odp] [API-NEXT PATCHv4 3/8] linux-generic: schedule: implement scheduler groups
Signed-off-by: Bill Fischofer <bill.fischofer@linaro.org>
---
 include/odp/api/config.h                           |   5 +
 .../include/odp/plat/schedule_types.h              |   4 +
 platform/linux-generic/odp_schedule.c              | 158 +++++++++++++++++++++
 platform/linux-generic/odp_thread.c                |  25 +++-
 4 files changed, 186 insertions(+), 6 deletions(-)

diff --git a/include/odp/api/config.h b/include/odp/api/config.h
index b5c8fdd..302eaf5 100644
--- a/include/odp/api/config.h
+++ b/include/odp/api/config.h
@@ -44,6 +44,11 @@ extern "C" {
 #define ODP_CONFIG_SCHED_PRIOS 8
 
 /**
+ * Number of scheduling groups
+ */
+#define ODP_CONFIG_SCHED_GRPS 16
+
+/**
  * Maximum number of packet IO resources
  */
 #define ODP_CONFIG_PKTIO_ENTRIES 64
diff --git a/platform/linux-generic/include/odp/plat/schedule_types.h b/platform/linux-generic/include/odp/plat/schedule_types.h
index 91e62e7..f13bfab 100644
--- a/platform/linux-generic/include/odp/plat/schedule_types.h
+++ b/platform/linux-generic/include/odp/plat/schedule_types.h
@@ -43,8 +43,12 @@ typedef int odp_schedule_sync_t;
 
 typedef int odp_schedule_group_t;
 
+/* These must be kept in sync with thread_globals_t in odp_thread.c */
+#define ODP_SCHED_GROUP_INVALID -1
 #define ODP_SCHED_GROUP_ALL     0
 #define ODP_SCHED_GROUP_WORKER  1
+#define ODP_SCHED_GROUP_CONTROL 2
+#define ODP_SCHED_GROUP_NAMED   3
 
 #define ODP_SCHED_GROUP_NAME_LEN 32
diff --git a/platform/linux-generic/odp_schedule.c b/platform/linux-generic/odp_schedule.c
index 5d32c81..20dd850 100644
--- a/platform/linux-generic/odp_schedule.c
+++ b/platform/linux-generic/odp_schedule.c
@@ -23,6 +23,8 @@
 #include 
 #include 
 
+odp_thrmask_t sched_mask_all;
+
 /* Number of schedule commands.
  * One per scheduled queue and packet interface */
 #define NUM_SCHED_CMD (ODP_CONFIG_QUEUES + ODP_CONFIG_PKTIO_ENTRIES)
@@ -48,6 +50,11 @@ typedef struct {
 	odp_pool_t pool;
 	odp_shm_t shm;
 	uint32_t pri_count[ODP_CONFIG_SCHED_PRIOS][QUEUES_PER_PRIO];
+	odp_spinlock_t grp_lock;
+	struct {
+		char name[ODP_SCHED_GROUP_NAME_LEN];
+		odp_thrmask_t *mask;
+	} sched_grp[ODP_CONFIG_SCHED_GRPS];
 } sched_t;
 
 /* Schedule command */
@@ -87,6 +94,9 @@ static sched_t *sched;
 /* Thread local scheduler context */
 static __thread sched_local_t sched_local;
 
+/* Internal routine to get scheduler thread mask addrs */
+odp_thrmask_t *thread_sched_grp_mask(int index);
+
 static void sched_local_init(void)
 {
 	int i;
@@ -163,6 +173,15 @@ int odp_schedule_init_global(void)
 		}
 	}
 
+	odp_spinlock_init(&sched->grp_lock);
+
+	for (i = 0; i < ODP_CONFIG_SCHED_GRPS; i++) {
+		memset(&sched->sched_grp[i].name, 0, ODP_SCHED_GROUP_NAME_LEN);
+		sched->sched_grp[i].mask = thread_sched_grp_mask(i);
+	}
+
+	odp_thrmask_setall(&sched_mask_all);
+
 	ODP_DBG("done\n");
 
 	return 0;
@@ -466,6 +485,18 @@ static int schedule(odp_queue_t *out_queue, odp_event_t out_ev[],
 			}
 
 			qe = sched_cmd->qe;
+			if (qe->s.param.sched.group > ODP_SCHED_GROUP_ALL &&
+			    !odp_thrmask_isset(sched->sched_grp
+					       [qe->s.param.sched.group].mask,
+					       thr)) {
+				/* This thread is not eligible for work from
+				 * this queue, so continue scheduling it.
+				 */
+				if (odp_queue_enq(pri_q, ev))
+					ODP_ABORT("schedule failed\n");
+				continue;
+			}
+
 			num = queue_deq_multi(qe, sched_local.buf_hdr, max_deq);
 
 			if (num < 0) {
@@ -587,3 +618,130 @@ int odp_schedule_num_prio(void)
 {
 	return ODP_CONFIG_SCHED_PRIOS;
 }
+
+odp_schedule_group_t odp_schedule_group_create(const char *name,
+					       const odp_thrmask_t *mask)
+{
+	odp_schedule_group_t group = ODP_SCHED_GROUP_INVALID;
+	int i;
+
+	odp_spinlock_lock(&sched->grp_lock);
+
+	for (i = ODP_SCHED_GROUP_NAMED; i < ODP_CONFIG_SCHED_GRPS; i++) {
+		if (sched->sched_grp[i].name[0] == 0) {
+			strncpy(sched->sched_grp[i].name, name,
+				ODP_SCHED_GROUP_NAME_LEN - 1);
+			sched->sched_grp[i].name[ODP_SCHED_GROUP_NAME_LEN - 1]
+				= 0;
+			odp_thrmask_copy(sched->sched_grp[i].mask, mask);
+			group = (odp_schedule_group_t)i;
+			break;
+		}
+	}
+
+	odp_spinlock_unlock(&sched->grp_lock);
+	return group;
+}
+
+int odp_schedule_group_destroy(odp_schedule_group_t group)
+{
+	int ret;
+
+	odp_spinlock_lock(&sched->grp_lock);
+
+	if (group < ODP_CONFIG_SCHED_GRPS &&
+	    group >= ODP_SCHED_GROUP_NAMED &&
+	    sched->sched_grp[group].name[0] != 0) {
+		odp_thrmask_zero(sched->sched_grp[group].mask);
+		memset(&sched->sched_grp[group].name, 0,
+		       ODP_SCHED_GROUP_NAME_LEN);
+		ret = 0;
+	} else {
+		ret = -1;
+	}
+
+	odp_spinlock_unlock(&sched->grp_lock);
+	return ret;
+}
+
+odp_schedule_group_t odp_schedule_group_lookup(const char *name)
+{
+	odp_schedule_group_t group = ODP_SCHED_GROUP_INVALID;
+	int i;
+
+	odp_spinlock_lock(&sched->grp_lock);
+
+	for (i = ODP_SCHED_GROUP_NAMED; i < ODP_CONFIG_SCHED_GRPS; i++) {
+		if (strcmp(name, sched->sched_grp[i].name) == 0) {
+			group = (odp_schedule_group_t)i;
+			break;
+		}
+	}
+
+	odp_spinlock_unlock(&sched->grp_lock);
+	return group;
+}
+
+int odp_schedule_group_join(odp_schedule_group_t group,
+			    const odp_thrmask_t *mask)
+{
+	int ret;
+
+	odp_spinlock_lock(&sched->grp_lock);
+
+	if (group < ODP_CONFIG_SCHED_GRPS &&
+	    group >= ODP_SCHED_GROUP_NAMED &&
+	    sched->sched_grp[group].name[0] != 0) {
+		odp_thrmask_or(sched->sched_grp[group].mask,
+			       sched->sched_grp[group].mask,
+			       mask);
+		ret = 0;
+	} else {
+		ret = -1;
+	}
+
+	odp_spinlock_unlock(&sched->grp_lock);
+	return ret;
+}
+
+int odp_schedule_group_leave(odp_schedule_group_t group,
+			     const odp_thrmask_t *mask)
+{
+	int ret;
+
+	odp_spinlock_lock(&sched->grp_lock);
+
+	if (group < ODP_CONFIG_SCHED_GRPS &&
+	    group >= ODP_SCHED_GROUP_NAMED &&
+	    sched->sched_grp[group].name[0] != 0) {
+		odp_thrmask_t leavemask;
+
+		odp_thrmask_xor(&leavemask, mask, &sched_mask_all);
+		odp_thrmask_and(sched->sched_grp[group].mask,
+				sched->sched_grp[group].mask,
+				&leavemask);
+		ret = 0;
+	} else {
+		ret = -1;
+	}
+
+	odp_spinlock_unlock(&sched->grp_lock);
+	return ret;
+}
+
+int odp_schedule_group_count(odp_schedule_group_t group)
+{
+	int ret;
+
+	odp_spinlock_lock(&sched->grp_lock);
+
+	if (group < ODP_CONFIG_SCHED_GRPS &&
+	    group >= ODP_SCHED_GROUP_NAMED &&
+	    sched->sched_grp[group].name[0] != 0)
+		ret = odp_thrmask_count(sched->sched_grp[group].mask);
+	else
+		ret = -1;
+
+	odp_spinlock_unlock(&sched->grp_lock);
+	return ret;
+}
diff --git a/platform/linux-generic/odp_thread.c b/platform/linux-generic/odp_thread.c
index 9905c78..770c64e 100644
--- a/platform/linux-generic/odp_thread.c
+++ b/platform/linux-generic/odp_thread.c
@@ -32,9 +32,15 @@ typedef struct {
 
 typedef struct {
 	thread_state_t thr[ODP_CONFIG_MAX_THREADS];
-	odp_thrmask_t all;
-	odp_thrmask_t worker;
-	odp_thrmask_t control;
+	union {
+		/* struct order must be kept in sync with schedule_types.h */
+		struct {
+			odp_thrmask_t all;
+			odp_thrmask_t worker;
+			odp_thrmask_t control;
+		};
+		odp_thrmask_t sched_grp_mask[ODP_CONFIG_SCHED_GRPS];
+	};
 	uint32_t num;
 	uint32_t num_worker;
 	uint32_t num_control;
@@ -53,6 +59,7 @@ static __thread thread_state_t *this_thread;
 int odp_thread_init_global(void)
 {
 	odp_shm_t shm;
+	int i;
 
 	shm = odp_shm_reserve("odp_thread_globals",
 			      sizeof(thread_globals_t),
@@ -65,13 +72,19 @@ int odp_thread_init_global(void)
 	memset(thread_globals, 0, sizeof(thread_globals_t));
 
 	odp_spinlock_init(&thread_globals->lock);
-	odp_thrmask_zero(&thread_globals->all);
-	odp_thrmask_zero(&thread_globals->worker);
-	odp_thrmask_zero(&thread_globals->control);
+
+	for (i = 0; i < ODP_CONFIG_SCHED_GRPS; i++)
+		odp_thrmask_zero(&thread_globals->sched_grp_mask[i]);
 
 	return 0;
 }
+odp_thrmask_t *thread_sched_grp_mask(int index);
+odp_thrmask_t *thread_sched_grp_mask(int index)
+{
+	return &thread_globals->sched_grp_mask[index];
+}
+
 int odp_thread_term_global(void)
 {
 	int ret;