From patchwork Wed Sep 6 13:00:22 2017
X-Patchwork-Submitter: Github ODP bot
X-Patchwork-Id: 111771
From: Github ODP bot
To: lng-odp@lists.linaro.org
Date: Wed, 6 Sep 2017 16:00:22 +0300
Message-Id: <1504702822-13861-3-git-send-email-odpbot@yandex.ru>
X-Mailer: git-send-email 1.9.1
In-Reply-To: <1504702822-13861-1-git-send-email-odpbot@yandex.ru>
References: <1504702822-13861-1-git-send-email-odpbot@yandex.ru>
Github-pr-num: 160
Subject: [lng-odp] [PATCH API-NEXT v3 2/2] linux-generic: api schedule unlock lock
List-Id: "The OpenDataPlane (ODP) List"

From: Balasubramanian Manoharan

Signed-off-by: Balasubramanian Manoharan
---
/** Email created from pull request 160 (bala-manoharan:api_sched_order_lock)
 ** https://github.com/Linaro/odp/pull/160
 ** Patch: https://github.com/Linaro/odp/pull/160.patch
 ** Base sha: 4eae04e80a634c17ac276bb06bce468cbe28cde0
 ** Merge commit sha: c9c66447de67e07c36638143516df6a14743a749
 **/
 platform/linux-generic/include/odp_schedule_if.h   |  1 +
 .../include/odp_schedule_scalable_ordered.h        |  1 +
 platform/linux-generic/odp_schedule.c              | 13 ++++++++++++-
 platform/linux-generic/odp_schedule_if.c           |  5 +++++
 platform/linux-generic/odp_schedule_iquery.c       | 13 ++++++++++++-
 platform/linux-generic/odp_schedule_scalable.c     | 17 +++++++++++++++++
 platform/linux-generic/odp_schedule_sp.c           |  8 +++++++-
 7 files changed, 55 insertions(+), 3 deletions(-)

diff --git a/platform/linux-generic/include/odp_schedule_if.h b/platform/linux-generic/include/odp_schedule_if.h
index 657993b1..b0db67ab 100644
--- a/platform/linux-generic/include/odp_schedule_if.h
+++ b/platform/linux-generic/include/odp_schedule_if.h
@@ -95,6 +95,7 @@ typedef struct {
 				      odp_schedule_group_info_t *);
 	void (*schedule_order_lock)(unsigned);
 	void (*schedule_order_unlock)(unsigned);
+	void (*schedule_order_unlock_lock)(unsigned);
 } schedule_api_t;
diff --git a/platform/linux-generic/include/odp_schedule_scalable_ordered.h b/platform/linux-generic/include/odp_schedule_scalable_ordered.h
index 1c365a2b..493a4a78 100644
--- a/platform/linux-generic/include/odp_schedule_scalable_ordered.h
+++ b/platform/linux-generic/include/odp_schedule_scalable_ordered.h
@@ -79,6 +79,7 @@ typedef struct reorder_window {
 	uint32_t tail;
 	uint32_t turn;
 	uint32_t olock[CONFIG_QUEUE_MAX_ORD_LOCKS];
+	uint32_t lock_index;
 	uint16_t lock_count;
 	/* Reorder contexts in this window */
 	reorder_context_t *ring[RWIN_SIZE];
diff --git a/platform/linux-generic/odp_schedule.c b/platform/linux-generic/odp_schedule.c
index 5b940762..8de0af35 100644
--- a/platform/linux-generic/odp_schedule.c
+++ b/platform/linux-generic/odp_schedule.c
@@ -163,6 +163,7 @@ typedef struct {
 		int stash_num; /**< Number of stashed enqueue operations */
 		uint8_t in_order; /**< Order status */
 		lock_called_t lock_called; /**< States of ordered locks */
+		uint32_t lock_index;
 		/** Storage for stashed enqueue operations */
 		ordered_stash_t stash[MAX_ORDERED_STASH];
 	} ordered;
@@ -1121,6 +1122,7 @@ static void schedule_order_lock(unsigned lock_index)
 		if (lock_seq == sched_local.ordered.ctx) {
 			sched_local.ordered.lock_called.u8[lock_index] = 1;
+			sched_local.ordered.lock_index = lock_index;
 			return;
 		}
 		odp_cpu_pause();
@@ -1141,9 +1143,17 @@ static void schedule_order_unlock(unsigned lock_index)
 	ODP_ASSERT(sched_local.ordered.ctx == odp_atomic_load_u64(ord_lock));
 
+	sched_local.ordered.lock_index = sched->queue[queue_index].
+		order_lock_count + 1;
 	odp_atomic_store_rel_u64(ord_lock, sched_local.ordered.ctx + 1);
 }
 
+static void schedule_order_unlock_lock(unsigned lock_index)
+{
+	schedule_order_unlock(sched_local.ordered.lock_index);
+	schedule_order_lock(lock_index);
+}
+
 static void schedule_pause(void)
 {
 	sched_local.pause = 1;
@@ -1429,5 +1439,6 @@ const schedule_api_t schedule_default_api = {
 	.schedule_group_thrmask = schedule_group_thrmask,
 	.schedule_group_info = schedule_group_info,
 	.schedule_order_lock = schedule_order_lock,
-	.schedule_order_unlock = schedule_order_unlock
+	.schedule_order_unlock = schedule_order_unlock,
+	.schedule_order_unlock_lock = schedule_order_unlock_lock
 };
diff --git a/platform/linux-generic/odp_schedule_if.c b/platform/linux-generic/odp_schedule_if.c
index e56e3722..858c1949 100644
--- a/platform/linux-generic/odp_schedule_if.c
+++ b/platform/linux-generic/odp_schedule_if.c
@@ -129,3 +129,8 @@ void odp_schedule_order_unlock(unsigned lock_index)
 {
 	return sched_api->schedule_order_unlock(lock_index);
 }
+
+void odp_schedule_order_unlock_lock(uint32_t lock_index)
+{
+	sched_api->schedule_order_unlock_lock(lock_index);
+}
diff --git a/platform/linux-generic/odp_schedule_iquery.c b/platform/linux-generic/odp_schedule_iquery.c
index b81e5dab..d810ae58 100644
--- a/platform/linux-generic/odp_schedule_iquery.c
+++ b/platform/linux-generic/odp_schedule_iquery.c
@@ -223,6 +223,7 @@ struct sched_thread_local {
 		int stash_num; /**< Number of stashed enqueue operations */
 		uint8_t in_order; /**< Order status */
 		lock_called_t lock_called; /**< States of ordered locks */
+		uint32_t lock_index;
 		/** Storage for stashed enqueue operations */
 		ordered_stash_t stash[MAX_ORDERED_STASH];
 	} ordered;
@@ -1273,6 +1274,7 @@ static void schedule_order_lock(unsigned lock_index)
 		if (lock_seq == thread_local.ordered.ctx) {
 			thread_local.ordered.lock_called.u8[lock_index] = 1;
+			thread_local.ordered.lock_index = lock_index;
 			return;
 		}
 		odp_cpu_pause();
@@ -1293,9 +1295,17 @@ static void schedule_order_unlock(unsigned lock_index)
 	ODP_ASSERT(thread_local.ordered.ctx == odp_atomic_load_u64(ord_lock));
 
+	thread_local.ordered.lock_index = sched->queues[queue_index].
+		lock_count + 1;
 	odp_atomic_store_rel_u64(ord_lock, thread_local.ordered.ctx + 1);
 }
 
+static void schedule_order_unlock_lock(unsigned lock_index)
+{
+	schedule_order_unlock(thread_local.ordered.lock_index);
+	schedule_order_lock(lock_index);
+}
+
 static unsigned schedule_max_ordered_locks(void)
 {
 	return CONFIG_QUEUE_MAX_ORD_LOCKS;
@@ -1368,7 +1378,8 @@ const schedule_api_t schedule_iquery_api = {
 	.schedule_group_thrmask = schedule_group_thrmask,
 	.schedule_group_info = schedule_group_info,
 	.schedule_order_lock = schedule_order_lock,
-	.schedule_order_unlock = schedule_order_unlock
+	.schedule_order_unlock = schedule_order_unlock,
+	.schedule_order_unlock_lock = schedule_order_unlock_lock
 };
 
 static void thread_set_interest(sched_thread_local_t *thread,
diff --git a/platform/linux-generic/odp_schedule_scalable.c b/platform/linux-generic/odp_schedule_scalable.c
index 765326e8..f8b17578 100644
--- a/platform/linux-generic/odp_schedule_scalable.c
+++ b/platform/linux-generic/odp_schedule_scalable.c
@@ -1007,6 +1007,8 @@ static void schedule_order_lock(unsigned lock_index)
 		       monitor32(&rctx->rwin->olock[lock_index],
 				 __ATOMIC_ACQUIRE) != rctx->sn)
 			doze();
+		rctx->rwin->lock_index = lock_index;
+
 	}
 }
@@ -1025,9 +1027,23 @@ static void schedule_order_unlock(unsigned lock_index)
 		atomic_store_release(&rctx->rwin->olock[lock_index],
 				     rctx->sn + 1,
 				     /*readonly=*/false);
+	rctx->rwin->lock_index = rctx->rwin->lock_count + 1;
 	rctx->olock_flags |= 1U << lock_index;
 }
 
+static void schedule_order_unlock_lock(unsigned lock_index)
+{
+	struct reorder_context *rctx;
+
+	rctx = sched_ts->rctx;
+	if (odp_unlikely(rctx == NULL || rctx->rwin == NULL)) {
+		ODP_ERR("Invalid call to odp_schedule_order_unlock_lock\n");
+		return;
+	}
+	schedule_order_unlock(rctx->rwin->lock_index);
+	schedule_order_lock(lock_index);
+}
+
 static void schedule_release_atomic(void)
 {
 	sched_scalable_thread_state_t *ts;
@@ -1978,4 +1994,5 @@ const schedule_api_t schedule_scalable_api = {
 	.schedule_group_info = schedule_group_info,
 	.schedule_order_lock = schedule_order_lock,
 	.schedule_order_unlock = schedule_order_unlock,
+	.schedule_order_unlock_lock = schedule_order_unlock_lock,
 };
diff --git a/platform/linux-generic/odp_schedule_sp.c b/platform/linux-generic/odp_schedule_sp.c
index 05241275..d4dfbcaf 100644
--- a/platform/linux-generic/odp_schedule_sp.c
+++ b/platform/linux-generic/odp_schedule_sp.c
@@ -819,6 +819,11 @@ static void schedule_order_unlock(unsigned lock_index)
 {
 	(void)lock_index;
 }
 
+static void schedule_order_unlock_lock(unsigned lock_index)
+{
+	(void)lock_index;
+}
+
 static void order_lock(void)
 {
 }
@@ -868,5 +873,6 @@ const schedule_api_t schedule_sp_api = {
 	.schedule_group_thrmask = schedule_group_thrmask,
 	.schedule_group_info = schedule_group_info,
 	.schedule_order_lock = schedule_order_lock,
-	.schedule_order_unlock = schedule_order_unlock
+	.schedule_order_unlock = schedule_order_unlock,
+	.schedule_order_unlock_lock = schedule_order_unlock_lock
 };