From patchwork Tue Sep 5 15:00:28 2017
X-Patchwork-Submitter: Github ODP bot
X-Patchwork-Id: 111697
From: Github ODP bot
To: lng-odp@lists.linaro.org
Date: Tue, 5 Sep 2017 18:00:28 +0300
Message-Id: <1504623628-29246-3-git-send-email-odpbot@yandex.ru>
In-Reply-To: <1504623628-29246-1-git-send-email-odpbot@yandex.ru>
References: <1504623628-29246-1-git-send-email-odpbot@yandex.ru>
Github-pr-num: 160
Subject: [lng-odp] [PATCH API-NEXT v2 2/2] linux-generic: api schedule unlock lock
List-Id: "The OpenDataPlane (ODP) List" lng-odp@lists.linaro.org

From: Balasubramanian Manoharan

Signed-off-by: Balasubramanian Manoharan
---
/** Email created from pull request 160 (bala-manoharan:api_sched_order_lock)
 ** https://github.com/Linaro/odp/pull/160
 ** Patch: https://github.com/Linaro/odp/pull/160.patch
 ** Base sha: 4eae04e80a634c17ac276bb06bce468cbe28cde0
 ** Merge commit sha: 0fdc4843f1c46de41bffe74270ffe252bdce1521
 **/
 platform/linux-generic/include/odp_schedule_if.h   |  1 +
 .../include/odp_schedule_scalable_ordered.h        |  1 +
 platform/linux-generic/odp_schedule.c               | 25 +++++++++++++++++++++-
 platform/linux-generic/odp_schedule_if.c            |  5 +++++
 platform/linux-generic/odp_schedule_iquery.c        | 24 ++++++++++++++++++++-
 platform/linux-generic/odp_schedule_scalable.c      | 19 ++++++++++++++++
 platform/linux-generic/odp_schedule_sp.c            |  8 ++++++-
 7 files changed, 80 insertions(+), 3 deletions(-)
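
For context, a minimal usage sketch (illustrative only, not part of the patch): a worker that takes ordered lock 0 and then switches to ordered lock 1 through the single-argument odp_schedule_order_unlock_lock() added in odp_schedule_if.c below. It assumes the source queue was created with param.sched.lock_count >= 2; process_stage1()/process_stage2() are hypothetical application stubs.

#include <odp_api.h>

/* Hypothetical per-stage application work, stubs only to keep the sketch
 * self-contained. */
static void process_stage1(odp_event_t ev) { (void)ev; }
static void process_stage2(odp_event_t ev) { (void)ev; }

static void worker_loop(void)
{
	odp_queue_t from;
	odp_event_t ev;

	for (;;) {
		ev = odp_schedule(&from, ODP_SCHED_WAIT);
		if (ev == ODP_EVENT_INVALID)
			continue;

		/* First critical section, entered in ingress order */
		odp_schedule_order_lock(0);
		process_stage1(ev);

		/* Release the currently held ordered lock and acquire lock 1
		 * in one call (signature as added by this patch) */
		odp_schedule_order_unlock_lock(1);
		process_stage2(ev);

		odp_schedule_order_unlock(1);
		odp_event_free(ev);
	}
}
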
diff --git a/platform/linux-generic/include/odp_schedule_if.h b/platform/linux-generic/include/odp_schedule_if.h
index 657993b1..b0db67ab 100644
--- a/platform/linux-generic/include/odp_schedule_if.h
+++ b/platform/linux-generic/include/odp_schedule_if.h
@@ -95,6 +95,7 @@ typedef struct {
 			      odp_schedule_group_info_t *);
 	void (*schedule_order_lock)(unsigned);
 	void (*schedule_order_unlock)(unsigned);
+	void (*schedule_order_unlock_lock)(unsigned);
 } schedule_api_t;
diff --git a/platform/linux-generic/include/odp_schedule_scalable_ordered.h b/platform/linux-generic/include/odp_schedule_scalable_ordered.h
index 1c365a2b..cdc2ab49 100644
--- a/platform/linux-generic/include/odp_schedule_scalable_ordered.h
+++ b/platform/linux-generic/include/odp_schedule_scalable_ordered.h
@@ -79,6 +79,7 @@ typedef struct reorder_window {
 	uint32_t tail;
 	uint32_t turn;
 	uint32_t olock[CONFIG_QUEUE_MAX_ORD_LOCKS];
+	uint32_t lock_acquired;
 	uint16_t lock_count;
 	/* Reorder contexts in this window */
 	reorder_context_t *ring[RWIN_SIZE];
diff --git a/platform/linux-generic/odp_schedule.c b/platform/linux-generic/odp_schedule.c
index 5b940762..35f69bd7 100644
--- a/platform/linux-generic/odp_schedule.c
+++ b/platform/linux-generic/odp_schedule.c
@@ -214,6 +214,8 @@ typedef struct {
 	/* Array of ordered locks */
 	odp_atomic_u64_t lock[CONFIG_QUEUE_MAX_ORD_LOCKS];
 
+	odp_atomic_u64_t lock_acquired;
+
 } order_context_t ODP_ALIGNED_CACHE;
 
 typedef struct {
@@ -1103,6 +1105,7 @@ static void order_unlock(void)
 static void schedule_order_lock(unsigned lock_index)
 {
 	odp_atomic_u64_t *ord_lock;
+	odp_atomic_u64_t *lock_acquired;
 	uint32_t queue_index;
 
 	queue_index = sched_local.ordered.src_queue;
@@ -1112,6 +1115,7 @@ static void schedule_order_lock(unsigned lock_index)
 		   !sched_local.ordered.lock_called.u8[lock_index]);
 
 	ord_lock = &sched->order[queue_index].lock[lock_index];
+	lock_acquired = &sched->order[queue_index].lock_acquired;
 
 	/* Busy loop to synchronize ordered processing */
 	while (1) {
@@ -1121,6 +1125,7 @@ static void schedule_order_lock(unsigned lock_index)
 
 		if (lock_seq == sched_local.ordered.ctx) {
 			sched_local.ordered.lock_called.u8[lock_index] = 1;
+			odp_atomic_store_rel_u64(lock_acquired, lock_index);
 			return;
 		}
 		odp_cpu_pause();
@@ -1130,6 +1135,7 @@ static void schedule_order_lock(unsigned lock_index)
 static void schedule_order_unlock(unsigned lock_index)
 {
 	odp_atomic_u64_t *ord_lock;
+	odp_atomic_u64_t *lock_acquired;
 	uint32_t queue_index;
 
 	queue_index = sched_local.ordered.src_queue;
@@ -1138,12 +1144,28 @@ static void schedule_order_unlock(unsigned lock_index)
 		   lock_index <= sched->queue[queue_index].order_lock_count);
 
 	ord_lock = &sched->order[queue_index].lock[lock_index];
+	lock_acquired = &sched->order[queue_index].lock_acquired;
 
 	ODP_ASSERT(sched_local.ordered.ctx == odp_atomic_load_u64(ord_lock));
 
+	odp_atomic_store_rel_u64(lock_acquired,
+				 sched->queue[queue_index].
+				 order_lock_count + 1);
 	odp_atomic_store_rel_u64(ord_lock, sched_local.ordered.ctx + 1);
 }
 
+static void schedule_order_unlock_lock(unsigned lock_index)
+{
+	odp_atomic_u64_t *lock_acquired;
+	uint32_t queue_index;
+
+	queue_index = sched_local.ordered.src_queue;
+
+	lock_acquired = &sched->order[queue_index].lock_acquired;
+	schedule_order_unlock(odp_atomic_load_u64(lock_acquired));
+	schedule_order_lock(lock_index);
+}
+
 static void schedule_pause(void)
 {
 	sched_local.pause = 1;
@@ -1429,5 +1451,6 @@ const schedule_api_t schedule_default_api = {
 	.schedule_group_thrmask = schedule_group_thrmask,
 	.schedule_group_info = schedule_group_info,
 	.schedule_order_lock = schedule_order_lock,
-	.schedule_order_unlock = schedule_order_unlock
+	.schedule_order_unlock = schedule_order_unlock,
+	.schedule_order_unlock_lock = schedule_order_unlock_lock
 };
diff --git a/platform/linux-generic/odp_schedule_if.c b/platform/linux-generic/odp_schedule_if.c
index e56e3722..858c1949 100644
--- a/platform/linux-generic/odp_schedule_if.c
+++ b/platform/linux-generic/odp_schedule_if.c
@@ -129,3 +129,8 @@ void odp_schedule_order_unlock(unsigned lock_index)
 {
 	return sched_api->schedule_order_unlock(lock_index);
 }
+
+void odp_schedule_order_unlock_lock(uint32_t lock_index)
+{
+	sched_api->schedule_order_unlock_lock(lock_index);
+}
diff --git a/platform/linux-generic/odp_schedule_iquery.c b/platform/linux-generic/odp_schedule_iquery.c
index b81e5dab..d8154358 100644
--- a/platform/linux-generic/odp_schedule_iquery.c
+++ b/platform/linux-generic/odp_schedule_iquery.c
@@ -135,6 +135,8 @@ typedef struct {
 	/* Array of ordered locks */
 	odp_atomic_u64_t lock[CONFIG_QUEUE_MAX_ORD_LOCKS];
 
+	odp_atomic_u64_t lock_acquired;
+
 } order_context_t ODP_ALIGNED_CACHE;
 
 typedef struct {
@@ -1255,6 +1257,7 @@ static void order_unlock(void)
 static void schedule_order_lock(unsigned lock_index)
 {
 	odp_atomic_u64_t *ord_lock;
+	odp_atomic_u64_t *lock_acquired;
 	uint32_t queue_index;
 
 	queue_index = thread_local.ordered.src_queue;
@@ -1264,6 +1267,7 @@ static void schedule_order_lock(unsigned lock_index)
 		   !thread_local.ordered.lock_called.u8[lock_index]);
 
 	ord_lock = &sched->order[queue_index].lock[lock_index];
+	lock_acquired = &sched->order[queue_index].lock_acquired;
 
 	/* Busy loop to synchronize ordered processing */
 	while (1) {
@@ -1273,6 +1277,7 @@ static void schedule_order_lock(unsigned lock_index)
 
 		if (lock_seq == thread_local.ordered.ctx) {
 			thread_local.ordered.lock_called.u8[lock_index] = 1;
+			odp_atomic_store_rel_u64(lock_acquired, lock_index);
 			return;
 		}
 		odp_cpu_pause();
@@ -1282,6 +1287,7 @@ static void schedule_order_lock(unsigned lock_index)
 static void schedule_order_unlock(unsigned lock_index)
 {
 	odp_atomic_u64_t *ord_lock;
+	odp_atomic_u64_t *lock_acquired;
 	uint32_t queue_index;
 
 	queue_index = thread_local.ordered.src_queue;
@@ -1290,12 +1296,27 @@ static void schedule_order_unlock(unsigned lock_index)
 		   lock_index <= sched->queues[queue_index].lock_count);
 
 	ord_lock = &sched->order[queue_index].lock[lock_index];
+	lock_acquired = &sched->order[queue_index].lock_acquired;
 
 	ODP_ASSERT(thread_local.ordered.ctx == odp_atomic_load_u64(ord_lock));
 
+	odp_atomic_store_rel_u64(lock_acquired,
+				 sched->queues[queue_index].lock_count + 1);
 	odp_atomic_store_rel_u64(ord_lock, thread_local.ordered.ctx + 1);
 }
 
+static void schedule_order_unlock_lock(unsigned lock_index)
+{
+	uint32_t queue_index;
+	odp_atomic_u64_t *lock_acquired;
+
+	queue_index = thread_local.ordered.src_queue;
+
+	lock_acquired = &sched->order[queue_index].lock_acquired;
+	schedule_order_unlock(odp_atomic_load_u64(lock_acquired));
+	schedule_order_lock(lock_index);
+}
+
 static unsigned schedule_max_ordered_locks(void)
 {
 	return CONFIG_QUEUE_MAX_ORD_LOCKS;
@@ -1368,7 +1389,8 @@ const schedule_api_t schedule_iquery_api = {
 	.schedule_group_thrmask = schedule_group_thrmask,
 	.schedule_group_info = schedule_group_info,
 	.schedule_order_lock = schedule_order_lock,
-	.schedule_order_unlock = schedule_order_unlock
+	.schedule_order_unlock = schedule_order_unlock,
+	.schedule_order_unlock_lock = schedule_order_unlock_lock
 };
 
 static void thread_set_interest(sched_thread_local_t *thread,
diff --git a/platform/linux-generic/odp_schedule_scalable.c b/platform/linux-generic/odp_schedule_scalable.c
index 765326e8..0baedb06 100644
--- a/platform/linux-generic/odp_schedule_scalable.c
+++ b/platform/linux-generic/odp_schedule_scalable.c
@@ -1007,6 +1007,8 @@ static void schedule_order_lock(unsigned lock_index)
 		       monitor32(&rctx->rwin->olock[lock_index],
 				 __ATOMIC_ACQUIRE) != rctx->sn)
 			doze();
+		atomic_store_release(&rctx->rwin->lock_acquired, lock_index, false);
+
 	}
 }
 
@@ -1025,9 +1027,25 @@ static void schedule_order_unlock(unsigned lock_index)
 	atomic_store_release(&rctx->rwin->olock[lock_index],
 			     rctx->sn + 1,
 			     /*readonly=*/false);
+	atomic_store_release(&rctx->rwin->lock_acquired,
+			     rctx->rwin->lock_count + 1, false);
 	rctx->olock_flags |= 1U << lock_index;
 }
 
+static void schedule_order_unlock_lock(unsigned lock_index)
+{
+	struct reorder_context *rctx;
+
+	rctx = sched_ts->rctx;
+	if (odp_unlikely(rctx == NULL || rctx->rwin == NULL)) {
+		ODP_ERR("Invalid call to odp_schedule_order_unlock_lock\n");
+		return;
+	}
+	schedule_order_unlock(__atomic_load_n(&rctx->rwin->lock_acquired,
+					      __ATOMIC_ACQUIRE));
+	schedule_order_lock(lock_index);
+}
+
 static void schedule_release_atomic(void)
 {
 	sched_scalable_thread_state_t *ts;
@@ -1978,4 +1996,5 @@ const schedule_api_t schedule_scalable_api = {
 	.schedule_group_info = schedule_group_info,
 	.schedule_order_lock = schedule_order_lock,
 	.schedule_order_unlock = schedule_order_unlock,
+	.schedule_order_unlock_lock = schedule_order_unlock_lock,
 };
diff --git a/platform/linux-generic/odp_schedule_sp.c b/platform/linux-generic/odp_schedule_sp.c
index 05241275..d4dfbcaf 100644
--- a/platform/linux-generic/odp_schedule_sp.c
+++ b/platform/linux-generic/odp_schedule_sp.c
@@ -819,6 +819,11 @@ static void schedule_order_unlock(unsigned lock_index)
 	(void)lock_index;
 }
 
+static void schedule_order_unlock_lock(unsigned lock_index)
+{
+	(void)lock_index;
+}
+
 static void order_lock(void)
 {
 }
@@ -868,5 +873,6 @@ const schedule_api_t schedule_sp_api = {
 	.schedule_group_thrmask = schedule_group_thrmask,
 	.schedule_group_info = schedule_group_info,
 	.schedule_order_lock = schedule_order_lock,
-	.schedule_order_unlock = schedule_order_unlock
+	.schedule_order_unlock = schedule_order_unlock,
+	.schedule_order_unlock_lock = schedule_order_unlock_lock
 };