From patchwork Sun Aug 9 16:54:33 2015
X-Patchwork-Submitter: Bill Fischofer
X-Patchwork-Id: 52122
From: Bill Fischofer <bill.fischofer@linaro.org>
To: lng-odp@lists.linaro.org
Date: Sun, 9 Aug 2015 11:54:33 -0500
Message-Id: <1439139273-22438-14-git-send-email-bill.fischofer@linaro.org>
In-Reply-To: <1439139273-22438-1-git-send-email-bill.fischofer@linaro.org>
References: <1439139273-22438-1-git-send-email-bill.fischofer@linaro.org>
X-Mailer: git-send-email 2.1.4
Subject: [lng-odp] [API-NEXT PATCHv14 13/13] linux-generic: schedule: implement ordered locks

Implement the odp_schedule_order_lock() and odp_schedule_order_unlock()
routines to enable ordered synchronization within parallel processing of
ordered flows.
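As a usage sketch (not part of this patch: the worker loop and the shared
counter below are illustrative assumptions, while odp_schedule() and
ODP_SCHED_WAIT come from the existing ODP API), a thread processing events
from an ordered queue can serialize an order-sensitive section like this:

#include <odp.h>

static uint64_t shared_seqno; /* touched only while holding the ordered lock */

static void worker_loop(void)
{
	for (;;) {
		odp_event_t ev = odp_schedule(NULL, ODP_SCHED_WAIT);

		/* ...order-insensitive work on ev proceeds in parallel... */

		odp_schedule_order_lock(ev);   /* spins until ev is in order */
		shared_seqno++;                /* order-sensitive section */
		odp_schedule_order_unlock(ev);

		/* ...more parallel work, then enqueue or free ev... */
	}
}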
Signed-off-by: Bill Fischofer <bill.fischofer@linaro.org>
---
 .../linux-generic/include/odp_queue_internal.h |  1 +
 platform/linux-generic/odp_queue.c             | 34 ++++++++++++++++++++++
 2 files changed, 35 insertions(+)

diff --git a/platform/linux-generic/include/odp_queue_internal.h b/platform/linux-generic/include/odp_queue_internal.h
index 534cc57..69de4fd 100644
--- a/platform/linux-generic/include/odp_queue_internal.h
+++ b/platform/linux-generic/include/odp_queue_internal.h
@@ -178,6 +178,7 @@ static inline void reorder_enq(queue_entry_t *queue,
 static inline void order_release(queue_entry_t *origin_qe, int count)
 {
 	origin_qe->s.order_out += count;
+	odp_atomic_fetch_add_u64(&origin_qe->s.sync_out, count);
 }
 
 static inline void reorder_deq(queue_entry_t *queue,
diff --git a/platform/linux-generic/odp_queue.c b/platform/linux-generic/odp_queue.c
index 6f96304..fe18866 100644
--- a/platform/linux-generic/odp_queue.c
+++ b/platform/linux-generic/odp_queue.c
@@ -123,6 +123,8 @@ int odp_queue_init_global(void)
 		/* init locks */
 		queue_entry_t *queue = get_qentry(i);
 		LOCK_INIT(&queue->s.lock);
+		odp_atomic_init_u64(&queue->s.sync_in, 0);
+		odp_atomic_init_u64(&queue->s.sync_out, 0);
 		queue->s.handle = queue_from_id(i);
 	}
 
@@ -613,6 +615,7 @@ odp_buffer_hdr_t *queue_deq(queue_entry_t *queue)
 	if (queue_is_ordered(queue)) {
 		buf_hdr->origin_qe = queue;
 		buf_hdr->order = queue->s.order_in++;
+		buf_hdr->sync = odp_atomic_fetch_inc_u64(&queue->s.sync_in);
 		buf_hdr->flags.sustain = 0;
 	} else {
 		buf_hdr->origin_qe = NULL;
@@ -660,6 +663,8 @@ int queue_deq_multi(queue_entry_t *queue, odp_buffer_hdr_t *buf_hdr[], int num)
 		if (queue_is_ordered(queue)) {
 			buf_hdr[i]->origin_qe = queue;
 			buf_hdr[i]->order = queue->s.order_in++;
+			buf_hdr[i]->sync =
+				odp_atomic_fetch_inc_u64(&queue->s.sync_in);
 			buf_hdr[i]->flags.sustain = 0;
 		} else {
 			buf_hdr[i]->origin_qe = NULL;
@@ -1012,3 +1017,32 @@ int odp_schedule_order_copy(odp_event_t src_event, odp_event_t dst_event)
 	UNLOCK(&origin_qe->s.lock);
 	return 0;
 }
+
+void odp_schedule_order_lock(odp_event_t ev)
+{
+	odp_buffer_hdr_t *buf_hdr = odp_buf_to_hdr(odp_buffer_from_event(ev));
+	queue_entry_t *origin_qe = buf_hdr->origin_qe;
+
+	/* Wait until we are in order. Note that sync_out will be incremented
+	 * both by unlocks as well as order resolution, so we're OK if only
+	 * some events in the ordered flow need to lock.
+	 */
+	while (buf_hdr->sync > odp_atomic_load_u64(&origin_qe->s.sync_out))
+		odp_spin();
+}
+
+void odp_schedule_order_unlock(odp_event_t ev)
+{
+	odp_buffer_hdr_t *buf_hdr = odp_buf_to_hdr(odp_buffer_from_event(ev));
+	queue_entry_t *origin_qe = buf_hdr->origin_qe;
+
+	/* Get a new sync order for reusability, and release the lock. Note
+	 * that this must be done in this sequence to prevent race conditions
+	 * where the next waiter could lock and unlock before we're able to
+	 * get a new sync order since that would cause order inversion on
+	 * subsequent locks we may perform on this event in this ordered
+	 * context.
+	 */
+	buf_hdr->sync = odp_atomic_fetch_inc_u64(&origin_qe->s.sync_in);
+	odp_atomic_fetch_inc_u64(&origin_qe->s.sync_out);
+}
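
Note for reviewers: the sync_in/sync_out pair above behaves like a ticket
lock keyed by dequeue order. Each event takes a ticket from sync_in when it
is dequeued, and may enter its order-sensitive section once sync_out
(advanced by unlocks and by order resolution in order_release()) has caught
up with that ticket. A stand-alone sketch of the same scheme in C11 atomics,
with illustrative names that are not part of ODP:

#include <stdatomic.h>
#include <stdint.h>

typedef struct {
	atomic_uint_fast64_t sync_in;  /* next ticket to hand out */
	atomic_uint_fast64_t sync_out; /* tickets already released */
} order_sync_t;

/* Take a ticket; corresponds to the fetch-and-increment of sync_in at dequeue. */
static inline uint64_t take_ticket(order_sync_t *s)
{
	return atomic_fetch_add(&s->sync_in, 1);
}

/* Spin until it is this ticket's turn; corresponds to odp_schedule_order_lock(). */
static inline void wait_turn(order_sync_t *s, uint64_t ticket)
{
	while (atomic_load(&s->sync_out) < ticket)
		; /* busy-wait, as odp_spin() does in the patch */
}

/* Let the next ticket proceed; corresponds to odp_schedule_order_unlock(),
 * or to order_release() when the event's order is resolved without locking.
 */
static inline void release_turn(order_sync_t *s)
{
	atomic_fetch_add(&s->sync_out, 1);
}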