From patchwork Tue Nov 10 14:46:59 2015
Content-Type: text/plain; charset="utf-8"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
X-Patchwork-Submitter: Bill Fischofer
X-Patchwork-Id: 56334
Delivered-To: patch@linaro.org
From: Bill Fischofer
To: lng-odp@lists.linaro.org
Date: Tue, 10 Nov 2015 06:46:59 -0800
Message-Id: <1447166821-24585-7-git-send-email-bill.fischofer@linaro.org>
X-Mailer: git-send-email 2.1.4
In-Reply-To: <1447166821-24585-1-git-send-email-bill.fischofer@linaro.org>
References: <1447166821-24585-1-git-send-email-bill.fischofer@linaro.org>
X-Topics: patch
Subject: [lng-odp] [API-NEXT PATCHv4 6/8] linux-generic: queue: streamline and correct release_order() routine

Resolve the corner case of releasing an order that still has events on
the reorder queue. This also allows the reorder_complete() routine to
be streamlined.

This patch resolves Bug https://bugs.linaro.org/show_bug.cgi?id=1879

Signed-off-by: Bill Fischofer
---
 .../linux-generic/include/odp_queue_internal.h |  5 +-
 platform/linux-generic/odp_queue.c             | 57 +++++++++++++++++-----
 2 files changed, 48 insertions(+), 14 deletions(-)

diff --git a/platform/linux-generic/include/odp_queue_internal.h b/platform/linux-generic/include/odp_queue_internal.h
index 6120740..a70044b 100644
--- a/platform/linux-generic/include/odp_queue_internal.h
+++ b/platform/linux-generic/include/odp_queue_internal.h
@@ -335,8 +335,7 @@ static inline int reorder_deq(queue_entry_t *queue,
 static inline void reorder_complete(queue_entry_t *origin_qe,
 				    odp_buffer_hdr_t **reorder_buf_return,
 				    odp_buffer_hdr_t **placeholder_buf,
-				    int placeholder_append,
-				    int order_released)
+				    int placeholder_append)
 {
 	odp_buffer_hdr_t *reorder_buf = origin_qe->s.reorder_head;
 	odp_buffer_hdr_t *next_buf;
@@ -356,7 +355,7 @@ static inline void reorder_complete(queue_entry_t *origin_qe,
 			reorder_buf = next_buf;
 			order_release(origin_qe, 1);
-		} else if (!order_released && reorder_buf->flags.sustain) {
+		} else if (reorder_buf->flags.sustain) {
 			reorder_buf = next_buf;
 		} else {
 			*reorder_buf_return = origin_qe->s.reorder_head;
diff --git a/platform/linux-generic/odp_queue.c b/platform/linux-generic/odp_queue.c
index 9cab9b2..a5e60d7 100644
--- a/platform/linux-generic/odp_queue.c
+++ b/platform/linux-generic/odp_queue.c
@@ -39,6 +39,11 @@
 #include
 
+#define RESOLVE_ORDER 0
+#define SUSTAIN_ORDER 1
+
+#define NOAPPEND 0
+#define APPEND 1
 
 typedef struct queue_table_t {
 	queue_entry_t queue[ODP_CONFIG_QUEUES];
@@ -521,8 +526,7 @@ int ordered_queue_enq(queue_entry_t *queue,
 	if (sched && schedule_queue(queue))
 		ODP_ABORT("schedule_queue failed\n");
 
-	reorder_complete(origin_qe, &reorder_buf, &placeholder_buf,
-			 1, 0);
+	reorder_complete(origin_qe, &reorder_buf, &placeholder_buf, APPEND);
 	UNLOCK(&origin_qe->s.lock);
 
 	if (reorder_buf)
@@ -606,7 +610,8 @@ int odp_queue_enq_multi(odp_queue_t handle, const odp_event_t ev[], int num)
 	for (i = 0; i < num; i++)
 		buf_hdr[i] = odp_buf_to_hdr(odp_buffer_from_event(ev[i]));
 
-	return num == 0 ? 0 : queue->s.enqueue_multi(queue, buf_hdr, num, 1);
+	return num == 0 ? 0 : queue->s.enqueue_multi(queue, buf_hdr,
+						     num, SUSTAIN_ORDER);
 }
 
 int odp_queue_enq(odp_queue_t handle, odp_event_t ev)
@@ -620,7 +625,7 @@ int odp_queue_enq(odp_queue_t handle, odp_event_t ev)
 	/* No chains via this entry */
 	buf_hdr->link = NULL;
 
-	return queue->s.enqueue(queue, buf_hdr, 1);
+	return queue->s.enqueue(queue, buf_hdr, SUSTAIN_ORDER);
 }
 
 int queue_enq_internal(odp_buffer_hdr_t *buf_hdr)
@@ -660,7 +665,7 @@ odp_buffer_hdr_t *queue_deq(queue_entry_t *queue)
 			buf_hdr->sync[i] =
 				odp_atomic_fetch_inc_u64(&queue->s.sync_in[i]);
 		}
-		buf_hdr->flags.sustain = 0;
+		buf_hdr->flags.sustain = SUSTAIN_ORDER;
 	} else {
 		buf_hdr->origin_qe = NULL;
 	}
@@ -713,7 +718,7 @@ int queue_deq_multi(queue_entry_t *queue, odp_buffer_hdr_t *buf_hdr[], int num)
 					odp_atomic_fetch_inc_u64
 					(&queue->s.sync_in[j]);
 			}
-			buf_hdr[i]->flags.sustain = 0;
+			buf_hdr[i]->flags.sustain = SUSTAIN_ORDER;
 		} else {
 			buf_hdr[i]->origin_qe = NULL;
 		}
@@ -879,7 +884,7 @@ int queue_pktout_enq(queue_entry_t *queue, odp_buffer_hdr_t *buf_hdr,
 	order_release(origin_qe, release_count + placeholder_count);
 
 	/* Now handle sends to other queues that are ready to go */
-	reorder_complete(origin_qe, &reorder_buf, &placeholder_buf, 1, 0);
+	reorder_complete(origin_qe, &reorder_buf, &placeholder_buf, APPEND);
 
 	/* We're fully done with the origin_qe at last */
 	UNLOCK(&origin_qe->s.lock);
@@ -947,13 +952,43 @@ int release_order(queue_entry_t *origin_qe, uint64_t order,
 	odp_buffer_t placeholder_buf;
 	odp_buffer_hdr_t *placeholder_buf_hdr, *reorder_buf, *next_buf;
 
-	/* Must tlock the origin queue to process the release */
+	/* Must lock the origin queue to process the release */
 	LOCK(&origin_qe->s.lock);
 
-	/* If we are in the order we can release immediately since there can
-	 * be no confusion about intermediate elements
+	/* If we are in order we can release immediately since there can be no
+	 * confusion about intermediate elements
 	 */
 	if (order <= origin_qe->s.order_out) {
+		reorder_buf = origin_qe->s.reorder_head;
+
+		/* We're in order, however there may be one or more events on
+		 * the reorder queue that are part of this order. If that is
+		 * the case, remove them and let ordered_queue_enq() handle
+		 * them and resolve the order for us.
+		 */
+		if (reorder_buf && reorder_buf->order == order) {
+			odp_buffer_hdr_t *reorder_head = reorder_buf;
+
+			next_buf = reorder_buf->next;
+
+			while (next_buf && next_buf->order == order) {
+				reorder_buf = next_buf;
+				next_buf = next_buf->next;
+			}
+
+			origin_qe->s.reorder_head = reorder_buf->next;
+			reorder_buf->next = NULL;
+
+			UNLOCK(&origin_qe->s.lock);
+			reorder_head->link = reorder_buf->next;
+			return ordered_queue_enq(reorder_head->target_qe,
+						 reorder_head, RESOLVE_ORDER,
+						 origin_qe, order);
+		}
+
+		/* Reorder queue has no elements for this order, so it's safe
+		 * to resolve order here
+		 */
 		order_release(origin_qe, 1);
 
 		/* Check if this release allows us to unblock waiters. At the
@@ -965,7 +1000,7 @@ int release_order(queue_entry_t *origin_qe, uint64_t order,
 		 * element(s) on the reorder queue
 		 */
 		reorder_complete(origin_qe, &reorder_buf,
-				 &placeholder_buf_hdr, 0, 1);
+				 &placeholder_buf_hdr, NOAPPEND);
 
 		/* Now safe to unlock */
 		UNLOCK(&origin_qe->s.lock);