From patchwork Sun Aug 9 01:55:11 2015
X-Patchwork-Submitter: Bill Fischofer
X-Patchwork-Id: 52081
From: Bill Fischofer <bill.fischofer@linaro.org>
To: lng-odp@lists.linaro.org
Date: Sat, 8 Aug 2015 20:55:11 -0500
Message-Id: <1439085315-6957-10-git-send-email-bill.fischofer@linaro.org>
X-Mailer: git-send-email 2.1.4
In-Reply-To: <1439085315-6957-1-git-send-email-bill.fischofer@linaro.org>
References: <1439085315-6957-1-git-send-email-bill.fischofer@linaro.org>
Subject: [lng-odp] [API-NEXT PATCHv11 09/13] linux-generic: queue: implement ordered queues
Signed-off-by: Bill Fischofer <bill.fischofer@linaro.org>
---
 platform/linux-generic/odp_pool.c     |   3 +
 platform/linux-generic/odp_queue.c    | 421 +++++++++++++++++++++++++++++++---
 platform/linux-generic/odp_schedule.c |   2 -
 3 files changed, 391 insertions(+), 35 deletions(-)

diff --git a/platform/linux-generic/odp_pool.c b/platform/linux-generic/odp_pool.c
index 14221fd..30d4b2b 100644
--- a/platform/linux-generic/odp_pool.c
+++ b/platform/linux-generic/odp_pool.c
@@ -514,6 +514,9 @@ odp_buffer_t buffer_alloc(odp_pool_t pool_hdl, size_t size)
 	/* By default, buffers inherit their pool's zeroization setting */
 	buf->buf.flags.zeroized = pool->s.flags.zeroized;
 
+	/* By default, buffers are not associated with an ordered queue */
+	buf->buf.origin_qe = NULL;
+
 	if (buf->buf.type == ODP_EVENT_PACKET)
 		packet_init(pool, &buf->pkt, size);
 
diff --git a/platform/linux-generic/odp_queue.c b/platform/linux-generic/odp_queue.c
index 4a0df11..4d3c548 100644
--- a/platform/linux-generic/odp_queue.c
+++ b/platform/linux-generic/odp_queue.c
@@ -14,6 +14,7 @@
 #include
 #include
 #include
+#include
 #include
 #include
 #include
@@ -21,17 +22,20 @@
 #include
 #include
 #include
+#include
 
 #ifdef USE_TICKETLOCK
 #include
 #define LOCK(a)      odp_ticketlock_lock(a)
 #define UNLOCK(a)    odp_ticketlock_unlock(a)
 #define LOCK_INIT(a) odp_ticketlock_init(a)
+#define LOCK_TRY(a)  odp_ticketlock_trylock(a)
 #else
 #include
 #define LOCK(a)      odp_spinlock_lock(a)
 #define UNLOCK(a)    odp_spinlock_unlock(a)
 #define LOCK_INIT(a) odp_spinlock_init(a)
+#define LOCK_TRY(a)  odp_spinlock_trylock(a)
 #endif
 
 #include
@@ -73,9 +77,9 @@ static void queue_init(queue_entry_t *queue, const char *name,
 		queue->s.dequeue_multi = pktin_deq_multi;
 		break;
 	case ODP_QUEUE_TYPE_PKTOUT:
-		queue->s.enqueue = pktout_enqueue;
+		queue->s.enqueue = queue_pktout_enq;
 		queue->s.dequeue = pktout_dequeue;
-		queue->s.enqueue_multi = pktout_enq_multi;
+		queue->s.enqueue_multi = queue_pktout_enq_multi;
 		queue->s.dequeue_multi = pktout_deq_multi;
 		break;
 	default:
@@ -89,6 +93,9 @@ static void queue_init(queue_entry_t *queue, const char *name,
 	queue->s.head = NULL;
 	queue->s.tail = NULL;
 
+	queue->s.reorder_head = NULL;
+	queue->s.reorder_tail = NULL;
+
 	queue->s.pri_queue = ODP_QUEUE_INVALID;
 	queue->s.cmd_ev    = ODP_EVENT_INVALID;
 }
@@ -329,30 +336,134 @@ odp_queue_t odp_queue_lookup(const char *name)
 int queue_enq(queue_entry_t *queue, odp_buffer_hdr_t *buf_hdr)
 {
 	int sched = 0;
+	queue_entry_t *origin_qe = buf_hdr->origin_qe;
+	odp_buffer_hdr_t *buf_tail;
+
+	/* Need two locks for enq operations from ordered queues */
+	if (origin_qe) {
+		LOCK(&origin_qe->s.lock);
+		while (!LOCK_TRY(&queue->s.lock)) {
+			UNLOCK(&origin_qe->s.lock);
+			LOCK(&origin_qe->s.lock);
+		}
+		if (odp_unlikely(origin_qe->s.status < QUEUE_STATUS_READY)) {
+			UNLOCK(&queue->s.lock);
+			UNLOCK(&origin_qe->s.lock);
+			ODP_ERR("Bad origin queue status\n");
+			return -1;
+		}
+	} else {
+		LOCK(&queue->s.lock);
+	}
 
-	LOCK(&queue->s.lock);
 	if (odp_unlikely(queue->s.status < QUEUE_STATUS_READY)) {
 		UNLOCK(&queue->s.lock);
+		if (origin_qe)
+			UNLOCK(&origin_qe->s.lock);
 		ODP_ERR("Bad queue status\n");
 		return -1;
 	}
 
-	if (queue->s.head == NULL) {
+	/* We can only complete the enq if we're in order */
+	if (origin_qe) {
+		if (buf_hdr->order > origin_qe->s.order_out) {
+			reorder_enq(queue, origin_qe, buf_hdr);
+
+			/* This enq can't complete until order is restored, so
+			 * we're done here.
+			 */
+			UNLOCK(&queue->s.lock);
+			UNLOCK(&origin_qe->s.lock);
+			return 0;
+		}
+
+		/* We're in order, so account for this and proceed with enq */
+		if (!buf_hdr->flags.sustain)
+			order_release(origin_qe, 1);
+
+		/* if this element is linked, restore the linked chain */
+		buf_tail = buf_hdr->link;
+
+		if (buf_tail) {
+			buf_hdr->next = buf_tail;
+			buf_hdr->link = NULL;
+
+			/* find end of the chain */
+			while (buf_tail->next)
+				buf_tail = buf_tail->next;
+		} else {
+			buf_tail = buf_hdr;
+		}
+	} else {
+		buf_tail = buf_hdr;
+	}
+
+	if (!queue->s.head) {
 		/* Empty queue */
 		queue->s.head = buf_hdr;
-		queue->s.tail = buf_hdr;
-		buf_hdr->next = NULL;
+		queue->s.tail = buf_tail;
+		buf_tail->next = NULL;
 	} else {
 		queue->s.tail->next = buf_hdr;
-		queue->s.tail = buf_hdr;
-		buf_hdr->next = NULL;
+		queue->s.tail = buf_tail;
+		buf_tail->next = NULL;
 	}
 
 	if (queue->s.status == QUEUE_STATUS_NOTSCHED) {
 		queue->s.status = QUEUE_STATUS_SCHED;
 		sched = 1; /* retval: schedule queue */
 	}
-	UNLOCK(&queue->s.lock);
+
+	/*
+	 * If we came from an ordered queue, check to see if our successful
+	 * enq has unblocked other buffers in the origin's reorder queue.
+	 */
+	if (origin_qe) {
+		odp_buffer_hdr_t *reorder_buf;
+		odp_buffer_hdr_t *next_buf;
+		odp_buffer_hdr_t *reorder_prev;
+		odp_buffer_hdr_t *placeholder_buf;
+		uint32_t release_count;
+		uint32_t placeholder_count;
+
+		reorder_deq(queue, origin_qe,
+			    &reorder_buf, &reorder_prev, &placeholder_buf,
+			    &release_count, &placeholder_count);
+
+		/* Add released buffers to the queue as well */
+		if (release_count > 0) {
+			queue->s.tail->next = origin_qe->s.reorder_head;
+			queue->s.tail = reorder_prev;
+			origin_qe->s.reorder_head = reorder_prev->next;
+			reorder_prev->next = NULL;
+		}
+
+		/* Reflect the above two in the output sequence */
+		order_release(origin_qe, release_count + placeholder_count);
+
+		/* Now handle any unblocked buffers destined for other queues */
+		UNLOCK(&queue->s.lock);
+
+		if (reorder_buf &&
+		    reorder_buf->order <= origin_qe->s.order_out)
+			origin_qe->s.reorder_head = reorder_buf->next;
+		else
+			reorder_buf = NULL;
+		UNLOCK(&origin_qe->s.lock);
+
+		if (reorder_buf)
+			odp_queue_enq(reorder_buf->target_qe->s.handle,
+				      (odp_event_t)reorder_buf->handle.handle);
+
+		/* Free all placeholder bufs that are now released */
+		while (placeholder_buf) {
+			next_buf = placeholder_buf->next;
+			odp_buffer_free(placeholder_buf->handle.handle);
+			placeholder_buf = next_buf;
+		}
+	} else {
+		UNLOCK(&queue->s.lock);
+	}
 
 	/* Add queue to scheduling */
 	if (sched && schedule_queue(queue))
@@ -364,41 +475,83 @@ int queue_enq(queue_entry_t *queue, odp_buffer_hdr_t *buf_hdr)
 int queue_enq_multi(queue_entry_t *queue, odp_buffer_hdr_t *buf_hdr[], int num)
 {
 	int sched = 0;
-	int i;
+	int i, rc, ret_count = 0;
+	int ordered_head[num];
+	int ordered_count = 0;
 	odp_buffer_hdr_t *tail;
 
-	for (i = 0; i < num - 1; i++)
-		buf_hdr[i]->next = buf_hdr[i+1];
+	/* Identify ordered chains in the input buffer list */
+	for (i = 0; i < num; i++) {
+		if (buf_hdr[i]->origin_qe)
+			ordered_head[ordered_count++] = i;
 
-	tail = buf_hdr[num-1];
-	buf_hdr[num-1]->next = NULL;
+		buf_hdr[i]->next = i < num - 1 ? buf_hdr[i + 1] : NULL;
+	}
 
-	LOCK(&queue->s.lock);
-	if (odp_unlikely(queue->s.status < QUEUE_STATUS_READY)) {
-		UNLOCK(&queue->s.lock);
-		ODP_ERR("Bad queue status\n");
-		return -1;
+	if (ordered_count) {
+		if (ordered_head[0] > 0) {
+			tail = buf_hdr[ordered_head[0] - 1];
+			tail->next = NULL;
+			ret_count = ordered_head[0];
+		} else {
+			tail = NULL;
+			ret_count = 0;
+		}
+	} else {
+		tail = buf_hdr[num - 1];
+		ret_count = num;
 	}
 
-	/* Empty queue */
-	if (queue->s.head == NULL)
-		queue->s.head = buf_hdr[0];
-	else
-		queue->s.tail->next = buf_hdr[0];
+	/* Handle regular enq's at start of list */
+	if (tail) {
+		LOCK(&queue->s.lock);
+		if (odp_unlikely(queue->s.status < QUEUE_STATUS_READY)) {
+			UNLOCK(&queue->s.lock);
+			ODP_ERR("Bad queue status\n");
+			return -1;
+		}
 
-	queue->s.tail = tail;
+		/* Handle empty queue */
+		if (queue->s.head)
+			queue->s.tail->next = buf_hdr[0];
+		else
+			queue->s.head = buf_hdr[0];
 
-	if (queue->s.status == QUEUE_STATUS_NOTSCHED) {
-		queue->s.status = QUEUE_STATUS_SCHED;
-		sched = 1; /* retval: schedule queue */
+		queue->s.tail = tail;
+
+		if (queue->s.status == QUEUE_STATUS_NOTSCHED) {
+			queue->s.status = QUEUE_STATUS_SCHED;
+			sched = 1; /* retval: schedule queue */
+		}
+		UNLOCK(&queue->s.lock);
+
+		/* Add queue to scheduling */
+		if (sched && schedule_queue(queue))
+			ODP_ABORT("schedule_queue failed\n");
 	}
-	UNLOCK(&queue->s.lock);
 
-	/* Add queue to scheduling */
-	if (sched && schedule_queue(queue))
-		ODP_ABORT("schedule_queue failed\n");
+	/* Handle ordered chains in the list */
+	for (i = 0; i < ordered_count; i++) {
+		int eol = i < ordered_count - 1 ? ordered_head[i + 1] : num;
+		int list_count = eol - i;
 
-	return num; /* All events enqueued */
+		if (i < ordered_count - 1)
+			buf_hdr[eol - 1]->next = NULL;
+
+		buf_hdr[ordered_head[i]]->link =
+			list_count > 1 ? buf_hdr[ordered_head[i] + 1] : NULL;
+
+		rc = queue_enq(queue, buf_hdr[ordered_head[i]]);
+		if (rc < 0)
+			return ret_count;
+
+		if (rc < list_count)
+			return ret_count + rc;
+
+		ret_count += rc;
+	}
+
+	return ret_count;
 }
 
 int odp_queue_enq_multi(odp_queue_t handle, const odp_event_t ev[], int num)
@@ -427,6 +580,9 @@ int odp_queue_enq(odp_queue_t handle, odp_event_t ev)
 	queue   = queue_to_qentry(handle);
 	buf_hdr = odp_buf_to_hdr(odp_buffer_from_event(ev));
 
+	/* No chains via this entry */
+	buf_hdr->link = NULL;
+
 	return queue->s.enqueue(queue, buf_hdr);
 }
 
@@ -449,6 +605,13 @@ odp_buffer_hdr_t *queue_deq(queue_entry_t *queue)
 		buf_hdr       = queue->s.head;
 		queue->s.head = buf_hdr->next;
 		buf_hdr->next = NULL;
+		if (queue->s.param.sched.sync == ODP_SCHED_SYNC_ORDERED) {
+			buf_hdr->origin_qe = queue;
+			buf_hdr->order = queue->s.order_in++;
+			buf_hdr->flags.sustain = 0;
+		} else {
+			buf_hdr->origin_qe = NULL;
+		}
 
 		if (queue->s.head == NULL) {
 			/* Queue is now empty */
@@ -489,6 +652,13 @@ int queue_deq_multi(queue_entry_t *queue, odp_buffer_hdr_t *buf_hdr[], int num)
 		buf_hdr[i]       = hdr;
 		hdr              = hdr->next;
 		buf_hdr[i]->next = NULL;
+		if (queue->s.param.sched.sync == ODP_SCHED_SYNC_ORDERED) {
+			buf_hdr[i]->origin_qe = queue;
+			buf_hdr[i]->order = queue->s.order_in++;
+			buf_hdr[i]->flags.sustain = 0;
+		} else {
+			buf_hdr[i]->origin_qe = NULL;
+		}
 	}
 
 	queue->s.head = hdr;
@@ -537,6 +707,191 @@ odp_event_t odp_queue_deq(odp_queue_t handle)
 	return ODP_EVENT_INVALID;
 }
 
+int queue_pktout_enq(queue_entry_t *queue, odp_buffer_hdr_t *buf_hdr)
+{
+	queue_entry_t *origin_qe = buf_hdr->origin_qe;
+	int rc, sustain;
+
+	/* Special processing needed only if we came from an ordered queue */
+	if (!origin_qe)
+		return pktout_enqueue(queue, buf_hdr);
+
+	/* Must lock origin_qe for ordered processing */
+	LOCK(&origin_qe->s.lock);
+	if (odp_unlikely(origin_qe->s.status < QUEUE_STATUS_READY)) {
+		UNLOCK(&origin_qe->s.lock);
+		ODP_ERR("Bad origin queue status\n");
+		return -1;
+	}
+
+	/* We can only complete the enq if we're in order */
+	if (buf_hdr->order > origin_qe->s.order_out) {
+		reorder_enq(queue, origin_qe, buf_hdr);
+
+		/* This enq can't complete until order is restored, so
+		 * we're done here.
+		 */
+		UNLOCK(&origin_qe->s.lock);
+		return 0;
+	}
+
+	/* Perform our enq since we're in order.
+	 * Note: Don't hold the origin_qe lock across an I/O operation!
+	 * Note that we also cache the sustain flag since the buffer may
+	 * be freed by the I/O operation so we can't reference it afterwards.
+	 */
+	UNLOCK(&origin_qe->s.lock);
+	sustain = buf_hdr->flags.sustain;
+
+	/* Handle any chained buffers (internal calls) */
+	if (buf_hdr->link) {
+		odp_buffer_hdr_t *buf_hdrs[QUEUE_MULTI_MAX];
+		odp_buffer_hdr_t *next_buf;
+		int num = 0;
+
+		next_buf = buf_hdr->link;
+		buf_hdr->link = NULL;
+
+		while (next_buf) {
+			buf_hdrs[num++] = next_buf;
+			next_buf = next_buf->next;
+		}
+
+		rc = pktout_enq_multi(queue, buf_hdrs, num);
+		if (rc < num)
+			return -1;
+	} else {
+		rc = pktout_enqueue(queue, buf_hdr);
+		if (!rc)
+			return rc;
+	}
+
+	/* Reacquire the lock following the I/O send. Note that we're still
+	 * guaranteed to be in order here since we haven't released
+	 * order yet.
+	 */
+	LOCK(&origin_qe->s.lock);
+	if (odp_unlikely(origin_qe->s.status < QUEUE_STATUS_READY)) {
+		UNLOCK(&origin_qe->s.lock);
+		ODP_ERR("Bad origin queue status\n");
+		return -1;
+	}
+
+	/* Account for this ordered enq */
+	if (!sustain)
+		order_release(origin_qe, 1);
+
+	/* Now check to see if our successful enq has unblocked other buffers
+	 * in the origin's reorder queue.
+	 */
+	odp_buffer_hdr_t *reorder_buf;
+	odp_buffer_hdr_t *next_buf;
+	odp_buffer_hdr_t *reorder_prev;
+	odp_buffer_hdr_t *xmit_buf;
+	odp_buffer_hdr_t *placeholder_buf;
+	uint32_t release_count;
+	uint32_t placeholder_count;
+
+	reorder_deq(queue, origin_qe,
+		    &reorder_buf, &reorder_prev, &placeholder_buf,
+		    &release_count, &placeholder_count);
+
+	/* Send released buffers as well */
+	if (release_count > 0) {
+		xmit_buf = origin_qe->s.reorder_head;
+		origin_qe->s.reorder_head = reorder_prev->next;
+		reorder_prev->next = NULL;
+		UNLOCK(&origin_qe->s.lock);
+
+		do {
+			next_buf = xmit_buf->next;
+			pktout_enqueue(queue, xmit_buf);
+			xmit_buf = next_buf;
+		} while (xmit_buf);
+
+		/* Reacquire the origin_qe lock to continue */
+		LOCK(&origin_qe->s.lock);
+		if (odp_unlikely(origin_qe->s.status < QUEUE_STATUS_READY)) {
+			UNLOCK(&origin_qe->s.lock);
+			ODP_ERR("Bad origin queue status\n");
+			return -1;
+		}
+	}
+
+	/* Update the order sequence to reflect the deq'd elements */
+	order_release(origin_qe, release_count + placeholder_count);
+
+	/* Now handle sends to other queues that are ready to go */
+	if (reorder_buf && reorder_buf->order <= origin_qe->s.order_out)
+		origin_qe->s.reorder_head = reorder_buf->next;
+	else
+		reorder_buf = NULL;
+
+	/* We're fully done with the origin_qe at last */
+	UNLOCK(&origin_qe->s.lock);
+
+	/* Now send the next buffer to its target queue */
+	if (reorder_buf)
+		odp_queue_enq(reorder_buf->target_qe->s.handle,
+			      (odp_event_t)reorder_buf->handle.handle);
+
+	/* Free all placeholder bufs that are now released */
+	while (placeholder_buf) {
+		next_buf = placeholder_buf->next;
+		odp_buffer_free(placeholder_buf->handle.handle);
+		placeholder_buf = next_buf;
+	}
+
+	return 0;
+}
+
+int queue_pktout_enq_multi(queue_entry_t *queue, odp_buffer_hdr_t *buf_hdr[],
+			   int num)
+{
+	int i, rc, ret_count = 0;
+	int ordered_head[num];
+	int ordered_count = 0;
+
+	/* Identify ordered chains in the input buffer list */
+	for (i = 0; i < num; i++) {
+		if (buf_hdr[i]->origin_qe)
+			ordered_head[ordered_count++] = i;
+
+		buf_hdr[i]->next = i < num - 1 ? buf_hdr[i + 1] : NULL;
+	}
+
+	ret_count = ordered_count ? ordered_head[0] : num;
+
+	/* Handle regular enq's at start of list */
+	if (ret_count) {
+		rc = pktout_enq_multi(queue, buf_hdr, ret_count);
+		if (rc < ret_count)
+			return rc;
+	}
+
+	/* Handle ordered chains in the list */
+	for (i = 0; i < ordered_count; i++) {
+		int eol = i < ordered_count - 1 ? ordered_head[i + 1] : num;
+		int list_count = eol - i;
+
+		if (i < ordered_count - 1)
+			buf_hdr[eol - 1]->next = NULL;
+
+		buf_hdr[ordered_head[i]]->link =
+			list_count > 1 ? buf_hdr[ordered_head[i] + 1] : NULL;
+
+		rc = queue_pktout_enq(queue, buf_hdr[ordered_head[i]]);
+		if (rc < 0)
+			return ret_count;
+
+		if (rc < list_count)
+			return ret_count + rc;
+
+		ret_count += rc;
+	}
+
+	return ret_count;
+}
 
 void queue_lock(queue_entry_t *queue)
 {
diff --git a/platform/linux-generic/odp_schedule.c b/platform/linux-generic/odp_schedule.c
index 2a2cc1d..d595375 100644
--- a/platform/linux-generic/odp_schedule.c
+++ b/platform/linux-generic/odp_schedule.c
@@ -411,8 +411,6 @@ static inline int copy_events(odp_event_t out_ev[], unsigned int max)
 
 /*
  * Schedule queues
- *
- * TODO: SYNC_ORDERED not implemented yet
  */
 static int schedule(odp_queue_t *out_queue, odp_event_t out_ev[],
 		    unsigned int max_num, unsigned int max_deq)