From patchwork Thu Mar 1 16:00:07 2018
X-Patchwork-Submitter: Github ODP bot
X-Patchwork-Id: 130318
Delivered-To: patch@linaro.org
From: Github ODP bot
To: lng-odp@lists.linaro.org
Date: Thu, 1 Mar 2018 19:00:07 +0300
Message-Id: <1519920009-23406-2-git-send-email-odpbot@yandex.ru>
In-Reply-To: <1519920009-23406-1-git-send-email-odpbot@yandex.ru>
References: <1519920009-23406-1-git-send-email-odpbot@yandex.ru>
Github-pr-num: 504
Subject: [lng-odp] [PATCH v1 1/3] linux-gen: sched: optimize packet input polling

From: Petri Savolainen

Optimize scheduler throughput when packet IO interfaces are used. The
special pktio poll commands are removed and event queues are used
instead to trigger packet input polling: packet input is polled when a
queue connected to it is dequeued empty. To make this work, queues
connected to packet input are not removed from scheduling when they
are empty.
Signed-off-by: Petri Savolainen
---
/** Email created from pull request 504 (psavol:master-sched-optim-2)
 ** https://github.com/Linaro/odp/pull/504
 ** Patch: https://github.com/Linaro/odp/pull/504.patch
 ** Base sha: 284f52d72ec19df3774c7409780f1f9eea33b8e6
 ** Merge commit sha: 081cdd2f01d5a4e5b5b4d254f689e55d60a489cd
 **/
 platform/linux-generic/include/odp_schedule_if.h |   4 +-
 platform/linux-generic/odp_queue_basic.c         |  24 +-
 platform/linux-generic/odp_schedule_basic.c      | 331 +++++++----------------
 platform/linux-generic/odp_schedule_iquery.c     |   4 +-
 platform/linux-generic/odp_schedule_sp.c         |   2 +-
 5 files changed, 129 insertions(+), 236 deletions(-)

diff --git a/platform/linux-generic/include/odp_schedule_if.h b/platform/linux-generic/include/odp_schedule_if.h
index 66e050438..dd2097bfb 100644
--- a/platform/linux-generic/include/odp_schedule_if.h
+++ b/platform/linux-generic/include/odp_schedule_if.h
@@ -82,7 +82,9 @@ int sched_cb_pktin_poll_one(int pktio_index, int rx_queue, odp_event_t evts[]);
 void sched_cb_pktio_stop_finalize(int pktio_index);
 odp_queue_t sched_cb_queue_handle(uint32_t queue_index);
 void sched_cb_queue_destroy_finalize(uint32_t queue_index);
-int sched_cb_queue_deq_multi(uint32_t queue_index, odp_event_t ev[], int num);
+void sched_cb_queue_set_status(uint32_t queue_index, int status);
+int sched_cb_queue_deq_multi(uint32_t queue_index, odp_event_t ev[], int num,
+			     int update_status);
 int sched_cb_queue_empty(uint32_t queue_index);
 
 /* API functions */
diff --git a/platform/linux-generic/odp_queue_basic.c b/platform/linux-generic/odp_queue_basic.c
index e4f6fd820..d9e5fdcea 100644
--- a/platform/linux-generic/odp_queue_basic.c
+++ b/platform/linux-generic/odp_queue_basic.c
@@ -299,6 +299,17 @@ void sched_cb_queue_destroy_finalize(uint32_t queue_index)
 	UNLOCK(queue);
 }
 
+void sched_cb_queue_set_status(uint32_t queue_index, int status)
+{
+	queue_entry_t *queue = get_qentry(queue_index);
+
+	LOCK(queue);
+
+	queue->s.status = status;
+
+	UNLOCK(queue);
+}
+
 static int queue_destroy(odp_queue_t handle)
 {
 	queue_entry_t *queue;
@@ -493,7 +504,7 @@ static int queue_enq(odp_queue_t handle, odp_event_t ev)
 }
 
 static inline int deq_multi(queue_entry_t *queue, odp_buffer_hdr_t *buf_hdr[],
-			    int num)
+			    int num, int update_status)
 {
 	int status_sync = sched_fn->status_sync;
 	int num_deq;
@@ -515,7 +526,7 @@ static inline int deq_multi(queue_entry_t *queue, odp_buffer_hdr_t *buf_hdr[],
 
 	if (num_deq == 0) {
 		/* Already empty queue */
-		if (queue->s.status == QUEUE_STATUS_SCHED) {
+		if (update_status && queue->s.status == QUEUE_STATUS_SCHED) {
 			queue->s.status = QUEUE_STATUS_NOTSCHED;
 
 			if (status_sync)
@@ -542,7 +553,7 @@ static int queue_int_deq_multi(queue_t q_int, odp_buffer_hdr_t *buf_hdr[],
 {
 	queue_entry_t *queue = qentry_from_int(q_int);
 
-	return deq_multi(queue, buf_hdr, num);
+	return deq_multi(queue, buf_hdr, num, 0);
 }
 
 static odp_buffer_hdr_t *queue_int_deq(queue_t q_int)
@@ -551,7 +562,7 @@ static odp_buffer_hdr_t *queue_int_deq(queue_t q_int)
 	odp_buffer_hdr_t *buf_hdr = NULL;
 	int ret;
 
-	ret = deq_multi(queue, &buf_hdr, 1);
+	ret = deq_multi(queue, &buf_hdr, 1, 0);
 
 	if (ret == 1)
 		return buf_hdr;
@@ -661,11 +672,12 @@ static int queue_info(odp_queue_t handle, odp_queue_info_t *info)
 	return 0;
 }
 
-int sched_cb_queue_deq_multi(uint32_t queue_index, odp_event_t ev[], int num)
+int sched_cb_queue_deq_multi(uint32_t queue_index, odp_event_t ev[], int num,
+			     int update_status)
 {
 	queue_entry_t *qe = get_qentry(queue_index);
 
-	return deq_multi(qe, (odp_buffer_hdr_t **)ev, num);
+	return deq_multi(qe, (odp_buffer_hdr_t **)ev, num, update_status);
 }
 
 int sched_cb_queue_empty(uint32_t queue_index)
diff --git a/platform/linux-generic/odp_schedule_basic.c b/platform/linux-generic/odp_schedule_basic.c
index e5e1fe60e..b3847ab91 100644
--- a/platform/linux-generic/odp_schedule_basic.c
+++ b/platform/linux-generic/odp_schedule_basic.c
@@ -50,39 +50,15 @@ ODP_STATIC_ASSERT((ODP_SCHED_PRIO_NORMAL > 0) &&
 /* Size of poll weight table */
 #define WEIGHT_TBL_SIZE ((QUEUES_PER_PRIO - 1) * PREFER_RATIO)
 
-/* Packet input poll cmd queues */
-#define PKTIO_CMD_QUEUES 4
-
-/* Mask for wrapping command queues */
-#define PKTIO_CMD_QUEUE_MASK (PKTIO_CMD_QUEUES - 1)
-
-/* Maximum number of packet input queues per command */
-#define MAX_PKTIN 16
-
 /* Maximum number of packet IO interfaces */
 #define NUM_PKTIO ODP_CONFIG_PKTIO_ENTRIES
 
-/* Maximum number of pktio poll commands */
-#define NUM_PKTIO_CMD (MAX_PKTIN * NUM_PKTIO)
+/* Maximum pktin index. Needs to fit into 8 bits. */
+#define MAX_PKTIN_INDEX 255
 
 /* Not a valid index */
 #define NULL_INDEX ((uint32_t)-1)
 
-/* Not a valid poll command */
-#define PKTIO_CMD_INVALID NULL_INDEX
-
-/* Pktio command is free */
-#define PKTIO_CMD_FREE PKTIO_CMD_INVALID
-
-/* Packet IO poll queue ring size. In worst case, all pktios have all pktins
- * enabled and one poll command is created per pktin queue. The ring size must
- * be larger than or equal to NUM_PKTIO_CMD / PKTIO_CMD_QUEUES, so that it can
- * hold all poll commands in the worst case. */
-#define PKTIO_RING_SIZE (NUM_PKTIO_CMD / PKTIO_CMD_QUEUES)
-
-/* Mask for wrapping around pktio poll command index */
-#define PKTIO_RING_MASK (PKTIO_RING_SIZE - 1)
-
 /* Priority queue ring size. In worst case, all event queues are scheduled
  * queues and have the same priority. The ring size must be larger than or
  * equal to ODP_CONFIG_QUEUES / QUEUES_PER_PRIO, so that it can hold all
@@ -90,7 +66,7 @@ ODP_STATIC_ASSERT((ODP_SCHED_PRIO_NORMAL > 0) &&
 #define PRIO_QUEUE_RING_SIZE (ODP_CONFIG_QUEUES / QUEUES_PER_PRIO)
 
 /* Mask for wrapping around priority queue index */
-#define PRIO_QUEUE_MASK (PRIO_QUEUE_RING_SIZE - 1)
+#define RING_MASK (PRIO_QUEUE_RING_SIZE - 1)
 
 /* Priority queue empty, not a valid queue index. */
 #define PRIO_QUEUE_EMPTY NULL_INDEX
@@ -103,15 +79,6 @@ ODP_STATIC_ASSERT(CHECK_IS_POWER2(ODP_CONFIG_QUEUES),
 ODP_STATIC_ASSERT(CHECK_IS_POWER2(PRIO_QUEUE_RING_SIZE),
 		  "Ring_size_is_not_power_of_two");
 
-/* Ring size must be power of two, so that PKTIO_RING_MASK can be used. */
-ODP_STATIC_ASSERT(CHECK_IS_POWER2(PKTIO_RING_SIZE),
-		  "pktio_ring_size_is_not_power_of_two");
-
-/* Number of commands queues must be power of two, so that PKTIO_CMD_QUEUE_MASK
- * can be used. */
-ODP_STATIC_ASSERT(CHECK_IS_POWER2(PKTIO_CMD_QUEUES),
-		  "pktio_cmd_queues_is_not_power_of_two");
-
 /* Mask of queues per priority */
 typedef uint8_t pri_mask_t;
 
@@ -150,7 +117,6 @@ typedef struct {
 	int index;
 	int pause;
 	uint16_t round;
-	uint16_t pktin_polls;
 	uint32_t queue_index;
 	odp_queue_t queue;
 	odp_event_t ev_stash[MAX_DEQ];
@@ -183,24 +149,6 @@ typedef struct ODP_ALIGNED_CACHE {
 
 } prio_queue_t;
 
-/* Packet IO queue */
-typedef struct ODP_ALIGNED_CACHE {
-	/* Ring header */
-	ring_t ring;
-
-	/* Ring data: pktio poll command indexes */
-	uint32_t cmd_index[PKTIO_RING_SIZE];
-
-} pktio_queue_t;
-
-/* Packet IO poll command */
-typedef struct {
-	int pktio_index;
-	int num_pktin;
-	int pktin[MAX_PKTIN];
-	uint32_t cmd_index;
-} pktio_cmd_t;
-
 /* Order context of a queue */
 typedef struct ODP_ALIGNED_CACHE {
 	/* Current ordered context id */
@@ -220,16 +168,6 @@ typedef struct {
 
 	prio_queue_t prio_q[NUM_SCHED_GRPS][NUM_PRIO][QUEUES_PER_PRIO];
 
-	odp_spinlock_t poll_cmd_lock;
-	/* Number of commands in a command queue */
-	uint16_t num_pktio_cmd[PKTIO_CMD_QUEUES];
-
-	/* Packet IO command queues */
-	pktio_queue_t pktio_q[PKTIO_CMD_QUEUES];
-
-	/* Packet IO poll commands */
-	pktio_cmd_t pktio_cmd[NUM_PKTIO_CMD];
-
 	odp_shm_t shm;
 	uint32_t pri_count[NUM_PRIO][QUEUES_PER_PRIO];
 
@@ -244,22 +182,34 @@ typedef struct {
 	} sched_grp[NUM_SCHED_GRPS];
 
 	struct {
-		int grp;
-		int prio;
-		int queue_per_prio;
-		int sync;
-		uint32_t order_lock_count;
+		uint8_t grp;
+		uint8_t prio;
+		uint8_t queue_per_prio;
+		uint8_t sync;
+		uint8_t order_lock_count;
+		uint8_t poll_pktin;
+		uint8_t pktio_index;
+		uint8_t pktin_index;
 	} queue[ODP_CONFIG_QUEUES];
 
 	struct {
-		/* Number of active commands for a pktio interface */
-		int num_cmd;
+		int num_pktin;
 	} pktio[NUM_PKTIO];
+	odp_spinlock_t pktio_lock;
 
 	order_context_t order[ODP_CONFIG_QUEUES];
 
 } sched_global_t;
 
+/* Check that queue[] variables are large enough */
+ODP_STATIC_ASSERT(NUM_SCHED_GRPS <= 256, "Group_does_not_fit_8_bits");
+ODP_STATIC_ASSERT(NUM_PRIO <= 256, "Prio_does_not_fit_8_bits");
+ODP_STATIC_ASSERT(QUEUES_PER_PRIO <= 256,
+		  "Queues_per_prio_does_not_fit_8_bits");
+ODP_STATIC_ASSERT(CONFIG_QUEUE_MAX_ORD_LOCKS <= 256,
+		  "Ordered_lock_count_does_not_fit_8_bits");
+ODP_STATIC_ASSERT(NUM_PKTIO <= 256, "Pktio_index_does_not_fit_8_bits");
+
 /* Global scheduler context */
 static sched_global_t *sched;
 
@@ -337,16 +287,9 @@ static int schedule_init_global(void)
 		}
 	}
 
-	odp_spinlock_init(&sched->poll_cmd_lock);
-	for (i = 0; i < PKTIO_CMD_QUEUES; i++) {
-		ring_init(&sched->pktio_q[i].ring);
-
-		for (j = 0; j < PKTIO_RING_SIZE; j++)
-			sched->pktio_q[i].cmd_index[j] = PKTIO_CMD_INVALID;
-	}
-
-	for (i = 0; i < NUM_PKTIO_CMD; i++)
-		sched->pktio_cmd[i].cmd_index = PKTIO_CMD_FREE;
+	odp_spinlock_init(&sched->pktio_lock);
+	for (i = 0; i < NUM_PKTIO; i++)
+		sched->pktio[i].num_pktin = 0;
 
 	odp_spinlock_init(&sched->grp_lock);
 	odp_atomic_init_u32(&sched->grp_epoch, 0);
@@ -384,14 +327,14 @@ static int schedule_term_global(void)
 				ring_t *ring = &sched->prio_q[grp][i][j].ring;
 				uint32_t qi;
 
-				while ((qi = ring_deq(ring, PRIO_QUEUE_MASK)) !=
+				while ((qi = ring_deq(ring, RING_MASK)) !=
 				       RING_EMPTY) {
 					odp_event_t events[1];
 					int num;
 
 					num = sched_cb_queue_deq_multi(qi,
 								       events,
-								       1);
+								       1, 1);
 
 					if (num < 0)
 						queue_destroy_finalize(qi);
@@ -519,6 +462,9 @@ static int schedule_init_queue(uint32_t queue_index,
 	sched->queue[queue_index].queue_per_prio = queue_per_prio(queue_index);
 	sched->queue[queue_index].sync = sched_param->sync;
 	sched->queue[queue_index].order_lock_count = sched_param->lock_count;
+	sched->queue[queue_index].poll_pktin = 0;
+	sched->queue[queue_index].pktio_index = 0;
+	sched->queue[queue_index].pktin_index = 0;
 
 	odp_atomic_init_u64(&sched->order[queue_index].ctx, 0);
 	odp_atomic_init_u64(&sched->order[queue_index].next_ctx, 0);
@@ -554,88 +500,39 @@ static void schedule_destroy_queue(uint32_t queue_index)
 		ODP_ERR("queue reorder incomplete\n");
 }
 
-static int poll_cmd_queue_idx(int pktio_index, int pktin_idx)
-{
-	return PKTIO_CMD_QUEUE_MASK & (pktio_index ^ pktin_idx);
-}
-
-static inline pktio_cmd_t *alloc_pktio_cmd(void)
-{
-	int i;
-	pktio_cmd_t *cmd = NULL;
-
-	odp_spinlock_lock(&sched->poll_cmd_lock);
-
-	/* Find next free command */
-	for (i = 0; i < NUM_PKTIO_CMD; i++) {
-		if (sched->pktio_cmd[i].cmd_index == PKTIO_CMD_FREE) {
-			cmd = &sched->pktio_cmd[i];
-			cmd->cmd_index = i;
-			break;
-		}
-	}
-
-	odp_spinlock_unlock(&sched->poll_cmd_lock);
-
-	return cmd;
-}
-
-static inline void free_pktio_cmd(pktio_cmd_t *cmd)
+static int schedule_sched_queue(uint32_t queue_index)
 {
-	odp_spinlock_lock(&sched->poll_cmd_lock);
-
-	cmd->cmd_index = PKTIO_CMD_FREE;
+	int grp = sched->queue[queue_index].grp;
+	int prio = sched->queue[queue_index].prio;
+	int queue_per_prio = sched->queue[queue_index].queue_per_prio;
+	ring_t *ring = &sched->prio_q[grp][prio][queue_per_prio].ring;
 
-	odp_spinlock_unlock(&sched->poll_cmd_lock);
+	ring_enq(ring, RING_MASK, queue_index);
+	return 0;
 }
 
 static void schedule_pktio_start(int pktio_index, int num_pktin,
-				 int pktin_idx[], odp_queue_t odpq[] ODP_UNUSED)
+				 int pktin_idx[], odp_queue_t queue[])
 {
-	int i, idx;
-	pktio_cmd_t *cmd;
-
-	if (num_pktin > MAX_PKTIN)
-		ODP_ABORT("Too many input queues for scheduler\n");
+	int i;
+	uint32_t qi;
 
-	sched->pktio[pktio_index].num_cmd = num_pktin;
+	sched->pktio[pktio_index].num_pktin = num_pktin;
 
-	/* Create a pktio poll command per queue */
 	for (i = 0; i < num_pktin; i++) {
+		qi = queue_to_index(queue[i]);
+		sched->queue[qi].poll_pktin = 1;
+		sched->queue[qi].pktio_index = pktio_index;
+		sched->queue[qi].pktin_index = pktin_idx[i];
 
-		cmd = alloc_pktio_cmd();
-
-		if (cmd == NULL)
-			ODP_ABORT("Scheduler out of pktio commands\n");
-
-		idx = poll_cmd_queue_idx(pktio_index, pktin_idx[i]);
+		ODP_ASSERT(pktin_idx[i] <= MAX_PKTIN_INDEX);
 
-		odp_spinlock_lock(&sched->poll_cmd_lock);
-		sched->num_pktio_cmd[idx]++;
-		odp_spinlock_unlock(&sched->poll_cmd_lock);
-
-		cmd->pktio_index = pktio_index;
-		cmd->num_pktin = 1;
-		cmd->pktin[0] = pktin_idx[i];
-		ring_enq(&sched->pktio_q[idx].ring, PKTIO_RING_MASK,
-			 cmd->cmd_index);
+		/* Start polling */
+		sched_cb_queue_set_status(qi, QUEUE_STATUS_SCHED);
+		schedule_sched_queue(qi);
 	}
 }
 
-static int schedule_pktio_stop(int pktio_index, int first_pktin)
-{
-	int num;
-	int idx = poll_cmd_queue_idx(pktio_index, first_pktin);
-
-	odp_spinlock_lock(&sched->poll_cmd_lock);
-	sched->num_pktio_cmd[idx]--;
-	sched->pktio[pktio_index].num_cmd--;
-	num = sched->pktio[pktio_index].num_cmd;
-	odp_spinlock_unlock(&sched->poll_cmd_lock);
-
-	return num;
-}
-
 static void schedule_release_atomic(void)
 {
 	uint32_t qi = sched_local.queue_index;
@@ -647,7 +544,7 @@ static void schedule_release_atomic(void)
 		ring_t *ring = &sched->prio_q[grp][prio][queue_per_prio].ring;
 
 		/* Release current atomic queue */
-		ring_enq(ring, PRIO_QUEUE_MASK, qi);
+		ring_enq(ring, RING_MASK, qi);
 
 		sched_local.queue_index = PRIO_QUEUE_EMPTY;
 	}
}
@@ -797,6 +694,37 @@ static int schedule_ord_enq_multi(queue_t q_int, void *buf_hdr[],
 	return 1;
 }
 
+static inline int queue_is_pktin(uint32_t queue_index)
+{
+	return sched->queue[queue_index].poll_pktin;
+}
+
+static inline int poll_pktin(uint32_t qi)
+{
+	int pktio_index, pktin_index, num, num_pktin;
+
+	pktio_index = sched->queue[qi].pktio_index;
+	pktin_index = sched->queue[qi].pktin_index;
+
+	num = sched_cb_pktin_poll(pktio_index, 1, &pktin_index);
+
+	/* Pktio stopped or closed. Call stop_finalize when we have stopped
+	 * polling all pktin queues of the pktio. */
+	if (odp_unlikely(num < 0)) {
+		odp_spinlock_lock(&sched->pktio_lock);
+		sched->pktio[pktio_index].num_pktin--;
+		num_pktin = sched->pktio[pktio_index].num_pktin;
+		odp_spinlock_unlock(&sched->pktio_lock);
+
+		sched_cb_queue_set_status(qi, QUEUE_STATUS_NOTSCHED);
+
+		if (num_pktin == 0)
+			sched_cb_pktio_stop_finalize(pktio_index);
+	}
+
+	return num;
+}
+
 static inline int do_schedule_grp(odp_queue_t *out_queue, odp_event_t out_ev[],
 				  unsigned int max_num, int grp, int first)
 {
@@ -820,6 +748,7 @@ static inline int do_schedule_grp(odp_queue_t *out_queue, odp_event_t out_ev[],
 		int ordered;
 		odp_queue_t handle;
 		ring_t *ring;
+		int pktin;
 
 		if (id >= QUEUES_PER_PRIO)
 			id = 0;
@@ -834,7 +763,7 @@ static inline int do_schedule_grp(odp_queue_t *out_queue, odp_event_t out_ev[],
 
 		/* Get queue index from the priority queue */
 		ring = &sched->prio_q[grp][prio][id].ring;
-		qi = ring_deq(ring, PRIO_QUEUE_MASK);
+		qi = ring_deq(ring, RING_MASK);
 
 		/* Priority queue empty */
 		if (qi == RING_EMPTY) {
@@ -857,8 +786,10 @@ static inline int do_schedule_grp(odp_queue_t *out_queue, odp_event_t out_ev[],
 		if (ordered && max_num < MAX_DEQ)
 			max_deq = max_num;
 
+		pktin = queue_is_pktin(qi);
+
 		num = sched_cb_queue_deq_multi(qi, sched_local.ev_stash,
-					       max_deq);
+					       max_deq, !pktin);
 
 		if (num < 0) {
 			/* Destroyed queue. Continue scheduling the same
@@ -868,6 +799,20 @@ static inline int do_schedule_grp(odp_queue_t *out_queue, odp_event_t out_ev[],
 		}
 
 		if (num == 0) {
+			/* Poll packet input. Continue scheduling queue
+			 * connected to a packet input. Move to the next
+			 * priority to avoid starvation of other
+			 * priorities. Stop scheduling queue when pktio
+			 * has been stopped. */
+			if (pktin) {
+				int num_pkt = poll_pktin(qi);
+
+				if (odp_likely(num_pkt >= 0)) {
+					ring_enq(ring, RING_MASK, qi);
+					break;
+				}
+			}
+
 			/* Remove empty queue from scheduling. Continue
 			 * scheduling the same priority queue. */
 			continue;
@@ -890,14 +835,14 @@ static inline int do_schedule_grp(odp_queue_t *out_queue, odp_event_t out_ev[],
 			sched_local.ordered.src_queue = qi;
 
 			/* Continue scheduling ordered queues */
-			ring_enq(ring, PRIO_QUEUE_MASK, qi);
+			ring_enq(ring, RING_MASK, qi);
 
 		} else if (queue_is_atomic(qi)) {
 			/* Hold queue during atomic access */
 			sched_local.queue_index = qi;
 		} else {
 			/* Continue scheduling the queue */
-			ring_enq(ring, PRIO_QUEUE_MASK, qi);
+			ring_enq(ring, RING_MASK, qi);
 		}
 
 		/* Output the source queue handle */
@@ -919,7 +864,7 @@ static inline int do_schedule(odp_queue_t *out_queue, odp_event_t out_ev[],
 {
 	int i, num_grp;
 	int ret;
-	int id, first, grp_id;
+	int first, grp_id;
 	uint16_t round;
 	uint32_t epoch;
 
@@ -972,66 +917,11 @@ static inline int do_schedule(odp_queue_t *out_queue, odp_event_t out_ev[],
 		grp_id = 0;
 	}
 
-	/*
-	 * Poll packet input when there are no events
-	 *   * Each thread starts the search for a poll command from its
-	 *     preferred command queue. If the queue is empty, it moves to other
-	 *     queues.
-	 *   * Most of the times, the search stops on the first command found to
-	 *     optimize multi-threaded performance. A small portion of polls
-	 *     have to do full iteration to avoid packet input starvation when
-	 *     there are less threads than command queues.
-	 */
-	id = sched_local.thr & PKTIO_CMD_QUEUE_MASK;
-
-	for (i = 0; i < PKTIO_CMD_QUEUES; i++, id = ((id + 1) &
-	     PKTIO_CMD_QUEUE_MASK)) {
-		ring_t *ring;
-		uint32_t cmd_index;
-		pktio_cmd_t *cmd;
-
-		if (odp_unlikely(sched->num_pktio_cmd[id] == 0))
-			continue;
-
-		ring = &sched->pktio_q[id].ring;
-		cmd_index = ring_deq(ring, PKTIO_RING_MASK);
-
-		if (odp_unlikely(cmd_index == RING_EMPTY))
-			continue;
-
-		cmd = &sched->pktio_cmd[cmd_index];
-
-		/* Poll packet input */
-		if (odp_unlikely(sched_cb_pktin_poll(cmd->pktio_index,
-						     cmd->num_pktin,
-						     cmd->pktin))){
-			/* Pktio stopped or closed. Remove poll command and call
-			 * stop_finalize when all commands of the pktio has
-			 * been removed. */
-			if (schedule_pktio_stop(cmd->pktio_index,
-						cmd->pktin[0]) == 0)
-				sched_cb_pktio_stop_finalize(cmd->pktio_index);
-
-			free_pktio_cmd(cmd);
-		} else {
-			/* Continue scheduling the pktio */
-			ring_enq(ring, PKTIO_RING_MASK, cmd_index);
-
-			/* Do not iterate through all pktin poll command queues
-			 * every time. */
-			if (odp_likely(sched_local.pktin_polls & 0xf))
-				break;
-		}
-	}
-
-	sched_local.pktin_polls++;
-
 	return 0;
 }
 
-static int schedule_loop(odp_queue_t *out_queue, uint64_t wait,
-			 odp_event_t out_ev[],
-			 unsigned int max_num)
+static inline int schedule_loop(odp_queue_t *out_queue, uint64_t wait,
+				odp_event_t out_ev[], unsigned int max_num)
 {
 	odp_time_t next, wtime;
 	int first = 1;
@@ -1387,17 +1277,6 @@ static void schedule_prefetch(int num ODP_UNUSED)
 {
 }
 
-static int schedule_sched_queue(uint32_t queue_index)
-{
-	int grp = sched->queue[queue_index].grp;
-	int prio = sched->queue[queue_index].prio;
-	int queue_per_prio = sched->queue[queue_index].queue_per_prio;
-	ring_t *ring = &sched->prio_q[grp][prio][queue_per_prio].ring;
-
-	ring_enq(ring, PRIO_QUEUE_MASK, queue_index);
-	return 0;
-}
-
 static int schedule_num_grps(void)
 {
 	return NUM_SCHED_GRPS;
diff --git a/platform/linux-generic/odp_schedule_iquery.c b/platform/linux-generic/odp_schedule_iquery.c
index ea62c3645..1a82f48da 100644
--- a/platform/linux-generic/odp_schedule_iquery.c
+++ b/platform/linux-generic/odp_schedule_iquery.c
@@ -291,7 +291,7 @@ static int schedule_term_global(void)
 		odp_event_t events[1];
 
 		if (sched->availables[i])
-			count = sched_cb_queue_deq_multi(i, events, 1);
+			count = sched_cb_queue_deq_multi(i, events, 1, 1);
 
 		if (count < 0)
 			sched_cb_queue_destroy_finalize(i);
@@ -1527,7 +1527,7 @@ static inline int consume_queue(int prio, unsigned int queue_index)
 		max = 1;
 
 	count = sched_cb_queue_deq_multi(
-		queue_index, cache->stash, max);
+		queue_index, cache->stash, max, 1);
 
 	if (count < 0) {
 		DO_SCHED_UNLOCK();
diff --git a/platform/linux-generic/odp_schedule_sp.c b/platform/linux-generic/odp_schedule_sp.c
index 007d673f0..50390274b 100644
--- a/platform/linux-generic/odp_schedule_sp.c
+++ b/platform/linux-generic/odp_schedule_sp.c
@@ -559,7 +559,7 @@ static int schedule_multi(odp_queue_t *from, uint64_t wait,
 	}
 
 	qi = cmd->s.index;
-	num = sched_cb_queue_deq_multi(qi, events, 1);
+	num = sched_cb_queue_deq_multi(qi, events, 1, 1);
 
 	if (num > 0) {
 		sched_local.cmd = cmd;

From patchwork Thu Mar 1 16:00:08 2018
X-Patchwork-Submitter: Github ODP bot
X-Patchwork-Id: 130320
Delivered-To: patch@linaro.org
From: Github ODP bot
To: lng-odp@lists.linaro.org
Date: Thu, 1 Mar 2018 19:00:08 +0300
Message-Id: <1519920009-23406-3-git-send-email-odpbot@yandex.ru>
In-Reply-To: <1519920009-23406-1-git-send-email-odpbot@yandex.ru>
References: <1519920009-23406-1-git-send-email-odpbot@yandex.ru>
Github-pr-num: 504
Subject: [lng-odp] [PATCH v1 2/3] linux-gen: sched: optimize atomic packet input queue throughput

From: Petri Savolainen

When a packet input queue is atomic, packets received from packet input
are passed directly to the application. Other queue types may have
events stashed on other threads, so for those, incoming packets are
always enqueued first to maintain packet order.
Signed-off-by: Petri Savolainen
---
/** Email created from pull request 504 (psavol:master-sched-optim-2)
 ** https://github.com/Linaro/odp/pull/504
 ** Patch: https://github.com/Linaro/odp/pull/504.patch
 ** Base sha: 284f52d72ec19df3774c7409780f1f9eea33b8e6
 ** Merge commit sha: 081cdd2f01d5a4e5b5b4d254f689e55d60a489cd
 **/
 .../linux-generic/include/odp_buffer_inlines.h   |  5 ++
 .../linux-generic/include/odp_queue_internal.h   | 22 ++++++++
 platform/linux-generic/include/odp_queue_lf.h    |  1 -
 platform/linux-generic/include/odp_schedule_if.h |  4 +-
 platform/linux-generic/odp_packet_io.c           | 46 +++++++++++++----
 platform/linux-generic/odp_queue_basic.c         | 17 +------
 platform/linux-generic/odp_queue_lf.c            |  2 +-
 platform/linux-generic/odp_schedule_basic.c      | 59 ++++++++++++++++++----
 platform/linux-generic/odp_schedule_iquery.c     |  6 +--
 platform/linux-generic/odp_schedule_sp.c         |  5 +-
 10 files changed, 122 insertions(+), 45 deletions(-)

diff --git a/platform/linux-generic/include/odp_buffer_inlines.h b/platform/linux-generic/include/odp_buffer_inlines.h
index 9abe79fc7..9e3b6ef70 100644
--- a/platform/linux-generic/include/odp_buffer_inlines.h
+++ b/platform/linux-generic/include/odp_buffer_inlines.h
@@ -28,6 +28,11 @@ static inline odp_buffer_t buf_from_buf_hdr(odp_buffer_hdr_t *hdr)
 	return (odp_buffer_t)hdr;
 }
 
+static inline odp_event_t event_from_buf_hdr(odp_buffer_hdr_t *hdr)
+{
+	return (odp_event_t)hdr;
+}
+
 #ifdef __cplusplus
 }
 #endif
diff --git a/platform/linux-generic/include/odp_queue_internal.h b/platform/linux-generic/include/odp_queue_internal.h
index 0540bf72d..3aec3fe9d 100644
--- a/platform/linux-generic/include/odp_queue_internal.h
+++ b/platform/linux-generic/include/odp_queue_internal.h
@@ -30,6 +30,7 @@ extern "C" {
 #include 
 #include 
 #include 
+#include 
 
 #define QUEUE_STATUS_FREE 0
 #define QUEUE_STATUS_DESTROYED 1
@@ -62,6 +63,27 @@ union queue_entry_u {
 	uint8_t pad[ROUNDUP_CACHE_LINE(sizeof(struct queue_entry_s))];
 };
 
+typedef struct ODP_ALIGNED_CACHE {
+	/* Storage space for ring data */
+	uint32_t data[CONFIG_QUEUE_SIZE];
+} queue_ring_data_t;
+
+typedef struct queue_global_t {
+	queue_entry_t queue[ODP_CONFIG_QUEUES];
+	queue_ring_data_t ring_data[ODP_CONFIG_QUEUES];
+	uint32_t queue_lf_num;
+	uint32_t queue_lf_size;
+	queue_lf_func_t queue_lf_func;
+
+} queue_global_t;
+
+extern queue_global_t *queue_glb;
+
+static inline queue_t queue_index_to_qint(uint32_t queue_id)
+{
+	return (queue_t)&queue_glb->queue[queue_id];
+}
+
 static inline uint32_t queue_to_index(odp_queue_t handle)
 {
 	return _odp_typeval(handle) - 1;
diff --git a/platform/linux-generic/include/odp_queue_lf.h b/platform/linux-generic/include/odp_queue_lf.h
index ef178e07b..8a973a859 100644
--- a/platform/linux-generic/include/odp_queue_lf.h
+++ b/platform/linux-generic/include/odp_queue_lf.h
@@ -12,7 +12,6 @@ extern "C" {
 #endif
 
 #include 
-#include 
 
 /* Lock-free queue functions */
 typedef struct {
diff --git a/platform/linux-generic/include/odp_schedule_if.h b/platform/linux-generic/include/odp_schedule_if.h
index dd2097bfb..4bd127a42 100644
--- a/platform/linux-generic/include/odp_schedule_if.h
+++ b/platform/linux-generic/include/odp_schedule_if.h
@@ -77,7 +77,9 @@ typedef struct schedule_fn_t {
 extern const schedule_fn_t *sched_fn;
 
 /* Interface for the scheduler */
-int sched_cb_pktin_poll(int pktio_index, int num_queue, int index[]);
+int sched_cb_pktin_poll(int pktio_index, int pktin_index,
+			odp_buffer_hdr_t *hdr_tbl[], int num);
+int sched_cb_pktin_poll_old(int pktio_index, int num_queue, int index[]);
 int sched_cb_pktin_poll_one(int pktio_index, int rx_queue, odp_event_t evts[]);
 void sched_cb_pktio_stop_finalize(int pktio_index);
 odp_queue_t sched_cb_queue_handle(uint32_t queue_index);
diff --git a/platform/linux-generic/odp_packet_io.c b/platform/linux-generic/odp_packet_io.c
index ae8e390b1..73d43b009 100644
--- a/platform/linux-generic/odp_packet_io.c
+++ b/platform/linux-generic/odp_packet_io.c
@@ -559,7 +559,7 @@ odp_pktio_t odp_pktio_lookup(const char *name)
 	return hdl;
 }
 
-static inline int pktin_recv_buf(odp_pktin_queue_t queue,
+static inline int pktin_recv_buf(pktio_entry_t *entry, int pktin_index,
 				 odp_buffer_hdr_t *buffer_hdrs[], int num)
 {
 	odp_packet_t pkt;
@@ -570,7 +570,7 @@ static inline int pktin_recv_buf(odp_pktin_queue_t queue,
 	int pkts;
 	int num_rx = 0;
 
-	pkts = odp_pktin_recv(queue, packets, num);
+	pkts = entry->s.ops->recv(entry, pktin_index, packets, num);
 
 	for (i = 0; i < pkts; i++) {
 		pkt = packets[i];
@@ -624,13 +624,16 @@ static odp_buffer_hdr_t *pktin_dequeue(queue_t q_int)
 	odp_buffer_hdr_t *buf_hdr;
 	odp_buffer_hdr_t *hdr_tbl[QUEUE_MULTI_MAX];
 	int pkts;
+	odp_pktin_queue_t pktin_queue = queue_fn->get_pktin(q_int);
+	odp_pktio_t pktio = pktin_queue.pktio;
+	int pktin_index = pktin_queue.index;
+	pktio_entry_t *entry = get_pktio_entry(pktio);
 
 	buf_hdr = queue_fn->deq(q_int);
 	if (buf_hdr != NULL)
 		return buf_hdr;
 
-	pkts = pktin_recv_buf(queue_fn->get_pktin(q_int),
-			      hdr_tbl, QUEUE_MULTI_MAX);
+	pkts = pktin_recv_buf(entry, pktin_index, hdr_tbl, QUEUE_MULTI_MAX);
 
 	if (pkts <= 0)
 		return NULL;
@@ -646,6 +649,10 @@ static int pktin_deq_multi(queue_t q_int, odp_buffer_hdr_t *buf_hdr[], int num)
 	int nbr;
 	odp_buffer_hdr_t *hdr_tbl[QUEUE_MULTI_MAX];
 	int pkts, i, j;
+	odp_pktin_queue_t pktin_queue = queue_fn->get_pktin(q_int);
+	odp_pktio_t pktio = pktin_queue.pktio;
+	int pktin_index = pktin_queue.index;
+	pktio_entry_t *entry = get_pktio_entry(pktio);
 
 	nbr = queue_fn->deq_multi(q_int, buf_hdr, num);
 	if (odp_unlikely(nbr > num))
@@ -657,8 +664,8 @@ static int pktin_deq_multi(queue_t q_int, odp_buffer_hdr_t *buf_hdr[], int num)
 	if (nbr == num)
 		return nbr;
 
-	pkts = pktin_recv_buf(queue_fn->get_pktin(q_int),
-			      hdr_tbl, QUEUE_MULTI_MAX);
+	pkts = pktin_recv_buf(entry, pktin_index, hdr_tbl, QUEUE_MULTI_MAX);
+
 	if (pkts <= 0)
 		return nbr;
@@ -720,12 +727,29 @@ int sched_cb_pktin_poll_one(int pktio_index,
 	return num_rx;
 }
 
-int sched_cb_pktin_poll(int pktio_index, int num_queue, int 
index[]) +int sched_cb_pktin_poll(int pktio_index, int pktin_index, + odp_buffer_hdr_t *hdr_tbl[], int num) +{ + pktio_entry_t *entry = pktio_entry_by_index(pktio_index); + int state = entry->s.state; + + if (odp_unlikely(state != PKTIO_STATE_STARTED)) { + if (state < PKTIO_STATE_ACTIVE || + state == PKTIO_STATE_STOP_PENDING) + return -1; + + ODP_DBG("interface not started\n"); + return 0; + } + + return pktin_recv_buf(entry, pktin_index, hdr_tbl, num); +} + +int sched_cb_pktin_poll_old(int pktio_index, int num_queue, int index[]) { odp_buffer_hdr_t *hdr_tbl[QUEUE_MULTI_MAX]; int num, idx; - pktio_entry_t *entry; - entry = pktio_entry_by_index(pktio_index); + pktio_entry_t *entry = pktio_entry_by_index(pktio_index); int state = entry->s.state; if (odp_unlikely(state != PKTIO_STATE_STARTED)) { @@ -739,9 +763,9 @@ int sched_cb_pktin_poll(int pktio_index, int num_queue, int index[]) for (idx = 0; idx < num_queue; idx++) { queue_t q_int; - odp_pktin_queue_t pktin = entry->s.in_queue[index[idx]].pktin; - num = pktin_recv_buf(pktin, hdr_tbl, QUEUE_MULTI_MAX); + num = pktin_recv_buf(entry, index[idx], hdr_tbl, + QUEUE_MULTI_MAX); if (num == 0) continue; diff --git a/platform/linux-generic/odp_queue_basic.c b/platform/linux-generic/odp_queue_basic.c index d9e5fdcea..399c86ed8 100644 --- a/platform/linux-generic/odp_queue_basic.c +++ b/platform/linux-generic/odp_queue_basic.c @@ -8,7 +8,6 @@ #include #include -#include #include #include #include @@ -40,21 +39,7 @@ static int queue_init(queue_entry_t *queue, const char *name, const odp_queue_param_t *param); -typedef struct ODP_ALIGNED_CACHE { - /* Storage space for ring data */ - uint32_t data[CONFIG_QUEUE_SIZE]; -} ring_data_t; - -typedef struct queue_global_t { - queue_entry_t queue[ODP_CONFIG_QUEUES]; - ring_data_t ring_data[ODP_CONFIG_QUEUES]; - uint32_t queue_lf_num; - uint32_t queue_lf_size; - queue_lf_func_t queue_lf_func; - -} queue_global_t; - -static queue_global_t *queue_glb; +queue_global_t *queue_glb; static 
inline queue_entry_t *get_qentry(uint32_t queue_id) { diff --git a/platform/linux-generic/odp_queue_lf.c b/platform/linux-generic/odp_queue_lf.c index 74f529469..066e6a674 100644 --- a/platform/linux-generic/odp_queue_lf.c +++ b/platform/linux-generic/odp_queue_lf.c @@ -7,7 +7,7 @@ #include #include #include -#include +#include #include #include diff --git a/platform/linux-generic/odp_schedule_basic.c b/platform/linux-generic/odp_schedule_basic.c index b3847ab91..1975e3a4b 100644 --- a/platform/linux-generic/odp_schedule_basic.c +++ b/platform/linux-generic/odp_schedule_basic.c @@ -26,6 +26,7 @@ #include #include #include +#include /* Number of priority levels */ #define NUM_PRIO 8 @@ -699,14 +700,20 @@ static inline int queue_is_pktin(uint32_t queue_index) return sched->queue[queue_index].poll_pktin; } -static inline int poll_pktin(uint32_t qi) +static inline int poll_pktin(uint32_t qi, int atomic) { - int pktio_index, pktin_index, num, num_pktin; + odp_buffer_hdr_t *b_hdr[MAX_DEQ]; + int pktio_index, pktin_index, num, num_pktin, i; + int ret; + queue_t qint; pktio_index = sched->queue[qi].pktio_index; pktin_index = sched->queue[qi].pktin_index; - num = sched_cb_pktin_poll(pktio_index, 1, &pktin_index); + num = sched_cb_pktin_poll(pktio_index, pktin_index, b_hdr, MAX_DEQ); + + if (num == 0) + return 0; /* Pktio stopped or closed. Call stop_finalize when we have stopped * polling all pktin queues of the pktio. 
*/ @@ -720,9 +727,32 @@ static inline int poll_pktin(uint32_t qi) if (num_pktin == 0) sched_cb_pktio_stop_finalize(pktio_index); + + return num; } - return num; + if (atomic) { + for (i = 0; i < num; i++) + sched_local.ev_stash[i] = event_from_buf_hdr(b_hdr[i]); + + return num; + } + + qint = queue_index_to_qint(qi); + + ret = queue_fn->enq_multi(qint, b_hdr, num); + + /* Drop packets that were not enqueued */ + if (odp_unlikely(ret < num)) { + int num_enq = ret; + + if (odp_unlikely(ret < 0)) + num_enq = 0; + + buffer_free_multi(&b_hdr[num_enq], num - num_enq); + } + + return ret; } static inline int do_schedule_grp(odp_queue_t *out_queue, odp_event_t out_ev[], @@ -805,17 +835,26 @@ static inline int do_schedule_grp(odp_queue_t *out_queue, odp_event_t out_ev[], * priorities. Stop scheduling queue when pktio * has been stopped. */ if (pktin) { - int num_pkt = poll_pktin(qi); + int atomic = queue_is_atomic(qi); + int num_pkt = poll_pktin(qi, atomic); - if (odp_likely(num_pkt >= 0)) { + if (odp_unlikely(num_pkt < 0)) + continue; + + if (num_pkt == 0 || !atomic) { ring_enq(ring, RING_MASK, qi); break; } - } - /* Remove empty queue from scheduling. Continue - * scheduling the same priority queue. */ - continue; + /* Process packets from an atomic queue + * right away */ + num = num_pkt; + } else { + /* Remove empty queue from scheduling. + * Continue scheduling the same priority + * queue. 
*/ + continue; + } } handle = queue_from_index(qi); diff --git a/platform/linux-generic/odp_schedule_iquery.c b/platform/linux-generic/odp_schedule_iquery.c index 1a82f48da..ddd97bea7 100644 --- a/platform/linux-generic/odp_schedule_iquery.c +++ b/platform/linux-generic/odp_schedule_iquery.c @@ -674,9 +674,9 @@ static inline void pktio_poll_input(void) cmd = &sched->pktio_poll.commands[index]; /* Poll packet input */ - if (odp_unlikely(sched_cb_pktin_poll(cmd->pktio, - cmd->count, - cmd->pktin))) { + if (odp_unlikely(sched_cb_pktin_poll_old(cmd->pktio, + cmd->count, + cmd->pktin))) { /* Pktio stopped or closed. Remove poll * command and call stop_finalize when all * commands of the pktio has been removed. diff --git a/platform/linux-generic/odp_schedule_sp.c b/platform/linux-generic/odp_schedule_sp.c index 50390274b..84d16d3c1 100644 --- a/platform/linux-generic/odp_schedule_sp.c +++ b/platform/linux-generic/odp_schedule_sp.c @@ -524,8 +524,9 @@ static int schedule_multi(odp_queue_t *from, uint64_t wait, cmd = sched_cmd(); if (cmd && cmd->s.type == CMD_PKTIO) { - if (sched_cb_pktin_poll(cmd->s.index, cmd->s.num_pktin, - cmd->s.pktin_idx)) { + if (sched_cb_pktin_poll_old(cmd->s.index, + cmd->s.num_pktin, + cmd->s.pktin_idx)) { /* Pktio stopped or closed. 
*/ sched_cb_pktio_stop_finalize(cmd->s.index); } else {

From patchwork Thu Mar 1 16:00:09 2018
X-Patchwork-Submitter: Github ODP bot
X-Patchwork-Id: 130319
From: Github ODP bot
To: lng-odp@lists.linaro.org
Date: Thu, 1 Mar 2018 19:00:09 +0300
Message-Id: <1519920009-23406-4-git-send-email-odpbot@yandex.ru>
X-Mailer: git-send-email 2.7.4
In-Reply-To: <1519920009-23406-1-git-send-email-odpbot@yandex.ru>
References: <1519920009-23406-1-git-send-email-odpbot@yandex.ru>
Github-pr-num: 504
Subject: [lng-odp] [PATCH v1 3/3] linux-gen: queue: enqueue may fail

From: Petri Savolainen

Drop events when a queue enqueue fails. Enqueue failure is more likely now that queues have a limited size.

Signed-off-by: Petri Savolainen
---
/** Email created from pull request 504 (psavol:master-sched-optim-2)
 ** https://github.com/Linaro/odp/pull/504
 ** Patch: https://github.com/Linaro/odp/pull/504.patch
 ** Base sha: 284f52d72ec19df3774c7409780f1f9eea33b8e6
 ** Merge commit sha: 081cdd2f01d5a4e5b5b4d254f689e55d60a489cd
 **/
 platform/linux-generic/odp_packet_io.c       | 41 ++++++++++++++++++++++++----
 platform/linux-generic/odp_schedule_basic.c  | 13 +++++++--
 platform/linux-generic/odp_schedule_iquery.c | 12 ++++++--
 3 files changed, 57 insertions(+), 9 deletions(-)

diff --git a/platform/linux-generic/odp_packet_io.c b/platform/linux-generic/odp_packet_io.c
index 73d43b009..17d48bfdb 100644
--- a/platform/linux-generic/odp_packet_io.c
+++ b/platform/linux-generic/odp_packet_io.c
@@ -638,8 +638,20 @@ static odp_buffer_hdr_t *pktin_dequeue(queue_t q_int) if (pkts <= 0) return NULL; - if (pkts > 1) - queue_fn->enq_multi(q_int, &hdr_tbl[1], pkts - 1); + if (pkts > 1) { + int num_enq; + int num = pkts - 1; + + num_enq = queue_fn->enq_multi(q_int, &hdr_tbl[1], num); + + if (odp_unlikely(num_enq <
num)) { + if (odp_unlikely(num_enq < 0)) + num_enq = 0; + + buffer_free_multi(&hdr_tbl[num_enq + 1], num - num_enq); + } + } + buf_hdr = hdr_tbl[0]; return buf_hdr; } @@ -676,8 +688,19 @@ static int pktin_deq_multi(queue_t q_int, odp_buffer_hdr_t *buf_hdr[], int num) for (j = 0; i < pkts; i++, j++) hdr_tbl[j] = hdr_tbl[i]; - if (j) - queue_fn->enq_multi(q_int, hdr_tbl, j); + if (j) { + int num_enq; + + num_enq = queue_fn->enq_multi(q_int, hdr_tbl, j); + + if (odp_unlikely(num_enq < j)) { + if (odp_unlikely(num_enq < 0)) + num_enq = 0; + + buffer_free_multi(&buf_hdr[num_enq], j - num_enq); + } + } + return nbr; } @@ -763,6 +786,7 @@ int sched_cb_pktin_poll_old(int pktio_index, int num_queue, int index[]) for (idx = 0; idx < num_queue; idx++) { queue_t q_int; + int num_enq; num = pktin_recv_buf(entry, index[idx], hdr_tbl, QUEUE_MULTI_MAX); @@ -776,7 +800,14 @@ int sched_cb_pktin_poll_old(int pktio_index, int num_queue, int index[]) } q_int = entry->s.in_queue[index[idx]].queue_int; - queue_fn->enq_multi(q_int, hdr_tbl, num); + num_enq = queue_fn->enq_multi(q_int, hdr_tbl, num); + + if (odp_unlikely(num_enq < num)) { + if (odp_unlikely(num_enq < 0)) + num_enq = 0; + + buffer_free_multi(&hdr_tbl[num_enq], num - num_enq); + } } return 0; diff --git a/platform/linux-generic/odp_schedule_basic.c b/platform/linux-generic/odp_schedule_basic.c index 1975e3a4b..dc964f387 100644 --- a/platform/linux-generic/odp_schedule_basic.c +++ b/platform/linux-generic/odp_schedule_basic.c @@ -581,13 +581,22 @@ static inline void ordered_stash_release(void) for (i = 0; i < sched_local.ordered.stash_num; i++) { queue_entry_t *queue_entry; odp_buffer_hdr_t **buf_hdr; - int num; + int num, num_enq; queue_entry = sched_local.ordered.stash[i].queue_entry; buf_hdr = sched_local.ordered.stash[i].buf_hdr; num = sched_local.ordered.stash[i].num; - queue_fn->enq_multi(qentry_to_int(queue_entry), buf_hdr, num); + num_enq = queue_fn->enq_multi(qentry_to_int(queue_entry), + buf_hdr, num); + + /* Drop 
packets that were not enqueued */ + if (odp_unlikely(num_enq < num)) { + if (odp_unlikely(num_enq < 0)) + num_enq = 0; + + buffer_free_multi(&buf_hdr[num_enq], num - num_enq); + } } sched_local.ordered.stash_num = 0; } diff --git a/platform/linux-generic/odp_schedule_iquery.c b/platform/linux-generic/odp_schedule_iquery.c index ddd97bea7..ab99ce789 100644 --- a/platform/linux-generic/odp_schedule_iquery.c +++ b/platform/linux-generic/odp_schedule_iquery.c @@ -1136,13 +1136,21 @@ static inline void ordered_stash_release(void) for (i = 0; i < thread_local.ordered.stash_num; i++) { queue_entry_t *queue_entry; odp_buffer_hdr_t **buf_hdr; - int num; + int num, num_enq; queue_entry = thread_local.ordered.stash[i].queue_entry; buf_hdr = thread_local.ordered.stash[i].buf_hdr; num = thread_local.ordered.stash[i].num; - queue_fn->enq_multi(qentry_to_int(queue_entry), buf_hdr, num); + num_enq = queue_fn->enq_multi(qentry_to_int(queue_entry), + buf_hdr, num); + + if (odp_unlikely(num_enq < num)) { + if (odp_unlikely(num_enq < 0)) + num_enq = 0; + + buffer_free_multi(&buf_hdr[num_enq], num - num_enq); + } } thread_local.ordered.stash_num = 0; }
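Both patches repeat one error-handling idiom at every enqueue site: treat a negative return from `enq_multi()` as zero events enqueued, then free (drop) the tail that was not accepted. A standalone sketch of that idiom, using hypothetical stand-ins (`enq_fn_t`, `buffer_free_multi_stub()`) in place of ODP's `queue_fn->enq_multi()` and `buffer_free_multi()`:

```c
#include <assert.h>
#include <stddef.h>

/* Hypothetical stand-in (illustration only): enq_fn_t plays the role
 * of queue_fn->enq_multi(), which may enqueue fewer than num events
 * or return a negative error code. */
typedef int (*enq_fn_t)(void *q, void *ev[], int num);

static int freed;	/* counts dropped events instead of freeing buffers */

static void buffer_free_multi_stub(void *ev[], int num)
{
	(void)ev;
	freed += num;
}

/* The idiom the patch adds at every enqueue site: clamp a negative
 * return to "zero enqueued", then free (drop) the rejected tail.
 * Returns the number of events actually enqueued. */
static int enq_or_drop(enq_fn_t enq, void *q, void *ev[], int num)
{
	int num_enq = enq(q, ev, num);

	if (num_enq < num) {
		if (num_enq < 0)
			num_enq = 0;
		buffer_free_multi_stub(&ev[num_enq], num - num_enq);
	}
	return num_enq;
}

/* Test doubles: a queue with room for only 3 events, and one that
 * fails outright with an error code. */
static int enq_three(void *q, void *ev[], int num)
{
	(void)q; (void)ev;
	return num < 3 ? num : 3;
}

static int enq_fail(void *q, void *ev[], int num)
{
	(void)q; (void)ev; (void)num;
	return -1;
}
```

Centralizing the clamp matters because a raw negative return used as an array offset (`&ev[num_enq]`) would index before the table; clamping to zero makes the failure case simply "drop everything that was polled".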