From patchwork Thu Apr 24 17:01:37 2014
Content-Type: text/plain; charset="utf-8"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
X-Patchwork-Submitter: Taras Kondratiuk <taras.kondratiuk@linaro.org>
X-Patchwork-Id: 29006
From: Taras Kondratiuk <taras.kondratiuk@linaro.org>
To: lng-odp@lists.linaro.org
Cc: linaro-networking@linaro.org
Date: Thu, 24 Apr 2014 20:01:37 +0300
Message-Id: <1398358899-9851-7-git-send-email-taras.kondratiuk@linaro.org>
X-Mailer: git-send-email 1.7.9.5
In-Reply-To: <1398358899-9851-1-git-send-email-taras.kondratiuk@linaro.org>
References: <1398358899-9851-1-git-send-email-taras.kondratiuk@linaro.org>
Subject: [lng-odp] [PATCH v4 6/8] Keystone2: Add initial HW queues support

Each odp_queue maps to a HW queue for now. odp_queue_enq/deq() translate
directly to HW queue enq/deq operations.
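
For illustration (not part of the patch): a minimal sketch of the round trip
this change backs with a Keystone2 hardware queue. The pool handle "pool" is
assumed to be created elsewhere; error checking is omitted.

    odp_queue_t  q;
    odp_buffer_t buf;

    q = odp_queue_create("test_q", ODP_QUEUE_TYPE_POLL, NULL);

    buf = odp_buffer_alloc(pool);
    odp_queue_enq(q, buf);   /* pushes the descriptor onto the HW queue */
    buf = odp_queue_deq(q);  /* pops the descriptor back from the HW queue */
    odp_buffer_free(buf);    /* returns the descriptor to the pool's free HW queue */
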
Signed-off-by: Taras Kondratiuk <taras.kondratiuk@linaro.org>
---
 .../linux-keystone2/include/odp_buffer_internal.h  |   2 +-
 .../include/odp_buffer_pool_internal.h             |   2 +-
 .../linux-keystone2/include/odp_queue_internal.h   | 140 ++++++++++++++++
 platform/linux-keystone2/source/odp_buffer_pool.c  |  11 +-
 platform/linux-keystone2/source/odp_queue.c        | 173 +++++++++-----------
 5 files changed, 226 insertions(+), 102 deletions(-)
 create mode 100644 platform/linux-keystone2/include/odp_queue_internal.h

diff --git a/platform/linux-keystone2/include/odp_buffer_internal.h b/platform/linux-keystone2/include/odp_buffer_internal.h
index 2e0c2a4..b830e12 100644
--- a/platform/linux-keystone2/include/odp_buffer_internal.h
+++ b/platform/linux-keystone2/include/odp_buffer_internal.h
@@ -58,7 +58,7 @@ typedef union odp_buffer_bits_t {
 typedef struct odp_buffer_hdr_t {
         Cppi_HostDesc desc;
         void *buf_vaddr;
-        odp_queue_t free_queue;
+        uint32_t free_queue;
         int type;
         struct odp_buffer_hdr_t *next;    /* next buf in a list */
         odp_buffer_bits_t handle;         /* handle */
diff --git a/platform/linux-keystone2/include/odp_buffer_pool_internal.h b/platform/linux-keystone2/include/odp_buffer_pool_internal.h
index 6ee3eb0..a77331c 100644
--- a/platform/linux-keystone2/include/odp_buffer_pool_internal.h
+++ b/platform/linux-keystone2/include/odp_buffer_pool_internal.h
@@ -57,7 +57,7 @@ struct pool_entry_s {
         size_t payload_size;
         size_t payload_align;
         int buf_type;
-        odp_queue_t free_queue;
+        uint32_t free_queue;

         uintptr_t buf_base;
         size_t buf_size;
diff --git a/platform/linux-keystone2/include/odp_queue_internal.h b/platform/linux-keystone2/include/odp_queue_internal.h
new file mode 100644
index 0000000..c7c84d6
--- /dev/null
+++ b/platform/linux-keystone2/include/odp_queue_internal.h
@@ -0,0 +1,140 @@
+/* Copyright (c) 2013, Linaro Limited
+ * All rights reserved.
+ *
+ * SPDX-License-Identifier:     BSD-3-Clause
+ */
+
+
+/**
+ * @file
+ *
+ * ODP queue - implementation internal
+ */
+
+#ifndef ODP_QUEUE_INTERNAL_H_
+#define ODP_QUEUE_INTERNAL_H_
+
+#ifdef __cplusplus
+extern "C" {
+#endif
+
+#include
+#include
+#include
+#include
+#include
+
+
+#define USE_TICKETLOCK
+
+#ifdef USE_TICKETLOCK
+#include <odp_ticketlock.h>
+#else
+#include <odp_spinlock.h>
+#endif
+
+#define QUEUE_MULTI_MAX 8
+
+#define QUEUE_STATUS_FREE     0
+#define QUEUE_STATUS_READY    1
+#define QUEUE_STATUS_NOTSCHED 2
+#define QUEUE_STATUS_SCHED    3
+
+/* forward declaration */
+union queue_entry_u;
+
+typedef int (*enq_func_t)(union queue_entry_u *, odp_buffer_hdr_t *);
+typedef odp_buffer_hdr_t *(*deq_func_t)(union queue_entry_u *);
+
+typedef int (*enq_multi_func_t)(union queue_entry_u *,
+                                odp_buffer_hdr_t **, int);
+typedef int (*deq_multi_func_t)(union queue_entry_u *,
+                                odp_buffer_hdr_t **, int);
+
+struct queue_entry_s {
+#ifdef USE_TICKETLOCK
+        odp_ticketlock_t lock ODP_ALIGNED_CACHE;
+#else
+        odp_spinlock_t   lock ODP_ALIGNED_CACHE;
+#endif
+
+        odp_buffer_hdr_t *head;
+        odp_buffer_hdr_t *tail;
+        int status;
+
+        enq_func_t       enqueue ODP_ALIGNED_CACHE;
+        deq_func_t       dequeue;
+        enq_multi_func_t enqueue_multi;
+        deq_multi_func_t dequeue_multi;
+
+        odp_queue_t       handle;
+        odp_buffer_t      sched_buf;
+        odp_queue_type_t  type;
+        odp_queue_param_t param;
+        odp_pktio_t       pktin;
+        odp_pktio_t       pktout;
+        uint32_t          hw_queue;
+        char              name[ODP_QUEUE_NAME_LEN];
+};
+
+typedef union queue_entry_u {
+        struct queue_entry_s s;
+        uint8_t pad[ODP_CACHE_LINE_SIZE_ROUNDUP(sizeof(struct queue_entry_s))];
+} queue_entry_t;
+
+
+queue_entry_t *get_qentry(uint32_t queue_id);
+
+int queue_enq(queue_entry_t *queue, odp_buffer_hdr_t *buf_hdr);
+odp_buffer_hdr_t *queue_deq(queue_entry_t *queue);
+
+int queue_enq_multi(queue_entry_t *queue, odp_buffer_hdr_t *buf_hdr[], int num);
+int queue_deq_multi(queue_entry_t *queue, odp_buffer_hdr_t *buf_hdr[], int num);
+
+void queue_lock(queue_entry_t *queue);
+void queue_unlock(queue_entry_t *queue);
+
+odp_buffer_t queue_sched_buf(odp_queue_t queue);
+int queue_sched_atomic(odp_queue_t handle);
+
+static inline uint32_t queue_to_id(odp_queue_t handle)
+{
+        return handle - 1;
+}
+
+static inline odp_queue_t queue_from_id(uint32_t queue_id)
+{
+        return queue_id + 1;
+}
+
+static inline queue_entry_t *queue_to_qentry(odp_queue_t handle)
+{
+        uint32_t queue_id;
+
+        queue_id = queue_to_id(handle);
+        return get_qentry(queue_id);
+}
+
+static inline void _ti_hw_queue_push_desc(uint32_t hw_queue,
+                                          odp_buffer_hdr_t *buf_hdr)
+{
+        ti_em_osal_hw_queue_push_size(hw_queue,
+                                      (void *)&buf_hdr->desc,
+                                      sizeof(Cppi_HostDesc),
+                                      TI_EM_MEM_PUBLIC_DESC);
+}
+
+static inline odp_buffer_hdr_t *_ti_hw_queue_pop_desc(uint32_t hw_queue)
+{
+        return ti_em_osal_hw_queue_pop(hw_queue,
+                                       TI_EM_MEM_PUBLIC_DESC);
+}
+
+odp_queue_t _odp_queue_create(const char *name, odp_queue_type_t type,
+                              odp_queue_param_t *param, uint32_t hw_queue);
+
+#ifdef __cplusplus
+}
+#endif
+
+#endif
diff --git a/platform/linux-keystone2/source/odp_buffer_pool.c b/platform/linux-keystone2/source/odp_buffer_pool.c
index 7e10689..9a2f6cb 100644
--- a/platform/linux-keystone2/source/odp_buffer_pool.c
+++ b/platform/linux-keystone2/source/odp_buffer_pool.c
@@ -18,6 +18,7 @@
 #include
 #include
 #include
+#include <odp_queue_internal.h>
 #include
 #include

@@ -218,10 +219,7 @@ static int link_bufs(pool_entry_t *pool)
                              &tag);
         odp_sync_stores();
-        ti_em_osal_hw_queue_push_size(pool->s.free_queue,
-                                      (void *)hdr,
-                                      sizeof(Cppi_HostDesc),
-                                      TI_EM_MEM_PUBLIC_DESC);
+        _ti_hw_queue_push_desc(pool->s.free_queue, hdr);
         buf_addr.v += buf_size;
         buf_addr.p += buf_size;
         desc_index++;
@@ -303,10 +301,7 @@ odp_buffer_t odp_buffer_alloc(odp_buffer_pool_t pool_id)

 void odp_buffer_free(odp_buffer_t buf)
 {
         odp_buffer_hdr_t *hdr = odp_buf_to_hdr(buf);
-        ti_em_osal_hw_queue_push_size(hdr->free_queue,
-                                      (void *)hdr,
-                                      sizeof(Cppi_HostDesc),
-                                      TI_EM_MEM_PUBLIC_DESC);
+        _ti_hw_queue_push_desc(hdr->free_queue, hdr);
 }

 void odp_buffer_pool_print(odp_buffer_pool_t pool_id)
diff --git a/platform/linux-keystone2/source/odp_queue.c b/platform/linux-keystone2/source/odp_queue.c
index 6248edb..8e6c2fe 100644
--- a/platform/linux-keystone2/source/odp_queue.c
+++ b/platform/linux-keystone2/source/odp_queue.c
@@ -15,6 +15,7 @@
 #include
 #include
 #include
+#include
 #include
 #include
 #include
@@ -110,6 +111,12 @@ int odp_queue_init_global(void)
                 queue_entry_t *queue = get_qentry(i);
                 LOCK_INIT(&queue->s.lock);
                 queue->s.handle = queue_from_id(i);
+                queue->s.status = QUEUE_STATUS_FREE;
+                /*
+                 * TODO: HW queue is mapped directly to a queue_entry_t
+                 * instance. It may be worth allocating the HW queue on open.
+                 */
+                queue->s.hw_queue = TI_ODP_PUBLIC_QUEUE_BASE_IDX + i;
         }

         ODP_DBG("done\n");
@@ -141,8 +148,8 @@ odp_schedule_sync_t odp_queue_sched_type(odp_queue_t handle)
         return queue->s.param.sched.sync;
 }

-odp_queue_t odp_queue_create(const char *name, odp_queue_type_t type,
-                             odp_queue_param_t *param)
+odp_queue_t _odp_queue_create(const char *name, odp_queue_type_t type,
+                              odp_queue_param_t *param, uint32_t hw_queue)
 {
         uint32_t i;
         queue_entry_t *queue;
@@ -156,6 +163,18 @@ odp_queue_t odp_queue_create(const char *name, odp_queue_type_t type,
                 LOCK(&queue->s.lock);
                 if (queue->s.status == QUEUE_STATUS_FREE) {
+                        if (hw_queue)
+                                queue->s.hw_queue = hw_queue;
+                        /*
+                         * Don't open the HW queue if its number is specified,
+                         * as it is most probably already opened by the Linux kernel.
+                         */
+                        else if (ti_em_osal_hw_queue_open(queue->s.hw_queue)
+                                 != EM_OK) {
+                                UNLOCK(&queue->s.lock);
+                                continue;
+                        }
+
                         queue_init(queue, name, type, param);

                         if (type == ODP_QUEUE_TYPE_SCHED ||
@@ -188,6 +207,12 @@ odp_queue_t odp_queue_create(const char *name, odp_queue_type_t type,
         return handle;
 }

+odp_queue_t odp_queue_create(const char *name, odp_queue_type_t type,
+                             odp_queue_param_t *param)
+{
+        return _odp_queue_create(name, type, param, 0);
+}
+

 odp_buffer_t queue_sched_buf(odp_queue_t handle)
 {
@@ -232,65 +257,51 @@ odp_queue_t odp_queue_lookup(const char *name)

 int queue_enq(queue_entry_t *queue, odp_buffer_hdr_t *buf_hdr)
 {
-        int sched = 0;
+        _ti_hw_queue_push_desc(queue->s.hw_queue, buf_hdr);

-        LOCK(&queue->s.lock);
-        if (queue->s.head == NULL) {
-                /* Empty queue */
-                queue->s.head = buf_hdr;
-                queue->s.tail = buf_hdr;
-                buf_hdr->next = NULL;
-        } else {
-                queue->s.tail->next = buf_hdr;
-                queue->s.tail = buf_hdr;
-                buf_hdr->next = NULL;
-        }
-
-        if (queue->s.status == QUEUE_STATUS_NOTSCHED) {
-                queue->s.status = QUEUE_STATUS_SCHED;
-                sched = 1; /* retval: schedule queue */
+        if (queue->s.type == ODP_QUEUE_TYPE_SCHED) {
+                int sched = 0;
+                LOCK(&queue->s.lock);
+                if (queue->s.status == QUEUE_STATUS_NOTSCHED) {
+                        queue->s.status = QUEUE_STATUS_SCHED;
+                        sched = 1;
+                }
+                UNLOCK(&queue->s.lock);
+                /* Add queue to scheduling */
+                if (sched)
+                        odp_schedule_queue(queue->s.handle,
+                                           queue->s.param.sched.prio);
         }
-        UNLOCK(&queue->s.lock);
-
-        /* Add queue to scheduling */
-        if (sched == 1)
-                odp_schedule_queue(queue->s.handle, queue->s.param.sched.prio);
-
         return 0;
 }

 int queue_enq_multi(queue_entry_t *queue, odp_buffer_hdr_t *buf_hdr[], int num)
 {
-        int sched = 0;
         int i;
-        odp_buffer_hdr_t *tail;
-
-        for (i = 0; i < num - 1; i++)
-                buf_hdr[i]->next = buf_hdr[i+1];
-
-        tail = buf_hdr[num-1];
-        buf_hdr[num-1]->next = NULL;
-
-        LOCK(&queue->s.lock);
-        /* Empty queue */
-        if (queue->s.head == NULL)
-                queue->s.head = buf_hdr[0];
-        else
-                queue->s.tail->next = buf_hdr[0];
-
-        queue->s.tail = tail;
-
-        if (queue->s.status == QUEUE_STATUS_NOTSCHED) {
-                queue->s.status = QUEUE_STATUS_SCHED;
-                sched = 1; /* retval: schedule queue */
+        /*
+         * TODO: Should this series of buffers be enqueued atomically?
+         * Can another buffer be pushed onto this queue in the middle?
+         */
+        for (i = 0; i < num; i++) {
+                /* TODO: Implement multi enqueue at a lower level */
+                _ti_hw_queue_push_desc(queue->s.hw_queue, buf_hdr[i]);
         }
-        UNLOCK(&queue->s.lock);
-
-        /* Add queue to scheduling */
-        if (sched == 1)
-                odp_schedule_queue(queue->s.handle, queue->s.param.sched.prio);

+        if (queue->s.type == ODP_QUEUE_TYPE_SCHED) {
+                int sched = 0;
+                LOCK(&queue->s.lock);
+                if (queue->s.status == QUEUE_STATUS_NOTSCHED) {
+                        queue->s.status = QUEUE_STATUS_SCHED;
+                        sched = 1;
+                }
+                UNLOCK(&queue->s.lock);
+                /* Add queue to scheduling */
+                if (sched)
+                        odp_schedule_queue(queue->s.handle,
+                                           queue->s.param.sched.prio);
+        }
         return 0;
 }

@@ -327,63 +338,41 @@ int odp_queue_enq(odp_queue_t handle, odp_buffer_t buf)

 odp_buffer_hdr_t *queue_deq(queue_entry_t *queue)
 {
-        odp_buffer_hdr_t *buf_hdr = NULL;
+        odp_buffer_hdr_t *buf_hdr;

-        LOCK(&queue->s.lock);
+        buf_hdr = (odp_buffer_hdr_t *)ti_em_osal_hw_queue_pop(queue->s.hw_queue,
+                                                       TI_EM_MEM_PUBLIC_DESC);

-        if (queue->s.head == NULL) {
-                /* Already empty queue */
-                if (queue->s.status == QUEUE_STATUS_SCHED &&
-                    queue->s.type != ODP_QUEUE_TYPE_PKTIN)
+        if (!buf_hdr && queue->s.type == ODP_QUEUE_TYPE_SCHED) {
+                LOCK(&queue->s.lock);
+                if (!buf_hdr && queue->s.status == QUEUE_STATUS_SCHED)
                         queue->s.status = QUEUE_STATUS_NOTSCHED;
-        } else {
-                buf_hdr       = queue->s.head;
-                queue->s.head = buf_hdr->next;
-                buf_hdr->next = NULL;
-
-                if (queue->s.head == NULL) {
-                        /* Queue is now empty */
-                        queue->s.tail = NULL;
-                }
+                UNLOCK(&queue->s.lock);
         }

-        UNLOCK(&queue->s.lock);
-
         return buf_hdr;
 }

 int queue_deq_multi(queue_entry_t *queue, odp_buffer_hdr_t *buf_hdr[], int num)
 {
-        int i = 0;
-
-        LOCK(&queue->s.lock);
-
-        if (queue->s.head == NULL) {
-                /* Already empty queue */
-                if (queue->s.status == QUEUE_STATUS_SCHED &&
-                    queue->s.type != ODP_QUEUE_TYPE_PKTIN)
-                        queue->s.status = QUEUE_STATUS_NOTSCHED;
-        } else {
-                odp_buffer_hdr_t *hdr = queue->s.head;
-
-                for (; i < num && hdr; i++) {
-                        buf_hdr[i] = hdr;
-                        /* odp_prefetch(hdr->addr); */
-                        hdr = hdr->next;
-                        buf_hdr[i]->next = NULL;
-                }
-
-                queue->s.head = hdr;
-
-                if (hdr == NULL) {
-                        /* Queue is now empty */
-                        queue->s.tail = NULL;
+        int i;
+
+        for (i = 0; i < num; i++) {
+                /* TODO: Implement multi dequeue at a lower level */
+                buf_hdr[i] = (odp_buffer_hdr_t *)ti_em_osal_hw_queue_pop(
+                                        queue->s.hw_queue,
+                                        TI_EM_MEM_PUBLIC_DESC);
+                if (!buf_hdr[i]) {
+                        if (queue->s.type != ODP_QUEUE_TYPE_SCHED)
+                                break;
+                        LOCK(&queue->s.lock);
+                        if (queue->s.status == QUEUE_STATUS_SCHED)
+                                queue->s.status = QUEUE_STATUS_NOTSCHED;
+                        UNLOCK(&queue->s.lock);
+                        break;
                 }
         }

-        UNLOCK(&queue->s.lock);
-
         return i;
 }
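
A note on the handle mapping in odp_queue_internal.h above: queue handles are
offset by one from the queue table index, so index 0 yields handle 1 and the
zero handle stays free to mean "invalid" (assuming ODP_QUEUE_INVALID is 0, as
in the linux-generic implementation). A sketch of the round trip:

    uint32_t    id   = queue_to_id(handle);  /* handle - 1 */
    odp_queue_t back = queue_from_id(id);    /* id + 1, equals handle, never 0 */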