From patchwork Wed Sep 20 09:00:09 2017
X-Patchwork-Submitter: Github ODP bot
X-Patchwork-Id: 113093
From: Github ODP bot
To: lng-odp@lists.linaro.org
Date: Wed, 20 Sep 2017 12:00:09 +0300
Message-Id: <1505898014-18011-6-git-send-email-odpbot@yandex.ru>
X-Mailer: git-send-email 1.9.1
In-Reply-To: <1505898014-18011-1-git-send-email-odpbot@yandex.ru>
References: <1505898014-18011-1-git-send-email-odpbot@yandex.ru>
Github-pr-num: 139
Subject: [lng-odp] [PATCH CLOUD-DEV v5 5/10] linux-gen: pktio: ipc: minor code refactoring
List-Id: "The OpenDataPlane (ODP) List"

From: Yi He

Move the implementation-specific IPC data structure into a dedicated
header file.
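For context, the mechanical change throughout this patch is replacing direct
pktio_entry->s.ipc.* field accesses with an ops_data(ipc) accessor over the
per-implementation union pktio_ops_data_t. Below is a minimal, compilable
sketch of that pattern; the struct and union shapes follow the patch, but the
ops_data() macro definition and the stand-in member fields are assumptions,
since the accessor itself is not shown in this diff.

	#include <stdio.h>

	typedef struct {
		int tx_count;			/* stand-in for the IPC ring state */
	} pktio_ops_ipc_data_t;

	typedef struct {
		int fd;				/* stand-in for loopback state */
	} pktio_ops_loopback_data_t;

	/* One union member per pktio implementation, as in
	 * odp_pktio_ops_subsystem.h below. */
	typedef union {
		pktio_ops_ipc_data_t ipc;
		pktio_ops_loopback_data_t loopback;
	} pktio_ops_data_t;

	struct pktio_entry {
		pktio_ops_data_t ops_data;	/* IO operation specific data */
	};

	typedef union {
		struct pktio_entry s;
	} pktio_entry_t;

	/* Hypothetical accessor; it lets call sites write
	 * entry->ops_data(ipc) instead of entry->s.ops_data.ipc. */
	#define ops_data(mod) s.ops_data.mod

	int main(void)
	{
		pktio_entry_t entry = { .s = { .ops_data = { .ipc = { 0 } } } };

		entry.ops_data(ipc).tx_count = 3;	/* same shape as the patch */
		printf("%d\n", entry.ops_data(ipc).tx_count);
		return 0;
	}

The union keeps sizeof(pktio_ops_data_t) at the size of the largest
implementation's state, so every pktio type shares one slot in the entry
instead of each adding its own member as the old code did.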
Signed-off-by: Yi He
Signed-off-by: Balakrishna Garapati
Reviewed-by: Brian Brooks
Reviewed-by: Honnappa Nagarahalli
Reviewed-by: Bogdan Pricope
Reviewed-by: Josep Puigdemont
Reviewed-by: Bill Fischofer
---
/** Email created from pull request 139 (heyi-linaro:modular-pktio-ops)
 ** https://github.com/Linaro/odp/pull/139
 ** Patch: https://github.com/Linaro/odp/pull/139.patch
 ** Base sha: c6a520126eff39b7ebce8e790fb960259ce8f812
 ** Merge commit sha: 8d7f8c3de9639acb3a2b8eb8c3830c044eb6a437
 **/
 platform/linux-dpdk/Makefile.am                    |   2 +-
 platform/linux-generic/Makefile.am                 |   2 +-
 .../linux-generic/include/odp_packet_io_internal.h |  39 -----
 .../include/odp_packet_io_ipc_internal.h           |  48 ------
 platform/linux-generic/include/odp_pktio_ops_ipc.h |  91 ++++++++++
 .../include/odp_pktio_ops_subsystem.h              |   2 +
 platform/linux-generic/odp_packet_io.c             |   1 -
 platform/linux-generic/pktio/ipc.c                 | 185 +++++++++++----------
 8 files changed, 190 insertions(+), 180 deletions(-)
 delete mode 100644 platform/linux-generic/include/odp_packet_io_ipc_internal.h
 create mode 100644 platform/linux-generic/include/odp_pktio_ops_ipc.h

diff --git a/platform/linux-dpdk/Makefile.am b/platform/linux-dpdk/Makefile.am
index 9d13ec1ca..15c9fb446 100644
--- a/platform/linux-dpdk/Makefile.am
+++ b/platform/linux-dpdk/Makefile.am
@@ -190,12 +190,12 @@ noinst_HEADERS = \
 	${top_srcdir}/platform/linux-generic/include/odp_internal.h \
 	${srcdir}/include/odp_packet_dpdk.h \
 	${srcdir}/include/odp_packet_internal.h \
+	${top_srcdir}/platform/linux-generic/include/odp_pktio_ops_ipc.h \
 	${top_srcdir}/platform/linux-generic/include/odp_pktio_ops_subsystem.h \
 	${top_srcdir}/platform/linux-generic/include/odp_pktio_ops_loopback.h \
 	${top_srcdir}/platform/linux-generic/include/odp_name_table_internal.h \
 	${srcdir}/include/odp_packet_io_internal.h \
 	${srcdir}/include/odp_errno_define.h \
-	${top_srcdir}/platform/linux-generic/include/odp_packet_io_ipc_internal.h \
 	${top_srcdir}/platform/linux-generic/include/odp_packet_io_ring_internal.h \
 	${top_srcdir}/platform/linux-generic/include/odp_packet_socket.h \
 	${top_srcdir}/platform/linux-generic/include/odp_pkt_queue_internal.h \
diff --git a/platform/linux-generic/Makefile.am b/platform/linux-generic/Makefile.am
index 68fc246ab..5c4d02a3d 100644
--- a/platform/linux-generic/Makefile.am
+++ b/platform/linux-generic/Makefile.am
@@ -185,12 +185,12 @@ noinst_HEADERS = \
 	${srcdir}/include/odp_name_table_internal.h \
 	${srcdir}/include/odp_packet_internal.h \
 	${srcdir}/include/odp_packet_io_internal.h \
-	${srcdir}/include/odp_packet_io_ipc_internal.h \
 	${srcdir}/include/odp_packet_io_ring_internal.h \
 	${srcdir}/include/odp_packet_netmap.h \
 	${srcdir}/include/odp_packet_dpdk.h \
 	${srcdir}/include/odp_packet_socket.h \
 	${srcdir}/include/odp_packet_tap.h \
+	${srcdir}/include/odp_pktio_ops_ipc.h \
 	${srcdir}/include/odp_pktio_ops_loopback.h \
 	${srcdir}/include/odp_pktio_ops_subsystem.h \
 	${srcdir}/include/odp_pkt_queue_internal.h \
diff --git a/platform/linux-generic/include/odp_packet_io_internal.h b/platform/linux-generic/include/odp_packet_io_internal.h
index 893fa1144..8d3c3ddff 100644
--- a/platform/linux-generic/include/odp_packet_io_internal.h
+++ b/platform/linux-generic/include/odp_packet_io_internal.h
@@ -66,44 +66,6 @@ typedef struct {
 } pkt_pcap_t;
 #endif
 
-typedef struct {
-	/* TX */
-	struct {
-		_ring_t *send; /**< ODP ring for IPC msg packets
-				    indexes transmitted to shared
-				    memory */
-		_ring_t *free; /**< ODP ring for IPC msg packets
-				    indexes already processed by remote
-				    process */
-	} tx;
-	/* RX */
-	struct {
-		_ring_t *recv; /**< ODP ring for IPC msg packets
-				    indexes received from shared
-				    memory (from remote process) */
-		_ring_t *free; /**< odp ring for ipc msg packets
-				    indexes already processed by
-				    current process */
-		_ring_t *cache; /**< local cache to keep packet order right */
-	} rx; /* slave */
-	void *pool_base;		/**< Remote pool base addr */
-	void *pool_mdata_base;		/**< Remote pool mdata base addr */
-	uint64_t pkt_size;		/**< Packet size in remote pool */
-	odp_pool_t pool;		/**< Pool of main process */
-	enum {
-		PKTIO_TYPE_IPC_MASTER = 0, /**< Master is the process which
-						creates shm */
-		PKTIO_TYPE_IPC_SLAVE	   /**< Slave is the process which
-						connects to shm */
-	} type; /**< define if it's master or slave process */
-	odp_atomic_u32_t ready; /**< 1 - pktio is ready and can recv/send
-				     packet, 0 - not yet ready */
-	void *pinfo;
-	odp_shm_t pinfo_shm;
-	odp_shm_t remote_pool_shm; /**< shm of remote pool get with
-					_ipc_map_remote_pool() */
-} _ipc_pktio_t;
-
 struct pktio_entry {
 	const pktio_ops_module_t *ops; /**< Implementation specific methods */
 	pktio_ops_data_t ops_data; /**< IO operation specific data */
@@ -122,7 +84,6 @@ struct pktio_entry {
 		pkt_pcap_t pkt_pcap;		/**< Using pcap for IO */
 #endif
 		pkt_tap_t pkt_tap;		/**< using TAP for IO */
-		_ipc_pktio_t ipc;		/**< IPC pktio data */
 	};
 	enum {
 		/* Not allocated */
diff --git a/platform/linux-generic/include/odp_packet_io_ipc_internal.h b/platform/linux-generic/include/odp_packet_io_ipc_internal.h
deleted file mode 100644
index 9d8943a6c..000000000
--- a/platform/linux-generic/include/odp_packet_io_ipc_internal.h
+++ /dev/null
@@ -1,48 +0,0 @@
-/* Copyright (c) 2015, Linaro Limited
- * All rights reserved.
- *
- * SPDX-License-Identifier: BSD-3-Clause
- */
-
-#include
-#include
-#include
-#include
-#include
-#include
-
-#include
-#include
-#include
-
-/* IPC packet I/O over shared memory ring */
-#include
-
-/* number of odp buffers in odp ring queue */
-#define PKTIO_IPC_ENTRIES 4096
-
-/* that struct is exported to shared memory, so that processes can find
- * each other.
- */
-struct pktio_info {
-	struct {
-		/* number of buffer*/
-		int num;
-		/* size of packet/segment in remote pool */
-		uint32_t block_size;
-		char pool_name[ODP_POOL_NAME_LEN];
-		/* 1 if master finished creation of all shared objects */
-		int init_done;
-	} master;
-	struct {
-		void *base_addr;
-		uint32_t block_size;
-		char pool_name[ODP_POOL_NAME_LEN];
-		/* pid of the slave process written to shm and
-		 * used by master to look up memory created by
-		 * slave
-		 */
-		int pid;
-		int init_done;
-	} slave;
-} ODP_PACKED;
diff --git a/platform/linux-generic/include/odp_pktio_ops_ipc.h b/platform/linux-generic/include/odp_pktio_ops_ipc.h
new file mode 100644
index 000000000..5903ef9b8
--- /dev/null
+++ b/platform/linux-generic/include/odp_pktio_ops_ipc.h
@@ -0,0 +1,91 @@
+/* Copyright (c) 2015, Linaro Limited
+ * All rights reserved.
+ *
+ * SPDX-License-Identifier: BSD-3-Clause
+ */
+
+#ifndef ODP_PKTIO_OPS_IPC_H_
+#define ODP_PKTIO_OPS_IPC_H_
+
+#include
+#include
+#include
+#include
+#include
+#include
+
+#include
+#include
+#include
+
+/* IPC packet I/O over shared memory ring */
+#include
+
+/* number of odp buffers in odp ring queue */
+#define PKTIO_IPC_ENTRIES 4096
+
+/* that struct is exported to shared memory, so that processes can find
+ * each other.
+ */
+struct pktio_info {
+	struct {
+		/* number of buffer*/
+		int num;
+		/* size of packet/segment in remote pool */
+		uint32_t block_size;
+		char pool_name[ODP_POOL_NAME_LEN];
+		/* 1 if master finished creation of all shared objects */
+		int init_done;
+	} master;
+	struct {
+		void *base_addr;
+		uint32_t block_size;
+		char pool_name[ODP_POOL_NAME_LEN];
+		/* pid of the slave process written to shm and
+		 * used by master to look up memory created by
+		 * slave
+		 */
+		int pid;
+		int init_done;
+	} slave;
+} ODP_PACKED;
+
+typedef struct {
+	/* TX */
+	struct {
+		_ring_t *send; /**< ODP ring for IPC msg packets
+				    indexes transmitted to shared
+				    memory */
+		_ring_t *free; /**< ODP ring for IPC msg packets
+				    indexes already processed by remote
+				    process */
+	} tx;
+	/* RX */
+	struct {
+		_ring_t *recv; /**< ODP ring for IPC msg packets
+				    indexes received from shared
+				    memory (from remote process) */
+		_ring_t *free; /**< odp ring for ipc msg packets
+				    indexes already processed by
+				    current process */
+		_ring_t *cache; /**< local cache to keep packet order right */
+	} rx; /* slave */
+	void *pool_base;		/**< Remote pool base addr */
+	void *pool_mdata_base;		/**< Remote pool mdata base addr */
+	uint64_t pkt_size;		/**< Packet size in remote pool */
+	odp_pool_t pool;		/**< Pool of main process */
+	enum {
+		PKTIO_TYPE_IPC_MASTER = 0, /**< Master is the process which
+						creates shm */
+		PKTIO_TYPE_IPC_SLAVE	   /**< Slave is the process which
+						connects to shm */
+	} type; /**< define if it's master or slave process */
+	odp_atomic_u32_t ready; /**< 1 - pktio is ready and can recv/send
+				     packet, 0 - not yet ready */
+	void *pinfo;
+	odp_shm_t pinfo_shm;
+	odp_shm_t remote_pool_shm; /**< shm of remote pool get with
+					_ipc_map_remote_pool() */
+} pktio_ops_ipc_data_t;
+
+#endif
diff --git a/platform/linux-generic/include/odp_pktio_ops_subsystem.h b/platform/linux-generic/include/odp_pktio_ops_subsystem.h
index 7b90ed3d3..650b36648 100644
--- a/platform/linux-generic/include/odp_pktio_ops_subsystem.h
+++ b/platform/linux-generic/include/odp_pktio_ops_subsystem.h
@@ -79,12 +79,14 @@ typedef ODP_MODULE_CLASS(pktio_ops) {
 } pktio_ops_module_t;
 
 /* All implementations of this subsystem */
+#include
 #include
 
 /* Per implementation private data
  * TODO: refactory each implementation to hide it internally
  */
 typedef union {
+	pktio_ops_ipc_data_t ipc;
 	pktio_ops_loopback_data_t loopback;
 } pktio_ops_data_t;
 
diff --git a/platform/linux-generic/odp_packet_io.c b/platform/linux-generic/odp_packet_io.c
index d8dcc45da..6a19394a5 100644
--- a/platform/linux-generic/odp_packet_io.c
+++ b/platform/linux-generic/odp_packet_io.c
@@ -19,7 +19,6 @@
 #include
 #include
 #include
-#include
 #include
 #include
diff --git a/platform/linux-generic/pktio/ipc.c b/platform/linux-generic/pktio/ipc.c
index 984f0ab44..14cd86eb7 100644
--- a/platform/linux-generic/pktio/ipc.c
+++ b/platform/linux-generic/pktio/ipc.c
@@ -3,7 +3,6 @@
  *
  * SPDX-License-Identifier: BSD-3-Clause
  */
-#include
 #include
 #include
 #include
@@ -43,7 +42,7 @@ static const char *_ipc_odp_buffer_pool_shm_name(odp_pool_t pool_hdl)
 
 static int _ipc_master_start(pktio_entry_t *pktio_entry)
 {
-	struct pktio_info *pinfo = pktio_entry->s.ipc.pinfo;
+	struct pktio_info *pinfo = pktio_entry->ops_data(ipc).pinfo;
 	odp_shm_t shm;
 
 	if (pinfo->slave.init_done == 0)
@@ -57,11 +56,11 @@ static int _ipc_master_start(pktio_entry_t *pktio_entry)
 		return -1;
 	}
 
-	pktio_entry->s.ipc.remote_pool_shm = shm;
-	pktio_entry->s.ipc.pool_base = odp_shm_addr(shm);
-	pktio_entry->s.ipc.pool_mdata_base =
-			(char *)odp_shm_addr(shm);
+	pktio_entry->ops_data(ipc).remote_pool_shm = shm;
+	pktio_entry->ops_data(ipc).pool_base = odp_shm_addr(shm);
+	pktio_entry->ops_data(ipc).pool_mdata_base = (char *)odp_shm_addr(shm);
 
-	odp_atomic_store_u32(&pktio_entry->s.ipc.ready, 1);
+	odp_atomic_store_u32(&pktio_entry->ops_data(ipc).ready, 1);
 
 	IPC_ODP_DBG("%s started.\n", pktio_entry->s.name);
 	return 0;
@@ -88,62 +87,62 @@ static int _ipc_init_master(pktio_entry_t *pktio_entry,
 	 * to be processed packets ring.
 	 */
 	snprintf(ipc_shm_name, sizeof(ipc_shm_name), "%s_m_prod", dev);
-	pktio_entry->s.ipc.tx.send = _ring_create(ipc_shm_name,
+	pktio_entry->ops_data(ipc).tx.send = _ring_create(ipc_shm_name,
 			PKTIO_IPC_ENTRIES,
 			_RING_SHM_PROC | _RING_NO_LIST);
-	if (!pktio_entry->s.ipc.tx.send) {
+	if (!pktio_entry->ops_data(ipc).tx.send) {
 		ODP_ERR("pid %d unable to create ipc ring %s name\n",
 			getpid(), ipc_shm_name);
 		return -1;
 	}
 	ODP_DBG("Created IPC ring: %s, count %d, free %d\n",
-		ipc_shm_name, _ring_count(pktio_entry->s.ipc.tx.send),
-		_ring_free_count(pktio_entry->s.ipc.tx.send));
+		ipc_shm_name, _ring_count(pktio_entry->ops_data(ipc).tx.send),
+		_ring_free_count(pktio_entry->ops_data(ipc).tx.send));
 
 	/* generate name in shm like ipc_pktio_p for
 	 * already processed packets */
 	snprintf(ipc_shm_name, sizeof(ipc_shm_name), "%s_m_cons", dev);
-	pktio_entry->s.ipc.tx.free = _ring_create(ipc_shm_name,
+	pktio_entry->ops_data(ipc).tx.free = _ring_create(ipc_shm_name,
 			PKTIO_IPC_ENTRIES,
 			_RING_SHM_PROC | _RING_NO_LIST);
-	if (!pktio_entry->s.ipc.tx.free) {
+	if (!pktio_entry->ops_data(ipc).tx.free) {
 		ODP_ERR("pid %d unable to create ipc ring %s name\n",
 			getpid(), ipc_shm_name);
 		goto free_m_prod;
 	}
 	ODP_DBG("Created IPC ring: %s, count %d, free %d\n",
-		ipc_shm_name, _ring_count(pktio_entry->s.ipc.tx.free),
-		_ring_free_count(pktio_entry->s.ipc.tx.free));
+		ipc_shm_name, _ring_count(pktio_entry->ops_data(ipc).tx.free),
+		_ring_free_count(pktio_entry->ops_data(ipc).tx.free));
 
 	snprintf(ipc_shm_name, sizeof(ipc_shm_name), "%s_s_prod", dev);
-	pktio_entry->s.ipc.rx.recv = _ring_create(ipc_shm_name,
+	pktio_entry->ops_data(ipc).rx.recv = _ring_create(ipc_shm_name,
 			PKTIO_IPC_ENTRIES,
 			_RING_SHM_PROC | _RING_NO_LIST);
-	if (!pktio_entry->s.ipc.rx.recv) {
+	if (!pktio_entry->ops_data(ipc).rx.recv) {
 		ODP_ERR("pid %d unable to create ipc ring %s name\n",
 			getpid(), ipc_shm_name);
 		goto free_m_cons;
 	}
 	ODP_DBG("Created IPC ring: %s, count %d, free %d\n",
-		ipc_shm_name, _ring_count(pktio_entry->s.ipc.rx.recv),
-		_ring_free_count(pktio_entry->s.ipc.rx.recv));
+		ipc_shm_name, _ring_count(pktio_entry->ops_data(ipc).rx.recv),
+		_ring_free_count(pktio_entry->ops_data(ipc).rx.recv));
 
 	snprintf(ipc_shm_name, sizeof(ipc_shm_name), "%s_s_cons", dev);
-	pktio_entry->s.ipc.rx.free = _ring_create(ipc_shm_name,
+	pktio_entry->ops_data(ipc).rx.free = _ring_create(ipc_shm_name,
 			PKTIO_IPC_ENTRIES,
 			_RING_SHM_PROC | _RING_NO_LIST);
-	if (!pktio_entry->s.ipc.rx.free) {
+	if (!pktio_entry->ops_data(ipc).rx.free) {
 		ODP_ERR("pid %d unable to create ipc ring %s name\n",
 			getpid(), ipc_shm_name);
 		goto free_s_prod;
 	}
 	ODP_DBG("Created IPC ring: %s, count %d, free %d\n",
-		ipc_shm_name, _ring_count(pktio_entry->s.ipc.rx.free),
-		_ring_free_count(pktio_entry->s.ipc.rx.free));
+		ipc_shm_name, _ring_count(pktio_entry->ops_data(ipc).rx.free),
+		_ring_free_count(pktio_entry->ops_data(ipc).rx.free));
 
 	/* Set up pool name for remote info */
-	pinfo = pktio_entry->s.ipc.pinfo;
+	pinfo = pktio_entry->ops_data(ipc).pinfo;
 	pool_name = _ipc_odp_buffer_pool_shm_name(pool_hdl);
 	if (strlen(pool_name) > ODP_POOL_NAME_LEN) {
 		ODP_ERR("pid %d ipc pool name %s is too big %d\n",
@@ -156,7 +155,7 @@ static int _ipc_init_master(pktio_entry_t *pktio_entry,
 	pinfo->slave.pid = 0;
 	pinfo->slave.init_done = 0;
 
-	pktio_entry->s.ipc.pool = pool_hdl;
+	pktio_entry->ops_data(ipc).pool = pool_hdl;
 
 	ODP_DBG("Pre init... DONE.\n");
 	pinfo->master.init_done = 1;
@@ -225,7 +224,7 @@ static int _ipc_init_slave(const char *dev,
 	if (strlen(dev) > (ODP_POOL_NAME_LEN - sizeof("_slave_r")))
 		ODP_ABORT("too big ipc name\n");
 
-	pktio_entry->s.ipc.pool = pool;
+	pktio_entry->ops_data(ipc).pool = pool;
 	return 0;
 }
 
@@ -246,61 +245,61 @@ static int _ipc_slave_start(pktio_entry_t *pktio_entry)
 	sprintf(dev, "ipc:%s", tail);
 
 	snprintf(ipc_shm_name, sizeof(ipc_shm_name), "%s_m_prod", dev);
-	pktio_entry->s.ipc.rx.recv = _ipc_shm_map(ipc_shm_name, pid);
-	if (!pktio_entry->s.ipc.rx.recv) {
+	pktio_entry->ops_data(ipc).rx.recv = _ipc_shm_map(ipc_shm_name, pid);
+	if (!pktio_entry->ops_data(ipc).rx.recv) {
 		ODP_DBG("pid %d unable to find ipc ring %s name\n",
 			getpid(), dev);
 		sleep(1);
 		return -1;
 	}
 	ODP_DBG("Connected IPC ring: %s, count %d, free %d\n",
-		ipc_shm_name, _ring_count(pktio_entry->s.ipc.rx.recv),
-		_ring_free_count(pktio_entry->s.ipc.rx.recv));
+		ipc_shm_name, _ring_count(pktio_entry->ops_data(ipc).rx.recv),
+		_ring_free_count(pktio_entry->ops_data(ipc).rx.recv));
 
 	snprintf(ipc_shm_name, sizeof(ipc_shm_name), "%s_m_cons", dev);
-	pktio_entry->s.ipc.rx.free = _ipc_shm_map(ipc_shm_name, pid);
-	if (!pktio_entry->s.ipc.rx.free) {
+	pktio_entry->ops_data(ipc).rx.free = _ipc_shm_map(ipc_shm_name, pid);
+	if (!pktio_entry->ops_data(ipc).rx.free) {
 		ODP_ERR("pid %d unable to find ipc ring %s name\n",
 			getpid(), dev);
 		goto free_m_prod;
 	}
 	ODP_DBG("Connected IPC ring: %s, count %d, free %d\n",
-		ipc_shm_name, _ring_count(pktio_entry->s.ipc.rx.free),
-		_ring_free_count(pktio_entry->s.ipc.rx.free));
+		ipc_shm_name, _ring_count(pktio_entry->ops_data(ipc).rx.free),
+		_ring_free_count(pktio_entry->ops_data(ipc).rx.free));
 
 	snprintf(ipc_shm_name, sizeof(ipc_shm_name), "%s_s_prod", dev);
-	pktio_entry->s.ipc.tx.send = _ipc_shm_map(ipc_shm_name, pid);
-	if (!pktio_entry->s.ipc.tx.send) {
+	pktio_entry->ops_data(ipc).tx.send = _ipc_shm_map(ipc_shm_name, pid);
+	if (!pktio_entry->ops_data(ipc).tx.send) {
 		ODP_ERR("pid %d unable to find ipc ring %s name\n",
			getpid(), dev);
 		goto free_m_cons;
 	}
 	ODP_DBG("Connected IPC ring: %s, count %d, free %d\n",
-		ipc_shm_name, _ring_count(pktio_entry->s.ipc.tx.send),
-		_ring_free_count(pktio_entry->s.ipc.tx.send));
+		ipc_shm_name, _ring_count(pktio_entry->ops_data(ipc).tx.send),
+		_ring_free_count(pktio_entry->ops_data(ipc).tx.send));
 
 	snprintf(ipc_shm_name, sizeof(ipc_shm_name), "%s_s_cons", dev);
-	pktio_entry->s.ipc.tx.free = _ipc_shm_map(ipc_shm_name, pid);
-	if (!pktio_entry->s.ipc.tx.free) {
+	pktio_entry->ops_data(ipc).tx.free = _ipc_shm_map(ipc_shm_name, pid);
+	if (!pktio_entry->ops_data(ipc).tx.free) {
 		ODP_ERR("pid %d unable to find ipc ring %s name\n",
 			getpid(), dev);
 		goto free_s_prod;
 	}
 	ODP_DBG("Connected IPC ring: %s, count %d, free %d\n",
-		ipc_shm_name, _ring_count(pktio_entry->s.ipc.tx.free),
-		_ring_free_count(pktio_entry->s.ipc.tx.free));
+		ipc_shm_name, _ring_count(pktio_entry->ops_data(ipc).tx.free),
+		_ring_free_count(pktio_entry->ops_data(ipc).tx.free));
 
 	/* Get info about remote pool */
-	pinfo = pktio_entry->s.ipc.pinfo;
+	pinfo = pktio_entry->ops_data(ipc).pinfo;
 	shm = _ipc_map_remote_pool(pinfo->master.pool_name, pid);
-	pktio_entry->s.ipc.remote_pool_shm = shm;
-	pktio_entry->s.ipc.pool_mdata_base = (char *)odp_shm_addr(shm);
-	pktio_entry->s.ipc.pkt_size = pinfo->master.block_size;
+	pktio_entry->ops_data(ipc).remote_pool_shm = shm;
+	pktio_entry->ops_data(ipc).pool_mdata_base = (char *)odp_shm_addr(shm);
+	pktio_entry->ops_data(ipc).pkt_size = pinfo->master.block_size;
 
-	_ipc_export_pool(pinfo, pktio_entry->s.ipc.pool);
+	_ipc_export_pool(pinfo, pktio_entry->ops_data(ipc).pool);
 
-	odp_atomic_store_u32(&pktio_entry->s.ipc.ready, 1);
+	odp_atomic_store_u32(&pktio_entry->ops_data(ipc).ready, 1);
 	pinfo->slave.init_done = 1;
 
 	ODP_DBG("%s started.\n", pktio_entry->s.name);
@@ -339,15 +338,15 @@ static int ipc_pktio_open(odp_pktio_t id ODP_UNUSED,
 	if (strncmp(dev, "ipc", 3))
 		return -1;
 
-	odp_atomic_init_u32(&pktio_entry->s.ipc.ready, 0);
+	odp_atomic_init_u32(&pktio_entry->ops_data(ipc).ready, 0);
 
-	pktio_entry->s.ipc.rx.cache = _ring_create("ipc_rx_cache",
+	pktio_entry->ops_data(ipc).rx.cache = _ring_create("ipc_rx_cache",
 						   PKTIO_IPC_ENTRIES,
 						   _RING_NO_LIST);
 
 	/* Shared info about remote pktio */
 	if (sscanf(dev, "ipc:%d:%s", &pid, tail) == 2) {
-		pktio_entry->s.ipc.type = PKTIO_TYPE_IPC_SLAVE;
+		pktio_entry->ops_data(ipc).type = PKTIO_TYPE_IPC_SLAVE;
 
 		snprintf(name, sizeof(name), "ipc:%s_info", tail);
 		IPC_ODP_DBG("lookup for name %s for pid %d\n", name, pid);
@@ -360,12 +359,12 @@ static int ipc_pktio_open(odp_pktio_t id ODP_UNUSED,
 			odp_shm_free(shm);
 			return -1;
 		}
-		pktio_entry->s.ipc.pinfo = pinfo;
-		pktio_entry->s.ipc.pinfo_shm = shm;
+		pktio_entry->ops_data(ipc).pinfo = pinfo;
+		pktio_entry->ops_data(ipc).pinfo_shm = shm;
 		ODP_DBG("process %d is slave\n", getpid());
 		ret = _ipc_init_slave(name, pktio_entry, pool);
 	} else {
-		pktio_entry->s.ipc.type = PKTIO_TYPE_IPC_MASTER;
+		pktio_entry->ops_data(ipc).type = PKTIO_TYPE_IPC_MASTER;
 		snprintf(name, sizeof(name), "%s_info", dev);
 		shm = odp_shm_reserve(name, sizeof(struct pktio_info),
 				      ODP_CACHE_LINE_SIZE,
@@ -378,8 +377,8 @@ static int ipc_pktio_open(odp_pktio_t id ODP_UNUSED,
 		pinfo = odp_shm_addr(shm);
 		pinfo->master.init_done = 0;
 		pinfo->master.pool_name[0] = 0;
-		pktio_entry->s.ipc.pinfo = pinfo;
-		pktio_entry->s.ipc.pinfo_shm = shm;
+		pktio_entry->ops_data(ipc).pinfo = pinfo;
+		pktio_entry->ops_data(ipc).pinfo_shm = shm;
 		ODP_DBG("process %d is master\n", getpid());
 		ret = _ipc_init_master(pktio_entry, dev, pool);
 	}
@@ -399,7 +398,7 @@ static void _ipc_free_ring_packets(pktio_entry_t *pktio_entry, _ring_t *r)
 	if (!r)
 		return;
 
-	pool = pool_entry_from_hdl(pktio_entry->s.ipc.pool);
+	pool = pool_entry_from_hdl(pktio_entry->ops_data(ipc).pool);
 	addr = odp_shm_addr(pool->shm);
 
 	rbuf_p = (void *)&offsets;
@@ -433,16 +432,16 @@ static int ipc_pktio_recv_lockless(pktio_entry_t *pktio_entry,
 	uint32_t ready;
 	int pkts_ring;
 
-	ready = odp_atomic_load_u32(&pktio_entry->s.ipc.ready);
+	ready = odp_atomic_load_u32(&pktio_entry->ops_data(ipc).ready);
 	if (odp_unlikely(!ready)) {
 		IPC_ODP_DBG("start pktio is missing before usage?\n");
 		return 0;
 	}
 
-	_ipc_free_ring_packets(pktio_entry, pktio_entry->s.ipc.tx.free);
+	_ipc_free_ring_packets(pktio_entry, pktio_entry->ops_data(ipc).tx.free);
 
 	/* rx from cache */
-	r = pktio_entry->s.ipc.rx.cache;
+	r = pktio_entry->ops_data(ipc).rx.cache;
 	pkts = _ring_mc_dequeue_burst(r, ipcbufs_p, len);
 	if (odp_unlikely(pkts < 0))
 		ODP_ABORT("internal error dequeue\n");
@@ -450,7 +449,7 @@ static int ipc_pktio_recv_lockless(pktio_entry_t *pktio_entry,
 	/* rx from other app */
 	if (pkts == 0) {
 		ipcbufs_p = (void *)&offsets[0];
-		r = pktio_entry->s.ipc.rx.recv;
+		r = pktio_entry->ops_data(ipc).rx.recv;
 		pkts = _ring_mc_dequeue_burst(r, ipcbufs_p, len);
 		if (odp_unlikely(pkts < 0))
 			ODP_ABORT("internal error dequeue\n");
@@ -468,10 +467,10 @@ static int ipc_pktio_recv_lockless(pktio_entry_t *pktio_entry,
 		uint64_t data_pool_off;
 		void *rmt_data_ptr;
 
-		phdr = (void *)((uint8_t *)pktio_entry->s.ipc.pool_mdata_base +
-				offsets[i]);
+		phdr = (void *)((uint8_t *)pktio_entry->
+				ops_data(ipc).pool_mdata_base + offsets[i]);
 
-		pool = pktio_entry->s.ipc.pool;
+		pool = pktio_entry->ops_data(ipc).pool;
 		if (odp_unlikely(pool == ODP_POOL_INVALID))
 			ODP_ABORT("invalid pool");
 
@@ -496,11 +495,11 @@ static int ipc_pktio_recv_lockless(pktio_entry_t *pktio_entry,
 		if (odp_unlikely(!pkt_data))
 			ODP_ABORT("unable to map pkt_data ipc_slave %d\n",
 				  (PKTIO_TYPE_IPC_SLAVE ==
-					pktio_entry->s.ipc.type));
+					pktio_entry->ops_data(ipc).type));
 
 		/* Copy packet data from shared pool to local pool. */
-		rmt_data_ptr = (uint8_t *)pktio_entry->s.ipc.pool_mdata_base +
-			       data_pool_off;
+		rmt_data_ptr = (uint8_t *)pktio_entry->
				ops_data(ipc).pool_mdata_base + data_pool_off;
 		memcpy(pkt_data, rmt_data_ptr, phdr->frame_len);
 
 		/* Copy packets L2, L3 parsed offsets and size */
@@ -519,7 +518,7 @@ static int ipc_pktio_recv_lockless(pktio_entry_t *pktio_entry,
 	/* put back to rx ring dequed but not processed packets*/
 	if (pkts != i) {
 		ipcbufs_p = (void *)&offsets[i];
-		r_p = pktio_entry->s.ipc.rx.cache;
+		r_p = pktio_entry->ops_data(ipc).rx.cache;
 		pkts_ring = _ring_mp_enqueue_burst(r_p, ipcbufs_p, pkts - i);
 
 		if (pkts_ring != (pkts - i))
@@ -534,7 +533,7 @@ static int ipc_pktio_recv_lockless(pktio_entry_t *pktio_entry,
 	pkts = i;
 
 	/* Now tell other process that we no longer need that buffers.*/
-	r_p = pktio_entry->s.ipc.rx.free;
+	r_p = pktio_entry->ops_data(ipc).rx.free;
 
 repeat:
 
@@ -582,7 +581,7 @@ static int ipc_pktio_send_lockless(pktio_entry_t *pktio_entry,
 	void **rbuf_p;
 	int ret;
 	int i;
-	uint32_t ready = odp_atomic_load_u32(&pktio_entry->s.ipc.ready);
+	uint32_t ready = odp_atomic_load_u32(&pktio_entry->ops_data(ipc).ready);
 	odp_packet_t pkt_table_mapped[len]; /**< Ready to send packet has to be
 					     * in memory mapped pool. */
 	uintptr_t offsets[len];
@@ -590,14 +589,15 @@ static int ipc_pktio_send_lockless(pktio_entry_t *pktio_entry,
 	if (odp_unlikely(!ready))
 		return 0;
 
-	_ipc_free_ring_packets(pktio_entry, pktio_entry->s.ipc.tx.free);
+	_ipc_free_ring_packets(pktio_entry, pktio_entry->ops_data(ipc).tx.free);
 
 	/* Copy packets to shm shared pool if they are in different
 	 * pool, or if they are references (we can't share across IPC).
 	 */
 	for (i = 0; i < len; i++) {
 		odp_packet_t pkt = pkt_table[i];
-		pool_t *ipc_pool = pool_entry_from_hdl(pktio_entry->s.ipc.pool);
+		pool_t *ipc_pool = pool_entry_from_hdl(
+			pktio_entry->ops_data(ipc).pool);
 		odp_packet_hdr_t *pkt_hdr;
 		pool_t *pool;
 
@@ -608,7 +608,8 @@ static int ipc_pktio_send_lockless(pktio_entry_t *pktio_entry,
 		    odp_packet_has_ref(pkt)) {
 			odp_packet_t newpkt;
 
-			newpkt = odp_packet_copy(pkt, pktio_entry->s.ipc.pool);
+			newpkt = odp_packet_copy(
+				pkt, pktio_entry->ops_data(ipc).pool);
 			if (newpkt == ODP_PACKET_INVALID)
 				ODP_ABORT("Unable to copy packet\n");
 
@@ -640,18 +641,19 @@ static int ipc_pktio_send_lockless(pktio_entry_t *pktio_entry,
 			odp_packet_to_u64(pkt), odp_pool_to_u64(pool_hdl),
 			pkt_hdr, pkt_hdr->buf_hdr.ipc_data_offset,
 			offsets[i], odp_shm_addr(pool->shm),
-			odp_shm_addr(pool_entry_from_hdl(
-				pktio_entry->s.ipc.pool)->shm));
+			odp_shm_addr(pool_entry_from_hdl(pktio_entry->
+				ops_data(ipc).pool)->shm));
 	}
 
 	/* Put packets to ring to be processed by other process.
 	 */
 	rbuf_p = (void *)&offsets[0];
-	r = pktio_entry->s.ipc.tx.send;
+	r = pktio_entry->ops_data(ipc).tx.send;
 	ret = _ring_mp_enqueue_burst(r, rbuf_p, len);
 	if (odp_unlikely(ret < 0)) {
 		ODP_ERR("pid %d odp_ring_mp_enqueue_bulk fail, ipc_slave %d, ret %d\n",
 			getpid(),
-			(PKTIO_TYPE_IPC_SLAVE == pktio_entry->s.ipc.type),
+			(PKTIO_TYPE_IPC_SLAVE ==
+				pktio_entry->ops_data(ipc).type),
 			ret);
 		ODP_ERR("odp_ring_full: %d, odp_ring_count %d, _ring_free_count %d\n",
 			_ring_full(r), _ring_count(r),
@@ -691,14 +693,15 @@ static int ipc_mac_addr_get(pktio_entry_t *pktio_entry ODP_UNUSED,
 
 static int ipc_start(pktio_entry_t *pktio_entry)
 {
-	uint32_t ready = odp_atomic_load_u32(&pktio_entry->s.ipc.ready);
+	uint32_t ready = odp_atomic_load_u32(
+		&pktio_entry->ops_data(ipc).ready);
 
 	if (ready) {
 		ODP_ABORT("%s Already started\n", pktio_entry->s.name);
 		return -1;
 	}
 
-	if (pktio_entry->s.ipc.type == PKTIO_TYPE_IPC_MASTER)
+	if (pktio_entry->ops_data(ipc).type == PKTIO_TYPE_IPC_MASTER)
 		return _ipc_master_start(pktio_entry);
 	else
 		return _ipc_slave_start(pktio_entry);
@@ -708,20 +711,22 @@ static int ipc_stop(pktio_entry_t *pktio_entry)
 {
 	unsigned tx_send = 0, tx_free = 0;
 
-	odp_atomic_store_u32(&pktio_entry->s.ipc.ready, 0);
+	odp_atomic_store_u32(&pktio_entry->ops_data(ipc).ready, 0);
 
-	if (pktio_entry->s.ipc.tx.send)
-		_ipc_free_ring_packets(pktio_entry, pktio_entry->s.ipc.tx.send);
+	if (pktio_entry->ops_data(ipc).tx.send)
+		_ipc_free_ring_packets(pktio_entry,
+				       pktio_entry->ops_data(ipc).tx.send);
 	/* other process can transfer packets from one ring to
 	 * other, use delay here to free that packets. */
 	sleep(1);
-	if (pktio_entry->s.ipc.tx.free)
-		_ipc_free_ring_packets(pktio_entry, pktio_entry->s.ipc.tx.free);
-
-	if (pktio_entry->s.ipc.tx.send)
-		tx_send = _ring_count(pktio_entry->s.ipc.tx.send);
-	if (pktio_entry->s.ipc.tx.free)
-		tx_free = _ring_count(pktio_entry->s.ipc.tx.free);
+	if (pktio_entry->ops_data(ipc).tx.free)
+		_ipc_free_ring_packets(pktio_entry,
+				       pktio_entry->ops_data(ipc).tx.free);
+
+	if (pktio_entry->ops_data(ipc).tx.send)
+		tx_send = _ring_count(pktio_entry->ops_data(ipc).tx.send);
+	if (pktio_entry->ops_data(ipc).tx.free)
+		tx_free = _ring_count(pktio_entry->ops_data(ipc).tx.free);
 
 	if (tx_send | tx_free) {
 		ODP_DBG("IPC rings: tx send %d tx free %d\n",
 			tx_send, tx_free);
@@ -740,7 +745,7 @@ static int ipc_close(pktio_entry_t *pktio_entry)
 
 	ipc_stop(pktio_entry);
 
-	odp_shm_free(pktio_entry->s.ipc.remote_pool_shm);
+	odp_shm_free(pktio_entry->ops_data(ipc).remote_pool_shm);
 
 	if (sscanf(dev, "ipc:%d:%s", &pid, tail) == 2)
 		snprintf(name, sizeof(name), "ipc:%s", tail);
@@ -748,7 +753,7 @@ static int ipc_close(pktio_entry_t *pktio_entry)
 		snprintf(name, sizeof(name), "%s", dev);
 
 	/* unlink this pktio info for both master and slave */
-	odp_shm_free(pktio_entry->s.ipc.pinfo_shm);
+	odp_shm_free(pktio_entry->ops_data(ipc).pinfo_shm);
 
 	/* destroy rings */
 	snprintf(ipc_shm_name, sizeof(ipc_shm_name), "%s_s_cons", name);
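
For readers new to this pktio, the struct pktio_info carried over unchanged
into the new header is the shared-memory rendezvous point between the two
processes: the master publishes its pool parameters and raises
master.init_done, the slave maps the same shm block, records its pid and
raises slave.init_done, which _ipc_master_start() above polls before marking
the pktio ready. The sketch below illustrates only that handshake ordering;
it fakes the shared mapping with a static variable to stay self-contained,
so POOL_NAME_LEN, the field subset, and the literal values are illustrative
stand-ins, not the real header.

	#include <stdint.h>
	#include <stdio.h>
	#include <string.h>

	#define POOL_NAME_LEN 32	/* stand-in for ODP_POOL_NAME_LEN */

	struct pktio_info {
		struct {
			uint32_t block_size;
			char pool_name[POOL_NAME_LEN];
			int init_done;	/* master finished shared setup */
		} master;
		struct {
			int pid;	/* lets master find slave's shm */
			int init_done;
		} slave;
	};

	/* In ODP this lives in a shared memory block from odp_shm_reserve();
	 * a static variable suffices to show the handshake ordering. */
	static struct pktio_info pinfo;

	int main(void)
	{
		/* Master: publish pool parameters, then raise the flag. */
		pinfo.master.block_size = 2048;
		strcpy(pinfo.master.pool_name, "ipc_pool");
		pinfo.master.init_done = 1;

		/* Slave: proceed only once the master flag is up, then
		 * publish its own identity for the master to look up. */
		if (pinfo.master.init_done) {
			pinfo.slave.pid = 1234;	/* getpid() in the real code */
			pinfo.slave.init_done = 1;
		}

		/* The master start path then polls slave.init_done, as in
		 * _ipc_master_start() above. */
		printf("handshake done, pool %s\n", pinfo.master.pool_name);
		return 0;
	}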