From patchwork Wed Aug 23 06:00:02 2017
Content-Type: text/plain; charset="utf-8"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
X-Patchwork-Submitter: Github ODP bot
X-Patchwork-Id: 110716
From: Github ODP bot
To: lng-odp@lists.linaro.org
Date: Wed, 23 Aug 2017 09:00:02 +0300
Message-Id: <1503468007-12911-6-git-send-email-odpbot@yandex.ru>
X-Mailer: git-send-email 1.9.1
In-Reply-To: <1503468007-12911-1-git-send-email-odpbot@yandex.ru>
References: <1503468007-12911-1-git-send-email-odpbot@yandex.ru>
Github-pr-num: 139
Subject: [lng-odp] [PATCH CLOUD-DEV v1 5/10] linux-gen: pktio: ipc: minor code refactor
List-Id: "The OpenDataPlane (ODP) List"

From: Yi He

Move the implementation-specific IPC data structures into a dedicated
header file.
Signed-off-by: Yi He
---
/** Email created from pull request 139 (heyi-linaro:modular-pktio-ops)
 ** https://github.com/Linaro/odp/pull/139
 ** Patch: https://github.com/Linaro/odp/pull/139.patch
 ** Base sha: 3cdc8a07993b93b145dc3b4a373b5ebc33bae882
 ** Merge commit sha: a87822f157888ef3269a7b8b3f38c63ecc111669
 **/
 platform/linux-generic/Makefile.am                  |   2 +-
 .../linux-generic/include/odp_packet_io_internal.h  |  39 -----
 .../include/odp_packet_io_ipc_internal.h            |  48 ------
 platform/linux-generic/include/odp_pktio_ops_ipc.h  |  91 ++++++++++
 .../include/odp_pktio_ops_subsystem.h               |   2 +
 platform/linux-generic/odp_packet_io.c              |   1 -
 platform/linux-generic/pktio/ipc.c                  | 185 +++++++++++----------
 7 files changed, 189 insertions(+), 179 deletions(-)
 delete mode 100644 platform/linux-generic/include/odp_packet_io_ipc_internal.h
 create mode 100644 platform/linux-generic/include/odp_pktio_ops_ipc.h

diff --git a/platform/linux-generic/Makefile.am b/platform/linux-generic/Makefile.am
index 7dfbfed2..3e2b8756 100644
--- a/platform/linux-generic/Makefile.am
+++ b/platform/linux-generic/Makefile.am
@@ -184,12 +184,12 @@ noinst_HEADERS = \
		  ${srcdir}/include/odp_name_table_internal.h \
		  ${srcdir}/include/odp_packet_internal.h \
		  ${srcdir}/include/odp_packet_io_internal.h \
-		  ${srcdir}/include/odp_packet_io_ipc_internal.h \
		  ${srcdir}/include/odp_packet_io_ring_internal.h \
		  ${srcdir}/include/odp_packet_netmap.h \
		  ${srcdir}/include/odp_packet_dpdk.h \
		  ${srcdir}/include/odp_packet_socket.h \
		  ${srcdir}/include/odp_packet_tap.h \
+		  ${srcdir}/include/odp_pktio_ops_ipc.h \
		  ${srcdir}/include/odp_pktio_ops_loopback.h \
		  ${srcdir}/include/odp_pktio_ops_subsystem.h \
		  ${srcdir}/include/odp_pkt_queue_internal.h \
diff --git a/platform/linux-generic/include/odp_packet_io_internal.h b/platform/linux-generic/include/odp_packet_io_internal.h
index 7ad3ab9c..3646dbac 100644
--- a/platform/linux-generic/include/odp_packet_io_internal.h
+++ b/platform/linux-generic/include/odp_packet_io_internal.h
@@ -66,44 +66,6 @@ typedef struct {
 } pkt_pcap_t;
 #endif
 
-typedef struct {
-	/* TX */
-	struct {
-		_ring_t *send; /**< ODP ring for IPC msg packets
-				    indexes transmitted to shared
-				    memory */
-		_ring_t *free; /**< ODP ring for IPC msg packets
-				    indexes already processed by remote
-				    process */
-	} tx;
-	/* RX */
-	struct {
-		_ring_t *recv; /**< ODP ring for IPC msg packets
-				    indexes received from shared
-				    memory (from remote process) */
-		_ring_t *free; /**< odp ring for ipc msg packets
-				    indexes already processed by
-				    current process */
-		_ring_t *cache; /**< local cache to keep packet order right */
-	} rx; /* slave */
-	void *pool_base;		/**< Remote pool base addr */
-	void *pool_mdata_base;		/**< Remote pool mdata base addr */
-	uint64_t pkt_size;		/**< Packet size in remote pool */
-	odp_pool_t pool;		/**< Pool of main process */
-	enum {
-		PKTIO_TYPE_IPC_MASTER = 0, /**< Master is the process which
-						creates shm */
-		PKTIO_TYPE_IPC_SLAVE	   /**< Slave is the process which
-						connects to shm */
-	} type; /**< define if it's master or slave process */
-	odp_atomic_u32_t ready; /**< 1 - pktio is ready and can recv/send
-				     packet, 0 - not yet ready */
-	void *pinfo;
-	odp_shm_t pinfo_shm;
-	odp_shm_t remote_pool_shm; /**< shm of remote pool get with
-					_ipc_map_remote_pool() */
-} _ipc_pktio_t;
-
 struct pktio_entry {
	const pktio_ops_module_t *ops; /**< Implementation specific methods */
	pktio_ops_data_t ops_data;	/**< IO operation specific data */
@@ -122,7 +84,6 @@ struct pktio_entry {
	pkt_pcap_t pkt_pcap;		/**< Using pcap for IO */
 #endif
	pkt_tap_t pkt_tap;		/**< using TAP for IO */
-	_ipc_pktio_t ipc;		/**< IPC pktio data */
 };
 
 enum {
	/* Not allocated */
diff --git a/platform/linux-generic/include/odp_packet_io_ipc_internal.h b/platform/linux-generic/include/odp_packet_io_ipc_internal.h
deleted file mode 100644
index 9d8943a6..00000000
--- a/platform/linux-generic/include/odp_packet_io_ipc_internal.h
+++ /dev/null
@@ -1,48 +0,0 @@
-/* Copyright (c) 2015, Linaro Limited
- * All rights reserved.
- *
- * SPDX-License-Identifier: BSD-3-Clause
- */
-
-#include
-#include
-#include
-#include
-#include
-#include
-
-#include
-#include
-#include
-
-/* IPC packet I/O over shared memory ring */
-#include
-
-/* number of odp buffers in odp ring queue */
-#define PKTIO_IPC_ENTRIES 4096
-
-/* that struct is exported to shared memory, so that processes can find
- * each other.
- */
-struct pktio_info {
-	struct {
-		/* number of buffer*/
-		int num;
-		/* size of packet/segment in remote pool */
-		uint32_t block_size;
-		char pool_name[ODP_POOL_NAME_LEN];
-		/* 1 if master finished creation of all shared objects */
-		int init_done;
-	} master;
-	struct {
-		void *base_addr;
-		uint32_t block_size;
-		char pool_name[ODP_POOL_NAME_LEN];
-		/* pid of the slave process written to shm and
-		 * used by master to look up memory created by
-		 * slave
-		 */
-		int pid;
-		int init_done;
-	} slave;
-} ODP_PACKED;
diff --git a/platform/linux-generic/include/odp_pktio_ops_ipc.h b/platform/linux-generic/include/odp_pktio_ops_ipc.h
new file mode 100644
index 00000000..5903ef9b
--- /dev/null
+++ b/platform/linux-generic/include/odp_pktio_ops_ipc.h
@@ -0,0 +1,91 @@
+/* Copyright (c) 2015, Linaro Limited
+ * All rights reserved.
+ *
+ * SPDX-License-Identifier: BSD-3-Clause
+ */
+
+#ifndef ODP_PKTIO_OPS_IPC_H_
+#define ODP_PKTIO_OPS_IPC_H_
+
+#include
+#include
+#include
+#include
+#include
+#include
+
+#include
+#include
+#include
+
+/* IPC packet I/O over shared memory ring */
+#include
+
+/* number of odp buffers in odp ring queue */
+#define PKTIO_IPC_ENTRIES 4096
+
+/* that struct is exported to shared memory, so that processes can find
+ * each other.
+ */
+struct pktio_info {
+	struct {
+		/* number of buffer*/
+		int num;
+		/* size of packet/segment in remote pool */
+		uint32_t block_size;
+		char pool_name[ODP_POOL_NAME_LEN];
+		/* 1 if master finished creation of all shared objects */
+		int init_done;
+	} master;
+	struct {
+		void *base_addr;
+		uint32_t block_size;
+		char pool_name[ODP_POOL_NAME_LEN];
+		/* pid of the slave process written to shm and
+		 * used by master to look up memory created by
+		 * slave
+		 */
+		int pid;
+		int init_done;
+	} slave;
+} ODP_PACKED;
+
+typedef struct {
+	/* TX */
+	struct {
+		_ring_t *send; /**< ODP ring for IPC msg packets
+				    indexes transmitted to shared
+				    memory */
+		_ring_t *free; /**< ODP ring for IPC msg packets
+				    indexes already processed by remote
+				    process */
+	} tx;
+	/* RX */
+	struct {
+		_ring_t *recv; /**< ODP ring for IPC msg packets
+				    indexes received from shared
+				    memory (from remote process) */
+		_ring_t *free; /**< odp ring for ipc msg packets
+				    indexes already processed by
+				    current process */
+		_ring_t *cache; /**< local cache to keep packet order right */
+	} rx; /* slave */
+	void *pool_base;		/**< Remote pool base addr */
+	void *pool_mdata_base;		/**< Remote pool mdata base addr */
+	uint64_t pkt_size;		/**< Packet size in remote pool */
+	odp_pool_t pool;		/**< Pool of main process */
+	enum {
+		PKTIO_TYPE_IPC_MASTER = 0, /**< Master is the process which
+						creates shm */
+		PKTIO_TYPE_IPC_SLAVE	   /**< Slave is the process which
+						connects to shm */
+	} type; /**< define if it's master or slave process */
+	odp_atomic_u32_t ready; /**< 1 - pktio is ready and can recv/send
+				     packet, 0 - not yet ready */
+	void *pinfo;
+	odp_shm_t pinfo_shm;
+	odp_shm_t remote_pool_shm; /**< shm of remote pool get with
+					_ipc_map_remote_pool() */
+} pktio_ops_ipc_data_t;
+
+#endif
diff --git a/platform/linux-generic/include/odp_pktio_ops_subsystem.h b/platform/linux-generic/include/odp_pktio_ops_subsystem.h
index 7b90ed3d..650b3664 100644
--- a/platform/linux-generic/include/odp_pktio_ops_subsystem.h
+++ b/platform/linux-generic/include/odp_pktio_ops_subsystem.h
@@ -79,12 +79,14 @@ typedef ODP_MODULE_CLASS(pktio_ops) {
 } pktio_ops_module_t;
 
 /* All implementations of this subsystem */
+#include
 #include
 
 /* Per implementation private data
  * TODO: refactory each implementation to hide it internally
  */
 typedef union {
+	pktio_ops_ipc_data_t ipc;
	pktio_ops_loopback_data_t loopback;
 } pktio_ops_data_t;
diff --git a/platform/linux-generic/odp_packet_io.c b/platform/linux-generic/odp_packet_io.c
index 3e47aac9..bdcf524c 100644
--- a/platform/linux-generic/odp_packet_io.c
+++ b/platform/linux-generic/odp_packet_io.c
@@ -19,7 +19,6 @@
 #include
 #include
 #include
-#include
 #include
 #include
diff --git a/platform/linux-generic/pktio/ipc.c b/platform/linux-generic/pktio/ipc.c
index 984f0ab4..14cd86eb 100644
--- a/platform/linux-generic/pktio/ipc.c
+++ b/platform/linux-generic/pktio/ipc.c
@@ -3,7 +3,6 @@
  *
  * SPDX-License-Identifier: BSD-3-Clause
  */
-#include
 #include
 #include
 #include
@@ -43,7 +42,7 @@ static const char *_ipc_odp_buffer_pool_shm_name(odp_pool_t pool_hdl)
 
 static int _ipc_master_start(pktio_entry_t *pktio_entry)
 {
-	struct pktio_info *pinfo = pktio_entry->s.ipc.pinfo;
+	struct pktio_info *pinfo = pktio_entry->ops_data(ipc).pinfo;
	odp_shm_t shm;
 
	if (pinfo->slave.init_done == 0)
@@ -57,11 +56,11 @@ static int _ipc_master_start(pktio_entry_t *pktio_entry)
		return -1;
	}
 
-	pktio_entry->s.ipc.remote_pool_shm = shm;
-	pktio_entry->s.ipc.pool_base = odp_shm_addr(shm);
-	pktio_entry->s.ipc.pool_mdata_base = (char *)odp_shm_addr(shm);
+	pktio_entry->ops_data(ipc).remote_pool_shm = shm;
+	pktio_entry->ops_data(ipc).pool_base = odp_shm_addr(shm);
+	pktio_entry->ops_data(ipc).pool_mdata_base = (char *)odp_shm_addr(shm);
 
-	odp_atomic_store_u32(&pktio_entry->s.ipc.ready, 1);
+	odp_atomic_store_u32(&pktio_entry->ops_data(ipc).ready, 1);
 
	IPC_ODP_DBG("%s started.\n", pktio_entry->s.name);
	return 0;
@@ -88,62 +87,62 @@ static int _ipc_init_master(pktio_entry_t *pktio_entry,
	 * to be processed packets ring.
	 */
	snprintf(ipc_shm_name, sizeof(ipc_shm_name), "%s_m_prod", dev);
-	pktio_entry->s.ipc.tx.send = _ring_create(ipc_shm_name,
+	pktio_entry->ops_data(ipc).tx.send = _ring_create(ipc_shm_name,
			PKTIO_IPC_ENTRIES,
			_RING_SHM_PROC | _RING_NO_LIST);
-	if (!pktio_entry->s.ipc.tx.send) {
+	if (!pktio_entry->ops_data(ipc).tx.send) {
		ODP_ERR("pid %d unable to create ipc ring %s name\n",
			getpid(), ipc_shm_name);
		return -1;
	}
	ODP_DBG("Created IPC ring: %s, count %d, free %d\n",
-		ipc_shm_name, _ring_count(pktio_entry->s.ipc.tx.send),
-		_ring_free_count(pktio_entry->s.ipc.tx.send));
+		ipc_shm_name, _ring_count(pktio_entry->ops_data(ipc).tx.send),
+		_ring_free_count(pktio_entry->ops_data(ipc).tx.send));
 
	/* generate name in shm like ipc_pktio_p for
	 * already processed packets
	 */
	snprintf(ipc_shm_name, sizeof(ipc_shm_name), "%s_m_cons", dev);
-	pktio_entry->s.ipc.tx.free = _ring_create(ipc_shm_name,
+	pktio_entry->ops_data(ipc).tx.free = _ring_create(ipc_shm_name,
			PKTIO_IPC_ENTRIES,
			_RING_SHM_PROC | _RING_NO_LIST);
-	if (!pktio_entry->s.ipc.tx.free) {
+	if (!pktio_entry->ops_data(ipc).tx.free) {
		ODP_ERR("pid %d unable to create ipc ring %s name\n",
			getpid(), ipc_shm_name);
		goto free_m_prod;
	}
	ODP_DBG("Created IPC ring: %s, count %d, free %d\n",
-		ipc_shm_name, _ring_count(pktio_entry->s.ipc.tx.free),
-		_ring_free_count(pktio_entry->s.ipc.tx.free));
+		ipc_shm_name, _ring_count(pktio_entry->ops_data(ipc).tx.free),
+		_ring_free_count(pktio_entry->ops_data(ipc).tx.free));
 
	snprintf(ipc_shm_name, sizeof(ipc_shm_name), "%s_s_prod", dev);
-	pktio_entry->s.ipc.rx.recv = _ring_create(ipc_shm_name,
+	pktio_entry->ops_data(ipc).rx.recv = _ring_create(ipc_shm_name,
			PKTIO_IPC_ENTRIES,
			_RING_SHM_PROC | _RING_NO_LIST);
-	if (!pktio_entry->s.ipc.rx.recv) {
+	if (!pktio_entry->ops_data(ipc).rx.recv) {
		ODP_ERR("pid %d unable to create ipc ring %s name\n",
			getpid(), ipc_shm_name);
		goto free_m_cons;
	}
	ODP_DBG("Created IPC ring: %s, count %d, free %d\n",
-		ipc_shm_name, _ring_count(pktio_entry->s.ipc.rx.recv),
-		_ring_free_count(pktio_entry->s.ipc.rx.recv));
+		ipc_shm_name, _ring_count(pktio_entry->ops_data(ipc).rx.recv),
+		_ring_free_count(pktio_entry->ops_data(ipc).rx.recv));
 
	snprintf(ipc_shm_name, sizeof(ipc_shm_name), "%s_s_cons", dev);
-	pktio_entry->s.ipc.rx.free = _ring_create(ipc_shm_name,
+	pktio_entry->ops_data(ipc).rx.free = _ring_create(ipc_shm_name,
			PKTIO_IPC_ENTRIES,
			_RING_SHM_PROC | _RING_NO_LIST);
-	if (!pktio_entry->s.ipc.rx.free) {
+	if (!pktio_entry->ops_data(ipc).rx.free) {
		ODP_ERR("pid %d unable to create ipc ring %s name\n",
			getpid(), ipc_shm_name);
		goto free_s_prod;
	}
	ODP_DBG("Created IPC ring: %s, count %d, free %d\n",
-		ipc_shm_name, _ring_count(pktio_entry->s.ipc.rx.free),
-		_ring_free_count(pktio_entry->s.ipc.rx.free));
+		ipc_shm_name, _ring_count(pktio_entry->ops_data(ipc).rx.free),
+		_ring_free_count(pktio_entry->ops_data(ipc).rx.free));
 
	/* Set up pool name for remote info */
-	pinfo = pktio_entry->s.ipc.pinfo;
+	pinfo = pktio_entry->ops_data(ipc).pinfo;
	pool_name = _ipc_odp_buffer_pool_shm_name(pool_hdl);
	if (strlen(pool_name) > ODP_POOL_NAME_LEN) {
		ODP_ERR("pid %d ipc pool name %s is too big %d\n",
@@ -156,7 +155,7 @@ static int _ipc_init_master(pktio_entry_t *pktio_entry,
	pinfo->slave.pid = 0;
	pinfo->slave.init_done = 0;
 
-	pktio_entry->s.ipc.pool = pool_hdl;
+	pktio_entry->ops_data(ipc).pool = pool_hdl;
 
	ODP_DBG("Pre init... DONE.\n");
	pinfo->master.init_done = 1;
@@ -225,7 +224,7 @@ static int _ipc_init_slave(const char *dev,
	if (strlen(dev) > (ODP_POOL_NAME_LEN - sizeof("_slave_r")))
		ODP_ABORT("too big ipc name\n");
 
-	pktio_entry->s.ipc.pool = pool;
+	pktio_entry->ops_data(ipc).pool = pool;
	return 0;
 }
@@ -246,61 +245,61 @@ static int _ipc_slave_start(pktio_entry_t *pktio_entry)
	sprintf(dev, "ipc:%s", tail);
 
	snprintf(ipc_shm_name, sizeof(ipc_shm_name), "%s_m_prod", dev);
-	pktio_entry->s.ipc.rx.recv = _ipc_shm_map(ipc_shm_name, pid);
-	if (!pktio_entry->s.ipc.rx.recv) {
+	pktio_entry->ops_data(ipc).rx.recv = _ipc_shm_map(ipc_shm_name, pid);
+	if (!pktio_entry->ops_data(ipc).rx.recv) {
		ODP_DBG("pid %d unable to find ipc ring %s name\n",
			getpid(), dev);
		sleep(1);
		return -1;
	}
	ODP_DBG("Connected IPC ring: %s, count %d, free %d\n",
-		ipc_shm_name, _ring_count(pktio_entry->s.ipc.rx.recv),
-		_ring_free_count(pktio_entry->s.ipc.rx.recv));
+		ipc_shm_name, _ring_count(pktio_entry->ops_data(ipc).rx.recv),
+		_ring_free_count(pktio_entry->ops_data(ipc).rx.recv));
 
	snprintf(ipc_shm_name, sizeof(ipc_shm_name), "%s_m_cons", dev);
-	pktio_entry->s.ipc.rx.free = _ipc_shm_map(ipc_shm_name, pid);
-	if (!pktio_entry->s.ipc.rx.free) {
+	pktio_entry->ops_data(ipc).rx.free = _ipc_shm_map(ipc_shm_name, pid);
+	if (!pktio_entry->ops_data(ipc).rx.free) {
		ODP_ERR("pid %d unable to find ipc ring %s name\n",
			getpid(), dev);
		goto free_m_prod;
	}
	ODP_DBG("Connected IPC ring: %s, count %d, free %d\n",
-		ipc_shm_name, _ring_count(pktio_entry->s.ipc.rx.free),
-		_ring_free_count(pktio_entry->s.ipc.rx.free));
+		ipc_shm_name, _ring_count(pktio_entry->ops_data(ipc).rx.free),
+		_ring_free_count(pktio_entry->ops_data(ipc).rx.free));
 
	snprintf(ipc_shm_name, sizeof(ipc_shm_name), "%s_s_prod", dev);
-	pktio_entry->s.ipc.tx.send = _ipc_shm_map(ipc_shm_name, pid);
-	if (!pktio_entry->s.ipc.tx.send) {
+	pktio_entry->ops_data(ipc).tx.send = _ipc_shm_map(ipc_shm_name, pid);
+	if (!pktio_entry->ops_data(ipc).tx.send) {
		ODP_ERR("pid %d unable to find ipc ring %s name\n",
			getpid(), dev);
		goto free_m_cons;
	}
	ODP_DBG("Connected IPC ring: %s, count %d, free %d\n",
-		ipc_shm_name, _ring_count(pktio_entry->s.ipc.tx.send),
-		_ring_free_count(pktio_entry->s.ipc.tx.send));
+		ipc_shm_name, _ring_count(pktio_entry->ops_data(ipc).tx.send),
+		_ring_free_count(pktio_entry->ops_data(ipc).tx.send));
 
	snprintf(ipc_shm_name, sizeof(ipc_shm_name), "%s_s_cons", dev);
-	pktio_entry->s.ipc.tx.free = _ipc_shm_map(ipc_shm_name, pid);
-	if (!pktio_entry->s.ipc.tx.free) {
+	pktio_entry->ops_data(ipc).tx.free = _ipc_shm_map(ipc_shm_name, pid);
+	if (!pktio_entry->ops_data(ipc).tx.free) {
		ODP_ERR("pid %d unable to find ipc ring %s name\n",
			getpid(), dev);
		goto free_s_prod;
	}
	ODP_DBG("Connected IPC ring: %s, count %d, free %d\n",
-		ipc_shm_name, _ring_count(pktio_entry->s.ipc.tx.free),
-		_ring_free_count(pktio_entry->s.ipc.tx.free));
+		ipc_shm_name, _ring_count(pktio_entry->ops_data(ipc).tx.free),
+		_ring_free_count(pktio_entry->ops_data(ipc).tx.free));
 
	/* Get info about remote pool */
-	pinfo = pktio_entry->s.ipc.pinfo;
+	pinfo = pktio_entry->ops_data(ipc).pinfo;
	shm = _ipc_map_remote_pool(pinfo->master.pool_name,
				   pid);
-	pktio_entry->s.ipc.remote_pool_shm = shm;
-	pktio_entry->s.ipc.pool_mdata_base = (char *)odp_shm_addr(shm);
-	pktio_entry->s.ipc.pkt_size = pinfo->master.block_size;
+	pktio_entry->ops_data(ipc).remote_pool_shm = shm;
+	pktio_entry->ops_data(ipc).pool_mdata_base = (char *)odp_shm_addr(shm);
+	pktio_entry->ops_data(ipc).pkt_size = pinfo->master.block_size;
 
-	_ipc_export_pool(pinfo, pktio_entry->s.ipc.pool);
+	_ipc_export_pool(pinfo, pktio_entry->ops_data(ipc).pool);
 
-	odp_atomic_store_u32(&pktio_entry->s.ipc.ready, 1);
+	odp_atomic_store_u32(&pktio_entry->ops_data(ipc).ready, 1);
	pinfo->slave.init_done = 1;
 
	ODP_DBG("%s started.\n", pktio_entry->s.name);
@@ -339,15 +338,15 @@ static int ipc_pktio_open(odp_pktio_t id ODP_UNUSED,
	if (strncmp(dev, "ipc", 3))
		return -1;
 
-	odp_atomic_init_u32(&pktio_entry->s.ipc.ready, 0);
+	odp_atomic_init_u32(&pktio_entry->ops_data(ipc).ready, 0);
 
-	pktio_entry->s.ipc.rx.cache = _ring_create("ipc_rx_cache",
+	pktio_entry->ops_data(ipc).rx.cache = _ring_create("ipc_rx_cache",
					PKTIO_IPC_ENTRIES,
					_RING_NO_LIST);
 
	/* Shared info about remote pktio */
	if (sscanf(dev, "ipc:%d:%s", &pid, tail) == 2) {
-		pktio_entry->s.ipc.type = PKTIO_TYPE_IPC_SLAVE;
+		pktio_entry->ops_data(ipc).type = PKTIO_TYPE_IPC_SLAVE;
 
		snprintf(name, sizeof(name), "ipc:%s_info", tail);
		IPC_ODP_DBG("lookup for name %s for pid %d\n", name, pid);
@@ -360,12 +359,12 @@ static int ipc_pktio_open(odp_pktio_t id ODP_UNUSED,
			odp_shm_free(shm);
			return -1;
		}
-		pktio_entry->s.ipc.pinfo = pinfo;
-		pktio_entry->s.ipc.pinfo_shm = shm;
+		pktio_entry->ops_data(ipc).pinfo = pinfo;
+		pktio_entry->ops_data(ipc).pinfo_shm = shm;
		ODP_DBG("process %d is slave\n", getpid());
		ret = _ipc_init_slave(name, pktio_entry, pool);
	} else {
-		pktio_entry->s.ipc.type = PKTIO_TYPE_IPC_MASTER;
+		pktio_entry->ops_data(ipc).type = PKTIO_TYPE_IPC_MASTER;
		snprintf(name, sizeof(name), "%s_info", dev);
		shm = odp_shm_reserve(name, sizeof(struct pktio_info),
				      ODP_CACHE_LINE_SIZE,
@@ -378,8 +377,8 @@ static int ipc_pktio_open(odp_pktio_t id ODP_UNUSED,
		pinfo = odp_shm_addr(shm);
		pinfo->master.init_done = 0;
		pinfo->master.pool_name[0] = 0;
-		pktio_entry->s.ipc.pinfo = pinfo;
-		pktio_entry->s.ipc.pinfo_shm = shm;
+		pktio_entry->ops_data(ipc).pinfo = pinfo;
+		pktio_entry->ops_data(ipc).pinfo_shm = shm;
		ODP_DBG("process %d is master\n", getpid());
		ret = _ipc_init_master(pktio_entry, dev, pool);
	}
@@ -399,7 +398,7 @@ static void _ipc_free_ring_packets(pktio_entry_t *pktio_entry, _ring_t *r)
	if (!r)
		return;
 
-	pool = pool_entry_from_hdl(pktio_entry->s.ipc.pool);
+	pool = pool_entry_from_hdl(pktio_entry->ops_data(ipc).pool);
	addr = odp_shm_addr(pool->shm);
 
	rbuf_p = (void *)&offsets;
@@ -433,16 +432,16 @@ static int ipc_pktio_recv_lockless(pktio_entry_t *pktio_entry,
	uint32_t ready;
	int pkts_ring;
 
-	ready = odp_atomic_load_u32(&pktio_entry->s.ipc.ready);
+	ready = odp_atomic_load_u32(&pktio_entry->ops_data(ipc).ready);
	if (odp_unlikely(!ready)) {
		IPC_ODP_DBG("start pktio is missing before usage?\n");
		return 0;
	}
 
-	_ipc_free_ring_packets(pktio_entry, pktio_entry->s.ipc.tx.free);
+	_ipc_free_ring_packets(pktio_entry, pktio_entry->ops_data(ipc).tx.free);
 
	/* rx from cache */
-	r = pktio_entry->s.ipc.rx.cache;
+	r = pktio_entry->ops_data(ipc).rx.cache;
	pkts = _ring_mc_dequeue_burst(r, ipcbufs_p, len);
	if (odp_unlikely(pkts < 0))
		ODP_ABORT("internal error dequeue\n");
@@ -450,7 +449,7 @@ static int ipc_pktio_recv_lockless(pktio_entry_t *pktio_entry,
	/* rx from other app */
	if (pkts == 0) {
		ipcbufs_p = (void *)&offsets[0];
-		r = pktio_entry->s.ipc.rx.recv;
+		r = pktio_entry->ops_data(ipc).rx.recv;
		pkts = _ring_mc_dequeue_burst(r, ipcbufs_p, len);
		if (odp_unlikely(pkts < 0))
			ODP_ABORT("internal error dequeue\n");
@@ -468,10 +467,10 @@ static int ipc_pktio_recv_lockless(pktio_entry_t *pktio_entry,
		uint64_t data_pool_off;
		void *rmt_data_ptr;
 
-		phdr = (void *)((uint8_t *)pktio_entry->s.ipc.pool_mdata_base +
-				offsets[i]);
+		phdr = (void *)((uint8_t *)pktio_entry->
+				ops_data(ipc).pool_mdata_base + offsets[i]);
 
-		pool = pktio_entry->s.ipc.pool;
+		pool = pktio_entry->ops_data(ipc).pool;
		if (odp_unlikely(pool == ODP_POOL_INVALID))
			ODP_ABORT("invalid pool");
@@ -496,11 +495,11 @@ static int ipc_pktio_recv_lockless(pktio_entry_t *pktio_entry,
		if (odp_unlikely(!pkt_data))
			ODP_ABORT("unable to map pkt_data ipc_slave %d\n",
				  (PKTIO_TYPE_IPC_SLAVE ==
-					pktio_entry->s.ipc.type));
+					pktio_entry->ops_data(ipc).type));
 
		/* Copy packet data from shared pool to local pool. */
-		rmt_data_ptr = (uint8_t *)pktio_entry->s.ipc.pool_mdata_base +
-			       data_pool_off;
+		rmt_data_ptr = (uint8_t *)pktio_entry->
				ops_data(ipc).pool_mdata_base + data_pool_off;
		memcpy(pkt_data, rmt_data_ptr, phdr->frame_len);
 
		/* Copy packets L2, L3 parsed offsets and size */
@@ -519,7 +518,7 @@ static int ipc_pktio_recv_lockless(pktio_entry_t *pktio_entry,
	/* put back to rx ring dequed but not processed packets*/
	if (pkts != i) {
		ipcbufs_p = (void *)&offsets[i];
-		r_p = pktio_entry->s.ipc.rx.cache;
+		r_p = pktio_entry->ops_data(ipc).rx.cache;
		pkts_ring = _ring_mp_enqueue_burst(r_p, ipcbufs_p, pkts - i);
 
		if (pkts_ring != (pkts - i))
@@ -534,7 +533,7 @@ static int ipc_pktio_recv_lockless(pktio_entry_t *pktio_entry,
	pkts = i;
 
	/* Now tell other process that we no longer need that buffers.*/
-	r_p = pktio_entry->s.ipc.rx.free;
+	r_p = pktio_entry->ops_data(ipc).rx.free;
 
 repeat:
@@ -582,7 +581,7 @@ static int ipc_pktio_send_lockless(pktio_entry_t *pktio_entry,
	void **rbuf_p;
	int ret;
	int i;
-	uint32_t ready = odp_atomic_load_u32(&pktio_entry->s.ipc.ready);
+	uint32_t ready = odp_atomic_load_u32(&pktio_entry->ops_data(ipc).ready);
	odp_packet_t pkt_table_mapped[len]; /**< Ready to send packet has to be
					     * in memory mapped pool. */
	uintptr_t offsets[len];
@@ -590,14 +589,15 @@ static int ipc_pktio_send_lockless(pktio_entry_t *pktio_entry,
	if (odp_unlikely(!ready))
		return 0;
 
-	_ipc_free_ring_packets(pktio_entry, pktio_entry->s.ipc.tx.free);
+	_ipc_free_ring_packets(pktio_entry, pktio_entry->ops_data(ipc).tx.free);
 
	/* Copy packets to shm shared pool if they are in different
	 * pool, or if they are references (we can't share across IPC).
	 */
	for (i = 0; i < len; i++) {
		odp_packet_t pkt = pkt_table[i];
-		pool_t *ipc_pool = pool_entry_from_hdl(pktio_entry->s.ipc.pool);
+		pool_t *ipc_pool = pool_entry_from_hdl(
			pktio_entry->ops_data(ipc).pool);
		odp_packet_hdr_t *pkt_hdr;
		pool_t *pool;
@@ -608,7 +608,8 @@ static int ipc_pktio_send_lockless(pktio_entry_t *pktio_entry,
		    odp_packet_has_ref(pkt)) {
			odp_packet_t newpkt;
 
-			newpkt = odp_packet_copy(pkt, pktio_entry->s.ipc.pool);
+			newpkt = odp_packet_copy(
+				pkt, pktio_entry->ops_data(ipc).pool);
			if (newpkt == ODP_PACKET_INVALID)
				ODP_ABORT("Unable to copy packet\n");
@@ -640,18 +641,19 @@ static int ipc_pktio_send_lockless(pktio_entry_t *pktio_entry,
			odp_packet_to_u64(pkt), odp_pool_to_u64(pool_hdl),
			pkt_hdr, pkt_hdr->buf_hdr.ipc_data_offset,
			offsets[i], odp_shm_addr(pool->shm),
-			odp_shm_addr(pool_entry_from_hdl(
-					pktio_entry->s.ipc.pool)->shm));
+			odp_shm_addr(pool_entry_from_hdl(pktio_entry->
					ops_data(ipc).pool)->shm));
	}
 
	/* Put packets to ring to be processed by other process. */
	rbuf_p = (void *)&offsets[0];
-	r = pktio_entry->s.ipc.tx.send;
+	r = pktio_entry->ops_data(ipc).tx.send;
	ret = _ring_mp_enqueue_burst(r, rbuf_p, len);
	if (odp_unlikely(ret < 0)) {
		ODP_ERR("pid %d odp_ring_mp_enqueue_bulk fail, ipc_slave %d, ret %d\n",
			getpid(),
-			(PKTIO_TYPE_IPC_SLAVE == pktio_entry->s.ipc.type),
+			(PKTIO_TYPE_IPC_SLAVE ==
+				pktio_entry->ops_data(ipc).type),
			ret);
		ODP_ERR("odp_ring_full: %d, odp_ring_count %d, _ring_free_count %d\n",
			_ring_full(r), _ring_count(r),
@@ -691,14 +693,15 @@ static int ipc_mac_addr_get(pktio_entry_t *pktio_entry ODP_UNUSED,
 
 static int ipc_start(pktio_entry_t *pktio_entry)
 {
-	uint32_t ready = odp_atomic_load_u32(&pktio_entry->s.ipc.ready);
+	uint32_t ready = odp_atomic_load_u32(
+		&pktio_entry->ops_data(ipc).ready);
 
	if (ready) {
		ODP_ABORT("%s Already started\n", pktio_entry->s.name);
		return -1;
	}
 
-	if (pktio_entry->s.ipc.type == PKTIO_TYPE_IPC_MASTER)
+	if (pktio_entry->ops_data(ipc).type == PKTIO_TYPE_IPC_MASTER)
		return _ipc_master_start(pktio_entry);
	else
		return _ipc_slave_start(pktio_entry);
@@ -708,20 +711,22 @@ static int ipc_stop(pktio_entry_t *pktio_entry)
 {
	unsigned tx_send = 0, tx_free = 0;
 
-	odp_atomic_store_u32(&pktio_entry->s.ipc.ready, 0);
+	odp_atomic_store_u32(&pktio_entry->ops_data(ipc).ready, 0);
 
-	if (pktio_entry->s.ipc.tx.send)
-		_ipc_free_ring_packets(pktio_entry, pktio_entry->s.ipc.tx.send);
+	if (pktio_entry->ops_data(ipc).tx.send)
+		_ipc_free_ring_packets(pktio_entry,
+				       pktio_entry->ops_data(ipc).tx.send);
	/* other process can transfer packets from one ring to
	 * other, use delay here to free that packets. */
	sleep(1);
-	if (pktio_entry->s.ipc.tx.free)
-		_ipc_free_ring_packets(pktio_entry, pktio_entry->s.ipc.tx.free);
-
-	if (pktio_entry->s.ipc.tx.send)
-		tx_send = _ring_count(pktio_entry->s.ipc.tx.send);
-	if (pktio_entry->s.ipc.tx.free)
-		tx_free = _ring_count(pktio_entry->s.ipc.tx.free);
+	if (pktio_entry->ops_data(ipc).tx.free)
+		_ipc_free_ring_packets(pktio_entry,
+				       pktio_entry->ops_data(ipc).tx.free);
+
+	if (pktio_entry->ops_data(ipc).tx.send)
+		tx_send = _ring_count(pktio_entry->ops_data(ipc).tx.send);
+	if (pktio_entry->ops_data(ipc).tx.free)
+		tx_free = _ring_count(pktio_entry->ops_data(ipc).tx.free);
 
	if (tx_send | tx_free) {
		ODP_DBG("IPC rings: tx send %d tx free %d\n",
			tx_send, tx_free);
@@ -740,7 +745,7 @@ static int ipc_close(pktio_entry_t *pktio_entry)
 
	ipc_stop(pktio_entry);
 
-	odp_shm_free(pktio_entry->s.ipc.remote_pool_shm);
+	odp_shm_free(pktio_entry->ops_data(ipc).remote_pool_shm);
 
	if (sscanf(dev, "ipc:%d:%s", &pid, tail) == 2)
		snprintf(name, sizeof(name), "ipc:%s", tail);
@@ -748,7 +753,7 @@
		snprintf(name, sizeof(name), "%s", dev);
 
	/* unlink this pktio info for both master and slave */
-	odp_shm_free(pktio_entry->s.ipc.pinfo_shm);
+	odp_shm_free(pktio_entry->ops_data(ipc).pinfo_shm);
 
	/* destroy rings */
	snprintf(ipc_shm_name, sizeof(ipc_shm_name), "%s_s_cons", name);
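
Note: the bulk of this diff mechanically rewrites pktio_entry->s.ipc.<field>
accesses as pktio_entry->ops_data(ipc).<field>, where ops_data(ipc) selects
the IPC member of the per-implementation pktio_ops_data_t union declared in
odp_pktio_ops_subsystem.h. The standalone sketch below only illustrates that
pattern; the field name ops_data_u, the placeholder members, and the macro
definition are assumptions for illustration, not the actual definitions in
the tree.

#include <stdio.h>

/* Per-implementation private data; fields reduced to placeholders. */
typedef struct {
	int ready;
} pktio_ops_ipc_data_t;

typedef struct {
	int fd;
} pktio_ops_loopback_data_t;

/* One union member per pktio implementation, as in
 * odp_pktio_ops_subsystem.h after this patch. */
typedef union {
	pktio_ops_ipc_data_t ipc;
	pktio_ops_loopback_data_t loopback;
} pktio_ops_data_t;

typedef struct {
	pktio_ops_data_t ops_data_u; /* assumed field name for this sketch */
} pktio_entry_t;

/* Assumed shape of the ops_data() accessor used throughout the diff;
 * it just picks one implementation's member out of the union. */
#define ops_data(mod) ops_data_u.mod

int main(void)
{
	pktio_entry_t entry;

	/* Expands to entry.ops_data_u.ipc.ready, mirroring the
	 * pktio_entry->ops_data(ipc).ready accesses in ipc.c. */
	entry.ops_data(ipc).ready = 1;
	printf("ipc ready: %d\n", entry.ops_data(ipc).ready);
	return 0;
}

Because pktio_ops_data_t is a union, a pktio entry carries private state for
exactly one implementation at a time, which is what lets this patch drop the
_ipc_pktio_t member from struct pktio_entry without growing the entry.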