From patchwork Tue Oct 28 16:20:28 2014
Content-Type: text/plain; charset="utf-8"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
X-Patchwork-Submitter: Maxim Uvarov
X-Patchwork-Id: 39677
From: Maxim Uvarov <maxim.uvarov@linaro.org>
To: lng-odp@lists.linaro.org
Date: Tue, 28 Oct 2014 19:20:28 +0300
Message-Id: <1414513228-7398-2-git-send-email-maxim.uvarov@linaro.org>
X-Mailer: git-send-email 1.8.5.1.163.gd7aced9
In-Reply-To: <1414513228-7398-1-git-send-email-maxim.uvarov@linaro.org>
References: <1414513228-7398-1-git-send-email-maxim.uvarov@linaro.org>
X-Topics: Architecture patch
Subject: [lng-odp] [ARCH PATCH] ipc design and usage modes

Signed-off-by: Maxim Uvarov <maxim.uvarov@linaro.org>
---
 ipc.dox | 219 ++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
 1 file changed, 219 insertions(+)
 create mode 100644 ipc.dox

diff --git a/ipc.dox b/ipc.dox
new file mode 100644
index 0000000..93ecaa0
--- /dev/null
+++ b/ipc.dox
@@ -0,0 +1,219 @@
+/* Copyright (c) 2014, Linaro Limited
+ * All rights reserved
+ *
+ * SPDX-License-Identifier: BSD-3-Clause
+ */
+
+/**
+@page ipc_design Inter Process Communication API
+
+@tableofcontents
+
+@section ipc_intro Introduction
+	This document describes the different ODP application modes, such as
+multi-threading and multi-processing, and the IPC APIs used in those modes.
+
+@subsection odp_modes Application Thread/Process modes
+	ODP applications can use the following programming models for multi-core support:
+	-# A single application with ODP worker threads.
+	-# A multi-process application with a single packet I/O pool and common initialization.
+	-# Separate processes communicating through the IPC API.
+
+@subsubsection odp_mode_threads Thread mode
+	The initialization sequence in thread mode is as follows:
+
+@verbatim
+	main() {
+		/* Init ODP before calling anything else. */
+		odp_init_global(NULL, NULL);
+
+		/* Init this thread. */
+		odp_init_local();
+
+		/* Allocate memory for the packet pool. This memory is visible to all threads. */
+		shm = odp_shm_reserve("shm_packet_pool",
+				      SHM_PKT_POOL_SIZE, ODP_CACHE_LINE_SIZE, 0);
+		pool_base = odp_shm_addr(shm);
+
+		/* Create a pool instance from the reserved shm. */
+		pool = odp_buffer_pool_create("packet_pool", pool_base,
+					      SHM_PKT_POOL_SIZE,
+					      SHM_PKT_POOL_BUF_SIZE,
+					      ODP_CACHE_LINE_SIZE,
+					      ODP_BUFFER_TYPE_PACKET);
+
+		/* Create worker threads. */
+		odph_linux_pthread_create(&thread_tbl[i], 1, core, thr_run_func,
+					  &args);
+	}
+
+	//// thread code
+	thr_run_func () {
+		/* Look up the packet pool. */
+		pkt_pool = odp_buffer_pool_lookup("packet_pool");
+
+		/* Open a packet I/O instance for this thread. */
+		pktio = odp_pktio_open("eth0", pkt_pool);
+
+		for (;;) {
+			/* Read a buffer from the scheduler. */
+			buf = odp_schedule(NULL, ODP_SCHED_WAIT);
+			... do something ...
+		}
+	}
+@endverbatim
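+
+	As an illustration only (not part of the API definition in this document), the
+	"do something" step in thr_run_func() could interpret the scheduled buffer as a
+	packet and forward or drop it. The sketch below assumes the linux-generic
+	odp_packet_from_buffer() and odp_buffer_free() helpers are available:
+
+@verbatim
+	/* Inside the worker loop, after odp_schedule() has returned a buffer. */
+	pkt = odp_packet_from_buffer(buf);
+
+	/* ... examine or modify the packet here ... */
+
+	/* Forward the packet; free the buffer if it could not be sent. */
+	if (odp_pktio_send(pktio, &pkt, 1) < 1)
+		odp_buffer_free(buf);
+@endverbatim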
+
+@subsubsection odp_mode_processes Processes mode with shared memory
+	The initialization sequence in processes mode with shared memory is as follows:
+
+@verbatim
+	main() {
+		/* Init ODP before calling anything else. In process mode
+		 * odp_init_global() is called only once, in the main process.
+		 */
+		odp_init_global(NULL, NULL);
+
+		/* Init this thread. */
+		odp_init_local();
+
+		/* Allocate memory for the packet pool. This memory is visible to all processes. */
+		shm = odp_shm_reserve("shm_packet_pool",
+				      SHM_PKT_POOL_SIZE, ODP_CACHE_LINE_SIZE, 0);
+		pool_base = odp_shm_addr(shm);
+
+		/* Create a pool instance from the reserved shm. */
+		pool = odp_buffer_pool_create("packet_pool", pool_base,
+					      SHM_PKT_POOL_SIZE,
+					      SHM_PKT_POOL_BUF_SIZE,
+					      ODP_CACHE_LINE_SIZE,
+					      ODP_BUFFER_TYPE_PACKET);
+
+		/* Call odph_linux_process_fork_n(), which fork()s the current
+		 * process into the worker processes.
+		 */
+		odph_linux_process_fork_n(proc, num_workers, first_core);
+
+		/* Run the same function as thread mode uses. */
+		thr_run_func();
+	}
+
+	//// worker code
+	thr_run_func () {
+		/* Look up the packet pool. */
+		pkt_pool = odp_buffer_pool_lookup("packet_pool");
+
+		/* Open a packet I/O instance for this process. */
+		pktio = odp_pktio_open("eth0", pkt_pool);
+
+		for (;;) {
+			/* Read a buffer from the scheduler. */
+			buf = odp_schedule(NULL, ODP_SCHED_WAIT);
+			... do something ...
+		}
+	}
+@endverbatim
+
+@subsubsection odp_mode_sep_processes Separate Processes mode
+	This mode differs from the mode with common shared memory. Each process is a
+completely separate process that calls odp_init_global(), performs its own
+initialization, then opens an IPC pktio interface and exchanges packets with the
+other processes through it. In the base implementation (linux-generic) shared
+memory is used as the IPC mechanism, which makes it easy to port to different use
+cases: separate processes, processes in different VMs, or bare-metal applications
+that can share memory with each other. In a hardware implementation the IPC pktio
+can be offloaded to the SoC packet functions.
+	The initialization sequence in separate processes mode is the same as for
+threads or processes with shared memory, with the following differences:
+
+@subsubsection odp_mode_sep_processes_cons Separate Processes Sender (linux-generic)
+	-# Each process calls odp_init_global(), creates its pool, and so on.
+
+	-# The ODP_SHM_PROC flag is provided so that the memory can be mapped from a
+	   different process:
+
+	shm = odp_shm_reserve("shm_packet_pool",
+			      SHM_PKT_POOL_SIZE,
+			      ODP_CACHE_LINE_SIZE,
+			      ODP_SHM_PROC);
+
+	pool_base = odp_shm_addr(shm);
+
+	-# The worker thread (or process) creates the IPC pktio and sends packets to it:
+
+	A)
+	odp_pktio_t ipc_pktio = odp_pktio_open("ipc_pktio", 0);
+	odp_pktio_send(ipc_pktio, pkt_tbl, pkts);
+
+	B) instead of using the packet I/O directly, its default output queue can be used:
+
+@verbatim
+	odp_queue_t ipcq = odp_pktio_outq_getdef(ipc_pktio);
+	/* Enqueue the packet on the output queue. */
+	odp_queue_enq(ipcq, buf);
+@endverbatim
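+
+	For illustration, the sender-side fragments above can be combined into a single
+	minimal worker. This is only a sketch: eth_pktio, pkt_tbl and MAX_PKT_BURST are
+	placeholder names and the pool sizes are application specific:
+
+@verbatim
+	/* Pool placed in shared memory so that the receiving process can map it. */
+	shm = odp_shm_reserve("shm_packet_pool", SHM_PKT_POOL_SIZE,
+			      ODP_CACHE_LINE_SIZE, ODP_SHM_PROC);
+	pool_base = odp_shm_addr(shm);
+	pool = odp_buffer_pool_create("packet_pool", pool_base,
+				      SHM_PKT_POOL_SIZE,
+				      SHM_PKT_POOL_BUF_SIZE,
+				      ODP_CACHE_LINE_SIZE,
+				      ODP_BUFFER_TYPE_PACKET);
+
+	/* Physical input and IPC output. */
+	eth_pktio = odp_pktio_open("eth0", pool);
+	ipc_pktio = odp_pktio_open("ipc_pktio", 0);
+
+	for (;;) {
+		pkts = odp_pktio_recv(eth_pktio, pkt_tbl, MAX_PKT_BURST);
+		if (pkts > 0)
+			odp_pktio_send(ipc_pktio, pkt_tbl, pkts);
+	}
+@endverbatim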
+
+@subsubsection odp_mode_sep_processes_recv Separate Processes Receiver (linux-generic)
+	On the other end the process also creates an IPC packet I/O and receives packets
+	from it.
+
+@verbatim
+	/* Create a packet pool visible only to this second process. Packets are
+	 * copied into it from the IPC shared memory.
+	 */
+	shm = odp_shm_reserve("local_packet_pool",
+			      SHM_PKT_POOL_SIZE, ODP_CACHE_LINE_SIZE, 0);
+
+	pool_base = odp_shm_addr(shm);
+	pool = odp_buffer_pool_create("ipc_packet_pool", pool_base,
+				      SHM_PKT_POOL_SIZE,
+				      SHM_PKT_POOL_BUF_SIZE,
+				      ODP_CACHE_LINE_SIZE,
+				      ODP_BUFFER_TYPE_PACKET);
+
+	pool_base = NULL;
+	/* Loop until the remote shared pool is found. The ODP_SHM_PROC_NOCREAT
+	 * flag means: do not create the shared memory object, only look it up.
+	 */
+	while (1) {
+		shm = odp_shm_reserve("shm_packet_pool",
+				      SHM_PKT_POOL_SIZE,
+				      ODP_CACHE_LINE_SIZE,
+				      ODP_SHM_PROC_NOCREAT);
+		pool_base = odp_shm_addr(shm);
+		if (pool_base != NULL) {
+			break;
+		} else {
+			ODP_DBG("looking up shm_packet_pool\n");
+			sleep(1);
+		}
+	}
+
+	/* Look up the packet I/O in the IPC shared memory and link it to the
+	 * local pool.
+	 */
+	while (1) {
+		pktio = odp_pktio_lookup("ipc_pktio", pool, pool_base);
+		if (pktio == ODP_PKTIO_INVALID) {
+			sleep(1);
+			printf("pid %d: looking for ipc_pktio\n", getpid());
+			continue;
+		}
+		break;
+	}
+
+	/* Get packets from the IPC. */
+	for (;;) {
+		pkts = odp_pktio_recv(pktio, pkt_tbl, MAX_PKT_BURST);
+		...
+	}
+@endverbatim
+
+@subsubsection odp_mode_sep_processes_hw Separate Processes Hardware optimized
+	A hardware SoC implementation of the IPC exchange can differ: it can use a
+	shared pool or rely on hardware for packet transmission. The API interface,
+	however, remains the same:
+
+	odp_pktio_open(), odp_pktio_lookup()
+
+@subsubsection odp_ipc_future_updates IPC future updates and rework
+	-# The odp_buffer_pool_create() API will change to allocate the pool memory
+	internally. The odp_shm_reserve() call for the remote pool memory and the
+	odp_pktio_lookup() can then move inside odp_buffer_pool_create().
+
+*/