From patchwork Wed Nov  5 19:07:39 2014
X-Patchwork-Submitter: Maxim Uvarov
X-Patchwork-Id: 40220
From: Maxim Uvarov <maxim.uvarov@linaro.org>
To: lng-odp@lists.linaro.org
Date: Wed, 5 Nov 2014 22:07:39 +0300
Message-Id: <1415214459-14179-2-git-send-email-maxim.uvarov@linaro.org>
X-Mailer: git-send-email 1.8.5.1.163.gd7aced9
In-Reply-To: <1415214459-14179-1-git-send-email-maxim.uvarov@linaro.org>
References: <1415214459-14179-1-git-send-email-maxim.uvarov@linaro.org>
Subject: [lng-odp] [ARCH PATCHv3] ipc design and usage modes

Signed-off-by: Maxim Uvarov <maxim.uvarov@linaro.org>
---
v3: simplify odp_pktio_lookup() to one argument;
v2: fixed according to Mike's comments.

 ipc.dox | 198 ++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
 1 file changed, 198 insertions(+)
 create mode 100644 ipc.dox

diff --git a/ipc.dox b/ipc.dox
new file mode 100644
index 0000000..ab85011
--- /dev/null
+++ b/ipc.dox
@@ -0,0 +1,198 @@
+/* Copyright (c) 2014, Linaro Limited
+ * All rights reserved
+ *
+ * SPDX-License-Identifier: BSD-3-Clause
+ */
+
+/**
+@page ipc_design Inter Process Communication (IPC) API
+
+@tableofcontents
+
+@section ipc_intro Introduction
+    This document describes the two ODP application modes, multithreading and
+    multiprocessing, with respect to their impact on IPC.
+
+@subsection odp_modes Application Thread/Process modes:
+    ODP applications can use the following programming models for multi-core support:
+    -# Single application with ODP worker threads.
+    -# Multi-process application with a single packet I/O pool and common initialization.
+    -# Separate processes communicating through the IPC API.
+
+@todo - add diagram about IPC modes.
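+
+    Until that diagram exists, the following informal sketch contrasts the
+    step that distinguishes the three models. It is an illustration only,
+    using the calls that appear in the walkthroughs below:
+
+@verbatim
+    /* 1) Thread mode: workers are pthreads sharing one address space,
+     *    so pool pointers are directly visible to every worker. */
+    odph_linux_pthread_create(&thread_tbl[i], 1, core, thr_run_func, &args);
+
+    /* 2) Process mode: workers are fork()ed after the shared memory and
+     *    the pool have been set up, so they inherit the mappings. */
+    odph_linux_process_fork_n(proc, num_workers, first_core);
+
+    /* 3) Separate Processes mode: fully independent processes each call
+     *    odp_init_global() and exchange packets over an IPC pktio. */
+    ipc_pktio = odp_pktio_open("ipc_pktio", pkt_pool);
+@endverbatim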
+
+@subsubsection odp_mode_threads Thread mode
+    The initialization sequence for thread mode is as follows:
+
+@verbatim
+    main() {
+        /* Init ODP before calling anything else. */
+        odp_init_global(NULL, NULL);
+
+        /* Init this thread. */
+        odp_init_local();
+
+        /* Allocate memory for the packet pool. This memory is visible
+         * to all threads. */
+        shm = odp_shm_reserve("shm_packet_pool",
+                              SHM_PKT_POOL_SIZE, ODP_CACHE_LINE_SIZE, 0);
+        pool_base = odp_shm_addr(shm);
+
+        /* Create a pool instance in the reserved shm. */
+        pool = odp_buffer_pool_create("packet_pool", pool_base,
+                                      SHM_PKT_POOL_SIZE,
+                                      SHM_PKT_POOL_BUF_SIZE,
+                                      ODP_CACHE_LINE_SIZE,
+                                      ODP_BUFFER_TYPE_PACKET);
+
+        /* Create worker threads. */
+        odph_linux_pthread_create(&thread_tbl[i], 1, core, thr_run_func,
+                                  &args);
+    }
+
+    /* thread function */
+    thr_run_func() {
+        /* Look up the packet pool. */
+        pkt_pool = odp_buffer_pool_lookup("packet_pool");
+
+        /* Open a packet I/O instance for this thread. */
+        pktio = odp_pktio_open("eth0", pkt_pool);
+
+        for (;;) {
+            /* Read a buffer. */
+            buf = odp_schedule(NULL, ODP_SCHED_WAIT);
+            ... do something ...
+        }
+    }
+@endverbatim
+
+@subsubsection odp_mode_processes Process mode with shared memory
+    The initialization sequence in process mode with shared memory is as follows:
+
+@verbatim
+    main() {
+        /* Init ODP before calling anything else. In process mode
+         * odp_init_global() is called only once, in the main process. */
+        odp_init_global(NULL, NULL);
+
+        /* Init this thread. */
+        odp_init_local();
+
+        /* Allocate memory for the packet pool. This memory is visible
+         * to all workers. */
+        shm = odp_shm_reserve("shm_packet_pool",
+                              SHM_PKT_POOL_SIZE, ODP_CACHE_LINE_SIZE, 0);
+        pool_base = odp_shm_addr(shm);
+
+        /* Create a pool instance in the reserved shm. */
+        pool = odp_buffer_pool_create("packet_pool", pool_base,
+                                      SHM_PKT_POOL_SIZE,
+                                      SHM_PKT_POOL_BUF_SIZE,
+                                      ODP_CACHE_LINE_SIZE,
+                                      ODP_BUFFER_TYPE_PACKET);
+
+        /* Call odph_linux_process_fork_n(), which fork()s the current
+         * process into separate worker processes. */
+        odph_linux_process_fork_n(proc, num_workers, first_core);
+
+        /* Run the same function as thread mode uses. */
+        thr_run_func();
+    }
+
+    /* worker function */
+    thr_run_func() {
+        /* Look up the packet pool. */
+        pkt_pool = odp_buffer_pool_lookup("packet_pool");
+
+        /* Open a packet I/O instance for this worker. */
+        pktio = odp_pktio_open("eth0", pkt_pool);
+
+        for (;;) {
+            /* Read a buffer. */
+            buf = odp_schedule(NULL, ODP_SCHED_WAIT);
+            ... do something ...
+        }
+    }
+@endverbatim
+
+@subsubsection odp_mode_sep_processes Separate Processes mode
+    This mode differs from the mode with common shared memory. Each execution
+    unit is a completely independent process which calls odp_init_global() and
+    performs the rest of the initialization on its own, then opens an IPC pktio
+    interface and exchanges packets with the other processes through it. In the
+    base implementation (linux-generic) shared memory is used as the IPC
+    mechanism, to make it easy to reuse for different use cases: processes
+    spread amongst different VMs, bare metal or regular Linux user space, in
+    fact any processes that can share memory.
+
+    In hardware implementations the IPC pktio can be offloaded to SoC packet
+    functions.
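+
+    As an informal, condensed sketch of the whole exchange (assuming the
+    linux-generic names used in the walkthroughs below, e.g. pkt_pool, pkt_tbl
+    and MAX_PKT_BURST; error handling omitted):
+
+@verbatim
+    /* Sender process: reserve process-visible memory, build a pool on it,
+     * open the IPC pktio and transmit. */
+    shm = odp_shm_reserve("shm_packet_pool", SHM_PKT_POOL_SIZE,
+                          ODP_CACHE_LINE_SIZE, ODP_SHM_PROC);
+    pool_base = odp_shm_addr(shm);
+    pkt_pool = odp_buffer_pool_create("packet_pool", pool_base,
+                                      SHM_PKT_POOL_SIZE, SHM_PKT_POOL_BUF_SIZE,
+                                      ODP_CACHE_LINE_SIZE,
+                                      ODP_BUFFER_TYPE_PACKET);
+    ipc_pktio = odp_pktio_open("ipc_pktio", pkt_pool);
+    odp_pktio_send(ipc_pktio, pkt_tbl, pkts);
+
+    /* Receiver process: poll until the sender's IPC pktio becomes visible,
+     * then receive packets from it. */
+    while ((pktio = odp_pktio_lookup("ipc_pktio")) == ODP_PKTIO_INVALID)
+        sleep(1);
+    pkts = odp_pktio_recv(pktio, pkt_tbl, MAX_PKT_BURST);
+@endverbatim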
+    The initialization sequence in Separate Processes mode is the same as in
+    process mode with shared memory, with the following differences:
+
+@subsubsection odp_mode_sep_processes_cons Separate Processes Sender (linux-generic)
+    -# Each process calls odp_init_global(), creates its pool, etc.
+
+    -# The ODP_SHM_PROC flag is provided so that the reserved memory can be
+       mapped from a different process:
+
+@verbatim
+    shm = odp_shm_reserve("shm_packet_pool",
+                          SHM_PKT_POOL_SIZE,
+                          ODP_CACHE_LINE_SIZE,
+                          ODP_SHM_PROC);
+
+    pool_base = odp_shm_addr(shm);
+@endverbatim
+
+    -# The worker thread (or process) creates an IPC pktio and sends buffers
+       to it, either:
+
+    A) directly through the pktio:
+
+@verbatim
+    odp_pktio_t ipc_pktio = odp_pktio_open("ipc_pktio", pkt_pool);
+    odp_pktio_send(ipc_pktio, pkt_tbl, pkts);
+@endverbatim
+
+    B) or through its default output queue in the following way:
+
+@verbatim
+    odp_queue_t ipcq = odp_pktio_outq_getdef(ipc_pktio);
+    /* Then enqueue the packet to the output queue. */
+    odp_queue_enq(ipcq, buf);
+@endverbatim
+
+@subsubsection odp_mode_sep_processes_recv Separate Processes Receiver (linux-generic)
+    On the other end the process also creates an IPC packet I/O and receives
+    packets from it.
+
+@verbatim
+    /* Look up the packet I/O in IPC shared memory,
+     * and link it to the local pool. */
+    while (1) {
+        pktio = odp_pktio_lookup("ipc_pktio");
+        if (pktio == ODP_PKTIO_INVALID) {
+            sleep(1);
+            printf("pid %d: looking for ipc_pktio\n", getpid());
+            continue;
+        }
+        break;
+    }
+
+    /* Get packets from the IPC. */
+    for (;;) {
+        pkts = odp_pktio_recv(pktio, pkt_tbl, MAX_PKT_BURST);
+        ...
+    }
+@endverbatim
+
+@subsubsection odp_mode_sep_processes_hw Separate Processes Hardware optimized
+    Hardware SoC implementations of the IPC exchange can differ. An
+    implementation can use a shared pool or rely on the hardware for packet
+    transmission, but the API interface remains the same:
+
+    odp_pktio_open(), odp_pktio_lookup()
+
+@todo - Bug 825: the odp_buffer_pool_create() API will change to allocate the
+    memory for the pool inside it, so the odp_shm_reserve() for the remote
+    pool memory and odp_pktio_lookup() can move inside odp_buffer_pool_create().
+
+*/