From patchwork Wed Feb 17 17:57:02 2016
Content-Type: text/plain; charset="utf-8"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
X-Patchwork-Submitter: Christophe Milard
X-Patchwork-Id: 62120
From: Christophe Milard
To: mike.holmes@linaro.org, bill.fischofer@linaro.org
Cc: lng-odp@lists.linaro.org
Date: Wed, 17 Feb 2016 18:57:02 +0100
Message-Id: <1455731824-17230-2-git-send-email-christophe.milard@linaro.org>
In-Reply-To: <1455731824-17230-1-git-send-email-christophe.milard@linaro.org>
References: <1455731824-17230-1-git-send-email-christophe.milard@linaro.org>
X-Mailer: git-send-email 2.1.4
Subject: [lng-odp] [PATCH 2/4] doc: user-guide: shmem

section regarding shared memory added.

Signed-off-by: Christophe Milard
---
 doc/users-guide/users-guide.adoc | 129 +++++++++++++++++++++++++++++++++++++++
 1 file changed, 129 insertions(+)

diff --git a/doc/users-guide/users-guide.adoc b/doc/users-guide/users-guide.adoc
index bbb53a7..07d8949 100644
--- a/doc/users-guide/users-guide.adoc
+++ b/doc/users-guide/users-guide.adoc
@@ -543,6 +543,135 @@ lookup. The lookup function is particularly useful to allow an ODP
 application that is divided into multiple processes to obtain the handle for
 the common resource.
 
+== Shared memory
+=== Allocating shared memory
+Blocks of shared memory can be created using the +odp_shm_reserve()+ API
+call. The call expects a shared memory block name, a block size, an alignment
+requirement, and optional flags as parameters. It returns an +odp_shm_t+
+handle. The size and alignment requirement are given in bytes.
+
+.creating a block of shared memory
+[source,c]
+----
+#define ALIGNMENT 128
+#define BLKNAME "shared_items"
+
+odp_shm_t shm;
+
+typedef struct {
+...
+} shared_data_t;
+
+shm = odp_shm_reserve(BLKNAME, sizeof(shared_data_t), ALIGNMENT, 0);
+----
+
+=== Getting the shared memory block address
+The returned +odp_shm_t+ handle can then be used to retrieve the actual
+address (in the caller's ODP thread virtual address space) of the created
+shared memory block.
+
+.getting the address of a shared memory block
+[source,c]
+----
+shared_data_t *shared_data;
+shared_data = odp_shm_addr(shm);
+----
+
+The address returned by +odp_shm_addr()+ is valid only in the calling ODP
+thread's address space: +odp_shm_t+ handles can be shared between ODP threads
+and remain valid in any thread, whereas the address returned by
++odp_shm_addr(shm)+ may differ from one ODP thread to another (for the same
+'shm' block) and should therefore not be shared between ODP threads.
+For instance, it would be correct to send a shm handle using IPC between two
+ODP threads and let each of these threads do its own +odp_shm_addr()+ call to
+get the block address. Directly sending the address returned by
++odp_shm_addr()+ from one ODP thread to another could however fail
+(the address may be meaningless in the receiver's address space).
+
+The address returned by +odp_shm_addr()+ is nevertheless guaranteed to be
+aligned according to the alignment requirement provided at block creation
+time, even if the call to +odp_shm_addr()+ is performed by a different ODP
+thread from the one which originally called +odp_shm_reserve()+.
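+
+For illustration, the sketch below assumes a hypothetical +use_shared_block()+
+function (not part of the ODP API) run by an ODP thread which received the
++odp_shm_t+ handle, e.g. over IPC: only the handle is transferred, and the
+receiving thread resolves its own local address.
+
+.resolving a local address from a shared handle (illustrative sketch)
+[source,c]
+----
+/* hypothetical receiver: 'shm' was obtained from another ODP thread */
+static void use_shared_block(odp_shm_t shm)
+{
+        shared_data_t *local_ptr;
+
+        /* resolve the address locally: it may differ from the address
+         * seen by the thread which reserved the block... */
+        local_ptr = odp_shm_addr(shm);
+
+        /* ...but it is guaranteed to honour the ALIGNMENT requirement
+         * given at odp_shm_reserve() time */
+        ...
+}
+----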
+
+All shared memory blocks are contiguous in any ODP thread's address space:
+the range 'address' to 'address'+'size' (where 'size' is the shared memory
+block size, as provided in the +odp_shm_reserve()+ call) is readable and
+writable and maps the whole shared memory block. There is no fragmentation.
+
+=== Memory behaviour
+By default, ODP threads are assumed to behave as a cache coherent system:
+any change performed on a shared memory block is guaranteed to eventually
+become visible to other ODP threads sharing this memory block
+(this behaviour may be altered by flags to +odp_shm_reserve()+ in the future).
+Nevertheless, there is no implicit memory barrier associated with any action
+on shared memory: *when* a change performed by one ODP thread becomes visible
+to another ODP thread is not defined: an application using shared memory
+blocks has to use the memory barriers provided by ODP to guarantee shared
+data validity between ODP threads.
+
+=== Lookup by name
+As mentioned, shared memory handles can be sent from ODP thread to ODP
+thread using any IPC mechanism, and the block address then retrieved.
+A simpler approach to get the shared memory block handle of an already
+created block is to use the +odp_shm_lookup()+ API function call.
+This nevertheless requires the calling ODP thread to provide the name of the
+shared memory block:
++odp_shm_lookup()+ will return +ODP_SHM_INVALID+ if no shared memory block
+with the provided name is known by ODP.
+
+.retrieving a block handle and address from another ODP thread
+[source,c]
+----
+#define BLKNAME "shared_items"
+
+odp_shm_t shm;
+shared_data_t *shared_data;
+
+shm = odp_shm_lookup(BLKNAME);
+if (shm != ODP_SHM_INVALID) {
+        shared_data = odp_shm_addr(shm);
+        ...
+}
+----
+
+=== Freeing memory
+Freeing shared memory is performed using the +odp_shm_free()+ API call.
++odp_shm_free()+ takes a single argument, the shared memory block handle.
+Any ODP thread is allowed to perform an +odp_shm_free()+ on a shared memory
+block (i.e. the thread performing the +odp_shm_free()+ may be different
+from the thread which did the +odp_shm_reserve()+). Shared memory blocks
+should be freed only once, and once freed, a shared memory block should no
+longer be referenced by any ODP thread.
+
+.freeing a shared memory block
+[source,c]
+----
+if (odp_shm_free(shm) != 0) {
+        ...//handle error
+}
+----
+
+=== Memory creation flags
+The last argument to +odp_shm_reserve()+ is a set of ORed flags.
+Two flags are supported:
+
+==== ODP_SHM_PROC
+When this flag is given, the allocated shared memory will become visible
+outside ODP. Non-ODP threads (e.g. usual Linux processes or Linux threads)
+will be able to access the memory using native (non-ODP) OS calls such as
+'shm_open()' and 'mmap()' (on Linux).
+Each ODP implementation should provide a description of exactly how
+this mapping is done on that specific platform.
+
+==== ODP_SHM_SW_ONLY
+This flag tells ODP that the shared memory will be used by the ODP application
+software only: no HW (such as DMA or another accelerator) will ever
+try to access the memory. No other ODP calls will be involved with this memory
+(as ODP calls could implicitly involve HW, depending on the ODP
+implementation), except for +odp_shm_lookup()+ and +odp_shm_free()+.
+ODP implementations may use this flag as a hint for performance optimization,
+or may simply ignore it.
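+
+For instance, reusing the names from the earlier examples, a block intended
+to be accessed by application software only could be reserved as sketched
+below (the check against +ODP_SHM_INVALID+ on reserve failure is an
+assumption here, mirroring the +odp_shm_lookup()+ behaviour):
+
+.reserving a software-only shared memory block (illustrative sketch)
+[source,c]
+----
+odp_shm_t shm;
+
+/* flags are ORed into the last parameter of odp_shm_reserve() */
+shm = odp_shm_reserve(BLKNAME, sizeof(shared_data_t), ALIGNMENT,
+                      ODP_SHM_SW_ONLY);
+if (shm == ODP_SHM_INVALID) {
+        ...//handle error
+}
+----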
+
 == Queues
 Queues are the fundamental event sequencing mechanism provided by ODP and all
 ODP applications make use of them either explicitly or implicitly.
 Queues are