From patchwork Wed Nov 29 12:48:25 2017
Content-Type: text/plain; charset="utf-8"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
X-Patchwork-Submitter: Volodymyr Babchuk
X-Patchwork-Id: 119969
Delivered-To: patch@linaro.org
From: Volodymyr Babchuk
To: linux-kernel@vger.kernel.org, linux-arm-kernel@lists.infradead.org,
 tee-dev@lists.linaro.org, Jens Wiklander
Cc: volodymyr_babchuk@epam.com, Volodymyr Babchuk
Subject: [RESEND PATCH v2 01/14] tee: flexible shared memory pool creation
Date: Wed, 29 Nov 2017 14:48:25 +0200
Message-Id: <1511959718-5421-2-git-send-email-volodymyr_babchuk@epam.com>
X-Mailer: git-send-email 2.7.4
In-Reply-To: <1511959718-5421-1-git-send-email-volodymyr_babchuk@epam.com>
References: <1507923164-12796-1-git-send-email-volodymyr_babchuk@epam.com>
 <1511959718-5421-1-git-send-email-volodymyr_babchuk@epam.com>
X-Mailing-List: linux-kernel@vger.kernel.org
From: Jens Wiklander

Makes creation of shm pools more flexible by adding new, more primitive
functions to allocate a shm pool. This makes it easier to add driver-specific
shm pool management.
Signed-off-by: Jens Wiklander
Signed-off-by: Volodymyr Babchuk
---
 drivers/tee/tee_private.h  |  57 +---------------
 drivers/tee/tee_shm.c      |   8 +--
 drivers/tee/tee_shm_pool.c | 165 ++++++++++++++++++++++++++++-----------------
 include/linux/tee_drv.h    |  91 +++++++++++++++++++++++++
 4 files changed, 199 insertions(+), 122 deletions(-)

-- 
2.7.4

diff --git a/drivers/tee/tee_private.h b/drivers/tee/tee_private.h
index 21cb6be..2bc2b5a 100644
--- a/drivers/tee/tee_private.h
+++ b/drivers/tee/tee_private.h
@@ -21,68 +21,15 @@
 #include
 #include
 
-struct tee_device;
-
-/**
- * struct tee_shm - shared memory object
- * @teedev:	device used to allocate the object
- * @ctx:	context using the object, if NULL the context is gone
- * @link	link element
- * @paddr:	physical address of the shared memory
- * @kaddr:	virtual address of the shared memory
- * @size:	size of shared memory
- * @dmabuf:	dmabuf used to for exporting to user space
- * @flags:	defined by TEE_SHM_* in tee_drv.h
- * @id:	unique id of a shared memory object on this device
- */
-struct tee_shm {
-	struct tee_device *teedev;
-	struct tee_context *ctx;
-	struct list_head link;
-	phys_addr_t paddr;
-	void *kaddr;
-	size_t size;
-	struct dma_buf *dmabuf;
-	u32 flags;
-	int id;
-};
-
-struct tee_shm_pool_mgr;
-
-/**
- * struct tee_shm_pool_mgr_ops - shared memory pool manager operations
- * @alloc:	called when allocating shared memory
- * @free:	called when freeing shared memory
- */
-struct tee_shm_pool_mgr_ops {
-	int (*alloc)(struct tee_shm_pool_mgr *poolmgr, struct tee_shm *shm,
-		     size_t size);
-	void (*free)(struct tee_shm_pool_mgr *poolmgr, struct tee_shm *shm);
-};
-
-/**
- * struct tee_shm_pool_mgr - shared memory manager
- * @ops:		operations
- * @private_data:	private data for the shared memory manager
- */
-struct tee_shm_pool_mgr {
-	const struct tee_shm_pool_mgr_ops *ops;
-	void *private_data;
-};
-
 /**
  * struct tee_shm_pool - shared memory pool
  * @private_mgr:	pool manager for shared memory only between kernel
  *			and secure world
  * @dma_buf_mgr:	pool manager for shared memory exported to user space
- * @destroy:		called when destroying the pool
- * @private_data:	private data for the pool
  */
 struct tee_shm_pool {
-	struct tee_shm_pool_mgr private_mgr;
-	struct tee_shm_pool_mgr dma_buf_mgr;
-	void (*destroy)(struct tee_shm_pool *pool);
-	void *private_data;
+	struct tee_shm_pool_mgr *private_mgr;
+	struct tee_shm_pool_mgr *dma_buf_mgr;
 };
 
 #define TEE_DEVICE_FLAG_REGISTERED	0x1
diff --git a/drivers/tee/tee_shm.c b/drivers/tee/tee_shm.c
index 4bc7956..fdda89e 100644
--- a/drivers/tee/tee_shm.c
+++ b/drivers/tee/tee_shm.c
@@ -32,9 +32,9 @@ static void tee_shm_release(struct tee_shm *shm)
 	mutex_unlock(&teedev->mutex);
 
 	if (shm->flags & TEE_SHM_DMA_BUF)
-		poolm = &teedev->pool->dma_buf_mgr;
+		poolm = teedev->pool->dma_buf_mgr;
 	else
-		poolm = &teedev->pool->private_mgr;
+		poolm = teedev->pool->private_mgr;
 
 	poolm->ops->free(poolm, shm);
 	kfree(shm);
@@ -139,9 +139,9 @@ struct tee_shm *tee_shm_alloc(struct tee_context *ctx, size_t size, u32 flags)
 	shm->teedev = teedev;
 	shm->ctx = ctx;
 	if (flags & TEE_SHM_DMA_BUF)
-		poolm = &teedev->pool->dma_buf_mgr;
+		poolm = teedev->pool->dma_buf_mgr;
 	else
-		poolm = &teedev->pool->private_mgr;
+		poolm = teedev->pool->private_mgr;
 
 	rc = poolm->ops->alloc(poolm, shm, size);
 	if (rc) {
diff --git a/drivers/tee/tee_shm_pool.c b/drivers/tee/tee_shm_pool.c
index fb4f852..e6d4b9e 100644
--- a/drivers/tee/tee_shm_pool.c
+++ b/drivers/tee/tee_shm_pool.c
@@ -44,49 +44,18 @@ static void pool_op_gen_free(struct tee_shm_pool_mgr *poolm,
 	shm->kaddr = NULL;
 }
 
+static void pool_op_gen_destroy_poolmgr(struct tee_shm_pool_mgr *poolm)
+{
+	gen_pool_destroy(poolm->private_data);
+	kfree(poolm);
+}
+
 static const struct tee_shm_pool_mgr_ops pool_ops_generic = {
 	.alloc = pool_op_gen_alloc,
 	.free = pool_op_gen_free,
+	.destroy_poolmgr = pool_op_gen_destroy_poolmgr,
 };
 
-static void pool_res_mem_destroy(struct tee_shm_pool *pool)
-{
-	gen_pool_destroy(pool->private_mgr.private_data);
-	gen_pool_destroy(pool->dma_buf_mgr.private_data);
-}
-
-static int pool_res_mem_mgr_init(struct tee_shm_pool_mgr *mgr,
-				 struct tee_shm_pool_mem_info *info,
-				 int min_alloc_order)
-{
-	size_t page_mask = PAGE_SIZE - 1;
-	struct gen_pool *genpool = NULL;
-	int rc;
-
-	/*
-	 * Start and end must be page aligned
-	 */
-	if ((info->vaddr & page_mask) || (info->paddr & page_mask) ||
-	    (info->size & page_mask))
-		return -EINVAL;
-
-	genpool = gen_pool_create(min_alloc_order, -1);
-	if (!genpool)
-		return -ENOMEM;
-
-	gen_pool_set_algo(genpool, gen_pool_best_fit, NULL);
-	rc = gen_pool_add_virt(genpool, info->vaddr, info->paddr, info->size,
-			       -1);
-	if (rc) {
-		gen_pool_destroy(genpool);
-		return rc;
-	}
-
-	mgr->private_data = genpool;
-	mgr->ops = &pool_ops_generic;
-	return 0;
-}
-
 /**
  * tee_shm_pool_alloc_res_mem() - Create a shared memory pool from reserved
  * memory range
@@ -104,42 +73,109 @@ struct tee_shm_pool *
 tee_shm_pool_alloc_res_mem(struct tee_shm_pool_mem_info *priv_info,
			    struct tee_shm_pool_mem_info *dmabuf_info)
 {
-	struct tee_shm_pool *pool = NULL;
-	int ret;
-
-	pool = kzalloc(sizeof(*pool), GFP_KERNEL);
-	if (!pool) {
-		ret = -ENOMEM;
-		goto err;
-	}
+	struct tee_shm_pool_mgr *priv_mgr;
+	struct tee_shm_pool_mgr *dmabuf_mgr;
+	void *rc;
 
 	/*
 	 * Create the pool for driver private shared memory
 	 */
-	ret = pool_res_mem_mgr_init(&pool->private_mgr, priv_info,
-				    3 /* 8 byte aligned */);
-	if (ret)
-		goto err;
+	rc = tee_shm_pool_mgr_alloc_res_mem(priv_info->vaddr, priv_info->paddr,
+					    priv_info->size,
+					    3 /* 8 byte aligned */);
+	if (IS_ERR(rc))
+		return rc;
+	priv_mgr = rc;
 
 	/*
 	 * Create the pool for dma_buf shared memory
 	 */
-	ret = pool_res_mem_mgr_init(&pool->dma_buf_mgr, dmabuf_info,
-				    PAGE_SHIFT);
-	if (ret)
+	rc = tee_shm_pool_mgr_alloc_res_mem(dmabuf_info->vaddr,
+					    dmabuf_info->paddr,
+					    dmabuf_info->size, PAGE_SHIFT);
+	if (IS_ERR(rc))
+		goto err_free_priv_mgr;
+	dmabuf_mgr = rc;
+
+	rc = tee_shm_pool_alloc(priv_mgr, dmabuf_mgr);
+	if (IS_ERR(rc))
+		goto err_free_dmabuf_mgr;
+
+	return rc;
+
+err_free_dmabuf_mgr:
+	tee_shm_pool_mgr_destroy(dmabuf_mgr);
+err_free_priv_mgr:
+	tee_shm_pool_mgr_destroy(priv_mgr);
+
+	return rc;
+}
+EXPORT_SYMBOL_GPL(tee_shm_pool_alloc_res_mem);
+
+struct tee_shm_pool_mgr *tee_shm_pool_mgr_alloc_res_mem(unsigned long vaddr,
+							phys_addr_t paddr,
+							size_t size,
+							int min_alloc_order)
+{
+	const size_t page_mask = PAGE_SIZE - 1;
+	struct tee_shm_pool_mgr *mgr;
+	int rc;
+
+	/* Start and end must be page aligned */
+	if (vaddr & page_mask || paddr & page_mask || size & page_mask)
+		return ERR_PTR(-EINVAL);
+
+	mgr = kzalloc(sizeof(*mgr), GFP_KERNEL);
+	if (!mgr)
+		return ERR_PTR(-ENOMEM);
+
+	mgr->private_data = gen_pool_create(min_alloc_order, -1);
+	if (!mgr->private_data) {
+		rc = -ENOMEM;
 		goto err;
+	}
 
-	pool->destroy = pool_res_mem_destroy;
-	return pool;
+	gen_pool_set_algo(mgr->private_data, gen_pool_best_fit, NULL);
+	rc = gen_pool_add_virt(mgr->private_data, vaddr, paddr, size, -1);
+	if (rc) {
+		gen_pool_destroy(mgr->private_data);
+		goto err;
+	}
+
+	mgr->ops = &pool_ops_generic;
+
+	return mgr;
 err:
-	if (ret == -ENOMEM)
-		pr_err("%s: can't allocate memory for res_mem shared memory pool\n", __func__);
-	if (pool && pool->private_mgr.private_data)
-		gen_pool_destroy(pool->private_mgr.private_data);
-	kfree(pool);
-	return ERR_PTR(ret);
+	kfree(mgr);
+
+	return ERR_PTR(rc);
 }
-EXPORT_SYMBOL_GPL(tee_shm_pool_alloc_res_mem);
+EXPORT_SYMBOL_GPL(tee_shm_pool_mgr_alloc_res_mem);
+
+static bool check_mgr_ops(struct tee_shm_pool_mgr *mgr)
+{
+	return mgr && mgr->ops && mgr->ops->alloc && mgr->ops->free &&
+		mgr->ops->destroy_poolmgr;
+}
+
+struct tee_shm_pool *tee_shm_pool_alloc(struct tee_shm_pool_mgr *priv_mgr,
+					struct tee_shm_pool_mgr *dmabuf_mgr)
+{
+	struct tee_shm_pool *pool;
+
+	if (!check_mgr_ops(priv_mgr) || !check_mgr_ops(dmabuf_mgr))
+		return ERR_PTR(-EINVAL);
+
+	pool = kzalloc(sizeof(*pool), GFP_KERNEL);
+	if (!pool)
+		return ERR_PTR(-ENOMEM);
+
+	pool->private_mgr = priv_mgr;
+	pool->dma_buf_mgr = dmabuf_mgr;
+
+	return pool;
+}
+EXPORT_SYMBOL_GPL(tee_shm_pool_alloc);
 
 /**
  * tee_shm_pool_free() - Free a shared memory pool
@@ -150,7 +186,10 @@ EXPORT_SYMBOL_GPL(tee_shm_pool_alloc_res_mem);
  */
 void tee_shm_pool_free(struct tee_shm_pool *pool)
 {
-	pool->destroy(pool);
+	if (pool->private_mgr)
+		tee_shm_pool_mgr_destroy(pool->private_mgr);
+	if (pool->dma_buf_mgr)
+		tee_shm_pool_mgr_destroy(pool->dma_buf_mgr);
 	kfree(pool);
 }
 EXPORT_SYMBOL_GPL(tee_shm_pool_free);
diff --git a/include/linux/tee_drv.h b/include/linux/tee_drv.h
index cb889af..e9be4a4 100644
--- a/include/linux/tee_drv.h
+++ b/include/linux/tee_drv.h
@@ -150,6 +150,97 @@ int tee_device_register(struct tee_device *teedev);
 void tee_device_unregister(struct tee_device *teedev);
 
 /**
+ * struct tee_shm - shared memory object
+ * @teedev:	device used to allocate the object
+ * @ctx:	context using the object, if NULL the context is gone
+ * @link	link element
+ * @paddr:	physical address of the shared memory
+ * @kaddr:	virtual address of the shared memory
+ * @size:	size of shared memory
+ * @offset:	offset of buffer in user space
+ * @pages:	locked pages from userspace
+ * @num_pages:	number of locked pages
+ * @dmabuf:	dmabuf used to for exporting to user space
+ * @flags:	defined by TEE_SHM_* in tee_drv.h
+ * @id:	unique id of a shared memory object on this device
+ *
+ * This pool is only supposed to be accessed directly from the TEE
+ * subsystem and from drivers that implements their own shm pool manager.
+ */
+struct tee_shm {
+	struct tee_device *teedev;
+	struct tee_context *ctx;
+	struct list_head link;
+	phys_addr_t paddr;
+	void *kaddr;
+	size_t size;
+	unsigned int offset;
+	struct page **pages;
+	size_t num_pages;
+	struct dma_buf *dmabuf;
+	u32 flags;
+	int id;
+};
+
+/**
+ * struct tee_shm_pool_mgr - shared memory manager
+ * @ops:		operations
+ * @private_data:	private data for the shared memory manager
+ */
+struct tee_shm_pool_mgr {
+	const struct tee_shm_pool_mgr_ops *ops;
+	void *private_data;
+};
+
+/**
+ * struct tee_shm_pool_mgr_ops - shared memory pool manager operations
+ * @alloc:		called when allocating shared memory
+ * @free:		called when freeing shared memory
+ * @destroy_poolmgr:	called when destroying the pool manager
+ */
+struct tee_shm_pool_mgr_ops {
+	int (*alloc)(struct tee_shm_pool_mgr *poolmgr, struct tee_shm *shm,
+		     size_t size);
+	void (*free)(struct tee_shm_pool_mgr *poolmgr, struct tee_shm *shm);
+	void (*destroy_poolmgr)(struct tee_shm_pool_mgr *poolmgr);
+};
+
+/**
+ * tee_shm_pool_alloc() - Create a shared memory pool from shm managers
+ * @priv_mgr:	manager for driver private shared memory allocations
+ * @dmabuf_mgr:	manager for dma-buf shared memory allocations
+ *
+ * Allocation with the flag TEE_SHM_DMA_BUF set will use the range supplied
+ * in @dmabuf, others will use the range provided by @priv.
+ *
+ * @returns pointer to a 'struct tee_shm_pool' or an ERR_PTR on failure.
+ */
+struct tee_shm_pool *tee_shm_pool_alloc(struct tee_shm_pool_mgr *priv_mgr,
+					struct tee_shm_pool_mgr *dmabuf_mgr);
+
+/*
+ * tee_shm_pool_mgr_alloc_res_mem() - Create a shm manager for reserved
+ * memory
+ * @vaddr:	Virtual address of start of pool
+ * @paddr:	Physical address of start of pool
+ * @size:	Size in bytes of the pool
+ *
+ * @returns pointer to a 'struct tee_shm_pool_mgr' or an ERR_PTR on failure.
+ */
+struct tee_shm_pool_mgr *tee_shm_pool_mgr_alloc_res_mem(unsigned long vaddr,
+							phys_addr_t paddr,
+							size_t size,
+							int min_alloc_order);
+
+/**
+ * tee_shm_pool_mgr_destroy() - Free a shared memory manager
+ */
+static inline void tee_shm_pool_mgr_destroy(struct tee_shm_pool_mgr *poolm)
+{
+	poolm->ops->destroy_poolmgr(poolm);
+}
+
+/**
  * struct tee_shm_pool_mem_info - holds information needed to create a shared
  * memory pool
  * @vaddr:	Virtual address of start of pool