From patchwork Tue Mar 5 20:54:29 2019
X-Patchwork-Submitter: John Stultz <john.stultz@linaro.org>
X-Patchwork-Id: 159711
From: John Stultz
To: lkml
Cc: "Andrew F. Davis", Laura Abbott, Benjamin Gaignard, Greg KH,
 Sumit Semwal, Liam Mark, Brian Starkey, Chenbo Feng,
 Alistair Strachan, dri-devel@lists.freedesktop.org, John Stultz
Subject: [RFC][PATCH 1/5 v2] dma-buf: Add dma-buf heaps framework
Date: Tue, 5 Mar 2019 12:54:29 -0800
Message-Id: <1551819273-640-2-git-send-email-john.stultz@linaro.org>
In-Reply-To: <1551819273-640-1-git-send-email-john.stultz@linaro.org>
References: <1551819273-640-1-git-send-email-john.stultz@linaro.org>

From: "Andrew F. Davis"

This framework allows a unified userspace interface for dma-buf
exporters, allowing userland to allocate specific types of memory
for use in dma-buf sharing.

Each heap is given its own device node, which a user can allocate
a dma-buf fd from using the DMA_HEAP_IOC_ALLOC ioctl.

This code is an evolution of the Android ION implementation, and
a big thanks is due to its authors/maintainers over time for their
effort: Rebecca Schultz Zavin, Colin Cross, Benjamin Gaignard,
Laura Abbott, and many other contributors!

Cc: Laura Abbott
Cc: Benjamin Gaignard
Cc: Greg KH
Cc: Sumit Semwal
Cc: Liam Mark
Cc: Brian Starkey
Cc: Andrew F. Davis
Cc: Chenbo Feng
Cc: Alistair Strachan
Cc: dri-devel@lists.freedesktop.org
Signed-off-by: Andrew F. Davis
[jstultz: reworded commit message, and lots of cleanups]
Signed-off-by: John Stultz
---
v2:
* Folded down fixes I had previously shared in implementing heaps
* Make flags a u64 (suggested by Laura)
* Add PAGE_ALIGN() fix to the core alloc function
* IOCTL fixups suggested by Brian
* Added fixes suggested by Benjamin
* Removed core stats mgmt, as that should be implemented by
  per-heap code
* Changed alloc to return a dma-buf fd, rather than a buffer
  (as it simplifies error handling)
---
 MAINTAINERS                   |  16 ++++
 drivers/dma-buf/Kconfig       |   8 ++
 drivers/dma-buf/Makefile      |   1 +
 drivers/dma-buf/dma-heap.c    | 191 ++++++++++++++++++++++++++++++++++++++++++
 include/linux/dma-heap.h      |  65 ++++++++++++++
 include/uapi/linux/dma-heap.h |  52 ++++++++++++
 6 files changed, 333 insertions(+)
 create mode 100644 drivers/dma-buf/dma-heap.c
 create mode 100644 include/linux/dma-heap.h
 create mode 100644 include/uapi/linux/dma-heap.h
-- 
2.7.4

diff --git a/MAINTAINERS b/MAINTAINERS
index ac2e518..a661e19 100644
--- a/MAINTAINERS
+++ b/MAINTAINERS
@@ -4621,6 +4621,22 @@ F: include/linux/*fence.h
 F: Documentation/driver-api/dma-buf.rst
 T: git git://anongit.freedesktop.org/drm/drm-misc
 
+DMA-BUF HEAPS FRAMEWORK
+M: Laura Abbott
+R: Liam Mark
+R: Brian Starkey
+R: "Andrew F. Davis"
+R: John Stultz
+S: Maintained
+L: linux-media@vger.kernel.org
+L: dri-devel@lists.freedesktop.org
+L: linaro-mm-sig@lists.linaro.org (moderated for non-subscribers)
+F: include/uapi/linux/dma-heap.h
+F: include/linux/dma-heap.h
+F: drivers/dma-buf/dma-heap.c
+F: drivers/dma-buf/heaps/*
+T: git git://anongit.freedesktop.org/drm/drm-misc
+
 DMA GENERIC OFFLOAD ENGINE SUBSYSTEM
 M: Vinod Koul
 L: dmaengine@vger.kernel.org

diff --git a/drivers/dma-buf/Kconfig b/drivers/dma-buf/Kconfig
index 2e5a0fa..09c61db 100644
--- a/drivers/dma-buf/Kconfig
+++ b/drivers/dma-buf/Kconfig
@@ -39,4 +39,12 @@ config UDMABUF
 	  A driver to let userspace turn memfd regions into dma-bufs.
 	  Qemu can use this to create host dmabufs for guest framebuffers.
 
+menuconfig DMABUF_HEAPS
+	bool "DMA-BUF Userland Memory Heaps"
+	select DMA_SHARED_BUFFER
+	help
+	  Choose this option to enable the DMA-BUF userland memory heaps.
+	  This allows userspace to allocate dma-bufs that can be shared
+	  between drivers.
+
 endmenu

diff --git a/drivers/dma-buf/Makefile b/drivers/dma-buf/Makefile
index 0913a6c..b0332f1 100644
--- a/drivers/dma-buf/Makefile
+++ b/drivers/dma-buf/Makefile
@@ -1,4 +1,5 @@
 obj-y := dma-buf.o dma-fence.o dma-fence-array.o reservation.o seqno-fence.o
+obj-$(CONFIG_DMABUF_HEAPS)	+= dma-heap.o
 obj-$(CONFIG_SYNC_FILE)		+= sync_file.o
 obj-$(CONFIG_SW_SYNC)		+= sw_sync.o sync_debug.o
 obj-$(CONFIG_UDMABUF)		+= udmabuf.o

diff --git a/drivers/dma-buf/dma-heap.c b/drivers/dma-buf/dma-heap.c
new file mode 100644
index 0000000..14b3975
--- /dev/null
+++ b/drivers/dma-buf/dma-heap.c
@@ -0,0 +1,191 @@
+// SPDX-License-Identifier: GPL-2.0
+/*
+ * Framework for userspace DMA-BUF allocations
+ *
+ * Copyright (C) 2011 Google, Inc.
+ * Copyright (C) 2019 Linaro Ltd.
+ */
+
+#include
+#include
+#include
+#include
+#include
+#include
+#include
+#include
+#include
+
+#include
+#include
+
+#define DEVNAME "dma_heap"
+
+#define NUM_HEAP_MINORS 128
+static DEFINE_IDR(dma_heap_idr);
+static DEFINE_MUTEX(minor_lock); /* Protect idr accesses */
+
+dev_t dma_heap_devt;
+struct class *dma_heap_class;
+struct list_head dma_heap_list;
+struct dentry *dma_heap_debug_root;
+
+static int dma_heap_buffer_alloc(struct dma_heap *heap, size_t len,
+				 unsigned int flags)
+{
+	len = PAGE_ALIGN(len);
+	if (!len)
+		return -EINVAL;
+
+	return heap->ops->allocate(heap, len, flags);
+}
+
+static int dma_heap_open(struct inode *inode, struct file *filp)
+{
+	struct dma_heap *heap;
+
+	mutex_lock(&minor_lock);
+	heap = idr_find(&dma_heap_idr, iminor(inode));
+	mutex_unlock(&minor_lock);
+	if (!heap) {
+		pr_err("dma_heap: minor %d unknown.\n", iminor(inode));
+		return -ENODEV;
+	}
+
+	/* instance data as context */
+	filp->private_data = heap;
+	nonseekable_open(inode, filp);
+
+	return 0;
+}
+
+static int dma_heap_release(struct inode *inode, struct file *filp)
+{
+	filp->private_data = NULL;
+
+	return 0;
+}
+
+static long dma_heap_ioctl(struct file *filp, unsigned int cmd,
+			   unsigned long arg)
+{
+	switch (cmd) {
+	case DMA_HEAP_IOC_ALLOC:
+	{
+		struct dma_heap_allocation_data heap_allocation;
+		struct dma_heap *heap = filp->private_data;
+		int fd;
+
+		if (copy_from_user(&heap_allocation, (void __user *)arg,
+				   sizeof(heap_allocation)))
+			return -EFAULT;
+
+		if (heap_allocation.fd ||
+		    heap_allocation.reserved0 ||
+		    heap_allocation.reserved1 ||
+		    heap_allocation.reserved2) {
+			pr_warn_once("dma_heap: ioctl data not valid\n");
+			return -EINVAL;
+		}
+		if (heap_allocation.flags & ~DMA_HEAP_VALID_FLAGS) {
+			pr_warn_once("dma_heap: flags has invalid or unsupported flags set\n");
+			return -EINVAL;
+		}
+
+		fd = dma_heap_buffer_alloc(heap, heap_allocation.len,
+					   heap_allocation.flags);
+		if (fd < 0)
+			return fd;
+
+		heap_allocation.fd = fd;
+
+		if (copy_to_user((void __user *)arg, &heap_allocation,
+				 sizeof(heap_allocation)))
+			return -EFAULT;
+
+		break;
+	}
+	default:
+		return -ENOTTY;
+	}
+
+	return 0;
+}
+
+static const struct file_operations dma_heap_fops = {
+	.owner		= THIS_MODULE,
+	.open		= dma_heap_open,
+	.release	= dma_heap_release,
+	.unlocked_ioctl	= dma_heap_ioctl,
+#ifdef CONFIG_COMPAT
+	.compat_ioctl	= dma_heap_ioctl,
+#endif
+};
+
+int dma_heap_add(struct dma_heap *heap)
+{
+	struct device *dev_ret;
+	int ret;
+
+	if (!heap->name || !strcmp(heap->name, "")) {
+		pr_err("dma_heap: Cannot add heap without a name\n");
+		return -EINVAL;
+	}
+
+	if (!heap->ops || !heap->ops->allocate) {
+		pr_err("dma_heap: Cannot add heap with invalid ops struct\n");
+		return -EINVAL;
+	}
+
+	/* Find unused minor number */
+	mutex_lock(&minor_lock);
+	ret = idr_alloc(&dma_heap_idr, heap, 0, NUM_HEAP_MINORS, GFP_KERNEL);
+	mutex_unlock(&minor_lock);
+	if (ret < 0) {
+		pr_err("dma_heap: Unable to get minor number for heap\n");
+		return ret;
+	}
+	heap->minor = ret;
+
+	/* Create device */
+	heap->heap_devt = MKDEV(MAJOR(dma_heap_devt), heap->minor);
+	dev_ret = device_create(dma_heap_class,
+				NULL,
+				heap->heap_devt,
+				NULL,
+				heap->name);
+	if (IS_ERR(dev_ret)) {
+		pr_err("dma_heap: Unable to create char device\n");
+		return PTR_ERR(dev_ret);
+	}
+
+	/* Add device */
+	cdev_init(&heap->heap_cdev, &dma_heap_fops);
+	ret = cdev_add(&heap->heap_cdev, dma_heap_devt, NUM_HEAP_MINORS);
+	if (ret < 0) {
+		device_destroy(dma_heap_class, heap->heap_devt);
+		pr_err("dma_heap: Unable to add char device\n");
+		return ret;
+	}
+
+	return 0;
+}
+EXPORT_SYMBOL(dma_heap_add);
+
+static int dma_heap_init(void)
+{
+	int ret;
+
+	ret = alloc_chrdev_region(&dma_heap_devt, 0, NUM_HEAP_MINORS, DEVNAME);
+	if (ret)
+		return ret;
+
+	dma_heap_class = class_create(THIS_MODULE, DEVNAME);
+	if (IS_ERR(dma_heap_class)) {
+		unregister_chrdev_region(dma_heap_devt, NUM_HEAP_MINORS);
+		return PTR_ERR(dma_heap_class);
+	}
+
+	return 0;
+}
+subsys_initcall(dma_heap_init);

diff --git a/include/linux/dma-heap.h b/include/linux/dma-heap.h
new file mode 100644
index 0000000..ed86a8e
--- /dev/null
+++ b/include/linux/dma-heap.h
@@ -0,0 +1,65 @@
+/* SPDX-License-Identifier: GPL-2.0 */
+/*
+ * DMABUF Heaps Allocation Infrastructure
+ *
+ * Copyright (C) 2011 Google, Inc.
+ * Copyright (C) 2019 Linaro Ltd.
+ */
+
+#ifndef _DMA_HEAPS_H
+#define _DMA_HEAPS_H
+
+#include
+#include
+
+/**
+ * struct dma_heap_buffer - metadata for a particular buffer
+ * @heap:	back pointer to the heap the buffer came from
+ * @dmabuf:	backing dma-buf for this buffer
+ * @size:	size of the buffer
+ * @flags:	buffer specific flags
+ */
+struct dma_heap_buffer {
+	struct dma_heap *heap;
+	struct dma_buf *dmabuf;
+	size_t size;
+	unsigned long flags;
+};
+
+/**
+ * struct dma_heap - represents a dmabuf heap in the system
+ * @name:	used for debugging/device-node name
+ * @ops:	ops struct for this heap
+ * @minor:	minor number of this heap device
+ * @heap_devt:	heap device node
+ * @heap_cdev:	heap char device
+ *
+ * Represents a heap of memory from which buffers can be made.
+ */
+struct dma_heap {
+	const char *name;
+	struct dma_heap_ops *ops;
+	unsigned int minor;
+	dev_t heap_devt;
+	struct cdev heap_cdev;
+};
+
+/**
+ * struct dma_heap_ops - ops to operate on a given heap
+ * @allocate:	allocate dmabuf and return fd
+ *
+ * allocate returns dmabuf fd on success, -errno on error.
+ */
+struct dma_heap_ops {
+	int (*allocate)(struct dma_heap *heap,
+			unsigned long len,
+			unsigned long flags);
+};
+
+/**
+ * dma_heap_add - adds a heap to dmabuf heaps
+ * @heap:	the heap to add
+ */
+int dma_heap_add(struct dma_heap *heap);
+
+#endif /* _DMA_HEAPS_H */

diff --git a/include/uapi/linux/dma-heap.h b/include/uapi/linux/dma-heap.h
new file mode 100644
index 0000000..75c5d3f
--- /dev/null
+++ b/include/uapi/linux/dma-heap.h
@@ -0,0 +1,52 @@
+/* SPDX-License-Identifier: GPL-2.0 */
+/*
+ * DMABUF Heaps Userspace API
+ *
+ * Copyright (C) 2011 Google, Inc.
+ * Copyright (C) 2019 Linaro Ltd.
+ */
+#ifndef _UAPI_LINUX_DMABUF_POOL_H
+#define _UAPI_LINUX_DMABUF_POOL_H
+
+#include
+#include
+
+/**
+ * DOC: DMABUF Heaps Userspace API
+ */
+
+/* Currently no flags */
+#define DMA_HEAP_VALID_FLAGS (0)
+
+/**
+ * struct dma_heap_allocation_data - metadata passed from userspace for
+ *                                   allocations
+ * @len:	size of the allocation
+ * @flags:	flags passed to pool
+ * @fd:		will be populated with a fd which provides the
+ *		handle to the allocated dma-buf
+ *
+ * Provided by userspace as an argument to the ioctl
+ */
+struct dma_heap_allocation_data {
+	__u64 len;
+	__u64 flags;
+	__u32 fd;
+	__u32 reserved0;
+	__u32 reserved1;
+	__u32 reserved2;
+};
+
+#define DMA_HEAP_IOC_MAGIC	'H'
+
+/**
+ * DOC: DMA_HEAP_IOC_ALLOC - allocate memory from pool
+ *
+ * Takes a dma_heap_allocation_data struct and returns it with the fd field
+ * populated with the dmabuf handle of the allocation.
+ */
+#define DMA_HEAP_IOC_ALLOC	_IOWR(DMA_HEAP_IOC_MAGIC, 0, \
+				      struct dma_heap_allocation_data)
+
+#endif /* _UAPI_LINUX_DMABUF_POOL_H */

From patchwork Tue Mar 5 20:54:30 2019
X-Patchwork-Submitter: John Stultz <john.stultz@linaro.org>
X-Patchwork-Id: 159712
From: John Stultz
To: lkml
Cc: John Stultz, Laura Abbott, Benjamin Gaignard, Greg KH,
 Sumit Semwal, Liam Mark, Brian Starkey, "Andrew F. Davis",
 Chenbo Feng, Alistair Strachan, dri-devel@lists.freedesktop.org
Subject: [RFC][PATCH 2/5 v2] dma-buf: heaps: Add heap helpers
Date: Tue, 5 Mar 2019 12:54:30 -0800
Message-Id: <1551819273-640-3-git-send-email-john.stultz@linaro.org>
In-Reply-To: <1551819273-640-1-git-send-email-john.stultz@linaro.org>
References: <1551819273-640-1-git-send-email-john.stultz@linaro.org>

Add generic helper dmabuf ops for dma heaps, so we can reduce
the amount of duplicative code for the exported dmabufs.

This code is an evolution of the Android ION implementation, so
thanks to its original authors and maintainers: Rebecca Schultz
Zavin, Colin Cross, Laura Abbott, and others!

Cc: Laura Abbott
Cc: Benjamin Gaignard
Cc: Greg KH
Cc: Sumit Semwal
Cc: Liam Mark
Cc: Brian Starkey
Cc: Andrew F. Davis
Cc: Chenbo Feng
Cc: Alistair Strachan
Cc: dri-devel@lists.freedesktop.org
Signed-off-by: John Stultz
---
v2:
* Removed cache management performance hack that I had
  accidentally folded in.
* Removed stats code that was in helpers
* Lots of checkpatch cleanups
---
 drivers/dma-buf/Makefile             |   1 +
 drivers/dma-buf/heaps/Makefile       |   2 +
 drivers/dma-buf/heaps/heap-helpers.c | 335 +++++++++++++++++++++++++++++++++++
 drivers/dma-buf/heaps/heap-helpers.h |  48 +++++
 4 files changed, 386 insertions(+)
 create mode 100644 drivers/dma-buf/heaps/Makefile
 create mode 100644 drivers/dma-buf/heaps/heap-helpers.c
 create mode 100644 drivers/dma-buf/heaps/heap-helpers.h
-- 
2.7.4

diff --git a/drivers/dma-buf/Makefile b/drivers/dma-buf/Makefile
index b0332f1..09c2f2d 100644
--- a/drivers/dma-buf/Makefile
+++ b/drivers/dma-buf/Makefile
@@ -1,4 +1,5 @@
 obj-y := dma-buf.o dma-fence.o dma-fence-array.o reservation.o seqno-fence.o
+obj-$(CONFIG_DMABUF_HEAPS)	+= heaps/
 obj-$(CONFIG_DMABUF_HEAPS)	+= dma-heap.o
 obj-$(CONFIG_SYNC_FILE)		+= sync_file.o
 obj-$(CONFIG_SW_SYNC)		+= sw_sync.o sync_debug.o

diff --git a/drivers/dma-buf/heaps/Makefile b/drivers/dma-buf/heaps/Makefile
new file mode 100644
index 0000000..de49898
--- /dev/null
+++ b/drivers/dma-buf/heaps/Makefile
@@ -0,0 +1,2 @@
+# SPDX-License-Identifier: GPL-2.0
+obj-y += heap-helpers.o

diff --git a/drivers/dma-buf/heaps/heap-helpers.c b/drivers/dma-buf/heaps/heap-helpers.c
new file mode 100644
index 0000000..ae5e9d0
--- /dev/null
+++ b/drivers/dma-buf/heaps/heap-helpers.c
@@ -0,0 +1,335 @@
+// SPDX-License-Identifier: GPL-2.0
+#include
+#include
+#include
+#include
+#include
+#include
+#include
+#include
+
+#include "heap-helpers.h"
+
+
+static void *dma_heap_map_kernel(struct heap_helper_buffer *buffer)
+{
+	struct scatterlist *sg;
+	int i, j;
+	void *vaddr;
+	pgprot_t pgprot;
+	struct sg_table *table = buffer->sg_table;
+	int npages = PAGE_ALIGN(buffer->heap_buffer.size) / PAGE_SIZE;
+	struct page **pages = vmalloc(array_size(npages,
+						 sizeof(struct page *)));
+	struct page **tmp = pages;
+
+	if (!pages)
+		return ERR_PTR(-ENOMEM);
+
+	pgprot = PAGE_KERNEL;
+
+	for_each_sg(table->sgl, sg, table->nents, i) {
+		int npages_this_entry = PAGE_ALIGN(sg->length) / PAGE_SIZE;
+		struct page *page = sg_page(sg);
+
+		WARN_ON(i >= npages);
+		for (j = 0; j < npages_this_entry; j++)
+			*(tmp++) = page++;
+	}
+	vaddr = vmap(pages, npages, VM_MAP, pgprot);
+	vfree(pages);
+
+	if (!vaddr)
+		return ERR_PTR(-ENOMEM);
+
+	return vaddr;
+}
+
+static int dma_heap_map_user(struct heap_helper_buffer *buffer,
+			     struct vm_area_struct *vma)
+{
+	struct sg_table *table = buffer->sg_table;
+	unsigned long addr = vma->vm_start;
+	unsigned long offset = vma->vm_pgoff * PAGE_SIZE;
+	struct scatterlist *sg;
+	int i;
+	int ret;
+
+	for_each_sg(table->sgl, sg, table->nents, i) {
+		struct page *page = sg_page(sg);
+		unsigned long remainder = vma->vm_end - addr;
+		unsigned long len = sg->length;
+
+		if (offset >= sg->length) {
+			offset -= sg->length;
+			continue;
+		} else if (offset) {
+			page += offset / PAGE_SIZE;
+			len = sg->length - offset;
+			offset = 0;
+		}
+		len = min(len, remainder);
+		ret = remap_pfn_range(vma, addr, page_to_pfn(page), len,
+				      vma->vm_page_prot);
+		if (ret)
+			return ret;
+		addr += len;
+		if (addr >= vma->vm_end)
+			return 0;
+	}
+
+	return 0;
+}
+
+
+void dma_heap_buffer_destroy(struct dma_heap_buffer *heap_buffer)
+{
+	struct heap_helper_buffer *buffer = to_helper_buffer(heap_buffer);
+
+	if (buffer->kmap_cnt > 0) {
+		pr_warn_once("%s: buffer still mapped in the kernel\n",
+			     __func__);
+		vunmap(buffer->vaddr);
+	}
+
+	buffer->free(buffer);
+}
+
+static void *dma_heap_buffer_kmap_get(struct dma_heap_buffer *heap_buffer)
+{
+	struct heap_helper_buffer *buffer = to_helper_buffer(heap_buffer);
+	void *vaddr;
+
+	if (buffer->kmap_cnt) {
+		buffer->kmap_cnt++;
+		return buffer->vaddr;
+	}
+	vaddr = dma_heap_map_kernel(buffer);
+	if (WARN_ONCE(!vaddr,
+		      "heap->ops->map_kernel should return ERR_PTR on error"))
+		return ERR_PTR(-EINVAL);
+	if (IS_ERR(vaddr))
+		return vaddr;
+	buffer->vaddr = vaddr;
+	buffer->kmap_cnt++;
+	return vaddr;
+}
+
+static void dma_heap_buffer_kmap_put(struct dma_heap_buffer *heap_buffer)
+{
+	struct heap_helper_buffer *buffer = to_helper_buffer(heap_buffer);
+
+	buffer->kmap_cnt--;
+	if (!buffer->kmap_cnt) {
+		vunmap(buffer->vaddr);
+		buffer->vaddr = NULL;
+	}
+}
+
+static struct sg_table *dup_sg_table(struct sg_table *table)
+{
+	struct sg_table *new_table;
+	int ret, i;
+	struct scatterlist *sg, *new_sg;
+
+	new_table = kzalloc(sizeof(*new_table), GFP_KERNEL);
+	if (!new_table)
+		return ERR_PTR(-ENOMEM);
+
+	ret = sg_alloc_table(new_table, table->nents, GFP_KERNEL);
+	if (ret) {
+		kfree(new_table);
+		return ERR_PTR(-ENOMEM);
+	}
+
+	new_sg = new_table->sgl;
+	for_each_sg(table->sgl, sg, table->nents, i) {
+		memcpy(new_sg, sg, sizeof(*sg));
+		new_sg->dma_address = 0;
+		new_sg = sg_next(new_sg);
+	}
+
+	return new_table;
+}
+
+static void free_duped_table(struct sg_table *table)
+{
+	sg_free_table(table);
+	kfree(table);
+}
+
+struct dma_heaps_attachment {
+	struct device *dev;
+	struct sg_table *table;
+	struct list_head list;
+};
+
+static int dma_heap_attach(struct dma_buf *dmabuf,
+			   struct dma_buf_attachment *attachment)
+{
+	struct dma_heaps_attachment *a;
+	struct sg_table *table;
+	struct dma_heap_buffer *heap_buffer = dmabuf->priv;
+	struct heap_helper_buffer *buffer = to_helper_buffer(heap_buffer);
+
+	a = kzalloc(sizeof(*a), GFP_KERNEL);
+	if (!a)
+		return -ENOMEM;
+
+	table = dup_sg_table(buffer->sg_table);
+	if (IS_ERR(table)) {
+		kfree(a);
+		return -ENOMEM;
+	}
+
+	a->table = table;
+	a->dev = attachment->dev;
+	INIT_LIST_HEAD(&a->list);
+
+	attachment->priv = a;
+
+	mutex_lock(&buffer->lock);
+	list_add(&a->list, &buffer->attachments);
+	mutex_unlock(&buffer->lock);
+
+	return 0;
+}
+
+static void dma_heap_detatch(struct dma_buf *dmabuf,
+			     struct dma_buf_attachment *attachment)
+{
+	struct dma_heaps_attachment *a = attachment->priv;
+	struct dma_heap_buffer *heap_buffer = dmabuf->priv;
+	struct heap_helper_buffer *buffer = to_helper_buffer(heap_buffer);
+
+	mutex_lock(&buffer->lock);
+	list_del(&a->list);
+	mutex_unlock(&buffer->lock);
+	free_duped_table(a->table);
+
+	kfree(a);
+}
+
+static struct sg_table *dma_heap_map_dma_buf(
+					struct dma_buf_attachment *attachment,
+					enum dma_data_direction direction)
+{
+	struct dma_heaps_attachment *a = attachment->priv;
+	struct sg_table *table;
+
+	table = a->table;
+
+	if (!dma_map_sg(attachment->dev, table->sgl, table->nents,
+			direction))
+		table = ERR_PTR(-ENOMEM);
+	return table;
+}
+
+static void dma_heap_unmap_dma_buf(struct dma_buf_attachment *attachment,
+				   struct sg_table *table,
+				   enum dma_data_direction direction)
+{
+	dma_unmap_sg(attachment->dev, table->sgl, table->nents, direction);
+}
+
+static int dma_heap_mmap(struct dma_buf *dmabuf, struct vm_area_struct *vma)
+{
+	struct dma_heap_buffer *heap_buffer = dmabuf->priv;
+	struct heap_helper_buffer *buffer = to_helper_buffer(heap_buffer);
+	int ret = 0;
+
+	mutex_lock(&buffer->lock);
+	/* now map it to userspace */
+	ret = dma_heap_map_user(buffer, vma);
+	mutex_unlock(&buffer->lock);
+
+	if (ret)
+		pr_err("%s: failure mapping buffer to userspace\n",
+		       __func__);
+
+	return ret;
+}
+
+static void dma_heap_dma_buf_release(struct dma_buf *dmabuf)
+{
+	struct dma_heap_buffer *buffer = dmabuf->priv;
+
+	dma_heap_buffer_destroy(buffer);
+}
+
+static void *dma_heap_dma_buf_kmap(struct dma_buf *dmabuf,
+				   unsigned long offset)
+{
+	struct dma_heap_buffer *heap_buffer = dmabuf->priv;
+	struct heap_helper_buffer *buffer = to_helper_buffer(heap_buffer);
+
+	return buffer->vaddr + offset * PAGE_SIZE;
+}
+
+static void dma_heap_dma_buf_kunmap(struct dma_buf *dmabuf,
+				    unsigned long offset,
+				    void *ptr)
+{
+}
+
+static int dma_heap_dma_buf_begin_cpu_access(struct dma_buf *dmabuf,
+					     enum dma_data_direction direction)
+{
+	struct dma_heap_buffer *heap_buffer = dmabuf->priv;
+	struct heap_helper_buffer *buffer = to_helper_buffer(heap_buffer);
+	void *vaddr;
+	struct dma_heaps_attachment *a;
+	int ret = 0;
+
+	mutex_lock(&buffer->lock);
+	vaddr = dma_heap_buffer_kmap_get(heap_buffer);
+	if (IS_ERR(vaddr)) {
+		ret = PTR_ERR(vaddr);
+		goto unlock;
+	}
+	mutex_unlock(&buffer->lock);
+
+	mutex_lock(&buffer->lock);
+	list_for_each_entry(a, &buffer->attachments, list) {
+		dma_sync_sg_for_cpu(a->dev, a->table->sgl, a->table->nents,
+				    direction);
+	}
+
+unlock:
+	mutex_unlock(&buffer->lock);
+	return ret;
+}
+
+static int dma_heap_dma_buf_end_cpu_access(struct dma_buf *dmabuf,
+					   enum dma_data_direction direction)
+{
+	struct dma_heap_buffer *heap_buffer = dmabuf->priv;
+	struct heap_helper_buffer *buffer = to_helper_buffer(heap_buffer);
+	struct dma_heaps_attachment *a;
+
+	mutex_lock(&buffer->lock);
+	dma_heap_buffer_kmap_put(heap_buffer);
+	mutex_unlock(&buffer->lock);
+
+	mutex_lock(&buffer->lock);
+	list_for_each_entry(a, &buffer->attachments, list) {
+		dma_sync_sg_for_device(a->dev, a->table->sgl, a->table->nents,
+				       direction);
+	}
+	mutex_unlock(&buffer->lock);
+
+	return 0;
+}
+
+const struct dma_buf_ops heap_helper_ops = {
+	.map_dma_buf = dma_heap_map_dma_buf,
+	.unmap_dma_buf = dma_heap_unmap_dma_buf,
+	.mmap = dma_heap_mmap,
+	.release = dma_heap_dma_buf_release,
+	.attach = dma_heap_attach,
+	.detach = dma_heap_detatch,
+	.begin_cpu_access = dma_heap_dma_buf_begin_cpu_access,
+	.end_cpu_access = dma_heap_dma_buf_end_cpu_access,
+	.map = dma_heap_dma_buf_kmap,
+	.unmap = dma_heap_dma_buf_kunmap,
+};

diff --git a/drivers/dma-buf/heaps/heap-helpers.h b/drivers/dma-buf/heaps/heap-helpers.h
new file mode 100644
index 0000000..0bd8643
--- /dev/null
+++ b/drivers/dma-buf/heaps/heap-helpers.h
@@ -0,0 +1,48 @@
+/* SPDX-License-Identifier: GPL-2.0 */
+/*
+ * DMABUF Heaps helper code
+ *
+ * Copyright (C) 2011 Google, Inc.
+ * Copyright (C) 2019 Linaro Ltd.
+ */ + +#ifndef _HEAP_HELPERS_H +#define _HEAP_HELPERS_H + +#include +#include + +struct heap_helper_buffer { + struct dma_heap_buffer heap_buffer; + + unsigned long private_flags; + void *priv_virt; + struct mutex lock; + int kmap_cnt; + void *vaddr; + struct sg_table *sg_table; + struct list_head attachments; + + void (*free)(struct heap_helper_buffer *buffer); + +}; + +#define to_helper_buffer(x) \ + container_of(x, struct heap_helper_buffer, heap_buffer) + +static inline void INIT_HEAP_HELPER_BUFFER(struct heap_helper_buffer *buffer, + void (*free)(struct heap_helper_buffer *)) +{ + buffer->private_flags = 0; + buffer->priv_virt = NULL; + mutex_init(&buffer->lock); + buffer->kmap_cnt = 0; + buffer->vaddr = NULL; + buffer->sg_table = NULL; + INIT_LIST_HEAD(&buffer->attachments); + buffer->free = free; +} + +extern const struct dma_buf_ops heap_helper_ops; + +#endif /* _HEAP_HELPERS_H */ From patchwork Tue Mar 5 20:54:31 2019 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: John Stultz X-Patchwork-Id: 159713 Delivered-To: patches@linaro.org Received: by 2002:a02:5cc1:0:0:0:0:0 with SMTP id w62csp5410385jad; Tue, 5 Mar 2019 12:54:46 -0800 (PST) X-Received: by 2002:a65:43cc:: with SMTP id n12mr3004344pgp.218.1551819286770; Tue, 05 Mar 2019 12:54:46 -0800 (PST) ARC-Seal: i=1; a=rsa-sha256; t=1551819286; cv=none; d=google.com; s=arc-20160816; b=AJDvOcNZairK8akjzDecBV9BhqSkIjGNDTTWLxjtrEDzZsT9XkOCZt0bbT2v7/hMXZ 7MKw7eTBTQt7zxez4WboOmPXXYZ8vqflzuobnvBnFE1ymPlUdEwtbxwy9gFZvil81DOX kZS62uTjZEgjVTOjGq3zQ3oER4W+zm3VDXPgl3CNIYsrXXmPWMVO2qBlCJ+3e04sYZd1 9rODNCTFJ+lbHZdeA6Zs/+HpWR1YVlMC6MAmqn7yCpyCCe9UENr9763YSzj/CZT+mrbC FqrnDYWVV3wLx6A0boUFD9bKqC6qT1ZjsHg8gb2Tw2kKUGt4QSX/GEvdg39fkd+ejq69 W73Q== ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=google.com; s=arc-20160816; h=references:in-reply-to:message-id:date:subject:cc:to:from :dkim-signature; bh=10sN7326k7j9LMSRAvavufAoxXSQ/sxj9rGttiEYBLI=; 
From: John Stultz To: lkml Cc: John Stultz , Laura Abbott , Benjamin Gaignard , Greg KH , Sumit Semwal , Liam Mark , Brian Starkey , "Andrew F . Davis" , Chenbo Feng , Alistair Strachan , dri-devel@lists.freedesktop.org Subject: [RFC][PATCH 3/5 v2] dma-buf: heaps: Add system heap to dmabuf heaps Date: Tue, 5 Mar 2019 12:54:31 -0800 Message-Id: <1551819273-640-4-git-send-email-john.stultz@linaro.org> X-Mailer: git-send-email 2.7.4 In-Reply-To: <1551819273-640-1-git-send-email-john.stultz@linaro.org> References: <1551819273-640-1-git-send-email-john.stultz@linaro.org> This patch adds a system heap to the dma-buf heaps framework. This allows applications to get a page-allocator backed dma-buf for non-contiguous memory. This code is an evolution of the Android ION implementation, so thanks to its original authors and maintainers: Rebecca Schultz Zavin, Colin Cross, Laura Abbott, and others!
Cc: Laura Abbott Cc: Benjamin Gaignard Cc: Greg KH Cc: Sumit Semwal Cc: Liam Mark Cc: Brian Starkey Cc: Andrew F. Davis Cc: Chenbo Feng Cc: Alistair Strachan Cc: dri-devel@lists.freedesktop.org Signed-off-by: John Stultz --- v2: * Switch allocate to return dmabuf fd * Simplify init code * Checkpatch fixups * Dropped dead system-contig code --- drivers/dma-buf/Kconfig | 2 + drivers/dma-buf/heaps/Kconfig | 6 ++ drivers/dma-buf/heaps/Makefile | 1 + drivers/dma-buf/heaps/system_heap.c | 132 ++++++++++++++++++++++++++++++++++++ 4 files changed, 141 insertions(+) create mode 100644 drivers/dma-buf/heaps/Kconfig create mode 100644 drivers/dma-buf/heaps/system_heap.c -- 2.7.4 diff --git a/drivers/dma-buf/Kconfig b/drivers/dma-buf/Kconfig index 09c61db..63c139d 100644 --- a/drivers/dma-buf/Kconfig +++ b/drivers/dma-buf/Kconfig @@ -47,4 +47,6 @@ menuconfig DMABUF_HEAPS this allows userspace to allocate dma-bufs that can be shared between drivers. +source "drivers/dma-buf/heaps/Kconfig" + endmenu diff --git a/drivers/dma-buf/heaps/Kconfig b/drivers/dma-buf/heaps/Kconfig new file mode 100644 index 0000000..2050527 --- /dev/null +++ b/drivers/dma-buf/heaps/Kconfig @@ -0,0 +1,6 @@ +config DMABUF_HEAPS_SYSTEM + bool "DMA-BUF System Heap" + depends on DMABUF_HEAPS + help + Choose this option to enable the system dmabuf heap. The system heap + is backed by pages from the buddy allocator. If in doubt, say Y.
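The system heap sizes its allocation in whole pages: npages = PAGE_ALIGN(len) / PAGE_SIZE, with one page allocated per scatterlist entry. A minimal userspace re-derivation of that rounding arithmetic (the page_align/npages_for helper names and the 4 KiB page size are illustrative assumptions, not part of the patch):

```c
#include <assert.h>

/* Hypothetical helpers mirroring the kernel's PAGE_ALIGN() arithmetic
 * used by system_heap_allocate(); PAGE_SZ assumes 4 KiB pages. */
#define PAGE_SZ 4096UL

static unsigned long page_align(unsigned long len)
{
	/* Round len up to the next multiple of PAGE_SZ (a power of two). */
	return (len + PAGE_SZ - 1) & ~(PAGE_SZ - 1);
}

static unsigned long npages_for(unsigned long len)
{
	/* One page is allocated per PAGE_SZ chunk of the rounded length. */
	return page_align(len) / PAGE_SZ;
}
```

So a 1 MiB request becomes 256 individually allocated, potentially discontiguous pages, each filling one scatterlist entry.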
diff --git a/drivers/dma-buf/heaps/Makefile b/drivers/dma-buf/heaps/Makefile index de49898..d1808ec 100644 --- a/drivers/dma-buf/heaps/Makefile +++ b/drivers/dma-buf/heaps/Makefile @@ -1,2 +1,3 @@ # SPDX-License-Identifier: GPL-2.0 obj-y += heap-helpers.o +obj-$(CONFIG_DMABUF_HEAPS_SYSTEM) += system_heap.o diff --git a/drivers/dma-buf/heaps/system_heap.c b/drivers/dma-buf/heaps/system_heap.c new file mode 100644 index 0000000..e001661 --- /dev/null +++ b/drivers/dma-buf/heaps/system_heap.c @@ -0,0 +1,132 @@ +// SPDX-License-Identifier: GPL-2.0 +/* + * DMABUF System heap exporter + * + * Copyright (C) 2011 Google, Inc. + * Copyright (C) 2019 Linaro Ltd. + */ + +#include +#include +#include +#include +#include +#include +#include +#include +#include + +#include "heap-helpers.h" + + +struct system_heap { + struct dma_heap heap; +}; + + +static void system_heap_free(struct heap_helper_buffer *buffer) +{ + int i; + struct scatterlist *sg; + struct sg_table *table = buffer->sg_table; + + for_each_sg(table->sgl, sg, table->nents, i) + __free_page(sg_page(sg)); + + sg_free_table(table); + kfree(table); + kfree(buffer); +} + +static int system_heap_allocate(struct dma_heap *heap, + unsigned long len, + unsigned long flags) +{ + struct heap_helper_buffer *helper_buffer; + struct sg_table *table; + struct scatterlist *sg; + int i, j; + int npages = PAGE_ALIGN(len) / PAGE_SIZE; + DEFINE_DMA_BUF_EXPORT_INFO(exp_info); + struct dma_buf *dmabuf; + int ret = -ENOMEM; + + helper_buffer = kzalloc(sizeof(*helper_buffer), GFP_KERNEL); + if (!helper_buffer) + return -ENOMEM; + + INIT_HEAP_HELPER_BUFFER(helper_buffer, system_heap_free); + helper_buffer->heap_buffer.flags = flags; + helper_buffer->heap_buffer.heap = heap; + helper_buffer->heap_buffer.size = len; + + table = kmalloc(sizeof(struct sg_table), GFP_KERNEL); + if (!table) + goto err0; + + i = sg_alloc_table(table, npages, GFP_KERNEL); + if (i) + goto err1; + for_each_sg(table->sgl, sg, table->nents, i) { + struct page *page; + 
+ page = alloc_page(GFP_KERNEL); + if (!page) + goto err2; + sg_set_page(sg, page, PAGE_SIZE, 0); + } + + /* create the dmabuf */ + exp_info.ops = &heap_helper_ops; + exp_info.size = len; + exp_info.flags = O_RDWR; + exp_info.priv = &helper_buffer->heap_buffer; + dmabuf = dma_buf_export(&exp_info); + if (IS_ERR(dmabuf)) { + ret = PTR_ERR(dmabuf); + goto err2; + } + + helper_buffer->heap_buffer.dmabuf = dmabuf; + helper_buffer->sg_table = table; + + ret = dma_buf_fd(dmabuf, O_CLOEXEC); + if (ret < 0) { + dma_buf_put(dmabuf); + /* just return, as put will call release and that will free */ + return ret; + } + + return ret; + +err2: + for_each_sg(table->sgl, sg, i, j) + __free_page(sg_page(sg)); + sg_free_table(table); +err1: + kfree(table); +err0: + kfree(helper_buffer); + return -ENOMEM; +} + + +static struct dma_heap_ops system_heap_ops = { + .allocate = system_heap_allocate, +}; + +static int system_heap_create(void) +{ + struct system_heap *sys_heap; + + sys_heap = kzalloc(sizeof(*sys_heap), GFP_KERNEL); + if (!sys_heap) + return -ENOMEM; + sys_heap->heap.name = "system_heap"; + sys_heap->heap.ops = &system_heap_ops; + + dma_heap_add(&sys_heap->heap); + + return 0; +} +device_initcall(system_heap_create); From patchwork Tue Mar 5 20:54:32 2019
From: John Stultz To: lkml Cc: John Stultz , Laura Abbott , Benjamin Gaignard , Greg KH , Sumit Semwal , Liam Mark , Brian Starkey , "Andrew F . Davis" , Chenbo Feng , Alistair Strachan , dri-devel@lists.freedesktop.org Subject: [RFC][PATCH 4/5 v2] dma-buf: heaps: Add CMA heap to dmabuf heaps Date: Tue, 5 Mar 2019 12:54:32 -0800 Message-Id: <1551819273-640-5-git-send-email-john.stultz@linaro.org> X-Mailer: git-send-email 2.7.4 In-Reply-To: <1551819273-640-1-git-send-email-john.stultz@linaro.org> References: <1551819273-640-1-git-send-email-john.stultz@linaro.org> This adds a CMA heap, which allows userspace to allocate a dma-buf of contiguous memory out of a CMA region. This code is an evolution of the Android ION implementation, so thanks to its original author and maintainers: Benjamin Gaignard, Laura Abbott, and others! Cc: Laura Abbott Cc: Benjamin Gaignard Cc: Greg KH Cc: Sumit Semwal Cc: Liam Mark Cc: Brian Starkey Cc: Andrew F.
Davis Cc: Chenbo Feng Cc: Alistair Strachan Cc: dri-devel@lists.freedesktop.org Signed-off-by: John Stultz --- v2: * Switch allocate to return dmabuf fd * Simplify init code * Checkpatch fixups --- drivers/dma-buf/heaps/Kconfig | 8 ++ drivers/dma-buf/heaps/Makefile | 1 + drivers/dma-buf/heaps/cma_heap.c | 164 +++++++++++++++++++++++++++++++++++++++ 3 files changed, 173 insertions(+) create mode 100644 drivers/dma-buf/heaps/cma_heap.c -- 2.7.4 diff --git a/drivers/dma-buf/heaps/Kconfig b/drivers/dma-buf/heaps/Kconfig index 2050527..a5eef06 100644 --- a/drivers/dma-buf/heaps/Kconfig +++ b/drivers/dma-buf/heaps/Kconfig @@ -4,3 +4,11 @@ config DMABUF_HEAPS_SYSTEM help Choose this option to enable the system dmabuf heap. The system heap is backed by pages from the buddy allocator. If in doubt, say Y. + +config DMABUF_HEAPS_CMA + bool "DMA-BUF CMA Heap" + depends on DMABUF_HEAPS && DMA_CMA + help + Choose this option to enable the dma-buf CMA heap. This heap is backed + by the Contiguous Memory Allocator (CMA). If your system has these + regions, you should say Y here. diff --git a/drivers/dma-buf/heaps/Makefile b/drivers/dma-buf/heaps/Makefile index d1808ec..6e54cde 100644 --- a/drivers/dma-buf/heaps/Makefile +++ b/drivers/dma-buf/heaps/Makefile @@ -1,3 +1,4 @@ # SPDX-License-Identifier: GPL-2.0 obj-y += heap-helpers.o obj-$(CONFIG_DMABUF_HEAPS_SYSTEM) += system_heap.o +obj-$(CONFIG_DMABUF_HEAPS_CMA) += cma_heap.o diff --git a/drivers/dma-buf/heaps/cma_heap.c b/drivers/dma-buf/heaps/cma_heap.c new file mode 100644 index 0000000..33c18ec --- /dev/null +++ b/drivers/dma-buf/heaps/cma_heap.c @@ -0,0 +1,164 @@ +// SPDX-License-Identifier: GPL-2.0 +/* + * DMABUF CMA heap exporter + * + * Copyright (C) 2012, 2019 Linaro Ltd. + * Author: Benjamin Gaignard for ST-Ericsson.
+ */ + +#include +#include +#include +#include +#include +#include +#include +#include +#include + +#include "heap-helpers.h" + +struct cma_heap { + struct dma_heap heap; + struct cma *cma; +}; + + +#define to_cma_heap(x) container_of(x, struct cma_heap, heap) + + +static void cma_heap_free(struct heap_helper_buffer *buffer) +{ + struct cma_heap *cma_heap = to_cma_heap(buffer->heap_buffer.heap); + struct page *pages = buffer->priv_virt; + unsigned long nr_pages; + + nr_pages = PAGE_ALIGN(buffer->heap_buffer.size) >> PAGE_SHIFT; + + /* release memory */ + cma_release(cma_heap->cma, pages, nr_pages); + /* release sg table */ + sg_free_table(buffer->sg_table); + kfree(buffer->sg_table); + kfree(buffer); +} + +/* dmabuf heap CMA operations functions */ +static int cma_heap_allocate(struct dma_heap *heap, + unsigned long len, + unsigned long flags) +{ + struct cma_heap *cma_heap = to_cma_heap(heap); + struct heap_helper_buffer *helper_buffer; + struct sg_table *table; + struct page *pages; + size_t size = PAGE_ALIGN(len); + unsigned long nr_pages = size >> PAGE_SHIFT; + unsigned long align = get_order(size); + DEFINE_DMA_BUF_EXPORT_INFO(exp_info); + struct dma_buf *dmabuf; + int ret = -ENOMEM; + + if (align > CONFIG_CMA_ALIGNMENT) + align = CONFIG_CMA_ALIGNMENT; + + helper_buffer = kzalloc(sizeof(*helper_buffer), GFP_KERNEL); + if (!helper_buffer) + return -ENOMEM; + + INIT_HEAP_HELPER_BUFFER(helper_buffer, cma_heap_free); + helper_buffer->heap_buffer.flags = flags; + helper_buffer->heap_buffer.heap = heap; + helper_buffer->heap_buffer.size = len; + + + pages = cma_alloc(cma_heap->cma, nr_pages, align, false); + if (!pages) + goto free_buf; + + if (PageHighMem(pages)) { + unsigned long nr_clear_pages = nr_pages; + struct page *page = pages; + + while (nr_clear_pages > 0) { + void *vaddr = kmap_atomic(page); + + memset(vaddr, 0, PAGE_SIZE); + kunmap_atomic(vaddr); + page++; + nr_clear_pages--; + } + } else { + memset(page_address(pages), 0, size); + } + + table = 
kmalloc(sizeof(*table), GFP_KERNEL); + if (!table) + goto free_cma; + + ret = sg_alloc_table(table, 1, GFP_KERNEL); + if (ret) + goto free_table; + + sg_set_page(table->sgl, pages, size, 0); + + /* create the dmabuf */ + exp_info.ops = &heap_helper_ops; + exp_info.size = len; + exp_info.flags = O_RDWR; + exp_info.priv = &helper_buffer->heap_buffer; + dmabuf = dma_buf_export(&exp_info); + if (IS_ERR(dmabuf)) { + ret = PTR_ERR(dmabuf); + goto free_table; + } + + helper_buffer->heap_buffer.dmabuf = dmabuf; + helper_buffer->priv_virt = pages; + helper_buffer->sg_table = table; + + ret = dma_buf_fd(dmabuf, O_CLOEXEC); + if (ret < 0) { + dma_buf_put(dmabuf); + /* just return, as put will call release and that will free */ + return ret; + } + + return ret; +free_table: + kfree(table); +free_cma: + cma_release(cma_heap->cma, pages, nr_pages); +free_buf: + kfree(helper_buffer); + return ret; +} + +static struct dma_heap_ops cma_heap_ops = { + .allocate = cma_heap_allocate, +}; + +static int __add_cma_heaps(struct cma *cma, void *data) +{ + struct cma_heap *cma_heap; + + cma_heap = kzalloc(sizeof(*cma_heap), GFP_KERNEL); + + if (!cma_heap) + return -ENOMEM; + + cma_heap->heap.name = cma_get_name(cma); + cma_heap->heap.ops = &cma_heap_ops; + cma_heap->cma = cma; + + dma_heap_add(&cma_heap->heap); + + return 0; +} + +static int add_cma_heaps(void) +{ + cma_for_each_area(__add_cma_heaps, NULL); + return 0; +} +device_initcall(add_cma_heaps); From patchwork Tue Mar 5 20:54:33 2019
From: John Stultz To: lkml Cc: John Stultz , Laura Abbott , Benjamin Gaignard , Greg KH , Sumit Semwal , Liam Mark , Brian Starkey , "Andrew F . Davis" , Chenbo Feng , Alistair Strachan , dri-devel@lists.freedesktop.org Subject: [RFC][PATCH 5/5 v2] kselftests: Add dma-heap test Date: Tue, 5 Mar 2019 12:54:33 -0800 Message-Id: <1551819273-640-6-git-send-email-john.stultz@linaro.org> X-Mailer: git-send-email 2.7.4 In-Reply-To: <1551819273-640-1-git-send-email-john.stultz@linaro.org> References: <1551819273-640-1-git-send-email-john.stultz@linaro.org> Add a very trivial allocation test for dma-heaps. TODO: Need to actually do some validation on the returned dma-buf. Cc: Laura Abbott Cc: Benjamin Gaignard Cc: Greg KH Cc: Sumit Semwal Cc: Liam Mark Cc: Brian Starkey Cc: Andrew F.
Davis Cc: Chenbo Feng Cc: Alistair Strachan Cc: dri-devel@lists.freedesktop.org Signed-off-by: John Stultz --- v2: Switched to use reworked dma-heap apis --- tools/testing/selftests/dmabuf-heaps/Makefile | 11 +++ tools/testing/selftests/dmabuf-heaps/dmabuf-heap.c | 96 ++++++++++++++++++++++ 2 files changed, 107 insertions(+) create mode 100644 tools/testing/selftests/dmabuf-heaps/Makefile create mode 100644 tools/testing/selftests/dmabuf-heaps/dmabuf-heap.c -- 2.7.4 diff --git a/tools/testing/selftests/dmabuf-heaps/Makefile b/tools/testing/selftests/dmabuf-heaps/Makefile new file mode 100644 index 0000000..c414ad3 --- /dev/null +++ b/tools/testing/selftests/dmabuf-heaps/Makefile @@ -0,0 +1,11 @@ +# SPDX-License-Identifier: GPL-2.0 +CFLAGS += -static -O3 -Wl,-no-as-needed -Wall +#LDLIBS += -lrt -lpthread -lm + +# these are all "safe" tests that don't modify +# system time or require escalated privileges +TEST_GEN_PROGS = dmabuf-heap + + +include ../lib.mk + diff --git a/tools/testing/selftests/dmabuf-heaps/dmabuf-heap.c b/tools/testing/selftests/dmabuf-heaps/dmabuf-heap.c new file mode 100644 index 0000000..06837a4 --- /dev/null +++ b/tools/testing/selftests/dmabuf-heaps/dmabuf-heap.c @@ -0,0 +1,96 @@ +// SPDX-License-Identifier: GPL-2.0 + +#include +#include +#include +#include +#include +#include +#include +#include +#include + +#include "../../../../include/uapi/linux/dma-heap.h" + +#define DEVPATH "/dev/dma_heap" + +int dmabuf_heap_open(char *name) +{ + int ret, fd; + char buf[256]; + + ret = sprintf(buf, "%s/%s", DEVPATH, name); + if (ret < 0) { + printf("sprintf failed!\n"); + return ret; + } + + fd = open(buf, O_RDWR); + if (fd < 0) + printf("open %s failed!\n", buf); + return fd; +} + +int dmabuf_heap_alloc(int fd, size_t len, unsigned int flags, int *dmabuf_fd) +{ + struct dma_heap_allocation_data data = { + .len = len, + .flags = flags, + }; + int ret; + + if (dmabuf_fd == NULL) + return -EINVAL; + + ret = ioctl(fd, DMA_HEAP_IOC_ALLOC, &data); + if (ret < 
0) + return ret; + *dmabuf_fd = (int)data.fd; + return ret; +} + +#define ONE_MEG (1024*1024) + +void do_test(char *heap_name) +{ + int heap_fd = -1, dmabuf_fd = -1; + int ret; + + printf("Testing heap: %s\n", heap_name); + + heap_fd = dmabuf_heap_open(heap_name); + if (heap_fd < 0) + return; + + printf("Allocating 1 MEG\n"); + ret = dmabuf_heap_alloc(heap_fd, ONE_MEG, 0, &dmabuf_fd); + if (ret) + goto out; + + /* DO SOMETHING WITH THE DMABUF HERE? */ + +out: + if (dmabuf_fd >= 0) + close(dmabuf_fd); + if (heap_fd >= 0) + close(heap_fd); +} + + +int main(void) +{ + DIR *d; + struct dirent *dir; + + d = opendir(DEVPATH); + if (!d) { + printf("No %s directory?\n", DEVPATH); + return -1; + } + + while ((dir = readdir(d)) != NULL) + do_test(dir->d_name); + + + return 0; +}
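The selftest leaves a "/* DO SOMETHING WITH THE DMABUF HERE? */" placeholder, and the commit message notes that validation of the returned dma-buf is still TODO. One possible validation is to map the buffer and check that a written byte pattern reads back intact. A sketch under assumptions: the check_pattern helper is hypothetical, and in do_test() the pointer would come from mmap(NULL, len, PROT_READ | PROT_WRITE, MAP_SHARED, dmabuf_fd, 0) rather than an ordinary heap allocation:

```c
#include <assert.h>
#include <stddef.h>
#include <stdlib.h>

/* Hypothetical validation helper: fill a buffer with a byte pattern and
 * verify it reads back intact. Demonstrated here against an ordinary
 * allocation; in the selftest the pointer would be an mmap() of the
 * dmabuf fd returned by DMA_HEAP_IOC_ALLOC. */
static int check_pattern(unsigned char *buf, size_t len)
{
	size_t i;

	for (i = 0; i < len; i++)
		buf[i] = (unsigned char)(i & 0xff);
	for (i = 0; i < len; i++)
		if (buf[i] != (unsigned char)(i & 0xff))
			return -1;
	return 0;
}
```

A nonzero return would indicate the mapping is not coherently readable and writable from the CPU, which is the minimum a heap-allocated dma-buf should provide.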