From patchwork Thu Feb 21 07:40:27 2019
X-Patchwork-Submitter: John Stultz
X-Patchwork-Id: 158871
From: John Stultz
To: Laura Abbott
Cc: John Stultz, Benjamin Gaignard, Sumit Semwal, Liam Mark, Brian Starkey,
    "Andrew F. Davis", Chenbo Feng, Alistair Strachan, dri-devel@lists.freedesktop.org
Subject: [EARLY RFC][PATCH 1/4] dma-buf: Add dma-buf pools framework
Date: Wed, 20 Feb 2019 23:40:27 -0800
Message-Id: <1550734830-23499-2-git-send-email-john.stultz@linaro.org>
X-Mailer: git-send-email 2.7.4
In-Reply-To: <1550734830-23499-1-git-send-email-john.stultz@linaro.org>
References: <1550734830-23499-1-git-send-email-john.stultz@linaro.org>

This patch introduces the dma-buf pools framework. The framework allows
different pool implementations to be created, which act as dma-buf
exporters, allowing userland to allocate specific types of memory for use
in dma-buf sharing.

This resembles the Android ION framework in that it takes that code and
renames most of the variables. (Following Rafael Wysocki's earlier theory
that the "easiest way to sell a once rejected feature is to advertise it
under a different name" :) However, the API has been greatly simplified
compared to ION.

This patchset extends some of my (and Benjamin's) earlier work with the
ION per-heap device nodes.

Each pool (previously a "heap" in ION) has its own device node, and one
can allocate from it using the DMABUF_POOL_IOC_ALLOC ioctl, which is very
similar to the ION_IOC_ALLOC call. There is no equivalent of the
ION_IOC_HEAP_QUERY interface, as the pools all have their own device
nodes. Additionally, any unused code from ION was removed.

NOTE: Reworked the per-pool devices to create a proper class so Android
can have a nice /dev/dmabuf_pools/ directory. It's working, but I'm not
sure I got it right, as it's much more complex than just using a
miscdevice. Extra review would be helpful.
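For illustration, a minimal userspace sketch of that allocation flow might
look as follows. This is only a sketch: the pool device name ("system"
under /dev/dmabuf_pools/) is an assumption for the example, and it relies
only on the DMABUF_POOL_IOC_ALLOC ioctl, DMABUF_POOL_FLAG_CACHED and
struct dmabuf_pool_allocation_data added by this patch (assuming the UAPI
header installs as <linux/dmabuf-pools.h>):

/* Hypothetical userspace sketch -- not part of this patch. */
#include <fcntl.h>
#include <stdio.h>
#include <string.h>
#include <sys/ioctl.h>
#include <unistd.h>
#include <linux/dmabuf-pools.h>

int main(void)
{
	struct dmabuf_pool_allocation_data alloc_data;
	int pool_fd, dmabuf_fd;

	/* Assumes a pool named "system" has been registered by the kernel. */
	pool_fd = open("/dev/dmabuf_pools/system", O_RDONLY);
	if (pool_fd < 0) {
		perror("open pool");
		return 1;
	}

	/* reserved0/reserved1 must be zero, so clear the whole struct. */
	memset(&alloc_data, 0, sizeof(alloc_data));
	alloc_data.len = 4096;
	alloc_data.flags = DMABUF_POOL_FLAG_CACHED;

	if (ioctl(pool_fd, DMABUF_POOL_IOC_ALLOC, &alloc_data) < 0) {
		perror("DMABUF_POOL_IOC_ALLOC");
		close(pool_fd);
		return 1;
	}

	/*
	 * alloc_data.fd now holds a dma-buf fd that can be mmap()ed or
	 * handed to other drivers through their own import interfaces.
	 */
	dmabuf_fd = alloc_data.fd;
	close(dmabuf_fd);
	close(pool_fd);
	return 0;
}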
Cc: Laura Abbott
Cc: Benjamin Gaignard
Cc: Sumit Semwal
Cc: Liam Mark
Cc: Brian Starkey
Cc: Andrew F. Davis
Cc: Chenbo Feng
Cc: Alistair Strachan
Cc: dri-devel@lists.freedesktop.org
Signed-off-by: John Stultz
---
 MAINTAINERS                          |  13 +
 drivers/dma-buf/Kconfig              |   2 +
 drivers/dma-buf/Makefile             |   1 +
 drivers/dma-buf/pools/Kconfig        |  10 +
 drivers/dma-buf/pools/Makefile       |   2 +
 drivers/dma-buf/pools/dmabuf-pools.c | 670 +++++++++++++++++++++++++++++++++++
 drivers/dma-buf/pools/dmabuf-pools.h | 244 +++++++++++++
 drivers/dma-buf/pools/pool-helpers.c | 317 +++++++++++++++++
 drivers/dma-buf/pools/pool-ioctl.c   |  94 +++++
 include/uapi/linux/dmabuf-pools.h    |  59 +++
 10 files changed, 1412 insertions(+)
 create mode 100644 drivers/dma-buf/pools/Kconfig
 create mode 100644 drivers/dma-buf/pools/Makefile
 create mode 100644 drivers/dma-buf/pools/dmabuf-pools.c
 create mode 100644 drivers/dma-buf/pools/dmabuf-pools.h
 create mode 100644 drivers/dma-buf/pools/pool-helpers.c
 create mode 100644 drivers/dma-buf/pools/pool-ioctl.c
 create mode 100644 include/uapi/linux/dmabuf-pools.h

-- 
2.7.4

diff --git a/MAINTAINERS b/MAINTAINERS
index 8e798ce..6bc3ab0 100644
--- a/MAINTAINERS
+++ b/MAINTAINERS
@@ -4614,6 +4614,19 @@ F:	include/linux/*fence.h
 F:	Documentation/driver-api/dma-buf.rst
 T:	git git://anongit.freedesktop.org/drm/drm-misc
 
+DMA-BUF POOLS FRAMEWORK
+M:	Laura Abbott
+R:	Liam Mark
+R:	Brian Starkey
+R:	"Andrew F. Davis"
+R:	John Stultz
+S:	Maintained
+L:	linux-media@vger.kernel.org
+L:	dri-devel@lists.freedesktop.org
+L:	linaro-mm-sig@lists.linaro.org (moderated for non-subscribers)
+F:	drivers/dma-buf/pools/*
+T:	git git://anongit.freedesktop.org/drm/drm-misc
+
 DMA GENERIC OFFLOAD ENGINE SUBSYSTEM
 M:	Vinod Koul
 L:	dmaengine@vger.kernel.org
diff --git a/drivers/dma-buf/Kconfig b/drivers/dma-buf/Kconfig
index 2e5a0fa..c746510 100644
--- a/drivers/dma-buf/Kconfig
+++ b/drivers/dma-buf/Kconfig
@@ -39,4 +39,6 @@ config UDMABUF
	  A driver to let userspace turn memfd regions into dma-bufs.
	  Qemu can use this to create host dmabufs for guest framebuffers.
+source "drivers/dma-buf/pools/Kconfig" + endmenu diff --git a/drivers/dma-buf/Makefile b/drivers/dma-buf/Makefile index 0913a6c..9c295df 100644 --- a/drivers/dma-buf/Makefile +++ b/drivers/dma-buf/Makefile @@ -1,4 +1,5 @@ obj-y := dma-buf.o dma-fence.o dma-fence-array.o reservation.o seqno-fence.o +obj-y += pools/ obj-$(CONFIG_SYNC_FILE) += sync_file.o obj-$(CONFIG_SW_SYNC) += sw_sync.o sync_debug.o obj-$(CONFIG_UDMABUF) += udmabuf.o diff --git a/drivers/dma-buf/pools/Kconfig b/drivers/dma-buf/pools/Kconfig new file mode 100644 index 0000000..caa7eb8 --- /dev/null +++ b/drivers/dma-buf/pools/Kconfig @@ -0,0 +1,10 @@ +menuconfig DMABUF_POOLS + bool "DMA-BUF Userland Memory Pools" + depends on HAS_DMA && MMU + select GENERIC_ALLOCATOR + select DMA_SHARED_BUFFER + help + Choose this option to enable the DMA-BUF userland, memory pools, + which allow userspace to allocate dma-bufs that can be shared between + drivers. + If you're not using Android its probably safe to say N here. diff --git a/drivers/dma-buf/pools/Makefile b/drivers/dma-buf/pools/Makefile new file mode 100644 index 0000000..6cb1284 --- /dev/null +++ b/drivers/dma-buf/pools/Makefile @@ -0,0 +1,2 @@ +# SPDX-License-Identifier: GPL-2.0 +obj-$(CONFIG_DMABUF_POOLS) += dmabuf-pools.o pool-ioctl.o pool-helpers.o diff --git a/drivers/dma-buf/pools/dmabuf-pools.c b/drivers/dma-buf/pools/dmabuf-pools.c new file mode 100644 index 0000000..706b0eb --- /dev/null +++ b/drivers/dma-buf/pools/dmabuf-pools.c @@ -0,0 +1,670 @@ +// SPDX-License-Identifier: GPL-2.0 +/* + * drivers/dma-buf/pools/dmabuf-pools.c + * + * Copyright (C) 2011 Google, Inc. + * Copyright (C) 2019 Linaro Ltd. + */ + +#include +#include +#include +#include +#include +#include +#include +#include +#include +#include +#include +#include +#include +#include +#include +#include +#include +#include +#include +#include +#include + +#include "dmabuf-pools.h" + +#define DEVNAME "dmabuf_pools" + +#define NUM_POOL_MINORS 128 +static DEFINE_IDR(dmabuf_pool_idr); +static DEFINE_MUTEX(minor_lock); /* Protect idr accesses */ + +struct dmabuf_pool_device { + struct rw_semaphore lock; + struct plist_head pools; + struct dentry *debug_root; + dev_t device_devt; + struct class *pool_class; +}; + +static struct dmabuf_pool_device *internal_dev; +static int pool_id; + +/* this function should only be called while dev->lock is held */ +static struct dmabuf_pool_buffer *dmabuf_pool_buffer_create( + struct dmabuf_pool *pool, + struct dmabuf_pool_device *dev, + unsigned long len, + unsigned long flags) +{ + struct dmabuf_pool_buffer *buffer; + int ret; + + buffer = kzalloc(sizeof(*buffer), GFP_KERNEL); + if (!buffer) + return ERR_PTR(-ENOMEM); + + buffer->pool = pool; + buffer->flags = flags; + buffer->size = len; + + ret = pool->ops->allocate(pool, buffer, len, flags); + + if (ret) { + if (!(pool->flags & DMABUF_POOL_FLAG_DEFER_FREE)) + goto err2; + + dmabuf_pool_freelist_drain(pool, 0); + ret = pool->ops->allocate(pool, buffer, len, flags); + if (ret) + goto err2; + } + + if (!buffer->sg_table) { + WARN_ONCE(1, "This pool needs to set the sgtable"); + ret = -EINVAL; + goto err1; + } + + spin_lock(&pool->stat_lock); + pool->num_of_buffers++; + pool->num_of_alloc_bytes += len; + if (pool->num_of_alloc_bytes > pool->alloc_bytes_wm) + pool->alloc_bytes_wm = pool->num_of_alloc_bytes; + spin_unlock(&pool->stat_lock); + + INIT_LIST_HEAD(&buffer->attachments); + mutex_init(&buffer->lock); + return buffer; + +err1: + pool->ops->free(buffer); +err2: + kfree(buffer); + return ERR_PTR(ret); +} + +void 
dmabuf_pool_buffer_destroy(struct dmabuf_pool_buffer *buffer) +{ + if (buffer->kmap_cnt > 0) { + pr_warn_once("%s: buffer still mapped in the kernel\n", + __func__); + buffer->pool->ops->unmap_kernel(buffer->pool, buffer); + } + buffer->pool->ops->free(buffer); + spin_lock(&buffer->pool->stat_lock); + buffer->pool->num_of_buffers--; + buffer->pool->num_of_alloc_bytes -= buffer->size; + spin_unlock(&buffer->pool->stat_lock); + + kfree(buffer); +} + +static void _dmabuf_pool_buffer_destroy(struct dmabuf_pool_buffer *buffer) +{ + struct dmabuf_pool *pool = buffer->pool; + + if (pool->flags & DMABUF_POOL_FLAG_DEFER_FREE) + dmabuf_pool_freelist_add(pool, buffer); + else + dmabuf_pool_buffer_destroy(buffer); +} + +static void *dmabuf_pool_buffer_kmap_get(struct dmabuf_pool_buffer *buffer) +{ + void *vaddr; + + if (buffer->kmap_cnt) { + buffer->kmap_cnt++; + return buffer->vaddr; + } + vaddr = buffer->pool->ops->map_kernel(buffer->pool, buffer); + if (WARN_ONCE(!vaddr, + "pool->ops->map_kernel should return ERR_PTR on error")) + return ERR_PTR(-EINVAL); + if (IS_ERR(vaddr)) + return vaddr; + buffer->vaddr = vaddr; + buffer->kmap_cnt++; + return vaddr; +} + +static void dmabuf_pool_buffer_kmap_put(struct dmabuf_pool_buffer *buffer) +{ + buffer->kmap_cnt--; + if (!buffer->kmap_cnt) { + buffer->pool->ops->unmap_kernel(buffer->pool, buffer); + buffer->vaddr = NULL; + } +} + +static struct sg_table *dup_sg_table(struct sg_table *table) +{ + struct sg_table *new_table; + int ret, i; + struct scatterlist *sg, *new_sg; + + new_table = kzalloc(sizeof(*new_table), GFP_KERNEL); + if (!new_table) + return ERR_PTR(-ENOMEM); + + ret = sg_alloc_table(new_table, table->nents, GFP_KERNEL); + if (ret) { + kfree(new_table); + return ERR_PTR(-ENOMEM); + } + + new_sg = new_table->sgl; + for_each_sg(table->sgl, sg, table->nents, i) { + memcpy(new_sg, sg, sizeof(*sg)); + new_sg->dma_address = 0; + new_sg = sg_next(new_sg); + } + + return new_table; +} + +static void free_duped_table(struct sg_table *table) +{ + sg_free_table(table); + kfree(table); +} + +struct dmabuf_pools_attachment { + struct device *dev; + struct sg_table *table; + struct list_head list; + enum dma_data_direction dir; +}; + +static int dmabuf_pool_attach(struct dma_buf *dmabuf, + struct dma_buf_attachment *attachment) +{ + struct dmabuf_pools_attachment *a; + struct sg_table *table; + struct dmabuf_pool_buffer *buffer = dmabuf->priv; + + a = kzalloc(sizeof(*a), GFP_KERNEL); + if (!a) + return -ENOMEM; + + table = dup_sg_table(buffer->sg_table); + if (IS_ERR(table)) { + kfree(a); + return -ENOMEM; + } + + a->table = table; + a->dev = attachment->dev; + a->dir = DMA_NONE; + INIT_LIST_HEAD(&a->list); + + attachment->priv = a; + + mutex_lock(&buffer->lock); + list_add(&a->list, &buffer->attachments); + mutex_unlock(&buffer->lock); + + return 0; +} + +static void dmabuf_pool_detatch(struct dma_buf *dmabuf, + struct dma_buf_attachment *attachment) +{ + struct dmabuf_pools_attachment *a = attachment->priv; + struct dmabuf_pool_buffer *buffer = dmabuf->priv; + struct sg_table *table; + + if (!a) + return; + + table = a->table; + if (table) { + if (a->dir != DMA_NONE) + dma_unmap_sg(attachment->dev, table->sgl, table->nents, + a->dir); + sg_free_table(table); + } + + mutex_lock(&buffer->lock); + list_del(&a->list); + mutex_unlock(&buffer->lock); + free_duped_table(a->table); + + kfree(a); + attachment->priv = NULL; +} + +static struct sg_table *dmabuf_pool_map_dma_buf( + struct dma_buf_attachment *attachment, + enum dma_data_direction direction) +{ + struct 
dmabuf_pools_attachment *a = attachment->priv; + struct sg_table *table; + + if (WARN_ON(direction == DMA_NONE || !a)) + return ERR_PTR(-EINVAL); + + if (a->dir == direction) + return a->table; + + if (WARN_ON(a->dir != DMA_NONE)) + return ERR_PTR(-EBUSY); + + table = a->table; + if (!IS_ERR(table)) { + if (!dma_map_sg(attachment->dev, table->sgl, table->nents, + direction)) { + table = ERR_PTR(-ENOMEM); + } else { + a->dir = direction; + } + } + return table; +} + +static void dmabuf_pool_unmap_dma_buf(struct dma_buf_attachment *attachment, + struct sg_table *table, + enum dma_data_direction direction) +{ +} + +static int dmabuf_pool_mmap(struct dma_buf *dmabuf, struct vm_area_struct *vma) +{ + struct dmabuf_pool_buffer *buffer = dmabuf->priv; + int ret = 0; + + if (!buffer->pool->ops->map_user) { + pr_err("%s: this pool does not define a method for mapping to userspace\n", + __func__); + return -EINVAL; + } + + if (!(buffer->flags & DMABUF_POOL_FLAG_CACHED)) + vma->vm_page_prot = pgprot_writecombine(vma->vm_page_prot); + + mutex_lock(&buffer->lock); + /* now map it to userspace */ + ret = buffer->pool->ops->map_user(buffer->pool, buffer, vma); + mutex_unlock(&buffer->lock); + + if (ret) + pr_err("%s: failure mapping buffer to userspace\n", + __func__); + + return ret; +} + +static void dmabuf_pool_dma_buf_release(struct dma_buf *dmabuf) +{ + struct dmabuf_pool_buffer *buffer = dmabuf->priv; + + _dmabuf_pool_buffer_destroy(buffer); +} + +static void *dmabuf_pool_dma_buf_kmap(struct dma_buf *dmabuf, + unsigned long offset) +{ + struct dmabuf_pool_buffer *buffer = dmabuf->priv; + + return buffer->vaddr + offset * PAGE_SIZE; +} + +static void dmabuf_pool_dma_buf_kunmap(struct dma_buf *dmabuf, + unsigned long offset, + void *ptr) +{ +} + +static int dmabuf_pool_dma_buf_begin_cpu_access(struct dma_buf *dmabuf, + enum dma_data_direction direction) +{ + struct dmabuf_pool_buffer *buffer = dmabuf->priv; + void *vaddr; + struct dmabuf_pools_attachment *a; + int ret = 0; + + /* + * TODO: Move this elsewhere because we don't always need a vaddr + */ + if (buffer->pool->ops->map_kernel) { + mutex_lock(&buffer->lock); + vaddr = dmabuf_pool_buffer_kmap_get(buffer); + if (IS_ERR(vaddr)) { + ret = PTR_ERR(vaddr); + goto unlock; + } + mutex_unlock(&buffer->lock); + } + + mutex_lock(&buffer->lock); + list_for_each_entry(a, &buffer->attachments, list) { + dma_sync_sg_for_cpu(a->dev, a->table->sgl, a->table->nents, + direction); + } + +unlock: + mutex_unlock(&buffer->lock); + return ret; +} + +static int dmabuf_pool_dma_buf_end_cpu_access(struct dma_buf *dmabuf, + enum dma_data_direction direction) +{ + struct dmabuf_pool_buffer *buffer = dmabuf->priv; + struct dmabuf_pools_attachment *a; + + if (buffer->pool->ops->map_kernel) { + mutex_lock(&buffer->lock); + dmabuf_pool_buffer_kmap_put(buffer); + mutex_unlock(&buffer->lock); + } + + mutex_lock(&buffer->lock); + list_for_each_entry(a, &buffer->attachments, list) { + dma_sync_sg_for_device(a->dev, a->table->sgl, a->table->nents, + direction); + } + mutex_unlock(&buffer->lock); + + return 0; +} + +static const struct dma_buf_ops dma_buf_ops = { + .map_dma_buf = dmabuf_pool_map_dma_buf, + .unmap_dma_buf = dmabuf_pool_unmap_dma_buf, + .mmap = dmabuf_pool_mmap, + .release = dmabuf_pool_dma_buf_release, + .attach = dmabuf_pool_attach, + .detach = dmabuf_pool_detatch, + .begin_cpu_access = dmabuf_pool_dma_buf_begin_cpu_access, + .end_cpu_access = dmabuf_pool_dma_buf_end_cpu_access, + .map = dmabuf_pool_dma_buf_kmap, + .unmap = dmabuf_pool_dma_buf_kunmap, +}; + +int 
dmabuf_pool_alloc(struct dmabuf_pool *pool, size_t len, unsigned int flags) +{ + struct dmabuf_pool_device *dev = internal_dev; + struct dmabuf_pool_buffer *buffer = NULL; + DEFINE_DMA_BUF_EXPORT_INFO(exp_info); + int fd; + struct dma_buf *dmabuf; + + pr_debug("%s: pool: %s len %zu flags %x\n", __func__, + pool->name, len, flags); + + len = PAGE_ALIGN(len); + + if (!len) + return -EINVAL; + + down_read(&dev->lock); + buffer = dmabuf_pool_buffer_create(pool, dev, len, flags); + up_read(&dev->lock); + + if (!buffer) + return -ENODEV; + + if (IS_ERR(buffer)) + return PTR_ERR(buffer); + + exp_info.ops = &dma_buf_ops; + exp_info.size = buffer->size; + exp_info.flags = O_RDWR; + exp_info.priv = buffer; + + dmabuf = dma_buf_export(&exp_info); + if (IS_ERR(dmabuf)) { + _dmabuf_pool_buffer_destroy(buffer); + return PTR_ERR(dmabuf); + } + + fd = dma_buf_fd(dmabuf, O_CLOEXEC); + if (fd < 0) + dma_buf_put(dmabuf); + + return fd; +} + +static int dmabuf_pool_open(struct inode *inode, struct file *filp) +{ + struct dmabuf_pool *pool; + + mutex_lock(&minor_lock); + pool = idr_find(&dmabuf_pool_idr, iminor(inode)); + mutex_unlock(&minor_lock); + if (!pool) { + pr_debug("device: minor %d unknown.\n", iminor(inode)); + return -ENODEV; + } + + /* instance data as context */ + filp->private_data = pool; + nonseekable_open(inode, filp); + + return 0; +} + +static int dmabuf_pool_release(struct inode *inode, struct file *filp) +{ + filp->private_data = NULL; + + return 0; +} + + +static const struct file_operations dmabuf_pool_fops = { + .owner = THIS_MODULE, + .open = dmabuf_pool_open, + .release = dmabuf_pool_release, + .unlocked_ioctl = dmabuf_pool_ioctl, +#ifdef CONFIG_COMPAT + .compat_ioctl = dmabuf_pool_ioctl, +#endif +}; + +static int debug_shrink_set(void *data, u64 val) +{ + struct dmabuf_pool *pool = data; + struct shrink_control sc; + int objs; + + sc.gfp_mask = GFP_HIGHUSER; + sc.nr_to_scan = val; + + if (!val) { + objs = pool->shrinker.count_objects(&pool->shrinker, &sc); + sc.nr_to_scan = objs; + } + + pool->shrinker.scan_objects(&pool->shrinker, &sc); + return 0; +} + +static int debug_shrink_get(void *data, u64 *val) +{ + struct dmabuf_pool *pool = data; + struct shrink_control sc; + int objs; + + sc.gfp_mask = GFP_HIGHUSER; + sc.nr_to_scan = 0; + + objs = pool->shrinker.count_objects(&pool->shrinker, &sc); + *val = objs; + return 0; +} + +DEFINE_SIMPLE_ATTRIBUTE(debug_shrink_fops, debug_shrink_get, + debug_shrink_set, "%llu\n"); + + +static int dmabuf_pool_get_minor(struct dmabuf_pool *pool) +{ + int retval = -ENOMEM; + + mutex_lock(&minor_lock); + retval = idr_alloc(&dmabuf_pool_idr, pool, 0, NUM_POOL_MINORS, + GFP_KERNEL); + if (retval >= 0) { + pool->minor = retval; + retval = 0; + } else if (retval == -ENOSPC) { + printk("%s: Too many dmabuf-pools\n", __func__); + retval = -EINVAL; + } + mutex_unlock(&minor_lock); + return retval; +} + +static void dmabuf_pool_free_minor(struct dmabuf_pool *pool) +{ + mutex_lock(&minor_lock); + idr_remove(&dmabuf_pool_idr, pool->minor); + mutex_unlock(&minor_lock); +} + + +void dmabuf_pool_add(struct dmabuf_pool *pool) +{ + struct dmabuf_pool_device *dev = internal_dev; + int ret; + struct device *dev_ret; + struct dentry *pool_root; + char debug_name[64]; + + if (!pool->ops->allocate || !pool->ops->free) + pr_err("%s: can not add pool with invalid ops struct.\n", + __func__); + + /* determ minor number */ + ret = dmabuf_pool_get_minor(pool); + if (ret) { + printk("%s: get minor number failed", __func__); + return; + } + + /* create device */ + 
pool->pool_devt = MKDEV(MAJOR(dev->device_devt), pool->minor); + dev_ret = device_create(dev->pool_class, + NULL, + pool->pool_devt, + NULL, + pool->name); + if (IS_ERR(dev_ret)) { + pr_err("dmabuf-pools: failed to create char device.\n"); + return; + } + + cdev_init(&pool->pool_dev, &dmabuf_pool_fops); + ret = cdev_add(&pool->pool_dev, dev->device_devt, NUM_POOL_MINORS); + if (ret < 0) { + device_destroy(dev->pool_class, pool->pool_devt); + pr_err("dmabuf-pools: failed to add char device.\n"); + } + + spin_lock_init(&pool->free_lock); + spin_lock_init(&pool->stat_lock); + pool->free_list_size = 0; + + if (pool->flags & DMABUF_POOL_FLAG_DEFER_FREE) + dmabuf_pool_init_deferred_free(pool); + + if ((pool->flags & DMABUF_POOL_FLAG_DEFER_FREE) || pool->ops->shrink) { + ret = dmabuf_pool_init_shrinker(pool); + if (ret) + pr_err("%s: Failed to register shrinker\n", __func__); + } + + pool->num_of_buffers = 0; + pool->num_of_alloc_bytes = 0; + pool->alloc_bytes_wm = 0; + + pool_root = debugfs_create_dir(pool->name, dev->debug_root); + debugfs_create_u64("num_of_buffers", + 0444, pool_root, + &pool->num_of_buffers); + debugfs_create_u64("num_of_alloc_bytes", + 0444, + pool_root, + &pool->num_of_alloc_bytes); + debugfs_create_u64("alloc_bytes_wm", + 0444, + pool_root, + &pool->alloc_bytes_wm); + + if (pool->shrinker.count_objects && + pool->shrinker.scan_objects) { + snprintf(debug_name, 64, "%s_shrink", pool->name); + debugfs_create_file(debug_name, + 0644, + pool_root, + pool, + &debug_shrink_fops); + } + + down_write(&dev->lock); + pool->id = pool_id++; + /* + * use negative pool->id to reverse the priority -- when traversing + * the list later attempt higher id numbers first + */ + plist_node_init(&pool->node, -pool->id); + plist_add(&pool->node, &dev->pools); + + up_write(&dev->lock); +} +EXPORT_SYMBOL(dmabuf_pool_add); + +static int dmabuf_pool_device_create(void) +{ + struct dmabuf_pool_device *idev; + int ret; + + idev = kzalloc(sizeof(*idev), GFP_KERNEL); + if (!idev) + return -ENOMEM; + + ret = alloc_chrdev_region(&idev->device_devt, 0, NUM_POOL_MINORS, + DEVNAME); + if (ret) + goto free_idev; + + idev->pool_class = class_create(THIS_MODULE, DEVNAME); + if (IS_ERR(idev->pool_class)) { + ret = PTR_ERR(idev->pool_class); + goto unreg_region; + } + + idev->debug_root = debugfs_create_dir("ion", NULL); + init_rwsem(&idev->lock); + plist_head_init(&idev->pools); + internal_dev = idev; + return 0; + +unreg_region: + unregister_chrdev_region(idev->device_devt, NUM_POOL_MINORS); +free_idev: + kfree(idev); + return ret; + +} +subsys_initcall(dmabuf_pool_device_create); diff --git a/drivers/dma-buf/pools/dmabuf-pools.h b/drivers/dma-buf/pools/dmabuf-pools.h new file mode 100644 index 0000000..12110f2 --- /dev/null +++ b/drivers/dma-buf/pools/dmabuf-pools.h @@ -0,0 +1,244 @@ +/* SPDX-License-Identifier: GPL-2.0 */ +/* + * drivers/dma-buf/pools/dmabuf-pools.h + * + * Copyright (C) 2011 Google, Inc. 
+ */ + +#ifndef _DMABUF_POOLS_H +#define _DMABUF_POOLS_H + +#include +#include +#include +#include +#include +#include +#include +#include +#include +#include +#include +#include + +/** + * struct dmabuf_pool_buffer - metadata for a particular buffer + * @pool: back pointer to the pool the buffer came from + * @flags: buffer specific flags + * @private_flags: internal buffer specific flags + * @size: size of the buffer + * @priv_virt: private data to the buffer representable as + * a void * + * @lock: protects the buffers cnt fields + * @kmap_cnt: number of times the buffer is mapped to the kernel + * @vaddr: the kernel mapping if kmap_cnt is not zero + * @sg_table: the sg table for the buffer if dmap_cnt is not zero + * @attachments: list head for device attachments + * @list: list head for deferred freeing + */ +struct dmabuf_pool_buffer { + struct dmabuf_pool *pool; + unsigned long flags; + unsigned long private_flags; + size_t size; + void *priv_virt; + struct mutex lock; + int kmap_cnt; + void *vaddr; + struct sg_table *sg_table; + struct list_head attachments; + struct list_head list; +}; + + +/** + * struct dmabuf_pool_ops - ops to operate on a given pool + * @allocate: allocate memory + * @free: free memory + * @map_kernel map memory to the kernel + * @unmap_kernel unmap memory to the kernel + * @map_user map memory to userspace + * @shrink shrinker hook to reduce pool memory usage + * + * allocate, phys, and map_user return 0 on success, -errno on error. + * map_dma and map_kernel return pointer on success, ERR_PTR on + * error. @free will be called with DMABUF_POOL_PRIV_FLAG_SHRINKER_FREE set in + * the buffer's private_flags when called from a shrinker. In that + * case, the pages being free'd must be truly free'd back to the + * system, not put in a page pool or otherwise cached. + */ +struct dmabuf_pool_ops { + int (*allocate)(struct dmabuf_pool *pool, + struct dmabuf_pool_buffer *buffer, unsigned long len, + unsigned long flags); + void (*free)(struct dmabuf_pool_buffer *buffer); + void * (*map_kernel)(struct dmabuf_pool *pool, + struct dmabuf_pool_buffer *buffer); + void (*unmap_kernel)(struct dmabuf_pool *pool, + struct dmabuf_pool_buffer *buffer); + int (*map_user)(struct dmabuf_pool *mapper, + struct dmabuf_pool_buffer *buffer, + struct vm_area_struct *vma); + int (*shrink)(struct dmabuf_pool *pool, gfp_t gfp_mask, + int nr_to_scan); +}; + +/** + * pool flags - flags between the dmabuf pools and core dmabuf code + */ +#define DMABUF_POOL_FLAG_DEFER_FREE BIT(0) + +/** + * private flags - flags internal to dmabuf_pools + */ +/* + * Buffer is being freed from a shrinker function. Skip any possible + * pool-specific caching mechanism (e.g. page pools). Guarantees that + * any buffer storage that came from the system allocator will be + * returned to the system allocator. 
+ */ +#define DMABUF_POOL_PRIV_FLAG_SHRINKER_FREE BIT(0) + +/** + * struct dmabuf_pool - represents a dmabuf pool in the system + * @node: rb node to put the pool on the device's tree of pools + * @pool_dev miscdevice for pool devicenode + * @ops: ops struct as above + * @flags: flags + * @id: id of pool + * @name: used for debugging/device-node name + * @shrinker: a shrinker for the pool + * @free_list: free list head if deferred free is used + * @free_list_size size of the deferred free list in bytes + * @lock: protects the free list + * @waitqueue: queue to wait on from deferred free thread + * @task: task struct of deferred free thread + * @num_of_buffers the number of currently allocated buffers + * @num_of_alloc_bytes the number of allocated bytes + * @alloc_bytes_wm the number of allocated bytes watermarka + * @stat_lock lock for pool statistics + * + * Represents a pool of memory from which buffers can be made. In some + * systems the only pool is regular system memory allocated via vmalloc. + * On others, some blocks might require large physically contiguous buffers + * that are allocated from a specially reserved pool. + */ +struct dmabuf_pool { + struct plist_node node; + dev_t pool_devt; + struct cdev pool_dev; + unsigned int minor; + struct dmabuf_pool_ops *ops; + unsigned long flags; + unsigned int id; + const char *name; + struct shrinker shrinker; + struct list_head free_list; + size_t free_list_size; + spinlock_t free_lock; + wait_queue_head_t waitqueue; + struct task_struct *task; + u64 num_of_buffers; + u64 num_of_alloc_bytes; + u64 alloc_bytes_wm; + spinlock_t stat_lock; +}; + +/** + * dmabuf_pool_add - adds a pool to dmabuf pools + * @pool: the pool to add + */ +void dmabuf_pool_add(struct dmabuf_pool *pool); + +/** + * some helpers for common operations on buffers using the sg_table + * and vaddr fields + */ +void *dmabuf_pool_map_kernel(struct dmabuf_pool *pool, + struct dmabuf_pool_buffer *buffer); +void dmabuf_pool_unmap_kernel(struct dmabuf_pool *pool, + struct dmabuf_pool_buffer *buffer); +int dmabuf_pool_map_user(struct dmabuf_pool *pool, + struct dmabuf_pool_buffer *buffer, + struct vm_area_struct *vma); +int dmabuf_pool_buffer_zero(struct dmabuf_pool_buffer *buffer); +int dmabuf_pool_pages_zero(struct page *page, size_t size, pgprot_t pgprot); +int dmabuf_pool_alloc(struct dmabuf_pool *pool, size_t len, + unsigned int flags); +void dmabuf_pool_buffer_destroy(struct dmabuf_pool_buffer *buffer); + +/** + * dmabuf_pool_init_shrinker + * @pool: the pool + * + * If a pool sets the DMABUF_POOL_FLAG_DEFER_FREE flag or defines the shrink op + * this function will be called to setup a shrinker to shrink the freelists + * and call the pool's shrink op. + */ +int dmabuf_pool_init_shrinker(struct dmabuf_pool *pool); + +/** + * dmabuf_pool_init_deferred_free -- initialize deferred free functionality + * @pool: the pool + * + * If a pool sets the DMABUF_POOL_FLAG_DEFER_FREE flag this function will + * be called to setup deferred frees. Calls to free the buffer will + * return immediately and the actual free will occur some time later + */ +int dmabuf_pool_init_deferred_free(struct dmabuf_pool *pool); + +/** + * dmabuf_pool_freelist_add - add a buffer to the deferred free list + * @pool: the pool + * @buffer: the buffer + * + * Adds an item to the deferred freelist. 
+ */ +void dmabuf_pool_freelist_add(struct dmabuf_pool *pool, + struct dmabuf_pool_buffer *buffer); + +/** + * dmabuf_pool_freelist_drain - drain the deferred free list + * @pool: the pool + * @size: amount of memory to drain in bytes + * + * Drains the indicated amount of memory from the deferred freelist immediately. + * Returns the total amount freed. The total freed may be higher depending + * on the size of the items in the list, or lower if there is insufficient + * total memory on the freelist. + */ +size_t dmabuf_pool_freelist_drain(struct dmabuf_pool *pool, size_t size); + +/** + * dmabuf_pool_freelist_shrink - drain the deferred free + * list, skipping any pool-specific + * pooling or caching mechanisms + * + * @pool: the pool + * @size: amount of memory to drain in bytes + * + * Drains the indicated amount of memory from the deferred freelist immediately. + * Returns the total amount freed. The total freed may be higher depending + * on the size of the items in the list, or lower if there is insufficient + * total memory on the freelist. + * + * Unlike with @dmabuf_pool_freelist_drain, don't put any pages back into + * page pools or otherwise cache the pages. Everything must be + * genuinely free'd back to the system. If you're free'ing from a + * shrinker you probably want to use this. Note that this relies on + * the pool.ops.free callback honoring the DMABUF_POOL_PRIV_FLAG_SHRINKER_FREE + * flag. + */ +size_t dmabuf_pool_freelist_shrink(struct dmabuf_pool *pool, + size_t size); + +/** + * dmabuf_pool_freelist_size - returns the size of the freelist in bytes + * @pool: the pool + */ +size_t dmabuf_pool_freelist_size(struct dmabuf_pool *pool); + + +long dmabuf_pool_ioctl(struct file *filp, unsigned int cmd, unsigned long arg); + +#endif /* _DMABUF_POOLS_H */ diff --git a/drivers/dma-buf/pools/pool-helpers.c b/drivers/dma-buf/pools/pool-helpers.c new file mode 100644 index 0000000..d8bdfb9 --- /dev/null +++ b/drivers/dma-buf/pools/pool-helpers.c @@ -0,0 +1,317 @@ +// SPDX-License-Identifier: GPL-2.0 +/* + * drivers/dma-buf/pool/pool-helpers.c + * + * Copyright (C) 2011 Google, Inc. + * Copyright (C) 2019 Linaro Ltd. 
+ */ + +#include +#include +#include +#include +#include +#include +#include +#include +#include +#include "dmabuf-pools.h" + +void *dmabuf_pool_map_kernel(struct dmabuf_pool *pool, + struct dmabuf_pool_buffer *buffer) +{ + struct scatterlist *sg; + int i, j; + void *vaddr; + pgprot_t pgprot; + struct sg_table *table = buffer->sg_table; + int npages = PAGE_ALIGN(buffer->size) / PAGE_SIZE; + struct page **pages = vmalloc(array_size(npages, + sizeof(struct page *))); + struct page **tmp = pages; + + if (!pages) + return ERR_PTR(-ENOMEM); + + if (buffer->flags & DMABUF_POOL_FLAG_CACHED) + pgprot = PAGE_KERNEL; + else + pgprot = pgprot_writecombine(PAGE_KERNEL); + + for_each_sg(table->sgl, sg, table->nents, i) { + int npages_this_entry = PAGE_ALIGN(sg->length) / PAGE_SIZE; + struct page *page = sg_page(sg); + + WARN_ON(i >= npages); + for (j = 0; j < npages_this_entry; j++) + *(tmp++) = page++; + } + vaddr = vmap(pages, npages, VM_MAP, pgprot); + vfree(pages); + + if (!vaddr) + return ERR_PTR(-ENOMEM); + + return vaddr; +} + +void dmabuf_pool_unmap_kernel(struct dmabuf_pool *pool, + struct dmabuf_pool_buffer *buffer) +{ + vunmap(buffer->vaddr); +} + +int dmabuf_pool_map_user(struct dmabuf_pool *pool, + struct dmabuf_pool_buffer *buffer, + struct vm_area_struct *vma) +{ + struct sg_table *table = buffer->sg_table; + unsigned long addr = vma->vm_start; + unsigned long offset = vma->vm_pgoff * PAGE_SIZE; + struct scatterlist *sg; + int i; + int ret; + + for_each_sg(table->sgl, sg, table->nents, i) { + struct page *page = sg_page(sg); + unsigned long remainder = vma->vm_end - addr; + unsigned long len = sg->length; + + if (offset >= sg->length) { + offset -= sg->length; + continue; + } else if (offset) { + page += offset / PAGE_SIZE; + len = sg->length - offset; + offset = 0; + } + len = min(len, remainder); + ret = remap_pfn_range(vma, addr, page_to_pfn(page), len, + vma->vm_page_prot); + if (ret) + return ret; + addr += len; + if (addr >= vma->vm_end) + return 0; + } + return 0; +} + +static int dmabuf_pool_clear_pages(struct page **pages, int num, + pgprot_t pgprot) +{ + void *addr = vm_map_ram(pages, num, -1, pgprot); + + if (!addr) + return -ENOMEM; + memset(addr, 0, PAGE_SIZE * num); + vm_unmap_ram(addr, num); + + return 0; +} + +static int dmabuf_pool_sglist_zero(struct scatterlist *sgl, unsigned int nents, + pgprot_t pgprot) +{ + int p = 0; + int ret = 0; + struct sg_page_iter piter; + struct page *pages[32]; + + for_each_sg_page(sgl, &piter, nents, 0) { + pages[p++] = sg_page_iter_page(&piter); + if (p == ARRAY_SIZE(pages)) { + ret = dmabuf_pool_clear_pages(pages, p, pgprot); + if (ret) + return ret; + p = 0; + } + } + if (p) + ret = dmabuf_pool_clear_pages(pages, p, pgprot); + + return ret; +} + +int dmabuf_pool_buffer_zero(struct dmabuf_pool_buffer *buffer) +{ + struct sg_table *table = buffer->sg_table; + pgprot_t pgprot; + + if (buffer->flags & DMABUF_POOL_FLAG_CACHED) + pgprot = PAGE_KERNEL; + else + pgprot = pgprot_writecombine(PAGE_KERNEL); + + return dmabuf_pool_sglist_zero(table->sgl, table->nents, pgprot); +} + +int dmabuf_pool_pages_zero(struct page *page, size_t size, pgprot_t pgprot) +{ + struct scatterlist sg; + + sg_init_table(&sg, 1); + sg_set_page(&sg, page, size, 0); + return dmabuf_pool_sglist_zero(&sg, 1, pgprot); +} + +void dmabuf_pool_freelist_add(struct dmabuf_pool *pool, + struct dmabuf_pool_buffer *buffer) +{ + spin_lock(&pool->free_lock); + list_add(&buffer->list, &pool->free_list); + pool->free_list_size += buffer->size; + spin_unlock(&pool->free_lock); + 
wake_up(&pool->waitqueue); +} + +size_t dmabuf_pool_freelist_size(struct dmabuf_pool *pool) +{ + size_t size; + + spin_lock(&pool->free_lock); + size = pool->free_list_size; + spin_unlock(&pool->free_lock); + + return size; +} + +static size_t _dmabuf_pool_freelist_drain(struct dmabuf_pool *pool, size_t size, + bool skip_pools) +{ + struct dmabuf_pool_buffer *buffer; + size_t total_drained = 0; + + if (dmabuf_pool_freelist_size(pool) == 0) + return 0; + + spin_lock(&pool->free_lock); + if (size == 0) + size = pool->free_list_size; + + while (!list_empty(&pool->free_list)) { + if (total_drained >= size) + break; + buffer = list_first_entry(&pool->free_list, + struct dmabuf_pool_buffer, + list); + list_del(&buffer->list); + pool->free_list_size -= buffer->size; + if (skip_pools) + buffer->private_flags |= + DMABUF_POOL_PRIV_FLAG_SHRINKER_FREE; + total_drained += buffer->size; + spin_unlock(&pool->free_lock); + dmabuf_pool_buffer_destroy(buffer); + spin_lock(&pool->free_lock); + } + spin_unlock(&pool->free_lock); + + return total_drained; +} + +size_t dmabuf_pool_freelist_drain(struct dmabuf_pool *pool, size_t size) +{ + return _dmabuf_pool_freelist_drain(pool, size, false); +} + +size_t dmabuf_pool_freelist_shrink(struct dmabuf_pool *pool, size_t size) +{ + return _dmabuf_pool_freelist_drain(pool, size, true); +} + +static int dmabuf_pool_deferred_free(void *data) +{ + struct dmabuf_pool *pool = data; + + while (true) { + struct dmabuf_pool_buffer *buffer; + + wait_event_freezable(pool->waitqueue, + dmabuf_pool_freelist_size(pool) > 0); + + spin_lock(&pool->free_lock); + if (list_empty(&pool->free_list)) { + spin_unlock(&pool->free_lock); + continue; + } + buffer = list_first_entry(&pool->free_list, + struct dmabuf_pool_buffer, + list); + list_del(&buffer->list); + pool->free_list_size -= buffer->size; + spin_unlock(&pool->free_lock); + dmabuf_pool_buffer_destroy(buffer); + } + + return 0; +} + +int dmabuf_pool_init_deferred_free(struct dmabuf_pool *pool) +{ + struct sched_param param = { .sched_priority = 0 }; + + INIT_LIST_HEAD(&pool->free_list); + init_waitqueue_head(&pool->waitqueue); + pool->task = kthread_run(dmabuf_pool_deferred_free, pool, + "%s", pool->name); + if (IS_ERR(pool->task)) { + pr_err("%s: creating thread for deferred free failed\n", + __func__); + return PTR_ERR_OR_ZERO(pool->task); + } + sched_setscheduler(pool->task, SCHED_IDLE, ¶m); + return 0; +} + +static unsigned long dmabuf_pool_shrink_count(struct shrinker *shrinker, + struct shrink_control *sc) +{ + struct dmabuf_pool *pool = container_of(shrinker, struct dmabuf_pool, + shrinker); + int total = 0; + + total = dmabuf_pool_freelist_size(pool) / PAGE_SIZE; + if (pool->ops->shrink) + total += pool->ops->shrink(pool, sc->gfp_mask, 0); + return total; +} + +static unsigned long dmabuf_pool_shrink_scan(struct shrinker *shrinker, + struct shrink_control *sc) +{ + struct dmabuf_pool *pool = container_of(shrinker, struct dmabuf_pool, + shrinker); + int freed = 0; + int to_scan = sc->nr_to_scan; + + if (to_scan == 0) + return 0; + + /* + * shrink the free list first, no point in zeroing the memory if we're + * just going to reclaim it. Also, skip any possible page pooling. 
+ */ + if (pool->flags & DMABUF_POOL_FLAG_DEFER_FREE) { + freed = dmabuf_pool_freelist_shrink(pool, to_scan * PAGE_SIZE); + freed /= PAGE_SIZE; + } + + to_scan -= freed; + if (to_scan <= 0) + return freed; + + if (pool->ops->shrink) + freed += pool->ops->shrink(pool, sc->gfp_mask, to_scan); + return freed; +} + +int dmabuf_pool_init_shrinker(struct dmabuf_pool *pool) +{ + pool->shrinker.count_objects = dmabuf_pool_shrink_count; + pool->shrinker.scan_objects = dmabuf_pool_shrink_scan; + pool->shrinker.seeks = DEFAULT_SEEKS; + pool->shrinker.batch = 0; + + return register_shrinker(&pool->shrinker); +} diff --git a/drivers/dma-buf/pools/pool-ioctl.c b/drivers/dma-buf/pools/pool-ioctl.c new file mode 100644 index 0000000..53153fc --- /dev/null +++ b/drivers/dma-buf/pools/pool-ioctl.c @@ -0,0 +1,94 @@ +// SPDX-License-Identifier: GPL-2.0 +/* + * Copyright (C) 2011 Google, Inc. + */ + +#include +#include +#include +#include +#include "dmabuf-pools.h" + +union pool_ioctl_arg { + struct dmabuf_pool_allocation_data pool_allocation; +}; + +static int validate_ioctl_arg(unsigned int cmd, union pool_ioctl_arg *arg) +{ + switch (cmd) { + case DMABUF_POOL_IOC_ALLOC: + if (arg->pool_allocation.reserved0 || + arg->pool_allocation.reserved1) + return -EINVAL; + default: + break; + } + + return 0; +} + +/* fix up the cases where the ioctl direction bits are incorrect */ +static unsigned int pool_ioctl_dir(unsigned int cmd) +{ + switch (cmd) { + default: + return _IOC_DIR(cmd); + } +} + +long dmabuf_pool_ioctl(struct file *filp, unsigned int cmd, unsigned long arg) +{ + int ret = 0; + unsigned int dir; + union pool_ioctl_arg data; + + dir = pool_ioctl_dir(cmd); + + if (_IOC_SIZE(cmd) > sizeof(data)) + return -EINVAL; + + /* + * The copy_from_user is unconditional here for both read and write + * to do the validate. If there is no write for the ioctl, the + * buffer is cleared + */ + if (copy_from_user(&data, (void __user *)arg, _IOC_SIZE(cmd))) + return -EFAULT; + + ret = validate_ioctl_arg(cmd, &data); + if (ret) { + pr_warn_once("%s: ioctl validate failed\n", __func__); + return ret; + } + + if (!(dir & _IOC_WRITE)) + memset(&data, 0, sizeof(data)); + + switch (cmd) { + case DMABUF_POOL_IOC_ALLOC: + { + struct cdev *cdev = filp->private_data; + struct dmabuf_pool *pool; + int fd; + + pool = container_of(cdev, struct dmabuf_pool, pool_dev); + + fd = dmabuf_pool_alloc(pool, data.pool_allocation.len, + data.pool_allocation.flags); + if (fd < 0) + return fd; + + data.pool_allocation.fd = fd; + + break; + } + default: + return -ENOTTY; + } + + if (dir & _IOC_READ) { + if (copy_to_user((void __user *)arg, &data, _IOC_SIZE(cmd))) + return -EFAULT; + } + return ret; +} diff --git a/include/uapi/linux/dmabuf-pools.h b/include/uapi/linux/dmabuf-pools.h new file mode 100644 index 0000000..bad9b11 --- /dev/null +++ b/include/uapi/linux/dmabuf-pools.h @@ -0,0 +1,59 @@ +/* SPDX-License-Identifier: GPL-2.0 */ +/* + * drivers/staging/android/uapi/ion.h + * + * Copyright (C) 2011 Google, Inc. + * Copyright (C) 2019 Linaro Ltd. + */ +#ifndef _UAPI_LINUX_DMABUF_POOL_H +#define _UAPI_LINUX_DMABUF_POOL_H + +#include +#include + +/** + * allocation flags - the lower 16 bits are used by core dmabuf pools, the + * upper 16 bits are reserved for use by the pools themselves. 
+ */
+
+/*
+ * mappings of this buffer should be cached, dmabuf pools will do cache
+ * maintenance when the buffer is mapped for dma
+ */
+#define DMABUF_POOL_FLAG_CACHED 1
+
+/**
+ * DOC: DMABUF Pool Userspace API
+ *
+ */
+
+/**
+ * struct dmabuf_pool_allocation_data - metadata passed from userspace for
+ *                                      allocations
+ * @len:	size of the allocation
+ * @flags:	flags passed to pool
+ * @fd:		will be populated with a fd which provides the
+ *		handle to the allocated dma-buf
+ *
+ * Provided by userspace as an argument to the ioctl
+ */
+struct dmabuf_pool_allocation_data {
+	__u64 len;
+	__u32 flags;
+	__u32 fd;
+	__u32 reserved0;
+	__u32 reserved1;
+};
+
+#define DMABUF_POOL_IOC_MAGIC	'P'
+
+/**
+ * DOC: DMABUF_POOL_IOC_ALLOC - allocate memory from pool
+ *
+ * Takes a dmabuf_pool_allocation_data struct and returns it with the fd field
+ * populated with the dmabuf handle of the allocation.
+ */
+#define DMABUF_POOL_IOC_ALLOC	_IOWR(DMABUF_POOL_IOC_MAGIC, 0, \
+				      struct dmabuf_pool_allocation_data)
+
+#endif /* _UAPI_LINUX_DMABUF_POOL_H */
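To show how a pool driver is expected to plug into the framework added
above, here is a rough kernel-side sketch. The "example" pool name, the
single physically contiguous allocation strategy, and the initcall level
are illustrative assumptions; only struct dmabuf_pool, struct
dmabuf_pool_ops, dmabuf_pool_add() and the pool-helpers map_kernel /
unmap_kernel / map_user functions introduced by this patch are relied on:

/* Hypothetical example pool -- not part of this patch. */
#include <linux/gfp.h>
#include <linux/mm.h>
#include <linux/module.h>
#include <linux/scatterlist.h>
#include <linux/slab.h>
#include "dmabuf-pools.h"

static int example_pool_allocate(struct dmabuf_pool *pool,
				 struct dmabuf_pool_buffer *buffer,
				 unsigned long len, unsigned long flags)
{
	struct sg_table *table;
	struct page *page;

	/* One contiguous chunk, just to keep the sketch short. */
	page = alloc_pages(GFP_KERNEL | __GFP_ZERO, get_order(len));
	if (!page)
		return -ENOMEM;

	table = kmalloc(sizeof(*table), GFP_KERNEL);
	if (!table)
		goto free_pages;
	if (sg_alloc_table(table, 1, GFP_KERNEL))
		goto free_table;
	sg_set_page(table->sgl, page, len, 0);

	/* The core requires allocate() to fill in buffer->sg_table. */
	buffer->sg_table = table;
	return 0;

free_table:
	kfree(table);
free_pages:
	__free_pages(page, get_order(len));
	return -ENOMEM;
}

static void example_pool_free(struct dmabuf_pool_buffer *buffer)
{
	struct page *page = sg_page(buffer->sg_table->sgl);

	__free_pages(page, get_order(buffer->size));
	sg_free_table(buffer->sg_table);
	kfree(buffer->sg_table);
}

static struct dmabuf_pool_ops example_pool_ops = {
	.allocate = example_pool_allocate,
	.free = example_pool_free,
	/* reuse the sg_table/vaddr based helpers from pool-helpers.c */
	.map_kernel = dmabuf_pool_map_kernel,
	.unmap_kernel = dmabuf_pool_unmap_kernel,
	.map_user = dmabuf_pool_map_user,
};

static struct dmabuf_pool example_pool = {
	.ops = &example_pool_ops,
	.name = "example",
};

static int __init example_pool_init(void)
{
	/* Creates /dev/dmabuf_pools/example and makes it allocatable. */
	dmabuf_pool_add(&example_pool);
	return 0;
}
device_initcall(example_pool_init);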
From patchwork Thu Feb 21 07:40:28 2019
X-Patchwork-Submitter: John Stultz
X-Patchwork-Id: 158872

From: John Stultz
To: Laura Abbott
Cc: John Stultz, Benjamin Gaignard, Sumit Semwal, Liam Mark, Brian Starkey,
    "Andrew F. Davis", Chenbo Feng, Alistair Strachan, dri-devel@lists.freedesktop.org
Subject: [EARLY RFC][PATCH 2/4] dma-buf: pools: Add page-pool for dma-buf pools
Date: Wed, 20 Feb 2019 23:40:28 -0800
Message-Id: <1550734830-23499-3-git-send-email-john.stultz@linaro.org>
X-Mailer: git-send-email 2.7.4
In-Reply-To: <1550734830-23499-1-git-send-email-john.stultz@linaro.org>
References: <1550734830-23499-1-git-send-email-john.stultz@linaro.org>

This adds page-pool logic to the dma-buf pools, which allows a pool to
keep pre-allocated/flushed pages around, which can speed up allocation
performance.

NOTE: The "page-pool" name is a term preserved from ION, but it can
easily be confused with the dma-buf pools themselves. Suggestions for
alternatives here would be great.

Cc: Laura Abbott
Cc: Benjamin Gaignard
Cc: Sumit Semwal
Cc: Liam Mark
Cc: Brian Starkey
Cc: Andrew F.
Davis Cc: Chenbo Feng Cc: Alistair Strachan Cc: dri-devel@lists.freedesktop.org Signed-off-by: John Stultz --- drivers/dma-buf/pools/Makefile | 2 +- drivers/dma-buf/pools/dmabuf-pools.h | 51 ++++++++++++ drivers/dma-buf/pools/page_pool.c | 157 +++++++++++++++++++++++++++++++++++ 3 files changed, 209 insertions(+), 1 deletion(-) create mode 100644 drivers/dma-buf/pools/page_pool.c -- 2.7.4 diff --git a/drivers/dma-buf/pools/Makefile b/drivers/dma-buf/pools/Makefile index 6cb1284..a51ec25 100644 --- a/drivers/dma-buf/pools/Makefile +++ b/drivers/dma-buf/pools/Makefile @@ -1,2 +1,2 @@ # SPDX-License-Identifier: GPL-2.0 -obj-$(CONFIG_DMABUF_POOLS) += dmabuf-pools.o pool-ioctl.o pool-helpers.o +obj-$(CONFIG_DMABUF_POOLS) += dmabuf-pools.o pool-ioctl.o pool-helpers.o page_pool.o diff --git a/drivers/dma-buf/pools/dmabuf-pools.h b/drivers/dma-buf/pools/dmabuf-pools.h index 12110f2..e3a0aac 100644 --- a/drivers/dma-buf/pools/dmabuf-pools.h +++ b/drivers/dma-buf/pools/dmabuf-pools.h @@ -238,6 +238,57 @@ size_t dmabuf_pool_freelist_shrink(struct dmabuf_pool *pool, */ size_t dmabuf_pool_freelist_size(struct dmabuf_pool *pool); +/** + * functions for creating and destroying a page pool -- allows you + * to keep a page pool of pre allocated memory to use from your pool. Keeping + * a page pool of memory that is ready for dma, ie any cached mapping have been + * invalidated from the cache, provides a significant performance benefit on + * many systems + */ + +/** + * struct dmabuf_page_pool - pagepool struct + * @high_count: number of highmem items in the pool + * @low_count: number of lowmem items in the pool + * @high_items: list of highmem items + * @low_items: list of lowmem items + * @mutex: lock protecting this struct and especially the count + * item list + * @gfp_mask: gfp_mask to use from alloc + * @order: order of pages in the pool + * @list: plist node for list of pools + * + * Allows you to keep a page pool of pre allocated pages to use from your pool. + * Keeping a pool of pages that is ready for dma, ie any cached mapping have + * been invalidated from the cache, provides a significant performance benefit + * on many systems + */ +struct dmabuf_page_pool { + int high_count; + int low_count; + struct list_head high_items; + struct list_head low_items; + struct mutex mutex; + gfp_t gfp_mask; + unsigned int order; + struct plist_node list; +}; + +struct dmabuf_page_pool *dmabuf_page_pool_create(gfp_t gfp_mask, + unsigned int order); +void dmabuf_page_pool_destroy(struct dmabuf_page_pool *pool); +struct page *dmabuf_page_pool_alloc(struct dmabuf_page_pool *pool); +void dmabuf_page_pool_free(struct dmabuf_page_pool *pool, struct page *page); + +/** dmabuf_page_pool_shrink - shrinks the size of the memory cached in the pool + * @pool: the page pool + * @gfp_mask: the memory type to reclaim + * @nr_to_scan: number of items to shrink in pages + * + * returns the number of items freed in pages + */ +int dmabuf_page_pool_shrink(struct dmabuf_page_pool *pool, gfp_t gfp_mask, + int nr_to_scan); long dmabuf_pool_ioctl(struct file *filp, unsigned int cmd, unsigned long arg); diff --git a/drivers/dma-buf/pools/page_pool.c b/drivers/dma-buf/pools/page_pool.c new file mode 100644 index 0000000..c1fe994 --- /dev/null +++ b/drivers/dma-buf/pools/page_pool.c @@ -0,0 +1,157 @@ +// SPDX-License-Identifier: GPL-2.0 +/* + * drivers/dma-buf/pools/page_pool.c + * + * Copyright (C) 2011 Google, Inc. + * Copyright (C) 2019 Linaro Ltd. 
+ */ + +#include +#include +#include + +#include "dmabuf-pools.h" + +static inline struct page *dmabuf_page_pool_alloc_pages( + struct dmabuf_page_pool *pool) +{ + return alloc_pages(pool->gfp_mask, pool->order); +} + +static void dmabuf_page_pool_free_pages(struct dmabuf_page_pool *pool, + struct page *page) +{ + __free_pages(page, pool->order); +} + +static void dmabuf_page_pool_add(struct dmabuf_page_pool *pool, + struct page *page) +{ + mutex_lock(&pool->mutex); + if (PageHighMem(page)) { + list_add_tail(&page->lru, &pool->high_items); + pool->high_count++; + } else { + list_add_tail(&page->lru, &pool->low_items); + pool->low_count++; + } + + mod_node_page_state(page_pgdat(page), NR_KERNEL_MISC_RECLAIMABLE, + 1 << pool->order); + mutex_unlock(&pool->mutex); +} + +static struct page *dmabuf_page_pool_remove(struct dmabuf_page_pool *pool, + bool high) +{ + struct page *page; + + if (high) { + WARN_ON(!pool->high_count); + page = list_first_entry(&pool->high_items, struct page, lru); + pool->high_count--; + } else { + WARN_ON(!pool->low_count); + page = list_first_entry(&pool->low_items, struct page, lru); + pool->low_count--; + } + + list_del(&page->lru); + mod_node_page_state(page_pgdat(page), NR_KERNEL_MISC_RECLAIMABLE, + -(1 << pool->order)); + return page; +} + +struct page *dmabuf_page_pool_alloc(struct dmabuf_page_pool *pool) +{ + struct page *page = NULL; + + WARN_ON(!pool); + + mutex_lock(&pool->mutex); + if (pool->high_count) + page = dmabuf_page_pool_remove(pool, true); + else if (pool->low_count) + page = dmabuf_page_pool_remove(pool, false); + mutex_unlock(&pool->mutex); + + if (!page) + page = dmabuf_page_pool_alloc_pages(pool); + + return page; +} + +void dmabuf_page_pool_free(struct dmabuf_page_pool *pool, struct page *page) +{ + WARN_ON(pool->order != compound_order(page)); + + dmabuf_page_pool_add(pool, page); +} + +static int dmabuf_page_pool_total(struct dmabuf_page_pool *pool, bool high) +{ + int count = pool->low_count; + + if (high) + count += pool->high_count; + + return count << pool->order; +} + +int dmabuf_page_pool_shrink(struct dmabuf_page_pool *pool, gfp_t gfp_mask, + int nr_to_scan) +{ + int freed = 0; + bool high; + + if (current_is_kswapd()) + high = true; + else + high = !!(gfp_mask & __GFP_HIGHMEM); + + if (nr_to_scan == 0) + return dmabuf_page_pool_total(pool, high); + + while (freed < nr_to_scan) { + struct page *page; + + mutex_lock(&pool->mutex); + if (pool->low_count) { + page = dmabuf_page_pool_remove(pool, false); + } else if (high && pool->high_count) { + page = dmabuf_page_pool_remove(pool, true); + } else { + mutex_unlock(&pool->mutex); + break; + } + mutex_unlock(&pool->mutex); + dmabuf_page_pool_free_pages(pool, page); + freed += (1 << pool->order); + } + + return freed; +} + +struct dmabuf_page_pool *dmabuf_page_pool_create(gfp_t gfp_mask, + unsigned int order) +{ + struct dmabuf_page_pool *pool = kmalloc(sizeof(*pool), GFP_KERNEL); + + if (!pool) + return NULL; + pool->high_count = 0; + pool->low_count = 0; + INIT_LIST_HEAD(&pool->low_items); + INIT_LIST_HEAD(&pool->high_items); + pool->gfp_mask = gfp_mask | __GFP_COMP; + pool->order = order; + mutex_init(&pool->mutex); + plist_node_init(&pool->list, order); + + return pool; +} + +void dmabuf_page_pool_destroy(struct dmabuf_page_pool *pool) +{ + kfree(pool); +} From patchwork Thu Feb 21 07:40:29 2019 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: John Stultz X-Patchwork-Id: 158873 Delivered-To: patches@linaro.org Received: 
From: John Stultz
To: Laura Abbott
Cc: John Stultz, Benjamin Gaignard, Sumit Semwal, Liam Mark, Brian Starkey,
 "Andrew F. Davis", Chenbo Feng, Alistair Strachan, dri-devel@lists.freedesktop.org
Subject: [EARLY RFC][PATCH 3/4] dma-buf: pools: Add system/system-contig pools to dmabuf pools
Date: Wed, 20 Feb 2019 23:40:29 -0800
Message-Id: <1550734830-23499-4-git-send-email-john.stultz@linaro.org>
X-Mailer: git-send-email 2.7.4
In-Reply-To: <1550734830-23499-1-git-send-email-john.stultz@linaro.org>
References: <1550734830-23499-1-git-send-email-john.stultz@linaro.org>

This patch adds system and system-contig pools to the dma-buf pools
framework. These allow applications to get a page-allocator-backed
dma-buf of either non-contiguous or contiguous memory.
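As a rough usage sketch: allocation is driven from userspace through the
per-pool device nodes and ioctl interface added earlier in the series. The
device path, structure layout, and ioctl number below are hypothetical
stand-ins (the real uapi lives in the pool-ioctl patch, not here); this is
only a minimal sketch of the intended flow — open the pool, request a
buffer, receive a dma-buf fd.

/* Illustrative only: hypothetical stand-ins for the dmabuf-pools uapi. */
#include <fcntl.h>
#include <linux/ioctl.h>
#include <stdint.h>
#include <stdio.h>
#include <sys/ioctl.h>
#include <unistd.h>

struct fake_pool_alloc_data {           /* hypothetical layout */
        uint64_t len;                   /* requested size in bytes */
        uint32_t flags;
        uint32_t fd;                    /* returned dma-buf fd */
};
#define FAKE_POOL_IOC_ALLOC _IOWR('P', 0, struct fake_pool_alloc_data) /* hypothetical */

int main(void)
{
        /* One device node per pool; the path here is an assumption. */
        int pool_fd = open("/dev/dmabuf_pools/system_pool", O_RDWR);
        struct fake_pool_alloc_data data = { .len = 1 << 20, .flags = 0 };

        if (pool_fd < 0 || ioctl(pool_fd, FAKE_POOL_IOC_ALLOC, &data) < 0) {
                perror("pool alloc");
                return 1;
        }
        printf("got dma-buf fd %u for a 1 MiB buffer\n", data.fd);
        close(data.fd);
        close(pool_fd);
        return 0;
}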
Cc: Laura Abbott
Cc: Benjamin Gaignard
Cc: Sumit Semwal
Cc: Liam Mark
Cc: Brian Starkey
Cc: Andrew F. Davis
Cc: Chenbo Feng
Cc: Alistair Strachan
Cc: dri-devel@lists.freedesktop.org
Signed-off-by: John Stultz
---
 drivers/dma-buf/pools/Kconfig       |   7 +
 drivers/dma-buf/pools/Makefile      |   1 +
 drivers/dma-buf/pools/system_pool.c | 374 ++++++++++++++++++++++++++++++++++++
 3 files changed, 382 insertions(+)
 create mode 100644 drivers/dma-buf/pools/system_pool.c

--
2.7.4

diff --git a/drivers/dma-buf/pools/Kconfig b/drivers/dma-buf/pools/Kconfig
index caa7eb8..787b2a6 100644
--- a/drivers/dma-buf/pools/Kconfig
+++ b/drivers/dma-buf/pools/Kconfig
@@ -8,3 +8,10 @@ menuconfig DMABUF_POOLS
           which allow userspace to allocate dma-bufs that can be
           shared between drivers.
           If you're not using Android its probably safe to say N here.
+
+config DMABUF_POOLS_SYSTEM
+        bool "DMA-BUF System Pool"
+        depends on DMABUF_POOLS
+        help
+          Choose this option to enable the system dmabuf pool. The system pool
+          is backed by pages from the buddy allocator. If in doubt, say Y.
diff --git a/drivers/dma-buf/pools/Makefile b/drivers/dma-buf/pools/Makefile
index a51ec25..2ccf2a1 100644
--- a/drivers/dma-buf/pools/Makefile
+++ b/drivers/dma-buf/pools/Makefile
@@ -1,2 +1,3 @@
 # SPDX-License-Identifier: GPL-2.0
 obj-$(CONFIG_DMABUF_POOLS) += dmabuf-pools.o pool-ioctl.o pool-helpers.o page_pool.o
+obj-$(CONFIG_DMABUF_POOLS_SYSTEM) += system_pool.o
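For reference, the system pool in the diff below carves each request into
order-8, order-4, and order-0 chunks, largest first, and never retries an
order larger than the last one that succeeded. The following standalone
program (plain userspace C, not part of the patch, assuming 4 KiB pages)
mirrors that splitting policy from alloc_largest_available():

#include <stdio.h>

#define PAGE_SIZE 4096UL
static const unsigned int orders[] = { 8, 4, 0 };
#define NUM_ORDERS (sizeof(orders) / sizeof(orders[0]))

int main(void)
{
        unsigned long remaining = (1UL << 20) + 2 * PAGE_SIZE; /* 1 MiB + 8 KiB */
        unsigned int max_order = orders[0];

        while (remaining > 0) {
                int taken = 0;

                for (unsigned int i = 0; i < NUM_ORDERS; i++) {
                        unsigned long chunk = PAGE_SIZE << orders[i];

                        /* same checks as the kernel loop: chunk must fit,
                         * and must not exceed the last successful order */
                        if (remaining < chunk || max_order < orders[i])
                                continue;
                        printf("order-%u chunk (%lu KiB)\n",
                               orders[i], chunk / 1024);
                        remaining -= chunk;
                        max_order = orders[i];
                        taken = 1;
                        break;
                }
                if (!taken)     /* cannot happen for page-aligned sizes */
                        break;
        }
        return 0;
}

For a 1 MiB + 8 KiB request this prints one order-8 chunk followed by two
order-0 chunks, which is exactly the scatterlist layout the pool builds.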
diff --git a/drivers/dma-buf/pools/system_pool.c b/drivers/dma-buf/pools/system_pool.c
new file mode 100644
index 0000000..1756990
--- /dev/null
+++ b/drivers/dma-buf/pools/system_pool.c
@@ -0,0 +1,374 @@
+// SPDX-License-Identifier: GPL-2.0
+/*
+ * drivers/dma-buf/pools/system_pool.c
+ *
+ * Copyright (C) 2011 Google, Inc.
+ * Copyright (C) 2019 Linaro Ltd.
+ */
+
+#include
+#include
+#include
+#include
+#include
+#include
+#include
+#include
+#include "dmabuf-pools.h"
+
+#define NUM_ORDERS ARRAY_SIZE(orders)
+
+static gfp_t high_order_gfp_flags = (GFP_HIGHUSER | __GFP_ZERO | __GFP_NOWARN |
+                                     __GFP_NORETRY) & ~__GFP_RECLAIM;
+static gfp_t low_order_gfp_flags = GFP_HIGHUSER | __GFP_ZERO;
+static const unsigned int orders[] = {8, 4, 0};
+
+static int order_to_index(unsigned int order)
+{
+        int i;
+
+        for (i = 0; i < NUM_ORDERS; i++)
+                if (order == orders[i])
+                        return i;
+        WARN_ON(1);
+        return -1;
+}
+
+static inline unsigned int order_to_size(int order)
+{
+        return PAGE_SIZE << order;
+}
+
+struct system_pool {
+        struct dmabuf_pool pool;
+        struct dmabuf_page_pool *page_pools[NUM_ORDERS];
+};
+
+static struct page *alloc_buffer_page(struct system_pool *sys_pool,
+                                      struct dmabuf_pool_buffer *buffer,
+                                      unsigned long order)
+{
+        struct dmabuf_page_pool *pagepool =
+                sys_pool->page_pools[order_to_index(order)];
+
+        return dmabuf_page_pool_alloc(pagepool);
+}
+
+static void free_buffer_page(struct system_pool *sys_pool,
+                             struct dmabuf_pool_buffer *buffer,
+                             struct page *page)
+{
+        struct dmabuf_page_pool *pagepool;
+        unsigned int order = compound_order(page);
+
+        /* go to system */
+        if (buffer->private_flags & DMABUF_POOL_PRIV_FLAG_SHRINKER_FREE) {
+                __free_pages(page, order);
+                return;
+        }
+
+        pagepool = sys_pool->page_pools[order_to_index(order)];
+
+        dmabuf_page_pool_free(pagepool, page);
+}
+
+static struct page *alloc_largest_available(struct system_pool *sys_pool,
+                                            struct dmabuf_pool_buffer *buffer,
+                                            unsigned long size,
+                                            unsigned int max_order)
+{
+        struct page *page;
+        int i;
+
+        for (i = 0; i < NUM_ORDERS; i++) {
+                if (size < order_to_size(orders[i]))
+                        continue;
+                if (max_order < orders[i])
+                        continue;
+
+                page = alloc_buffer_page(sys_pool, buffer, orders[i]);
+                if (!page)
+                        continue;
+
+                return page;
+        }
+
+        return NULL;
+}
+
+static int system_pool_allocate(struct dmabuf_pool *pool,
+                                struct dmabuf_pool_buffer *buffer,
+                                unsigned long size,
+                                unsigned long flags)
+{
+        struct system_pool *sys_pool = container_of(pool,
+                                                    struct system_pool,
+                                                    pool);
+        struct sg_table *table;
+        struct scatterlist *sg;
+        struct list_head pages;
+        struct page *page, *tmp_page;
+        int i = 0;
+        unsigned long size_remaining = PAGE_ALIGN(size);
+        unsigned int max_order = orders[0];
+
+        if (size / PAGE_SIZE > totalram_pages() / 2)
+                return -ENOMEM;
+
+        INIT_LIST_HEAD(&pages);
+        while (size_remaining > 0) {
+                page = alloc_largest_available(sys_pool, buffer, size_remaining,
+                                               max_order);
+                if (!page)
+                        goto free_pages;
+                list_add_tail(&page->lru, &pages);
+                size_remaining -= PAGE_SIZE << compound_order(page);
+                max_order = compound_order(page);
+                i++;
+        }
+        table = kmalloc(sizeof(*table), GFP_KERNEL);
+        if (!table)
+                goto free_pages;
+
+        if (sg_alloc_table(table, i, GFP_KERNEL))
+                goto free_table;
+
+        sg = table->sgl;
+        list_for_each_entry_safe(page, tmp_page, &pages, lru) {
+                sg_set_page(sg, page, PAGE_SIZE << compound_order(page), 0);
+                sg = sg_next(sg);
+                list_del(&page->lru);
+        }
+
+        buffer->sg_table = table;
+        return 0;
+
+free_table:
+        kfree(table);
+free_pages:
+        list_for_each_entry_safe(page, tmp_page, &pages, lru)
+                free_buffer_page(sys_pool, buffer, page);
+        return -ENOMEM;
+}
+
+static void system_pool_free(struct dmabuf_pool_buffer *buffer)
+{
+        struct system_pool *sys_pool = container_of(buffer->pool,
+                                                    struct system_pool,
+                                                    pool);
+        struct sg_table *table = buffer->sg_table;
+        struct scatterlist *sg;
+        int i;
+
+        /* zero the buffer before goto page pool */
+        if (!(buffer->private_flags & DMABUF_POOL_PRIV_FLAG_SHRINKER_FREE))
+                dmabuf_pool_buffer_zero(buffer);
+
+        for_each_sg(table->sgl, sg, table->nents, i)
+                free_buffer_page(sys_pool, buffer, sg_page(sg));
+        sg_free_table(table);
+        kfree(table);
+}
+
+static int system_pool_shrink(struct dmabuf_pool *pool, gfp_t gfp_mask,
+                              int nr_to_scan)
+{
+        struct dmabuf_page_pool *page_pool;
+        struct system_pool *sys_pool;
+        int nr_total = 0;
+        int i, nr_freed;
+        int only_scan = 0;
+
+        sys_pool = container_of(pool, struct system_pool, pool);
+
+        if (!nr_to_scan)
+                only_scan = 1;
+
+        for (i = 0; i < NUM_ORDERS; i++) {
+                page_pool = sys_pool->page_pools[i];
+
+                if (only_scan) {
+                        nr_total += dmabuf_page_pool_shrink(page_pool,
+                                                            gfp_mask,
+                                                            nr_to_scan);
+
+                } else {
+                        nr_freed = dmabuf_page_pool_shrink(page_pool,
+                                                           gfp_mask,
+                                                           nr_to_scan);
+                        nr_to_scan -= nr_freed;
+                        nr_total += nr_freed;
+                        if (nr_to_scan <= 0)
+                                break;
+                }
+        }
+        return nr_total;
+}
+
+static struct dmabuf_pool_ops system_pool_ops = {
+        .allocate = system_pool_allocate,
+        .free = system_pool_free,
+        .map_kernel = dmabuf_pool_map_kernel,
+        .unmap_kernel = dmabuf_pool_unmap_kernel,
+        .map_user = dmabuf_pool_map_user,
+        .shrink = system_pool_shrink,
+};
+
+static void system_pool_destroy_pools(struct dmabuf_page_pool **page_pools)
+{
+        int i;
+
+        for (i = 0; i < NUM_ORDERS; i++)
+                if (page_pools[i])
+                        dmabuf_page_pool_destroy(page_pools[i]);
+}
+
+static int system_pool_create_pools(struct dmabuf_page_pool **page_pools)
+{
+        int i;
+        gfp_t gfp_flags = low_order_gfp_flags;
+
+        for (i = 0; i < NUM_ORDERS; i++) {
+                struct dmabuf_page_pool *pool;
+
+                if (orders[i] > 4)
+                        gfp_flags = high_order_gfp_flags;
+
+                pool = dmabuf_page_pool_create(gfp_flags, orders[i]);
+                if (!pool)
+                        goto err_create_pool;
+                page_pools[i] = pool;
+        }
+        return 0;
+
+err_create_pool:
+        system_pool_destroy_pools(page_pools);
+        return -ENOMEM;
+}
+
+static struct dmabuf_pool *__system_pool_create(void)
+{
+        struct system_pool *sys_pool;
+
+        sys_pool = kzalloc(sizeof(*sys_pool), GFP_KERNEL);
+        if (!sys_pool)
+                return ERR_PTR(-ENOMEM);
+        sys_pool->pool.ops = &system_pool_ops;
+        sys_pool->pool.flags = DMABUF_POOL_FLAG_DEFER_FREE;
+
+        if (system_pool_create_pools(sys_pool->page_pools))
+                goto free_pool;
+
+        return &sys_pool->pool;
+
+free_pool:
+        kfree(sys_pool);
+        return ERR_PTR(-ENOMEM);
+}
+
+static int system_pool_create(void)
+{
+        struct dmabuf_pool *pool;
+
+        pool = __system_pool_create();
+        if (IS_ERR(pool))
+                return PTR_ERR(pool);
+        pool->name = "system_pool";
+
+        dmabuf_pool_add(pool);
+        return 0;
+}
+device_initcall(system_pool_create);
+
+static int system_contig_pool_allocate(struct dmabuf_pool *pool,
+                                       struct dmabuf_pool_buffer *buffer,
+                                       unsigned long len,
+                                       unsigned long flags)
+{
+        int order = get_order(len);
+        struct page *page;
+        struct sg_table *table;
+        unsigned long i;
+        int ret;
+
+        page = alloc_pages(low_order_gfp_flags | __GFP_NOWARN, order);
+        if (!page)
+                return -ENOMEM;
+
+        split_page(page, order);
+
+        len = PAGE_ALIGN(len);
+        for (i = len >> PAGE_SHIFT; i < (1 << order); i++)
+                __free_page(page + i);
+
+        table = kmalloc(sizeof(*table), GFP_KERNEL);
+        if (!table) {
+                ret = -ENOMEM;
+                goto free_pages;
+        }
+
+        ret = sg_alloc_table(table, 1, GFP_KERNEL);
+        if (ret)
+                goto free_table;
+
+        sg_set_page(table->sgl, page, len, 0);
+
+        buffer->sg_table = table;
+
+        return 0;
+
+free_table:
+        kfree(table);
+free_pages:
+        for (i = 0; i < len >> PAGE_SHIFT; i++)
+                __free_page(page + i);
+
+        return ret;
+}
+
+static void system_contig_pool_free(struct dmabuf_pool_buffer *buffer)
+{
+        struct sg_table *table = buffer->sg_table;
+        struct page *page = sg_page(table->sgl);
+        unsigned long pages = PAGE_ALIGN(buffer->size) >> PAGE_SHIFT;
+        unsigned long i;
+
+        for (i = 0; i < pages; i++)
+                __free_page(page + i);
+        sg_free_table(table);
+        kfree(table);
+}
+
+static struct dmabuf_pool_ops kmalloc_ops = {
+        .allocate = system_contig_pool_allocate,
+        .free = system_contig_pool_free,
+        .map_kernel = dmabuf_pool_map_kernel,
+        .unmap_kernel = dmabuf_pool_unmap_kernel,
+        .map_user = dmabuf_pool_map_user,
+};
+
+static struct dmabuf_pool *__system_contig_pool_create(void)
+{
+        struct dmabuf_pool *pool;
+
+        pool = kzalloc(sizeof(*pool), GFP_KERNEL);
+        if (!pool)
+                return ERR_PTR(-ENOMEM);
+        pool->ops = &kmalloc_ops;
+        pool->name = "system_contig_pool";
+        return pool;
+}
+
+static int system_contig_pool_create(void)
+{
+        struct dmabuf_pool *pool;
+
+        pool = __system_contig_pool_create();
+        if (IS_ERR(pool))
+                return PTR_ERR(pool);
+
+        dmabuf_pool_add(pool);
+        return 0;
+}
+device_initcall(system_contig_pool_create);
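One detail worth noting in system_contig_pool_allocate() above: the buddy
allocator only hands out power-of-two blocks, so the pool over-allocates,
split_page()s the block, and immediately returns the tail pages beyond the
page-aligned length. A standalone sketch of that arithmetic (assuming
4 KiB pages; get_order() is re-implemented here purely for illustration):

#include <stdio.h>

#define PAGE_SHIFT 12
#define PAGE_SIZE (1UL << PAGE_SHIFT)
#define PAGE_ALIGN(x) (((x) + PAGE_SIZE - 1) & ~(PAGE_SIZE - 1))

/* Same contract as the kernel helper: smallest order whose block fits size. */
static unsigned int get_order(unsigned long size)
{
        unsigned int order = 0;

        size = (size - 1) >> PAGE_SHIFT;
        while (size) {
                order++;
                size >>= 1;
        }
        return order;
}

int main(void)
{
        unsigned long len = 600 * 1024;                  /* requested bytes */
        unsigned int order = get_order(len);             /* 8 -> a 1 MiB block */
        unsigned long kept = PAGE_ALIGN(len) >> PAGE_SHIFT;
        unsigned long allocated = 1UL << order;

        printf("order %u: %lu pages allocated, %lu kept, %lu freed back\n",
               order, allocated, kept, allocated - kept);
        return 0;
}

For a 600 KiB request this reports an order-8 (256-page) allocation of which
150 pages are kept and 106 are handed straight back to the buddy allocator.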
From patchwork Thu Feb 21 07:40:30 2019
Content-Type: text/plain; charset="utf-8"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
X-Patchwork-Submitter: John Stultz
X-Patchwork-Id: 158874
Delivered-To: patches@linaro.org
From: John Stultz
To: Laura Abbott
Cc: John Stultz, Benjamin Gaignard, Sumit Semwal, Liam Mark, Brian Starkey,
 "Andrew F. Davis", Chenbo Feng, Alistair Strachan, dri-devel@lists.freedesktop.org
Subject: [EARLY RFC][PATCH 4/4] dma-buf: pools: Add CMA pool to dmabuf pools
Date: Wed, 20 Feb 2019 23:40:30 -0800
Message-Id: <1550734830-23499-5-git-send-email-john.stultz@linaro.org>
X-Mailer: git-send-email 2.7.4
In-Reply-To: <1550734830-23499-1-git-send-email-john.stultz@linaro.org>
References: <1550734830-23499-1-git-send-email-john.stultz@linaro.org>

This adds a CMA pool, which allows userspace to allocate a dma-buf of
contiguous memory out of a CMA region.
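For reference, cma_pool_allocate() below derives its cma_alloc() arguments by
page-aligning the request, converting it to a page count, and capping the
alignment order at CONFIG_CMA_ALIGNMENT. A standalone sketch of that
calculation (assuming 4 KiB pages and the usual CONFIG_CMA_ALIGNMENT default
of 8; get_order() is re-implemented only for illustration):

#include <stdio.h>

#define PAGE_SHIFT 12
#define PAGE_SIZE (1UL << PAGE_SHIFT)
#define PAGE_ALIGN(x) (((x) + PAGE_SIZE - 1) & ~(PAGE_SIZE - 1))
#define CONFIG_CMA_ALIGNMENT 8          /* assumption: common kernel default */

static unsigned int get_order(unsigned long size)
{
        unsigned int order = 0;

        size = (size - 1) >> PAGE_SHIFT;
        while (size) {
                order++;
                size >>= 1;
        }
        return order;
}

int main(void)
{
        unsigned long len = 5 * 1024 * 1024 + 123;      /* an odd-sized request */
        unsigned long size = PAGE_ALIGN(len);
        unsigned long nr_pages = size >> PAGE_SHIFT;
        unsigned int align = get_order(size);

        if (align > CONFIG_CMA_ALIGNMENT)
                align = CONFIG_CMA_ALIGNMENT;

        printf("len=%lu -> %lu pages, alignment order %u (%lu KiB)\n",
               len, nr_pages, align, (PAGE_SIZE << align) / 1024);
        return 0;
}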
Cc: Laura Abbott
Cc: Benjamin Gaignard
Cc: Sumit Semwal
Cc: Liam Mark
Cc: Brian Starkey
Cc: Andrew F. Davis
Cc: Chenbo Feng
Cc: Alistair Strachan
Cc: dri-devel@lists.freedesktop.org
Signed-off-by: John Stultz
---
 drivers/dma-buf/pools/Kconfig    |   8 +++
 drivers/dma-buf/pools/Makefile   |   1 +
 drivers/dma-buf/pools/cma_pool.c | 143 +++++++++++++++++++++++++++++++++++++++
 3 files changed, 152 insertions(+)
 create mode 100644 drivers/dma-buf/pools/cma_pool.c

--
2.7.4

diff --git a/drivers/dma-buf/pools/Kconfig b/drivers/dma-buf/pools/Kconfig
index 787b2a6..e984304 100644
--- a/drivers/dma-buf/pools/Kconfig
+++ b/drivers/dma-buf/pools/Kconfig
@@ -15,3 +15,11 @@ config DMABUF_POOLS_SYSTEM
         help
           Choose this option to enable the system dmabuf pool. The system pool
           is backed by pages from the buddy allocator. If in doubt, say Y.
+
+config DMABUF_POOLS_CMA
+        bool "DMA-BUF CMA Pool"
+        depends on DMABUF_POOLS && DMA_CMA
+        help
+          Choose this option to enable dma-buf CMA pools. This pool is backed
+          by the Contiguous Memory Allocator (CMA). If your system has these
+          regions, you should say Y here.
diff --git a/drivers/dma-buf/pools/Makefile b/drivers/dma-buf/pools/Makefile
index 2ccf2a1..ac8aa28 100644
--- a/drivers/dma-buf/pools/Makefile
+++ b/drivers/dma-buf/pools/Makefile
@@ -1,3 +1,4 @@
 # SPDX-License-Identifier: GPL-2.0
 obj-$(CONFIG_DMABUF_POOLS) += dmabuf-pools.o pool-ioctl.o pool-helpers.o page_pool.o
 obj-$(CONFIG_DMABUF_POOLS_SYSTEM) += system_pool.o
+obj-$(CONFIG_DMABUF_POOLS_CMA) += cma_pool.o
diff --git a/drivers/dma-buf/pools/cma_pool.c b/drivers/dma-buf/pools/cma_pool.c
new file mode 100644
index 0000000..0bd783f
--- /dev/null
+++ b/drivers/dma-buf/pools/cma_pool.c
@@ -0,0 +1,143 @@
+// SPDX-License-Identifier: GPL-2.0
+/*
+ * drivers/dma-buf/pools/cma_pool.c
+ *
+ * Copyright (C) 2012, 2019 Linaro Ltd.
+ * Author: for ST-Ericsson.
+ */
+
+#include
+#include
+#include
+#include
+#include
+#include
+#include
+
+#include "dmabuf-pools.h"
+
+struct cma_pool {
+        struct dmabuf_pool pool;
+        struct cma *cma;
+};
+
+#define to_cma_pool(x) container_of(x, struct cma_pool, pool)
+
+/* dmabuf pool CMA operations functions */
+static int cma_pool_allocate(struct dmabuf_pool *pool,
+                             struct dmabuf_pool_buffer *buffer,
+                             unsigned long len,
+                             unsigned long flags)
+{
+        struct cma_pool *cma_pool = to_cma_pool(pool);
+        struct sg_table *table;
+        struct page *pages;
+        unsigned long size = PAGE_ALIGN(len);
+        unsigned long nr_pages = size >> PAGE_SHIFT;
+        unsigned long align = get_order(size);
+        int ret;
+
+        if (align > CONFIG_CMA_ALIGNMENT)
+                align = CONFIG_CMA_ALIGNMENT;
+
+        pages = cma_alloc(cma_pool->cma, nr_pages, align, false);
+        if (!pages)
+                return -ENOMEM;
+
+        if (PageHighMem(pages)) {
+                unsigned long nr_clear_pages = nr_pages;
+                struct page *page = pages;
+
+                while (nr_clear_pages > 0) {
+                        void *vaddr = kmap_atomic(page);
+
+                        memset(vaddr, 0, PAGE_SIZE);
+                        kunmap_atomic(vaddr);
+                        page++;
+                        nr_clear_pages--;
+                }
+        } else {
+                memset(page_address(pages), 0, size);
+        }
+
+        table = kmalloc(sizeof(*table), GFP_KERNEL);
+        if (!table)
+                goto err;
+
+        ret = sg_alloc_table(table, 1, GFP_KERNEL);
+        if (ret)
+                goto free_mem;
+
+        sg_set_page(table->sgl, pages, size, 0);
+
+        buffer->priv_virt = pages;
+        buffer->sg_table = table;
+        return 0;
+
+free_mem:
+        kfree(table);
+err:
+        cma_release(cma_pool->cma, pages, nr_pages);
+        return -ENOMEM;
+}
+
+static void cma_pool_free(struct dmabuf_pool_buffer *buffer)
+{
+        struct cma_pool *cma_pool = to_cma_pool(buffer->pool);
+        struct page *pages = buffer->priv_virt;
+        unsigned long nr_pages = PAGE_ALIGN(buffer->size) >> PAGE_SHIFT;
+
+        /* release memory */
+        cma_release(cma_pool->cma, pages, nr_pages);
+        /* release sg table */
+        sg_free_table(buffer->sg_table);
+        kfree(buffer->sg_table);
+}
+
+static struct dmabuf_pool_ops cma_pool_ops = {
+        .allocate = cma_pool_allocate,
+        .free = cma_pool_free,
+        .map_user = dmabuf_pool_map_user,
+        .map_kernel = dmabuf_pool_map_kernel,
+        .unmap_kernel = dmabuf_pool_unmap_kernel,
+};
+
+static struct dmabuf_pool *__cma_pool_create(struct cma *cma)
+{
+        struct cma_pool *cma_pool;
+
+        cma_pool = kzalloc(sizeof(*cma_pool), GFP_KERNEL);
+
+        if (!cma_pool)
+                return ERR_PTR(-ENOMEM);
+
+        cma_pool->pool.ops = &cma_pool_ops;
+        /*
+         * get device from private pools data, later it will be
+         * used to make the link with reserved CMA memory
+         */
+        cma_pool->cma = cma;
+        return &cma_pool->pool;
+}
+
+static int __add_cma_pools(struct cma *cma, void *data)
+{
+        struct dmabuf_pool *pool;
+
+        pool = __cma_pool_create(cma);
+        if (IS_ERR(pool))
+                return PTR_ERR(pool);
+
+        pool->name = cma_get_name(cma);
+
+        dmabuf_pool_add(pool);
+
+        return 0;
+}
+
+static int add_cma_pools(void)
+{
+        cma_for_each_area(__add_cma_pools, NULL);
+        return 0;
+}
+device_initcall(add_cma_pools);