From patchwork Wed Jul 24 00:36:53 2019
Content-Type: text/plain; charset="utf-8"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
X-Patchwork-Submitter: John Stultz <john.stultz@linaro.org>
X-Patchwork-Id: 169586
From: John Stultz <john.stultz@linaro.org>
To: lkml
Cc: John Stultz, Laura Abbott, Benjamin Gaignard, Sumit Semwal,
    Liam Mark, Pratik Patel, Brian Starkey, Vincent Donnefort,
    Sudipto Paul, "Andrew F. Davis", Christoph Hellwig, Chenbo Feng,
    Alistair Strachan, Hridya Valsaraju, dri-devel@lists.freedesktop.org
Subject: [PATCH v7 2/5] dma-buf: heaps: Add heap helpers
Date: Wed, 24 Jul 2019 00:36:53 +0000
Message-Id: <20190724003656.59780-3-john.stultz@linaro.org>
X-Mailer: git-send-email 2.17.1
In-Reply-To: <20190724003656.59780-1-john.stultz@linaro.org>
References: <20190724003656.59780-1-john.stultz@linaro.org>

Add generic helper dmabuf ops for dma heaps, so we can reduce
the amount of duplicative code for the exported dmabufs.

This code is an evolution of the Android ION implementation, so
thanks to its original authors and maintainers:
  Rebecca Schultz Zavin, Colin Cross, Laura Abbott, and others!
Cc: Laura Abbott
Cc: Benjamin Gaignard
Cc: Sumit Semwal
Cc: Liam Mark
Cc: Pratik Patel
Cc: Brian Starkey
Cc: Vincent Donnefort
Cc: Sudipto Paul
Cc: Andrew F. Davis
Cc: Christoph Hellwig
Cc: Chenbo Feng
Cc: Alistair Strachan
Cc: Hridya Valsaraju
Cc: dri-devel@lists.freedesktop.org
Reviewed-by: Benjamin Gaignard
Signed-off-by: John Stultz
---
v2:
* Removed cache management performance hack that I had
  accidentally folded in.
* Removed stats code that was in helpers
* Lots of checkpatch cleanups
v3:
* Uninline INIT_HEAP_HELPER_BUFFER (suggested by Christoph)
* Switch to WARN on buffer destroy failure (suggested by Brian)
* buffer->kmap_cnt decrementing cleanup (suggested by Christoph)
* Extra buffer->vaddr checking in dma_heap_dma_buf_kmap
  (suggested by Brian)
* Switch to_helper_buffer from macro to inline function
  (suggested by Benjamin)
* Rename kmap->vmap (folded in from Andrew)
* Use vmap for vmapping - not begin_cpu_access (folded in from Andrew)
* Drop kmap for now, as it's optional (folded in from Andrew)
* Fold dma_heap_map_user into the single caller (folded in from Andrew)
* Folded in patch from Andrew to track page list per heap not
  sglist, which simplifies the tracking logic
v4:
* Moved dma-heap.h change out to previous patch
v6:
* Minor cleanups and typo fixes suggested by Brian
v7:
* Removed stray ;
* Make init_heap_helper_buffer lowercase, as suggested by Christoph
* Add dmabuf export helper to reduce boilerplate code
---
 drivers/dma-buf/Makefile             |   1 +
 drivers/dma-buf/heaps/Makefile       |   2 +
 drivers/dma-buf/heaps/heap-helpers.c | 276 +++++++++++++++++++++++++++
 drivers/dma-buf/heaps/heap-helpers.h |  58 ++++++
 4 files changed, 337 insertions(+)
 create mode 100644 drivers/dma-buf/heaps/Makefile
 create mode 100644 drivers/dma-buf/heaps/heap-helpers.c
 create mode 100644 drivers/dma-buf/heaps/heap-helpers.h
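As a reference for reviewers, here is a minimal sketch of how a heap
driver is expected to plug into these helpers: allocate and fill in a
heap_helper_buffer, hand init_heap_helper_buffer() a free callback, and
call heap_helper_export_dmabuf(). The my_heap_free() /
my_heap_alloc_and_export() names, the page-at-a-time allocation loop,
and the elided error unwinding are illustrative assumptions only, not
part of this patch (the real system heap follows in patch 3/5):

    /* Hypothetical heap driver using the helpers below. Assumes len
     * is page-aligned; failure cleanup is elided for brevity.
     */
    static void my_heap_free(struct heap_helper_buffer *buffer)
    {
    	pgoff_t pg;

    	for (pg = 0; pg < buffer->pagecount; pg++)
    		__free_page(buffer->pages[pg]);
    	kfree(buffer->pages);
    	kfree(buffer);
    }

    static struct dma_buf *my_heap_alloc_and_export(struct dma_heap *heap,
    						size_t len, int fd_flags)
    {
    	struct heap_helper_buffer *buffer;
    	pgoff_t pg;

    	buffer = kzalloc(sizeof(*buffer), GFP_KERNEL);
    	if (!buffer)
    		return ERR_PTR(-ENOMEM);

    	/* Initialize helper state and register the free callback */
    	init_heap_helper_buffer(buffer, my_heap_free);
    	buffer->heap_buffer.heap = heap;
    	buffer->heap_buffer.size = len;

    	/* The helpers track the buffer as a plain page list */
    	buffer->pagecount = len >> PAGE_SHIFT;
    	buffer->pages = kmalloc_array(buffer->pagecount,
    				      sizeof(*buffer->pages), GFP_KERNEL);
    	for (pg = 0; pg < buffer->pagecount; pg++)
    		buffer->pages[pg] = alloc_page(GFP_KERNEL);

    	/* Wire the buffer up to heap_helper_ops and export it */
    	return heap_helper_export_dmabuf(buffer, fd_flags);
    }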
-- 
2.17.1

diff --git a/drivers/dma-buf/Makefile b/drivers/dma-buf/Makefile
index 1cb3dd104825..e3e3dca29e46 100644
--- a/drivers/dma-buf/Makefile
+++ b/drivers/dma-buf/Makefile
@@ -1,6 +1,7 @@
 # SPDX-License-Identifier: GPL-2.0-only
 obj-y := dma-buf.o dma-fence.o dma-fence-array.o dma-fence-chain.o \
	 reservation.o seqno-fence.o
+obj-$(CONFIG_DMABUF_HEAPS)	+= heaps/
 obj-$(CONFIG_DMABUF_HEAPS)	+= dma-heap.o
 obj-$(CONFIG_SYNC_FILE)		+= sync_file.o
 obj-$(CONFIG_SW_SYNC)		+= sw_sync.o sync_debug.o
diff --git a/drivers/dma-buf/heaps/Makefile b/drivers/dma-buf/heaps/Makefile
new file mode 100644
index 000000000000..de49898112db
--- /dev/null
+++ b/drivers/dma-buf/heaps/Makefile
@@ -0,0 +1,2 @@
+# SPDX-License-Identifier: GPL-2.0
+obj-y += heap-helpers.o
diff --git a/drivers/dma-buf/heaps/heap-helpers.c b/drivers/dma-buf/heaps/heap-helpers.c
new file mode 100644
index 000000000000..df1bd3cf7262
--- /dev/null
+++ b/drivers/dma-buf/heaps/heap-helpers.c
@@ -0,0 +1,276 @@
+// SPDX-License-Identifier: GPL-2.0
+#include <linux/device.h>
+#include <linux/dma-buf.h>
+#include <linux/err.h>
+#include <linux/idr.h>
+#include <linux/list.h>
+#include <linux/slab.h>
+#include <linux/uaccess.h>
+#include <linux/vmalloc.h>
+
+#include "heap-helpers.h"
+
+void init_heap_helper_buffer(struct heap_helper_buffer *buffer,
+			     void (*free)(struct heap_helper_buffer *))
+{
+	buffer->private_flags = 0;
+	buffer->priv_virt = NULL;
+	mutex_init(&buffer->lock);
+	buffer->vmap_cnt = 0;
+	buffer->vaddr = NULL;
+	buffer->pagecount = 0;
+	buffer->pages = NULL;
+	INIT_LIST_HEAD(&buffer->attachments);
+	buffer->free = free;
+}
+
+struct dma_buf *heap_helper_export_dmabuf(
+				struct heap_helper_buffer *helper_buffer,
+				int fd_flags)
+{
+	DEFINE_DMA_BUF_EXPORT_INFO(exp_info);
+
+	exp_info.ops = &heap_helper_ops;
+	exp_info.size = helper_buffer->heap_buffer.size;
+	exp_info.flags = fd_flags;
+	exp_info.priv = &helper_buffer->heap_buffer;
+
+	return dma_buf_export(&exp_info);
+}
+
+static void *dma_heap_map_kernel(struct heap_helper_buffer *buffer)
+{
+	void *vaddr;
+
+	vaddr = vmap(buffer->pages, buffer->pagecount, VM_MAP, PAGE_KERNEL);
+	if (!vaddr)
+		return ERR_PTR(-ENOMEM);
+
+	return vaddr;
+}
+
+static void dma_heap_buffer_destroy(struct dma_heap_buffer *heap_buffer)
+{
+	struct heap_helper_buffer *buffer = to_helper_buffer(heap_buffer);
+
+	if (buffer->vmap_cnt > 0) {
+		WARN(1, "%s: buffer still mapped in the kernel\n",
+		     __func__);
+		vunmap(buffer->vaddr);
+	}
+
+	buffer->free(buffer);
+}
+
+static void *dma_heap_buffer_vmap_get(struct dma_heap_buffer *heap_buffer)
+{
+	struct heap_helper_buffer *buffer = to_helper_buffer(heap_buffer);
+	void *vaddr;
+
+	if (buffer->vmap_cnt) {
+		buffer->vmap_cnt++;
+		return buffer->vaddr;
+	}
+	vaddr = dma_heap_map_kernel(buffer);
+	if (WARN_ONCE(!vaddr,
+		      "heap->ops->map_kernel should return ERR_PTR on error"))
+		return ERR_PTR(-EINVAL);
+	if (IS_ERR(vaddr))
+		return vaddr;
+	buffer->vaddr = vaddr;
+	buffer->vmap_cnt++;
+	return vaddr;
+}
+
+static void dma_heap_buffer_vmap_put(struct dma_heap_buffer *heap_buffer)
+{
+	struct heap_helper_buffer *buffer = to_helper_buffer(heap_buffer);
+
+	if (!--buffer->vmap_cnt) {
+		vunmap(buffer->vaddr);
+		buffer->vaddr = NULL;
+	}
+}
+
+struct dma_heaps_attachment {
+	struct device *dev;
+	struct sg_table table;
+	struct list_head list;
+};
+
+static int dma_heap_attach(struct dma_buf *dmabuf,
+			   struct dma_buf_attachment *attachment)
+{
+	struct dma_heaps_attachment *a;
+	struct dma_heap_buffer *heap_buffer = dmabuf->priv;
+	struct heap_helper_buffer *buffer = to_helper_buffer(heap_buffer);
+	int ret;
+
+	a = kzalloc(sizeof(*a), GFP_KERNEL);
+	if (!a)
+		return -ENOMEM;
+
+	ret = sg_alloc_table_from_pages(&a->table, buffer->pages,
+					buffer->pagecount, 0,
+					buffer->pagecount << PAGE_SHIFT,
+					GFP_KERNEL);
+	if (ret) {
+		kfree(a);
+		return ret;
+	}
+
+	a->dev = attachment->dev;
+	INIT_LIST_HEAD(&a->list);
+
+	attachment->priv = a;
+
+	mutex_lock(&buffer->lock);
+	list_add(&a->list, &buffer->attachments);
+	mutex_unlock(&buffer->lock);
+
+	return 0;
+}
+
+static void dma_heap_detach(struct dma_buf *dmabuf,
+			    struct dma_buf_attachment *attachment)
+{
+	struct dma_heaps_attachment *a = attachment->priv;
+	struct dma_heap_buffer *heap_buffer = dmabuf->priv;
+	struct heap_helper_buffer *buffer = to_helper_buffer(heap_buffer);
+
+	mutex_lock(&buffer->lock);
+	list_del(&a->list);
+	mutex_unlock(&buffer->lock);
+
+	sg_free_table(&a->table);
+	kfree(a);
+}
+
+static struct sg_table *dma_heap_map_dma_buf(
+				struct dma_buf_attachment *attachment,
+				enum dma_data_direction direction)
+{
+	struct dma_heaps_attachment *a = attachment->priv;
+	struct sg_table *table;
+
+	table = &a->table;
+
+	if (!dma_map_sg(attachment->dev, table->sgl, table->nents,
+			direction))
+		table = ERR_PTR(-ENOMEM);
+	return table;
+}
+
+static void dma_heap_unmap_dma_buf(struct dma_buf_attachment *attachment,
+				   struct sg_table *table,
+				   enum dma_data_direction direction)
+{
+	dma_unmap_sg(attachment->dev, table->sgl, table->nents, direction);
+}
+
+static vm_fault_t dma_heap_vm_fault(struct vm_fault *vmf)
+{
+	struct vm_area_struct *vma = vmf->vma;
+	struct heap_helper_buffer *buffer = vma->vm_private_data;
+
+	vmf->page = buffer->pages[vmf->pgoff];
+	get_page(vmf->page);
+
+	return 0;
+}
+
+static const struct vm_operations_struct dma_heap_vm_ops = {
+	.fault = dma_heap_vm_fault,
+};
+
+static int dma_heap_mmap(struct dma_buf *dmabuf, struct vm_area_struct *vma)
+{
+	struct dma_heap_buffer *heap_buffer = dmabuf->priv;
+	struct heap_helper_buffer *buffer = to_helper_buffer(heap_buffer);
+
+	if ((vma->vm_flags & (VM_SHARED | VM_MAYSHARE)) == 0)
+		return -EINVAL;
+
+	vma->vm_ops = &dma_heap_vm_ops;
+	vma->vm_private_data = buffer;
+
+	return 0;
+}
+
+static void dma_heap_dma_buf_release(struct dma_buf *dmabuf)
+{
+	struct dma_heap_buffer *buffer = dmabuf->priv;
+
+	dma_heap_buffer_destroy(buffer);
+}
+
+static int dma_heap_dma_buf_begin_cpu_access(struct dma_buf *dmabuf,
+					     enum dma_data_direction direction)
+{
+	struct dma_heap_buffer *heap_buffer = dmabuf->priv;
+	struct heap_helper_buffer *buffer = to_helper_buffer(heap_buffer);
+	struct dma_heaps_attachment *a;
+	int ret = 0;
+
+	mutex_lock(&buffer->lock);
+	list_for_each_entry(a, &buffer->attachments, list) {
+		dma_sync_sg_for_cpu(a->dev, a->table.sgl, a->table.nents,
+				    direction);
+	}
+	mutex_unlock(&buffer->lock);
+
+	return ret;
+}
+
+static int dma_heap_dma_buf_end_cpu_access(struct dma_buf *dmabuf,
+					   enum dma_data_direction direction)
+{
+	struct dma_heap_buffer *heap_buffer = dmabuf->priv;
+	struct heap_helper_buffer *buffer = to_helper_buffer(heap_buffer);
+	struct dma_heaps_attachment *a;
+
+	mutex_lock(&buffer->lock);
+	list_for_each_entry(a, &buffer->attachments, list) {
+		dma_sync_sg_for_device(a->dev, a->table.sgl, a->table.nents,
+				       direction);
+	}
+	mutex_unlock(&buffer->lock);
+
+	return 0;
+}
+
+static void *dma_heap_dma_buf_vmap(struct dma_buf *dmabuf)
+{
+	struct dma_heap_buffer *heap_buffer = dmabuf->priv;
+	struct heap_helper_buffer *buffer = to_helper_buffer(heap_buffer);
+	void *vaddr;
+
+	mutex_lock(&buffer->lock);
+	vaddr = dma_heap_buffer_vmap_get(heap_buffer);
+	mutex_unlock(&buffer->lock);
+
+	return vaddr;
+}
+
+static void dma_heap_dma_buf_vunmap(struct dma_buf *dmabuf, void *vaddr)
+{
+	struct dma_heap_buffer *heap_buffer = dmabuf->priv;
+	struct heap_helper_buffer *buffer = to_helper_buffer(heap_buffer);
+
+	mutex_lock(&buffer->lock);
+	dma_heap_buffer_vmap_put(heap_buffer);
+	mutex_unlock(&buffer->lock);
+}
+
+const struct dma_buf_ops heap_helper_ops = {
+	.map_dma_buf = dma_heap_map_dma_buf,
+	.unmap_dma_buf = dma_heap_unmap_dma_buf,
+	.mmap = dma_heap_mmap,
+	.release = dma_heap_dma_buf_release,
+	.attach = dma_heap_attach,
+	.detach = dma_heap_detach,
+	.begin_cpu_access = dma_heap_dma_buf_begin_cpu_access,
+	.end_cpu_access = dma_heap_dma_buf_end_cpu_access,
+	.vmap = dma_heap_dma_buf_vmap,
+	.vunmap = dma_heap_dma_buf_vunmap,
+};
diff --git a/drivers/dma-buf/heaps/heap-helpers.h b/drivers/dma-buf/heaps/heap-helpers.h
new file mode 100644
index 000000000000..741c7ec013f5
--- /dev/null
+++ b/drivers/dma-buf/heaps/heap-helpers.h
@@ -0,0 +1,58 @@
+/* SPDX-License-Identifier: GPL-2.0 */
+/*
+ * DMABUF Heaps helper code
+ *
+ * Copyright (C) 2011 Google, Inc.
+ * Copyright (C) 2019 Linaro Ltd.
+ */
+
+#ifndef _HEAP_HELPERS_H
+#define _HEAP_HELPERS_H
+
+#include <linux/dma-heap.h>
+#include <linux/list.h>
+
+/**
+ * struct dma_heap_buffer - metadata for a particular buffer
+ * @heap:	back pointer to the heap the buffer came from
+ * @dmabuf:	backing dma-buf for this buffer
+ * @size:	size of the buffer
+ * @flags:	buffer specific flags
+ */
+struct dma_heap_buffer {
+	struct dma_heap *heap;
+	struct dma_buf *dmabuf;
+	size_t size;
+	unsigned long flags;
+};
+
+struct heap_helper_buffer {
+	struct dma_heap_buffer heap_buffer;
+
+	unsigned long private_flags;
+	void *priv_virt;
+	struct mutex lock;
+	int vmap_cnt;
+	void *vaddr;
+	pgoff_t pagecount;
+	struct page **pages;
+	struct list_head attachments;
+
+	void (*free)(struct heap_helper_buffer *buffer);
+};
+
+static inline struct heap_helper_buffer *to_helper_buffer(
+						struct dma_heap_buffer *h)
+{
+	return container_of(h, struct heap_helper_buffer, heap_buffer);
+}
+
+void init_heap_helper_buffer(struct heap_helper_buffer *buffer,
+			     void (*free)(struct heap_helper_buffer *));
+
+struct dma_buf *heap_helper_export_dmabuf(
+				struct heap_helper_buffer *helper_buffer,
+				int fd_flags);
+
+extern const struct dma_buf_ops heap_helper_ops;
+
+#endif /* _HEAP_HELPERS_H */
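For completeness, a sketch of the consumer side as well: a device
driver exercising heap_helper_ops through the standard dma-buf core
calls, which route to dma_heap_attach() and dma_heap_map_dma_buf()
above. The my_dev_use_buffer() wrapper is hypothetical; the dma-buf
API calls themselves are the existing in-kernel interface:

    /* Illustrative consumer of a buffer exported via heap_helper_ops. */
    #include <linux/dma-buf.h>

    static int my_dev_use_buffer(struct device *dev, struct dma_buf *dmabuf)
    {
    	struct dma_buf_attachment *attach;
    	struct sg_table *sgt;

    	/* Invokes dma_heap_attach(): builds an sg_table from the helper
    	 * buffer's page list and tracks it on buffer->attachments. */
    	attach = dma_buf_attach(dmabuf, dev);
    	if (IS_ERR(attach))
    		return PTR_ERR(attach);

    	/* Invokes dma_heap_map_dma_buf(): DMA-maps the pages for dev. */
    	sgt = dma_buf_map_attachment(attach, DMA_BIDIRECTIONAL);
    	if (IS_ERR(sgt)) {
    		dma_buf_detach(dmabuf, attach);
    		return PTR_ERR(sgt);
    	}

    	/* ... program the device with the sgt entries ... */

    	dma_buf_unmap_attachment(attach, sgt, DMA_BIDIRECTIONAL);
    	dma_buf_detach(dmabuf, attach);
    	return 0;
    }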