From patchwork Tue Dec  3 17:26:40 2019
X-Patchwork-Submitter: John Stultz
X-Patchwork-Id: 180732
From: John Stultz
To: lkml
Cc: John Stultz, Laura Abbott, Benjamin Gaignard, Sumit Semwal,
 Liam Mark, Pratik Patel, Brian Starkey, Vincent Donnefort,
 Sudipto Paul, "Andrew F. Davis", Christoph Hellwig, Chenbo Feng,
 Alistair Strachan, Hridya Valsaraju, Sandeep Patil, Hillf Danton,
 Dave Airlie, dri-devel@lists.freedesktop.org
Subject: [RESEND][PATCH v16 4/5] dma-buf: heaps: Add CMA heap to dmabuf heaps
Date: Tue, 3 Dec 2019 17:26:40 +0000
Message-Id: <20191203172641.66642-5-john.stultz@linaro.org>
X-Mailer: git-send-email 2.17.1
In-Reply-To: <20191203172641.66642-1-john.stultz@linaro.org>
References: <20191203172641.66642-1-john.stultz@linaro.org>

This adds a CMA heap, which allows userspace to allocate a dma-buf of
contiguous memory out of a CMA region.

This code is an evolution of the Android ION implementation, so
thanks to its original author and maintainers:
  Benjamin Gaignard, Laura Abbott, and others!

NOTE: This patch only adds the default CMA heap. We will enable
selectively adding other CMA memory regions to the dmabuf heaps
interface with a later patch (which requires a dt binding).

Cc: Laura Abbott
Cc: Benjamin Gaignard
Cc: Sumit Semwal
Cc: Liam Mark
Cc: Pratik Patel
Cc: Brian Starkey
Cc: Vincent Donnefort
Cc: Sudipto Paul
Cc: Andrew F. Davis
Cc: Christoph Hellwig
Cc: Chenbo Feng
Cc: Alistair Strachan
Cc: Hridya Valsaraju
Cc: Sandeep Patil
Cc: Hillf Danton
Cc: Dave Airlie
Cc: dri-devel@lists.freedesktop.org
Reviewed-by: Benjamin Gaignard
Reviewed-by: Brian Starkey
Acked-by: Sandeep Patil
Acked-by: Laura Abbott
Tested-by: Ayan Kumar Halder
Signed-off-by: John Stultz
---
v2:
* Switch allocate to return dmabuf fd
* Simplify init code
* Checkpatch fixups
v3:
* Switch to inline function for to_cma_heap()
* Minor cleanups suggested by Brian
* Fold in new registration style from Andrew
* Folded in changes from Andrew to use simplified page list from the
  heap helpers
v4:
* Use the fd_flags when creating dmabuf fd (Suggested by Benjamin)
* Use precalculated pagecount (Suggested by Andrew)
v6:
* Changed variable names to improve clarity, as suggested by Brian
v7:
* Use newly lower-cased init_heap_helper_buffer helper
* Use new dmabuf export helper
v8:
* Make struct dma_heap_ops const (Suggested by Christoph)
* Condense dma_heap_buffer and heap_helper_buffer (suggested by
  Christoph)
* Checkpatch whitespace fixups
v9:
* Remove needless check noted by Brian Starkey
* Rename dma_heap_get_data -> dma_heap_get_drvdata, as suggested by
  Hillf Danton
* Check signals after clearing memory pages to avoid doing needless
  work if the task is killed, as suggested by Hillf
v12:
* Rework to only add the default CMA heap
v15:
* Drop unused flags field from heap_helper_buffer, as suggested by
  Sandeep Patil
---
 drivers/dma-buf/heaps/Kconfig    |   8 ++
 drivers/dma-buf/heaps/Makefile   |   1 +
 drivers/dma-buf/heaps/cma_heap.c | 177 +++++++++++++++++++++++++++++++
 3 files changed, 186 insertions(+)
 create mode 100644 drivers/dma-buf/heaps/cma_heap.c

-- 
2.17.1
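For reviewers who want to poke at it, allocation from a heap goes
through the DMA_HEAP_IOCTL_ALLOC ioctl that patch 1/5 of this series
adds. A minimal userspace sketch, assuming the <linux/dma-heap.h> uapi
as merged; the "reserved" node name is also an assumption, since the
node under /dev/dma_heap/ takes whatever name cma_get_name() reports
for the default region:

#include <fcntl.h>
#include <string.h>
#include <sys/ioctl.h>
#include <unistd.h>
#include <linux/dma-heap.h>

/* Returns a dma-buf fd backed by physically contiguous memory. */
int alloc_cma_dmabuf(size_t len)
{
	struct dma_heap_allocation_data data;
	int heap_fd, ret;

	heap_fd = open("/dev/dma_heap/reserved", O_RDWR);
	if (heap_fd < 0)
		return -1;

	memset(&data, 0, sizeof(data));
	data.len = len;
	data.fd_flags = O_RDWR | O_CLOEXEC;

	ret = ioctl(heap_fd, DMA_HEAP_IOCTL_ALLOC, &data);
	close(heap_fd);
	if (ret < 0)
		return -1;

	return data.fd;
}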
diff --git a/drivers/dma-buf/heaps/Kconfig b/drivers/dma-buf/heaps/Kconfig
index 205052744169..a5eef06c4226 100644
--- a/drivers/dma-buf/heaps/Kconfig
+++ b/drivers/dma-buf/heaps/Kconfig
@@ -4,3 +4,11 @@ config DMABUF_HEAPS_SYSTEM
 	help
 	  Choose this option to enable the system dmabuf heap. The system heap
 	  is backed by pages from the buddy allocator. If in doubt, say Y.
+
+config DMABUF_HEAPS_CMA
+	bool "DMA-BUF CMA Heap"
+	depends on DMABUF_HEAPS && DMA_CMA
+	help
+	  Choose this option to enable dma-buf CMA heap. This heap is backed
+	  by the Contiguous Memory Allocator (CMA). If your system has these
+	  regions, you should say Y here.
diff --git a/drivers/dma-buf/heaps/Makefile b/drivers/dma-buf/heaps/Makefile
index d1808eca2581..6e54cdec3da0 100644
--- a/drivers/dma-buf/heaps/Makefile
+++ b/drivers/dma-buf/heaps/Makefile
@@ -1,3 +1,4 @@
 # SPDX-License-Identifier: GPL-2.0
 obj-y += heap-helpers.o
 obj-$(CONFIG_DMABUF_HEAPS_SYSTEM) += system_heap.o
+obj-$(CONFIG_DMABUF_HEAPS_CMA) += cma_heap.o
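As a side note for anyone wanting to exercise the new option: a sketch
of one possible config, where CONFIG_CMA_SIZE_MBYTES (or the cma=
kernel command line) is just one way of reserving a default CMA region,
and the size here is an arbitrary assumption:

CONFIG_DMABUF_HEAPS=y
CONFIG_DMABUF_HEAPS_CMA=y
CONFIG_DMA_CMA=y
CONFIG_CMA_SIZE_MBYTES=64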
diff --git a/drivers/dma-buf/heaps/cma_heap.c b/drivers/dma-buf/heaps/cma_heap.c
new file mode 100644
index 000000000000..626cf7fd033a
--- /dev/null
+++ b/drivers/dma-buf/heaps/cma_heap.c
@@ -0,0 +1,177 @@
+// SPDX-License-Identifier: GPL-2.0
+/*
+ * DMABUF CMA heap exporter
+ *
+ * Copyright (C) 2012, 2019 Linaro Ltd.
+ * Author: Benjamin Gaignard for ST-Ericsson.
+ */
+
+#include <linux/cma.h>
+#include <linux/device.h>
+#include <linux/dma-buf.h>
+#include <linux/dma-heap.h>
+#include <linux/dma-contiguous.h>
+#include <linux/err.h>
+#include <linux/errno.h>
+#include <linux/highmem.h>
+#include <linux/module.h>
+#include <linux/slab.h>
+#include <linux/scatterlist.h>
+#include <linux/sched/signal.h>
+
+#include "heap-helpers.h"
+
+struct cma_heap {
+	struct dma_heap *heap;
+	struct cma *cma;
+};
+
+static void cma_heap_free(struct heap_helper_buffer *buffer)
+{
+	struct cma_heap *cma_heap = dma_heap_get_drvdata(buffer->heap);
+	unsigned long nr_pages = buffer->pagecount;
+	struct page *cma_pages = buffer->priv_virt;
+
+	/* free page list */
+	kfree(buffer->pages);
+	/* release memory */
+	cma_release(cma_heap->cma, cma_pages, nr_pages);
+	kfree(buffer);
+}
+
+/* dmabuf heap CMA operations functions */
+static int cma_heap_allocate(struct dma_heap *heap,
+			     unsigned long len,
+			     unsigned long fd_flags,
+			     unsigned long heap_flags)
+{
+	struct cma_heap *cma_heap = dma_heap_get_drvdata(heap);
+	struct heap_helper_buffer *helper_buffer;
+	struct page *cma_pages;
+	size_t size = PAGE_ALIGN(len);
+	unsigned long nr_pages = size >> PAGE_SHIFT;
+	unsigned long align = get_order(size);
+	struct dma_buf *dmabuf;
+	int ret = -ENOMEM;
+	pgoff_t pg;
+
+	if (align > CONFIG_CMA_ALIGNMENT)
+		align = CONFIG_CMA_ALIGNMENT;
+
+	helper_buffer = kzalloc(sizeof(*helper_buffer), GFP_KERNEL);
+	if (!helper_buffer)
+		return -ENOMEM;
+
+	init_heap_helper_buffer(helper_buffer, cma_heap_free);
+	helper_buffer->heap = heap;
+	helper_buffer->size = len;
+
+	cma_pages = cma_alloc(cma_heap->cma, nr_pages, align, false);
+	if (!cma_pages)
+		goto free_buf;
+
+	if (PageHighMem(cma_pages)) {
+		unsigned long nr_clear_pages = nr_pages;
+		struct page *page = cma_pages;
+
+		while (nr_clear_pages > 0) {
+			void *vaddr = kmap_atomic(page);
+
+			memset(vaddr, 0, PAGE_SIZE);
+			kunmap_atomic(vaddr);
+			/*
+			 * Avoid wasting time zeroing memory if the process
+			 * has been killed by SIGKILL
+			 */
+			if (fatal_signal_pending(current))
+				goto free_cma;
+
+			page++;
+			nr_clear_pages--;
+		}
+	} else {
+		memset(page_address(cma_pages), 0, size);
+	}
+
+	helper_buffer->pagecount = nr_pages;
+	helper_buffer->pages = kmalloc_array(helper_buffer->pagecount,
+					     sizeof(*helper_buffer->pages),
+					     GFP_KERNEL);
+	if (!helper_buffer->pages) {
+		ret = -ENOMEM;
+		goto free_cma;
+	}
+
+	for (pg = 0; pg < helper_buffer->pagecount; pg++)
+		helper_buffer->pages[pg] = &cma_pages[pg];
+
+	/* create the dmabuf */
+	dmabuf = heap_helper_export_dmabuf(helper_buffer, fd_flags);
+	if (IS_ERR(dmabuf)) {
+		ret = PTR_ERR(dmabuf);
+		goto free_pages;
+	}
+
+	helper_buffer->dmabuf = dmabuf;
+	helper_buffer->priv_virt = cma_pages;
+
+	ret = dma_buf_fd(dmabuf, fd_flags);
+	if (ret < 0) {
+		dma_buf_put(dmabuf);
+		/* just return, as put will call release and that will free */
+		return ret;
+	}
+
+	return ret;
+
+free_pages:
+	kfree(helper_buffer->pages);
+free_cma:
+	cma_release(cma_heap->cma, cma_pages, nr_pages);
+free_buf:
+	kfree(helper_buffer);
+	return ret;
+}
+
+static const struct dma_heap_ops cma_heap_ops = {
+	.allocate = cma_heap_allocate,
+};
+
+static int __add_cma_heap(struct cma *cma, void *data)
+{
+	struct cma_heap *cma_heap;
+	struct dma_heap_export_info exp_info;
+
+	cma_heap = kzalloc(sizeof(*cma_heap), GFP_KERNEL);
+	if (!cma_heap)
+		return -ENOMEM;
+	cma_heap->cma = cma;
+
+	exp_info.name = cma_get_name(cma);
+	exp_info.ops = &cma_heap_ops;
+	exp_info.priv = cma_heap;
+
+	cma_heap->heap = dma_heap_add(&exp_info);
+	if (IS_ERR(cma_heap->heap)) {
+		int ret = PTR_ERR(cma_heap->heap);
+
+		kfree(cma_heap);
+		return ret;
+	}
+
+	return 0;
+}
+
+static int add_default_cma_heap(void)
+{
+	struct cma *default_cma = dev_get_cma_area(NULL);
+	int ret = 0;
+
+	if (default_cma)
+		ret = __add_cma_heap(default_cma, NULL);
+
+	return ret;
+}
+module_init(add_default_cma_heap);
+MODULE_DESCRIPTION("DMA-BUF CMA Heap");
+MODULE_LICENSE("GPL v2");
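As a follow-up usage note: once userspace has the dma-buf fd (see the
allocation sketch above), CPU access follows the standard dma-buf uapi
from <linux/dma-buf.h>; the heap helpers provide the mmap and
begin/end_cpu_access ops this relies on. A minimal sketch:

#include <stdint.h>
#include <string.h>
#include <sys/ioctl.h>
#include <sys/mman.h>
#include <linux/dma-buf.h>

/* Map the buffer and fill it, bracketing CPU access with SYNC ioctls. */
static int fill_dmabuf(int buf_fd, size_t len)
{
	struct dma_buf_sync sync = { 0 };
	uint8_t *p;

	p = mmap(NULL, len, PROT_READ | PROT_WRITE, MAP_SHARED, buf_fd, 0);
	if (p == MAP_FAILED)
		return -1;

	/* tell the exporter the CPU is about to write */
	sync.flags = DMA_BUF_SYNC_START | DMA_BUF_SYNC_WRITE;
	ioctl(buf_fd, DMA_BUF_IOCTL_SYNC, &sync);

	memset(p, 0xaa, len);

	/* CPU access done; flush back for any device reader */
	sync.flags = DMA_BUF_SYNC_END | DMA_BUF_SYNC_WRITE;
	ioctl(buf_fd, DMA_BUF_IOCTL_SYNC, &sync);

	munmap(p, len);
	return 0;
}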