From patchwork Mon May 13 18:37:25 2019
X-Patchwork-Submitter: John Stultz
X-Patchwork-Id: 164046
Delivered-To: patches@linaro.org
From: John Stultz
To: lkml
Cc: John Stultz, Laura Abbott, Benjamin Gaignard, Sumit Semwal, Liam Mark,
    Pratik Patel, Brian Starkey, Vincent Donnefort, Sudipto Paul,
    "Andrew F. Davis", Xu YiPing, "Chenfeng (puck)", butao, "Xiaqing (A)",
    Yudongbin, Christoph Hellwig, Chenbo Feng, Alistair Strachan,
    dri-devel@lists.freedesktop.org
Subject: [RFC][PATCH 3/5 v4] dma-buf: heaps: Add system heap to dmabuf heaps
Date: Mon, 13 May 2019 11:37:25 -0700
Message-Id: <20190513183727.15755-4-john.stultz@linaro.org>
X-Mailer: git-send-email 2.17.1
In-Reply-To: <20190513183727.15755-1-john.stultz@linaro.org>
References: <20190513183727.15755-1-john.stultz@linaro.org>

This patch adds the system heap to the dma-buf heaps framework. This
allows applications to get a page-allocator-backed dma-buf for
non-contiguous memory.

This code is an evolution of the Android ION implementation, so
thanks to its original authors and maintainers: Rebecca Schultz Zavin,
Colin Cross, Laura Abbott, and others!

Cc: Laura Abbott
Cc: Benjamin Gaignard
Cc: Sumit Semwal
Cc: Liam Mark
Cc: Pratik Patel
Cc: Brian Starkey
Cc: Vincent Donnefort
Cc: Sudipto Paul
Cc: Andrew F. Davis
Cc: Xu YiPing
Cc: "Chenfeng (puck)"
Cc: butao
Cc: "Xiaqing (A)"
Cc: Yudongbin
Cc: Christoph Hellwig
Cc: Chenbo Feng
Cc: Alistair Strachan
Cc: dri-devel@lists.freedesktop.org
Reviewed-by: Benjamin Gaignard
Signed-off-by: John Stultz
---
v2:
* Switch allocate to return dmabuf fd
* Simplify init code
* Checkpatch fixups
* Dropped dead system-contig code
v3:
* Whitespace fixups from Benjamin
* Make sure we're zeroing the allocated pages (from Liam)
* Use PAGE_ALIGN() consistently (suggested by Brian)
* Fold in new registration style from Andrew
* Avoid needless dynamic allocation of sys_heap (suggested by Christoph)
* Minor cleanups
* Folded in changes from Andrew to use simplified page list from the
  heap helpers
v4:
* Optimization to allocate pages in chunks, similar to old pagepool code
* Use fd_flags when creating dmabuf fd (Suggested by Benjamin)
---
 drivers/dma-buf/Kconfig             |   2 +
 drivers/dma-buf/heaps/Kconfig       |   6 ++
 drivers/dma-buf/heaps/Makefile      |   1 +
 drivers/dma-buf/heaps/system_heap.c | 162 ++++++++++++++++++++++++++++
 4 files changed, 171 insertions(+)
 create mode 100644 drivers/dma-buf/heaps/Kconfig
 create mode 100644 drivers/dma-buf/heaps/system_heap.c

-- 
2.17.1

diff --git a/drivers/dma-buf/Kconfig b/drivers/dma-buf/Kconfig
index 8344cdaaa328..e34b6e61b702 100644
--- a/drivers/dma-buf/Kconfig
+++ b/drivers/dma-buf/Kconfig
@@ -46,4 +46,6 @@ menuconfig DMABUF_HEAPS
 	  this allows userspace to allocate dma-bufs that can be shared between
 	  drivers.
 
+source "drivers/dma-buf/heaps/Kconfig"
+
 endmenu
diff --git a/drivers/dma-buf/heaps/Kconfig b/drivers/dma-buf/heaps/Kconfig
new file mode 100644
index 000000000000..205052744169
--- /dev/null
+++ b/drivers/dma-buf/heaps/Kconfig
@@ -0,0 +1,6 @@
+config DMABUF_HEAPS_SYSTEM
+	bool "DMA-BUF System Heap"
+	depends on DMABUF_HEAPS
+	help
+	  Choose this option to enable the system dmabuf heap. The system heap
+	  is backed by pages from the buddy allocator. If in doubt, say Y.
diff --git a/drivers/dma-buf/heaps/Makefile b/drivers/dma-buf/heaps/Makefile
index de49898112db..d1808eca2581 100644
--- a/drivers/dma-buf/heaps/Makefile
+++ b/drivers/dma-buf/heaps/Makefile
@@ -1,2 +1,3 @@
 # SPDX-License-Identifier: GPL-2.0
 obj-y += heap-helpers.o
+obj-$(CONFIG_DMABUF_HEAPS_SYSTEM) += system_heap.o
diff --git a/drivers/dma-buf/heaps/system_heap.c b/drivers/dma-buf/heaps/system_heap.c
new file mode 100644
index 000000000000..60ee531e1a24
--- /dev/null
+++ b/drivers/dma-buf/heaps/system_heap.c
@@ -0,0 +1,162 @@
+// SPDX-License-Identifier: GPL-2.0
+/*
+ * DMABUF System heap exporter
+ *
+ * Copyright (C) 2011 Google, Inc.
+ * Copyright (C) 2019 Linaro Ltd.
+ */
+
+#include
+#include
+#include
+#include
+#include
+#include
+#include
+#include
+#include
+
+#include "heap-helpers.h"
+
+struct system_heap {
+	struct dma_heap *heap;
+} sys_heap;
+
+#define NUM_ORDERS ARRAY_SIZE(orders)
+#define HIGH_ORDER_GFP  (((GFP_HIGHUSER | __GFP_ZERO | __GFP_NOWARN \
+				| __GFP_NORETRY) & ~__GFP_RECLAIM) \
+				| __GFP_COMP)
+#define LOW_ORDER_GFP (GFP_HIGHUSER | __GFP_ZERO | __GFP_COMP)
+static gfp_t order_flags[] = {HIGH_ORDER_GFP, LOW_ORDER_GFP, LOW_ORDER_GFP};
+static const unsigned int orders[] = {8, 4, 0};
+
+static void system_heap_free(struct heap_helper_buffer *buffer)
+{
+	pgoff_t pg;
+
+	for (pg = 0; pg < buffer->pagecount; pg++)
+		__free_page(buffer->pages[pg]);
+	kfree(buffer->pages);
+	kfree(buffer);
+}
+
+static struct page *alloc_largest_available(unsigned long size,
+					    unsigned int max_order)
+{
+	struct page *page;
+	int i;
+
+	for (i = 0; i < NUM_ORDERS; i++) {
+		if (size < (PAGE_SIZE << orders[i]))
+			continue;
+		if (max_order < orders[i])
+			continue;
+
+		page = alloc_pages(order_flags[i], orders[i]);
+		if (!page)
+			continue;
+		return page;
+	}
+	return NULL;
+}
+
+static int system_heap_allocate(struct dma_heap *heap,
+				unsigned long len,
+				unsigned long fd_flags,
+				unsigned long heap_flags)
+{
+	struct heap_helper_buffer *helper_buffer;
+	DEFINE_DMA_BUF_EXPORT_INFO(exp_info);
+	unsigned long size_remaining = len;
+	unsigned int max_order = orders[0];
+	struct dma_buf *dmabuf;
+	int ret = -ENOMEM;
+	pgoff_t pg;
+
+	helper_buffer = kzalloc(sizeof(*helper_buffer), GFP_KERNEL);
+	if (!helper_buffer)
+		return -ENOMEM;
+
+	INIT_HEAP_HELPER_BUFFER(helper_buffer, system_heap_free);
+	helper_buffer->heap_buffer.flags = heap_flags;
+	helper_buffer->heap_buffer.heap = heap;
+	helper_buffer->heap_buffer.size = len;
+
+	helper_buffer->pagecount = len / PAGE_SIZE;
+	helper_buffer->pages = kmalloc_array(helper_buffer->pagecount,
+					     sizeof(*helper_buffer->pages),
+					     GFP_KERNEL);
+	if (!helper_buffer->pages) {
+		ret = -ENOMEM;
+		goto err0;
+	}
+
+	pg = 0;
+	while (size_remaining > 0) {
+		struct page *page = alloc_largest_available(size_remaining,
+							    max_order);
+		int i;
+
+		if (!page)
+			goto err1;
+		size_remaining -= PAGE_SIZE << compound_order(page);
+		max_order = compound_order(page);
+		for (i = 0; i < 1 << max_order; i++)
+			helper_buffer->pages[pg++] = page++;
+	}
+
+	/* create the dmabuf */
+	exp_info.ops = &heap_helper_ops;
+	exp_info.size = len;
+	exp_info.flags = fd_flags;
+	exp_info.priv = &helper_buffer->heap_buffer;
+	dmabuf = dma_buf_export(&exp_info);
+	if (IS_ERR(dmabuf)) {
+		ret = PTR_ERR(dmabuf);
+		goto err1;
+	}
+
+	helper_buffer->heap_buffer.dmabuf = dmabuf;
+
+	ret = dma_buf_fd(dmabuf, fd_flags);
+	if (ret < 0) {
+		dma_buf_put(dmabuf);
+		/* just return, as put will call release and that will free */
+		return ret;
+	}
+
+	return ret;
+
+err1:
+	while (pg > 0)
+		__free_page(helper_buffer->pages[--pg]);
+	kfree(helper_buffer->pages);
+err0:
+	kfree(helper_buffer);
+
+	return -ENOMEM;
+}
+
+static struct dma_heap_ops system_heap_ops = {
+	.allocate = system_heap_allocate,
+};
+
+static int system_heap_create(void)
+{
+	struct dma_heap_export_info exp_info;
+	int ret = 0;
+
+	exp_info.name = "system_heap";
+	exp_info.ops = &system_heap_ops;
+	exp_info.priv = &sys_heap;
+
+	sys_heap.heap = dma_heap_add(&exp_info);
+	if (IS_ERR(sys_heap.heap))
+		ret = PTR_ERR(sys_heap.heap);
+
+	return ret;
+}
+device_initcall(system_heap_create);