From patchwork Thu Feb 21 07:40:29 2019
Content-Type: text/plain; charset="utf-8"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
X-Patchwork-Submitter: John Stultz <john.stultz@linaro.org>
X-Patchwork-Id: 158873
Delivered-To: patches@linaro.org
From: John Stultz <john.stultz@linaro.org>
To: Laura Abbott
Cc: John Stultz, Benjamin Gaignard, Sumit Semwal, Liam Mark,
 Brian Starkey, "Andrew F. Davis", Chenbo Feng, Alistair Strachan,
 dri-devel@lists.freedesktop.org
Subject: [EARLY RFC][PATCH 3/4] dma-buf: pools: Add system/system-contig pools
 to dmabuf pools
Date: Wed, 20 Feb 2019 23:40:29 -0800
Message-Id: <1550734830-23499-4-git-send-email-john.stultz@linaro.org>
X-Mailer: git-send-email 2.7.4
In-Reply-To: <1550734830-23499-1-git-send-email-john.stultz@linaro.org>
References: <1550734830-23499-1-git-send-email-john.stultz@linaro.org>

This patch adds system and system-contig pools to the dma-buf pools
framework. This allows applications to get a page-allocator-backed
dma-buf of either non-contiguous or contiguous memory.

Cc: Laura Abbott
Cc: Benjamin Gaignard
Cc: Sumit Semwal
Cc: Liam Mark
Cc: Brian Starkey
Cc: Andrew F. Davis
Cc: Chenbo Feng
Cc: Alistair Strachan
Cc: dri-devel@lists.freedesktop.org
Signed-off-by: John Stultz <john.stultz@linaro.org>
---
 drivers/dma-buf/pools/Kconfig       |   7 +
 drivers/dma-buf/pools/Makefile      |   1 +
 drivers/dma-buf/pools/system_pool.c | 374 ++++++++++++++++++++++++++++++++++++
 3 files changed, 382 insertions(+)
 create mode 100644 drivers/dma-buf/pools/system_pool.c

-- 
2.7.4

diff --git a/drivers/dma-buf/pools/Kconfig b/drivers/dma-buf/pools/Kconfig
index caa7eb8..787b2a6 100644
--- a/drivers/dma-buf/pools/Kconfig
+++ b/drivers/dma-buf/pools/Kconfig
@@ -8,3 +8,10 @@ menuconfig DMABUF_POOLS
	  which allow userspace to allocate dma-bufs that can be shared
	  between drivers. If you're not using Android its probably safe to
	  say N here.
+
+config DMABUF_POOLS_SYSTEM
+	bool "DMA-BUF System Pool"
+	depends on DMABUF_POOLS
+	help
+	  Choose this option to enable the system dmabuf pool. The system pool
+	  is backed by pages from the buddy allocator. If in doubt, say Y.
diff --git a/drivers/dma-buf/pools/Makefile b/drivers/dma-buf/pools/Makefile
index a51ec25..2ccf2a1 100644
--- a/drivers/dma-buf/pools/Makefile
+++ b/drivers/dma-buf/pools/Makefile
@@ -1,2 +1,3 @@
 # SPDX-License-Identifier: GPL-2.0
 obj-$(CONFIG_DMABUF_POOLS) += dmabuf-pools.o pool-ioctl.o pool-helpers.o page_pool.o
+obj-$(CONFIG_DMABUF_POOLS_SYSTEM) += system_pool.o
diff --git a/drivers/dma-buf/pools/system_pool.c b/drivers/dma-buf/pools/system_pool.c
new file mode 100644
index 0000000..1756990
--- /dev/null
+++ b/drivers/dma-buf/pools/system_pool.c
@@ -0,0 +1,374 @@
+// SPDX-License-Identifier: GPL-2.0
+/*
+ * drivers/dma-buf/pools/system_pool.c
+ *
+ * Copyright (C) 2011 Google, Inc.
+ * Copyright (C) 2019 Linaro Ltd.
+ */
+
+#include <asm/page.h>
+#include <linux/dma-mapping.h>
+#include <linux/err.h>
+#include <linux/highmem.h>
+#include <linux/mm.h>
+#include <linux/scatterlist.h>
+#include <linux/slab.h>
+#include <linux/vmalloc.h>
+#include "dmabuf-pools.h"
+
+#define NUM_ORDERS ARRAY_SIZE(orders)
+
+static gfp_t high_order_gfp_flags = (GFP_HIGHUSER | __GFP_ZERO | __GFP_NOWARN |
+				     __GFP_NORETRY) & ~__GFP_RECLAIM;
+static gfp_t low_order_gfp_flags = GFP_HIGHUSER | __GFP_ZERO;
+static const unsigned int orders[] = {8, 4, 0};
+
+static int order_to_index(unsigned int order)
+{
+	int i;
+
+	for (i = 0; i < NUM_ORDERS; i++)
+		if (order == orders[i])
+			return i;
+	WARN_ON(1);
+	return -1;
+}
+
+static inline unsigned int order_to_size(int order)
+{
+	return PAGE_SIZE << order;
+}
+
+struct system_pool {
+	struct dmabuf_pool pool;
+	struct dmabuf_page_pool *page_pools[NUM_ORDERS];
+};
+
+static struct page *alloc_buffer_page(struct system_pool *sys_pool,
+				      struct dmabuf_pool_buffer *buffer,
+				      unsigned long order)
+{
+	struct dmabuf_page_pool *pagepool =
+		sys_pool->page_pools[order_to_index(order)];
+
+	return dmabuf_page_pool_alloc(pagepool);
+}
+
+static void free_buffer_page(struct system_pool *sys_pool,
+			     struct dmabuf_pool_buffer *buffer,
+			     struct page *page)
+{
+	struct dmabuf_page_pool *pagepool;
+	unsigned int order = compound_order(page);
+
+	/* free directly back to the system */
+	if (buffer->private_flags & DMABUF_POOL_PRIV_FLAG_SHRINKER_FREE) {
+		__free_pages(page, order);
+		return;
+	}
+
+	pagepool = sys_pool->page_pools[order_to_index(order)];
+
+	dmabuf_page_pool_free(pagepool, page);
+}
+
+static struct page *alloc_largest_available(struct system_pool *sys_pool,
+					    struct dmabuf_pool_buffer *buffer,
+					    unsigned long size,
+					    unsigned int max_order)
+{
+	struct page *page;
+	int i;
+
+	for (i = 0; i < NUM_ORDERS; i++) {
+		if (size < order_to_size(orders[i]))
+			continue;
+		if (max_order < orders[i])
+			continue;
+
+		page = alloc_buffer_page(sys_pool, buffer, orders[i]);
+		if (!page)
+			continue;
+
+		return page;
+	}
+
+	return NULL;
+}
+
+static int system_pool_allocate(struct dmabuf_pool *pool,
+				struct dmabuf_pool_buffer *buffer,
+				unsigned long size,
+				unsigned long flags)
+{
+	struct system_pool *sys_pool = container_of(pool,
+						    struct system_pool,
+						    pool);
+	struct sg_table *table;
+	struct scatterlist *sg;
+	struct list_head pages;
+	struct page *page, *tmp_page;
+	int i = 0;
+	unsigned long size_remaining = PAGE_ALIGN(size);
+	unsigned int max_order = orders[0];
+
+	if (size / PAGE_SIZE > totalram_pages() / 2)
+		return -ENOMEM;
+
+	INIT_LIST_HEAD(&pages);
+	while (size_remaining > 0) {
+		page = alloc_largest_available(sys_pool, buffer, size_remaining,
+					       max_order);
+		if (!page)
+			goto free_pages;
+		list_add_tail(&page->lru, &pages);
+		size_remaining -= PAGE_SIZE << compound_order(page);
+		max_order = compound_order(page);
+		i++;
+	}
+	table = kmalloc(sizeof(*table), GFP_KERNEL);
+	if (!table)
+		goto free_pages;
+
+	if (sg_alloc_table(table, i, GFP_KERNEL))
+		goto free_table;
+
+	sg = table->sgl;
+	list_for_each_entry_safe(page, tmp_page, &pages, lru) {
+		sg_set_page(sg, page, PAGE_SIZE << compound_order(page), 0);
+		sg = sg_next(sg);
+		list_del(&page->lru);
+	}
+
+	buffer->sg_table = table;
+	return 0;
+
+free_table:
+	kfree(table);
+free_pages:
+	list_for_each_entry_safe(page, tmp_page, &pages, lru)
+		free_buffer_page(sys_pool, buffer, page);
+	return -ENOMEM;
+}
+
+static void system_pool_free(struct dmabuf_pool_buffer *buffer)
+{
+	struct system_pool *sys_pool = container_of(buffer->pool,
+						    struct system_pool,
+						    pool);
+	struct sg_table *table = buffer->sg_table;
+	struct scatterlist *sg;
+	int i;
+
+	/* zero the buffer before returning it to the page pool */
+	if (!(buffer->private_flags & DMABUF_POOL_PRIV_FLAG_SHRINKER_FREE))
+		dmabuf_pool_buffer_zero(buffer);
+
+	for_each_sg(table->sgl, sg, table->nents, i)
+		free_buffer_page(sys_pool, buffer, sg_page(sg));
+	sg_free_table(table);
+	kfree(table);
+}
+
+static int system_pool_shrink(struct dmabuf_pool *pool, gfp_t gfp_mask,
+			      int nr_to_scan)
+{
+	struct dmabuf_page_pool *page_pool;
+	struct system_pool *sys_pool;
+	int nr_total = 0;
+	int i, nr_freed;
+	int only_scan = 0;
+
+	sys_pool = container_of(pool, struct system_pool, pool);
+
+	if (!nr_to_scan)
+		only_scan = 1;
+
+	for (i = 0; i < NUM_ORDERS; i++) {
+		page_pool = sys_pool->page_pools[i];
+
+		if (only_scan) {
+			nr_total += dmabuf_page_pool_shrink(page_pool,
+							    gfp_mask,
+							    nr_to_scan);
+
+		} else {
+			nr_freed = dmabuf_page_pool_shrink(page_pool,
+							   gfp_mask,
+							   nr_to_scan);
+			nr_to_scan -= nr_freed;
+			nr_total += nr_freed;
+			if (nr_to_scan <= 0)
+				break;
+		}
+	}
+	return nr_total;
+}
+
+static struct dmabuf_pool_ops system_pool_ops = {
+	.allocate = system_pool_allocate,
+	.free = system_pool_free,
+	.map_kernel = dmabuf_pool_map_kernel,
+	.unmap_kernel = dmabuf_pool_unmap_kernel,
+	.map_user = dmabuf_pool_map_user,
+	.shrink = system_pool_shrink,
+};
+
+static void system_pool_destroy_pools(struct dmabuf_page_pool **page_pools)
+{
+	int i;
+
+	for (i = 0; i < NUM_ORDERS; i++)
+		if (page_pools[i])
+			dmabuf_page_pool_destroy(page_pools[i]);
+}
+
+static int system_pool_create_pools(struct dmabuf_page_pool **page_pools)
+{
+	int i;
+	gfp_t gfp_flags = low_order_gfp_flags;
+
+	for (i = 0; i < NUM_ORDERS; i++) {
+		struct dmabuf_page_pool *pool;
+
+		if (orders[i] > 4)
+			gfp_flags = high_order_gfp_flags;
+
+		pool = dmabuf_page_pool_create(gfp_flags, orders[i]);
+		if (!pool)
+			goto err_create_pool;
+		page_pools[i] = pool;
+	}
+	return 0;
+
+err_create_pool:
+	system_pool_destroy_pools(page_pools);
+	return -ENOMEM;
+}
+
+static struct dmabuf_pool *__system_pool_create(void)
+{
+	struct system_pool *sys_pool;
+
+	sys_pool = kzalloc(sizeof(*sys_pool), GFP_KERNEL);
+	if (!sys_pool)
+		return ERR_PTR(-ENOMEM);
+	sys_pool->pool.ops = &system_pool_ops;
+	sys_pool->pool.flags = DMABUF_POOL_FLAG_DEFER_FREE;
+
+	if (system_pool_create_pools(sys_pool->page_pools))
+		goto free_pool;
+
+	return &sys_pool->pool;
+
+free_pool:
+	kfree(sys_pool);
+	return ERR_PTR(-ENOMEM);
+}
+
+static int system_pool_create(void)
+{
+	struct dmabuf_pool *pool;
+
+	pool = __system_pool_create();
+	if (IS_ERR(pool))
+		return PTR_ERR(pool);
+	pool->name = "system_pool";
+
+	dmabuf_pool_add(pool);
+	return 0;
+}
+device_initcall(system_pool_create);
+
+static int system_contig_pool_allocate(struct dmabuf_pool *pool,
+				       struct dmabuf_pool_buffer *buffer,
+				       unsigned long len,
+				       unsigned long flags)
+{
+	int order = get_order(len);
+	struct page *page;
+	struct sg_table *table;
+	unsigned long i;
+	int ret;
+
+	page = alloc_pages(low_order_gfp_flags | __GFP_NOWARN, order);
+	if (!page)
+		return -ENOMEM;
+
+	split_page(page, order);
+
+	len = PAGE_ALIGN(len);
+	for (i = len >> PAGE_SHIFT; i < (1 << order); i++)
+		__free_page(page + i);
+
+	table = kmalloc(sizeof(*table), GFP_KERNEL);
+	if (!table) {
+		ret = -ENOMEM;
+		goto free_pages;
+	}
+
+	ret = sg_alloc_table(table, 1, GFP_KERNEL);
+	if (ret)
+		goto free_table;
+
+	sg_set_page(table->sgl, page, len, 0);
+
+	buffer->sg_table = table;
+
+	return 0;
+
+free_table:
+	kfree(table);
+free_pages:
+	for (i = 0; i < len >> PAGE_SHIFT; i++)
+		__free_page(page + i);
+
+	return ret;
+}
+
+static void system_contig_pool_free(struct dmabuf_pool_buffer *buffer)
+{
+	struct sg_table *table = buffer->sg_table;
+	struct page *page = sg_page(table->sgl);
+	unsigned long pages = PAGE_ALIGN(buffer->size) >> PAGE_SHIFT;
+	unsigned long i;
+
+	for (i = 0; i < pages; i++)
+		__free_page(page + i);
+	sg_free_table(table);
+	kfree(table);
+}
+
+static struct dmabuf_pool_ops kmalloc_ops = {
+	.allocate = system_contig_pool_allocate,
+	.free = system_contig_pool_free,
+	.map_kernel = dmabuf_pool_map_kernel,
+	.unmap_kernel = dmabuf_pool_unmap_kernel,
+	.map_user = dmabuf_pool_map_user,
+};
+
+static struct dmabuf_pool *__system_contig_pool_create(void)
+{
+	struct dmabuf_pool *pool;
+
+	pool = kzalloc(sizeof(*pool), GFP_KERNEL);
+	if (!pool)
+		return ERR_PTR(-ENOMEM);
+	pool->ops = &kmalloc_ops;
+	pool->name = "system_contig_pool";
+	return pool;
+}
+
+static int system_contig_pool_create(void)
+{
+	struct dmabuf_pool *pool;
+
+	pool = __system_contig_pool_create();
+	if (IS_ERR(pool))
+		return PTR_ERR(pool);
+
+	dmabuf_pool_add(pool);
+	return 0;
+}
+device_initcall(system_contig_pool_create);