From patchwork Thu Feb 21 07:40:28 2019
X-Patchwork-Submitter: John Stultz
X-Patchwork-Id: 158872
From: John Stultz
To: Laura Abbott
Cc: John Stultz, Benjamin Gaignard, Sumit Semwal, Liam Mark, Brian Starkey,
    "Andrew F. Davis", Chenbo Feng, Alistair Strachan,
    dri-devel@lists.freedesktop.org
Subject: [EARLY RFC][PATCH 2/4] dma-buf: pools: Add page-pool for dma-buf pools
Date: Wed, 20 Feb 2019 23:40:28 -0800
Message-Id: <1550734830-23499-3-git-send-email-john.stultz@linaro.org>
X-Mailer: git-send-email 2.7.4
In-Reply-To: <1550734830-23499-1-git-send-email-john.stultz@linaro.org>
References: <1550734830-23499-1-git-send-email-john.stultz@linaro.org>

This adds the page-pool logic to the dma-buf pools, which allows a pool
to keep pre-allocated/flushed pages around so that allocations can be
satisfied quickly.

NOTE: The "page-pool" name is a term preserved from ION, but it is easy
to confuse with the dma-buf pools themselves. Suggestions for
alternative names would be welcome.

Cc: Laura Abbott
Cc: Benjamin Gaignard
Cc: Sumit Semwal
Cc: Liam Mark
Cc: Brian Starkey
Cc: Andrew F. Davis
Cc: Chenbo Feng
Cc: Alistair Strachan
Cc: dri-devel@lists.freedesktop.org
Signed-off-by: John Stultz
---
 drivers/dma-buf/pools/Makefile       |   2 +-
 drivers/dma-buf/pools/dmabuf-pools.h |  51 ++++++++++++
 drivers/dma-buf/pools/page_pool.c    | 157 +++++++++++++++++++++++++++++++++++
 3 files changed, 209 insertions(+), 1 deletion(-)
 create mode 100644 drivers/dma-buf/pools/page_pool.c

-- 
2.7.4

diff --git a/drivers/dma-buf/pools/Makefile b/drivers/dma-buf/pools/Makefile
index 6cb1284..a51ec25 100644
--- a/drivers/dma-buf/pools/Makefile
+++ b/drivers/dma-buf/pools/Makefile
@@ -1,2 +1,2 @@
 # SPDX-License-Identifier: GPL-2.0
-obj-$(CONFIG_DMABUF_POOLS) += dmabuf-pools.o pool-ioctl.o pool-helpers.o
+obj-$(CONFIG_DMABUF_POOLS) += dmabuf-pools.o pool-ioctl.o pool-helpers.o page_pool.o
diff --git a/drivers/dma-buf/pools/dmabuf-pools.h b/drivers/dma-buf/pools/dmabuf-pools.h
index 12110f2..e3a0aac 100644
--- a/drivers/dma-buf/pools/dmabuf-pools.h
+++ b/drivers/dma-buf/pools/dmabuf-pools.h
@@ -238,6 +238,57 @@ size_t dmabuf_pool_freelist_shrink(struct dmabuf_pool *pool,
  */
 size_t dmabuf_pool_freelist_size(struct dmabuf_pool *pool);
 
+/**
+ * functions for creating and destroying a page pool -- allows you
+ * to keep a page pool of pre-allocated memory to use from your pool. Keeping
+ * a page pool of memory that is ready for dma, i.e. any cached mappings have
+ * been invalidated from the cache, provides a significant performance benefit
+ * on many systems.
+ */
+
+/**
+ * struct dmabuf_page_pool - pagepool struct
+ * @high_count:    number of highmem items in the pool
+ * @low_count:     number of lowmem items in the pool
+ * @high_items:    list of highmem items
+ * @low_items:     list of lowmem items
+ * @mutex:         lock protecting this struct and especially the count
+ *                 item list
+ * @gfp_mask:      gfp_mask to use for allocations
+ * @order:         order of pages in the pool
+ * @list:          plist node for list of pools
+ *
+ * Allows you to keep a page pool of pre-allocated pages to use from your
+ * pool. Keeping a pool of pages that is ready for dma, i.e. any cached
+ * mappings have been invalidated from the cache, provides a significant
+ * performance benefit on many systems.
+ */
+struct dmabuf_page_pool {
+	int high_count;
+	int low_count;
+	struct list_head high_items;
+	struct list_head low_items;
+	struct mutex mutex;
+	gfp_t gfp_mask;
+	unsigned int order;
+	struct plist_node list;
+};
+
+struct dmabuf_page_pool *dmabuf_page_pool_create(gfp_t gfp_mask,
+						 unsigned int order);
+void dmabuf_page_pool_destroy(struct dmabuf_page_pool *pool);
+struct page *dmabuf_page_pool_alloc(struct dmabuf_page_pool *pool);
+void dmabuf_page_pool_free(struct dmabuf_page_pool *pool, struct page *page);
+
+/** dmabuf_page_pool_shrink - shrinks the size of the memory cached in the pool
+ * @pool:          the page pool
+ * @gfp_mask:      the memory type to reclaim
+ * @nr_to_scan:    number of items to shrink in pages
+ *
+ * returns the number of items freed in pages
+ */
+int dmabuf_page_pool_shrink(struct dmabuf_page_pool *pool, gfp_t gfp_mask,
+			    int nr_to_scan);
 
 long dmabuf_pool_ioctl(struct file *filp, unsigned int cmd, unsigned long arg);
diff --git a/drivers/dma-buf/pools/page_pool.c b/drivers/dma-buf/pools/page_pool.c
new file mode 100644
index 0000000..c1fe994
--- /dev/null
+++ b/drivers/dma-buf/pools/page_pool.c
@@ -0,0 +1,157 @@
+// SPDX-License-Identifier: GPL-2.0
+/*
+ * drivers/dma-buf/pools/page_pool.c
+ *
+ * Copyright (C) 2011 Google, Inc.
+ * Copyright (C) 2019 Linaro Ltd.
+ */
+
+#include <linux/list.h>
+#include <linux/slab.h>
+#include <linux/swap.h>
+
+#include "dmabuf-pools.h"
+
+static inline struct page *dmabuf_page_pool_alloc_pages(
+					struct dmabuf_page_pool *pool)
+{
+	return alloc_pages(pool->gfp_mask, pool->order);
+}
+
+static void dmabuf_page_pool_free_pages(struct dmabuf_page_pool *pool,
+					struct page *page)
+{
+	__free_pages(page, pool->order);
+}
+
+static void dmabuf_page_pool_add(struct dmabuf_page_pool *pool,
+				 struct page *page)
+{
+	mutex_lock(&pool->mutex);
+	if (PageHighMem(page)) {
+		list_add_tail(&page->lru, &pool->high_items);
+		pool->high_count++;
+	} else {
+		list_add_tail(&page->lru, &pool->low_items);
+		pool->low_count++;
+	}
+
+	mod_node_page_state(page_pgdat(page), NR_KERNEL_MISC_RECLAIMABLE,
+			    1 << pool->order);
+	mutex_unlock(&pool->mutex);
+}
+
+static struct page *dmabuf_page_pool_remove(struct dmabuf_page_pool *pool,
+					    bool high)
+{
+	struct page *page;
+
+	if (high) {
+		WARN_ON(!pool->high_count);
+		page = list_first_entry(&pool->high_items, struct page, lru);
+		pool->high_count--;
+	} else {
+		WARN_ON(!pool->low_count);
+		page = list_first_entry(&pool->low_items, struct page, lru);
+		pool->low_count--;
+	}
+
+	list_del(&page->lru);
+	mod_node_page_state(page_pgdat(page), NR_KERNEL_MISC_RECLAIMABLE,
+			    -(1 << pool->order));
+	return page;
+}
+
+struct page *dmabuf_page_pool_alloc(struct dmabuf_page_pool *pool)
+{
+	struct page *page = NULL;
+
+	WARN_ON(!pool);
+
+	mutex_lock(&pool->mutex);
+	if (pool->high_count)
+		page = dmabuf_page_pool_remove(pool, true);
+	else if (pool->low_count)
+		page = dmabuf_page_pool_remove(pool, false);
+	mutex_unlock(&pool->mutex);
+
+	if (!page)
+		page = dmabuf_page_pool_alloc_pages(pool);
+
+	return page;
+}
+
+void dmabuf_page_pool_free(struct dmabuf_page_pool *pool, struct page *page)
+{
+	WARN_ON(pool->order != compound_order(page));
+
+	dmabuf_page_pool_add(pool, page);
+}
+
+static int dmabuf_page_pool_total(struct dmabuf_page_pool *pool, bool high)
+{
+	int count = pool->low_count;
+
+	if (high)
+		count += pool->high_count;
+
+	return count << pool->order;
+}
+
+int dmabuf_page_pool_shrink(struct dmabuf_page_pool *pool, gfp_t gfp_mask,
+			    int nr_to_scan)
+{
+	int freed = 0;
+	bool high;
+
+	if (current_is_kswapd())
+		high = true;
+	else
+		high = !!(gfp_mask & __GFP_HIGHMEM);
+
+	if (nr_to_scan == 0)
+		return dmabuf_page_pool_total(pool, high);
+
+	while (freed < nr_to_scan) {
+		struct page *page;
+
+		mutex_lock(&pool->mutex);
+		if (pool->low_count) {
+			page = dmabuf_page_pool_remove(pool, false);
+		} else if (high && pool->high_count) {
+			page = dmabuf_page_pool_remove(pool, true);
+		} else {
+			mutex_unlock(&pool->mutex);
+			break;
+		}
+		mutex_unlock(&pool->mutex);
+		dmabuf_page_pool_free_pages(pool, page);
+		freed += (1 << pool->order);
+	}
+
+	return freed;
+}
+
+struct dmabuf_page_pool *dmabuf_page_pool_create(gfp_t gfp_mask,
+						 unsigned int order)
+{
+	struct dmabuf_page_pool *pool = kmalloc(sizeof(*pool), GFP_KERNEL);
+
+	if (!pool)
+		return NULL;
+	pool->high_count = 0;
+	pool->low_count = 0;
+	INIT_LIST_HEAD(&pool->low_items);
+	INIT_LIST_HEAD(&pool->high_items);
+	pool->gfp_mask = gfp_mask | __GFP_COMP;
+	pool->order = order;
+	mutex_init(&pool->mutex);
+	plist_node_init(&pool->list, order);
+
+	return pool;
+}
+
+void dmabuf_page_pool_destroy(struct dmabuf_page_pool *pool)
+{
+	kfree(pool);
+}
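
For illustration only (not part of the patch), here is a minimal sketch of how a
pool backend might consume this API. The example_pool pointer, the function
names, order 0, and the GFP_HIGHUSER | __GFP_ZERO mask are all assumptions made
for the example:

/* Hypothetical consumer of the dmabuf page-pool API (illustration only). */
#include <linux/errno.h>
#include <linux/gfp.h>
#include <linux/module.h>

#include "dmabuf-pools.h"

static struct dmabuf_page_pool *example_pool;

static int __init example_pool_init(void)
{
	struct page *page;

	/* Cache order-0, zero-filled, highmem-capable pages. */
	example_pool = dmabuf_page_pool_create(GFP_HIGHUSER | __GFP_ZERO, 0);
	if (!example_pool)
		return -ENOMEM;

	/* Take a page (falls back to alloc_pages() while the cache is empty)
	 * and hand it back, which places it on the pool's free list for the
	 * next allocation.
	 */
	page = dmabuf_page_pool_alloc(example_pool);
	if (page)
		dmabuf_page_pool_free(example_pool, page);

	return 0;
}

static void __exit example_pool_exit(void)
{
	int cached;

	/* nr_to_scan == 0 only reports the cached total, in pages ... */
	cached = dmabuf_page_pool_shrink(example_pool, GFP_HIGHUSER, 0);
	/* ... so drain that many before destroy, which only kfree()s the
	 * bookkeeping struct and would otherwise leak the cached pages.
	 */
	dmabuf_page_pool_shrink(example_pool, GFP_HIGHUSER, cached);
	dmabuf_page_pool_destroy(example_pool);
}

module_init(example_pool_init);
module_exit(example_pool_exit);
MODULE_LICENSE("GPL");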
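
dmabuf_page_pool_shrink() also maps naturally onto the shrinker interface:
nr_to_scan == 0 is the count-only query, and both the count and the amount
freed are expressed in pages (1 << order per cached chunk), so with order = 4
and five cached chunks the count query reports 5 << 4 = 80 pages. Below is a
sketch of that wiring under the assumption of a single cached_pool pointer and
hypothetical function names; how the series actually hooks up shrinking is not
shown in this patch:

/* Hypothetical shrinker wiring for a dmabuf page pool (illustration only). */
#include <linux/shrinker.h>

#include "dmabuf-pools.h"

/* Assumed to be set up elsewhere with dmabuf_page_pool_create(). */
static struct dmabuf_page_pool *cached_pool;

static unsigned long cached_pool_count(struct shrinker *shrinker,
				       struct shrink_control *sc)
{
	/* nr_to_scan == 0: report how many pages the pool is caching. */
	return dmabuf_page_pool_shrink(cached_pool, sc->gfp_mask, 0);
}

static unsigned long cached_pool_scan(struct shrinker *shrinker,
				      struct shrink_control *sc)
{
	/* Frees up to sc->nr_to_scan pages; __GFP_HIGHMEM in sc->gfp_mask
	 * (or kswapd context) also allows the highmem list to be drained.
	 */
	return dmabuf_page_pool_shrink(cached_pool, sc->gfp_mask,
				       sc->nr_to_scan);
}

static struct shrinker cached_pool_shrinker = {
	.count_objects	= cached_pool_count,
	.scan_objects	= cached_pool_scan,
	.seeks		= DEFAULT_SEEKS,
};

/* register_shrinker(&cached_pool_shrinker) at init;
 * unregister_shrinker(&cached_pool_shrinker) at teardown.
 */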