From patchwork Wed Jun 30 01:34:17 2021
X-Patchwork-Submitter: John Stultz
X-Patchwork-Id: 468672
From: John Stultz <john.stultz@linaro.org>
To: lkml
Cc: John Stultz, Daniel Vetter, Christian Koenig, Sumit Semwal, Liam Mark,
 Chris Goldsworthy, Laura Abbott, Brian Starkey, Hridya Valsaraju,
 Suren Baghdasaryan, Sandeep Patil, Daniel Mentz, Ørjan Eide, Robin Murphy,
 Ezequiel Garcia, Simon Ser, James Jones, linux-media@vger.kernel.org,
 dri-devel@lists.freedesktop.org
Subject: [PATCH v9 1/5] drm: Add a sharable drm page-pool implementation
Date: Wed, 30 Jun 2021 01:34:17 +0000
Message-Id: <20210630013421.735092-2-john.stultz@linaro.org>
In-Reply-To: <20210630013421.735092-1-john.stultz@linaro.org>
References: <20210630013421.735092-1-john.stultz@linaro.org>

This adds a shrinker-controlled page pool, extracted out of the
ttm_pool logic, and abstracted out a bit so it can be used by
other non-ttm drivers.
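For context, here is a minimal sketch of how a driver might wire up this
API (illustrative only; the my_pool instance and the my_* functions are
hypothetical names, not part of the patch):

#include <linux/mm.h>
#include <drm/page_pool.h>

static struct drm_page_pool my_pool;

/* Called back by the pool/shrinker when a cached page must really go away */
static void my_pool_free(struct drm_page_pool *pool, struct page *p)
{
	__free_pages(p, pool->order);
}

static int my_driver_init(void)
{
	/* One pool of order-0 pages, hooked onto the global shrinker list */
	drm_page_pool_init(&my_pool, 0, my_pool_free);
	return 0;
}

static struct page *my_alloc_page(gfp_t gfp)
{
	/* Prefer recycling a pooled page, else hit the buddy allocator */
	struct page *p = drm_page_pool_remove(&my_pool);

	return p ? p : alloc_pages(gfp, 0);
}

static void my_release_page(struct page *p)
{
	/*
	 * Hand the page back; it is zeroed on the way in, and freed
	 * immediately if the global pool maximum would be exceeded.
	 */
	drm_page_pool_add(&my_pool, p);
}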
Cc: Daniel Vetter
Cc: Christian Koenig
Cc: Sumit Semwal
Cc: Liam Mark
Cc: Chris Goldsworthy
Cc: Laura Abbott
Cc: Brian Starkey
Cc: Hridya Valsaraju
Cc: Suren Baghdasaryan
Cc: Sandeep Patil
Cc: Daniel Mentz
Cc: Ørjan Eide
Cc: Robin Murphy
Cc: Ezequiel Garcia
Cc: Simon Ser
Cc: James Jones
Cc: linux-media@vger.kernel.org
Cc: dri-devel@lists.freedesktop.org
Signed-off-by: John Stultz
---
v8:
* Completely rewritten from scratch, using only the ttm_pool logic
  so it can be dual licensed.
v9:
* Add kerneldoc comments similar to the ttm implementation, as
  suggested by ChristianK
* Mark some functions static, as suggested by ChristianK
* Fix locking issue ChristianK pointed out
* Add methods to block the shrinker so users can make atomic
  calculations across multiple pools, as suggested by ChristianK
* Fix up Kconfig dependency issue as Reported-by: kernel test robot
---
 drivers/gpu/drm/Kconfig     |   3 +
 drivers/gpu/drm/Makefile    |   2 +
 drivers/gpu/drm/page_pool.c | 297 ++++++++++++++++++++++++++++++++++++
 include/drm/page_pool.h     |  68 +++++++++
 4 files changed, 370 insertions(+)
 create mode 100644 drivers/gpu/drm/page_pool.c
 create mode 100644 include/drm/page_pool.h

--
2.25.1

diff --git a/drivers/gpu/drm/Kconfig b/drivers/gpu/drm/Kconfig
index 3c16bd1afd87..52d9ba92b35e 100644
--- a/drivers/gpu/drm/Kconfig
+++ b/drivers/gpu/drm/Kconfig
@@ -177,6 +177,9 @@ config DRM_DP_CEC
 	  Note: not all adapters support this feature, and even for those
 	  that do support this they often do not hook up the CEC pin.
 
+config DRM_PAGE_POOL
+	bool
+
 config DRM_TTM
 	tristate
 	depends on DRM && MMU
diff --git a/drivers/gpu/drm/Makefile b/drivers/gpu/drm/Makefile
index 5279db4392df..affa4ca3a08e 100644
--- a/drivers/gpu/drm/Makefile
+++ b/drivers/gpu/drm/Makefile
@@ -39,6 +39,8 @@ obj-$(CONFIG_DRM_VRAM_HELPER) += drm_vram_helper.o
 drm_ttm_helper-y := drm_gem_ttm_helper.o
 obj-$(CONFIG_DRM_TTM_HELPER) += drm_ttm_helper.o
 
+drm-$(CONFIG_DRM_PAGE_POOL) += page_pool.o
+
 drm_kms_helper-y := drm_bridge_connector.o drm_crtc_helper.o drm_dp_helper.o \
 		drm_dsc.o drm_probe_helper.o \
 		drm_plane_helper.o drm_dp_mst_topology.o drm_atomic_helper.o \
diff --git a/drivers/gpu/drm/page_pool.c b/drivers/gpu/drm/page_pool.c
new file mode 100644
index 000000000000..c07bbe3afc32
--- /dev/null
+++ b/drivers/gpu/drm/page_pool.c
@@ -0,0 +1,297 @@
+// SPDX-License-Identifier: GPL-2.0 OR MIT
+/*
+ * Sharable page pool implementation
+ *
+ * Extracted from drivers/gpu/drm/ttm/ttm_pool.c
+ * Copyright 2020 Advanced Micro Devices, Inc.
+ * Copyright 2021 Linaro Ltd.
+ *
+ * Permission is hereby granted, free of charge, to any person obtaining a
+ * copy of this software and associated documentation files (the "Software"),
+ * to deal in the Software without restriction, including without limitation
+ * the rights to use, copy, modify, merge, publish, distribute, sublicense,
+ * and/or sell copies of the Software, and to permit persons to whom the
+ * Software is furnished to do so, subject to the following conditions:
+ *
+ * The above copyright notice and this permission notice shall be included in
+ * all copies or substantial portions of the Software.
+ *
+ * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
+ * IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
+ * FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL
+ * THE COPYRIGHT HOLDER(S) OR AUTHOR(S) BE LIABLE FOR ANY CLAIM, DAMAGES OR
+ * OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE,
+ * ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR
+ * OTHER DEALINGS IN THE SOFTWARE.
+ *
+ * Authors: Christian König, John Stultz
+ */
+
+#include <linux/module.h>
+#include <linux/highmem.h>
+#include <linux/shrinker.h>
+#include <drm/page_pool.h>
+
+static unsigned long page_pool_size;	/* max size of the pool */
+
+MODULE_PARM_DESC(page_pool_size, "Number of pages in the drm page pool");
+module_param(page_pool_size, ulong, 0644);
+
+static atomic_long_t nr_managed_pages;
+
+static struct mutex shrinker_lock;
+static struct list_head shrinker_list;
+static struct shrinker mm_shrinker;
+
+/**
+ * drm_page_pool_set_max - Sets maximum size of all pools
+ *
+ * Sets the maximum number of pages allowed in all pools.
+ * This can only be set once, and the first caller wins.
+ */
+void drm_page_pool_set_max(unsigned long max)
+{
+	if (!page_pool_size)
+		page_pool_size = max;
+}
+
+/**
+ * drm_page_pool_get_max - Maximum size of all pools
+ *
+ * Return the maximum number of pages allowed in all pools
+ */
+unsigned long drm_page_pool_get_max(void)
+{
+	return page_pool_size;
+}
+
+/**
+ * drm_page_pool_get_total - Current size of all pools
+ *
+ * Return the number of pages in all managed pools
+ */
+unsigned long drm_page_pool_get_total(void)
+{
+	return atomic_long_read(&nr_managed_pages);
+}
+
+/**
+ * drm_page_pool_get_size - Get the number of pages in the pool
+ *
+ * @pool: Pool to calculate the size of
+ *
+ * Return the number of pages in the specified pool
+ */
+unsigned long drm_page_pool_get_size(struct drm_page_pool *pool)
+{
+	unsigned long size;
+
+	spin_lock(&pool->lock);
+	size = pool->page_count;
+	spin_unlock(&pool->lock);
+	return size;
+}
+
+/**
+ * drm_page_pool_add - Add a page to a pool
+ *
+ * @pool: Pool to add page to
+ * @page: Page to be added to the pool
+ *
+ * Gives the specified page back to a specific pool
+ */
+void drm_page_pool_add(struct drm_page_pool *pool, struct page *p)
+{
+	unsigned int i, num_pages = 1 << pool->order;
+
+	/* Make sure we won't grow larger than the max pool size */
+	if (page_pool_size &&
+	    ((drm_page_pool_get_total()) + num_pages > page_pool_size)) {
+		pool->free(pool, p);
+		return;
+	}
+
+	/* Be sure to zero pages before adding them to the pool */
+	for (i = 0; i < num_pages; ++i) {
+		if (PageHighMem(p))
+			clear_highpage(p + i);
+		else
+			clear_page(page_address(p + i));
+	}
+
+	spin_lock(&pool->lock);
+	list_add(&p->lru, &pool->pages);
+	pool->page_count += 1 << pool->order;
+	spin_unlock(&pool->lock);
+	atomic_long_add(1 << pool->order, &nr_managed_pages);
+
+}
+
+/**
+ * drm_page_pool_remove - Remove page from pool
+ *
+ * @pool: Pool to pull the page from
+ *
+ * Take a page from a specific pool, return NULL when nothing is available
+ */
+struct page *drm_page_pool_remove(struct drm_page_pool *pool)
+{
+	struct page *p;
+
+	spin_lock(&pool->lock);
+	p = list_first_entry_or_null(&pool->pages, typeof(*p), lru);
+	if (p) {
+		atomic_long_sub(1 << pool->order, &nr_managed_pages);
+		pool->page_count -= 1 << pool->order;
+		list_del(&p->lru);
+	}
+	spin_unlock(&pool->lock);
+
+	return p;
+}
+
+/**
+ * drm_page_pool_init - Initialize a pool
+ *
+ * @pool: the pool to initialize
+ * @order: page allocation order
+ * @free_page: function pointer to free the pool's pages
+ *
+ * Initialize and add a pool type to the global shrinker list
+ */
+void drm_page_pool_init(struct drm_page_pool *pool, unsigned int order,
+			void (*free_page)(struct drm_page_pool *pool, struct page *p))
+{
+	pool->order = order;
+	spin_lock_init(&pool->lock);
+	INIT_LIST_HEAD(&pool->pages);
+	pool->free = free_page;
+	pool->page_count = 0;
+
+	mutex_lock(&shrinker_lock);
+	list_add_tail(&pool->shrinker_list, &shrinker_list);
+	mutex_unlock(&shrinker_lock);
+}
+
+/**
+ * drm_page_pool_fini - Cleanup a pool
+ *
+ * @pool: the pool to clean up
+ *
+ * Remove a pool_type from the global shrinker list and free all pages
+ */
+void drm_page_pool_fini(struct drm_page_pool *pool)
+{
+	struct page *p;
+
+	mutex_lock(&shrinker_lock);
+	list_del(&pool->shrinker_list);
+	mutex_unlock(&shrinker_lock);
+
+	while ((p = drm_page_pool_remove(pool)))
+		pool->free(pool, p);
+}
+
+/**
+ * drm_page_pool_shrink - Shrink the drm page pool
+ *
+ * Free pages using the global shrinker list. Returns
+ * the number of pages freed
+ */
+unsigned int drm_page_pool_shrink(void)
+{
+	struct drm_page_pool *pool;
+	unsigned int num_freed;
+	struct page *p;
+
+	mutex_lock(&shrinker_lock);
+	pool = list_first_entry(&shrinker_list, typeof(*pool), shrinker_list);
+
+	p = drm_page_pool_remove(pool);
+	if (p) {
+		pool->free(pool, p);
+		num_freed = 1 << pool->order;
+	} else {
+		num_freed = 0;
+	}
+
+	list_move_tail(&pool->shrinker_list, &shrinker_list);
+	mutex_unlock(&shrinker_lock);
+
+	return num_freed;
+}
+
+/* As long as pages are available make sure to release at least one */
+static unsigned long drm_page_pool_shrinker_scan(struct shrinker *shrink,
+						 struct shrink_control *sc)
+{
+	unsigned long num_freed = 0;
+
+	do
+		num_freed += drm_page_pool_shrink();
+	while (!num_freed && atomic_long_read(&nr_managed_pages));
+
+	return num_freed;
+}
+
+/* Return the number of pages available or SHRINK_EMPTY if we have none */
+static unsigned long drm_page_pool_shrinker_count(struct shrinker *shrink,
+						  struct shrink_control *sc)
+{
+	unsigned long num_pages = atomic_long_read(&nr_managed_pages);
+
+	return num_pages ? num_pages : SHRINK_EMPTY;
+}
+
+/**
+ * dma_page_pool_lock_shrinker - Take the shrinker lock
+ *
+ * Takes the shrinker lock, preventing the shrinker from making
+ * changes to the pools
+ */
+void dma_page_pool_lock_shrinker(void)
+{
+	mutex_lock(&shrinker_lock);
+}
+
+/**
+ * dma_page_pool_unlock_shrinker - Release the shrinker lock
+ *
+ * Releases the shrinker lock, allowing the shrinker to free
+ * pages
+ */
+void dma_page_pool_unlock_shrinker(void)
+{
+	mutex_unlock(&shrinker_lock);
+}
+
+/**
+ * drm_page_pool_shrinker_init - Initialize globals
+ *
+ * Initialize the global locks and lists for the shrinker.
+ */
+static int drm_page_pool_shrinker_init(void)
+{
+	mutex_init(&shrinker_lock);
+	INIT_LIST_HEAD(&shrinker_list);
+
+	mm_shrinker.count_objects = drm_page_pool_shrinker_count;
+	mm_shrinker.scan_objects = drm_page_pool_shrinker_scan;
+	mm_shrinker.seeks = 1;
+	return register_shrinker(&mm_shrinker);
+}
+
+/**
+ * drm_page_pool_shrinker_fini - Finalize globals
+ *
+ * Unregister the shrinker.
+ */
+static void drm_page_pool_shrinker_fini(void)
+{
+	unregister_shrinker(&mm_shrinker);
+	WARN_ON(!list_empty(&shrinker_list));
+}
+
+module_init(drm_page_pool_shrinker_init);
+module_exit(drm_page_pool_shrinker_fini);
+MODULE_LICENSE("Dual MIT/GPL");
diff --git a/include/drm/page_pool.h b/include/drm/page_pool.h
new file mode 100644
index 000000000000..860cf6c92aab
--- /dev/null
+++ b/include/drm/page_pool.h
@@ -0,0 +1,68 @@
+/* SPDX-License-Identifier: GPL-2.0 OR MIT */
+/*
+ * Extracted from include/drm/ttm/ttm_pool.h
+ * Copyright 2020 Advanced Micro Devices, Inc.
+ * Copyright 2021 Linaro Ltd
+ *
+ * Permission is hereby granted, free of charge, to any person obtaining a
+ * copy of this software and associated documentation files (the "Software"),
+ * to deal in the Software without restriction, including without limitation
+ * the rights to use, copy, modify, merge, publish, distribute, sublicense,
+ * and/or sell copies of the Software, and to permit persons to whom the
+ * Software is furnished to do so, subject to the following conditions:
+ *
+ * The above copyright notice and this permission notice shall be included in
+ * all copies or substantial portions of the Software.
+ *
+ * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
+ * IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
+ * FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL
+ * THE COPYRIGHT HOLDER(S) OR AUTHOR(S) BE LIABLE FOR ANY CLAIM, DAMAGES OR
+ * OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE,
+ * ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR
+ * OTHER DEALINGS IN THE SOFTWARE.
+ *
+ * Authors: Christian König, John Stultz
+ */
+
+#ifndef _DRM_PAGE_POOL_H_
+#define _DRM_PAGE_POOL_H_
+
+#include <linux/list.h>
+#include <linux/mmzone.h>
+#include <linux/spinlock.h>
+
+/**
+ * drm_page_pool - Page Pool for a certain memory type
+ *
+ * @order: the allocation order our pages have
+ * @pages: the list of pages in the pool
+ * @shrinker_list: our place on the global shrinker list
+ * @lock: protection of the page list
+ * @page_count: number of pages currently in the pool
+ * @free: Function pointer to free the page
+ */
+struct drm_page_pool {
+	unsigned int order;
+	struct list_head pages;
+	struct list_head shrinker_list;
+	spinlock_t lock;
+
+	unsigned long page_count;
+	void (*free)(struct drm_page_pool *pool, struct page *p);
+};
+
+void drm_page_pool_set_max(unsigned long max);
+unsigned long drm_page_pool_get_max(void);
+unsigned long drm_page_pool_get_total(void);
+unsigned int drm_page_pool_shrink(void);
+unsigned long drm_page_pool_get_size(struct drm_page_pool *pool);
+void drm_page_pool_add(struct drm_page_pool *pool, struct page *p);
+struct page *drm_page_pool_remove(struct drm_page_pool *pool);
+void dma_page_pool_lock_shrinker(void);
+void dma_page_pool_unlock_shrinker(void);
+void drm_page_pool_init(struct drm_page_pool *pool, unsigned int order,
+			void (*free_page)(struct drm_page_pool *pool, struct page *p));
+void drm_page_pool_fini(struct drm_page_pool *pool);
+
+#endif

From patchwork Wed Jun 30 01:34:18 2021
X-Patchwork-Submitter: John Stultz
X-Patchwork-Id: 468674
From: John Stultz <john.stultz@linaro.org>
To: lkml
Cc: John Stultz, Daniel Vetter, Christian Koenig, Sumit Semwal, Liam Mark,
 Chris Goldsworthy, Laura Abbott, Brian Starkey, Hridya Valsaraju,
 Suren Baghdasaryan, Sandeep Patil, Daniel Mentz, Ørjan Eide, Robin Murphy,
 Ezequiel Garcia, Simon Ser, James Jones, linux-media@vger.kernel.org,
 dri-devel@lists.freedesktop.org
Subject: [PATCH v9 2/5] drm: ttm_pool: Rework ttm_pool to use drm_page_pool
Date: Wed, 30 Jun 2021 01:34:18 +0000
Message-Id: <20210630013421.735092-3-john.stultz@linaro.org>
In-Reply-To: <20210630013421.735092-1-john.stultz@linaro.org>
References: <20210630013421.735092-1-john.stultz@linaro.org>

This patch reworks the ttm_pool logic to utilize the recently added
drm_page_pool code. It adds drm_page_pool structures to the
ttm_pool_type structures, and then removes all the ttm_pool_type
shrinker logic (as it's handled in the drm_page_pool shrinker).

NOTE: There is one mismatch in the interfaces I'm not totally happy
with. The ttm_pool tracks all of its pooled pages across a number of
different pools, and tries to keep this size under the specified
page_pool_size value. With the drm_page_pool, there may be other
users; however, there is still one global shrinker list of pools. So
we can't easily reduce the ttm pool under the ttm-specified size
without potentially doing a lot of shrinking to other non-ttm pools.
So either we can:
  1) Try to split it so each user of drm_page_pools manages its own
     pool shrinking.
  2) Push the max value into the drm_page_pool, and have it manage
     shrinking to fit under that global max. Then share those
     size/max values out so the ttm_pool debug output can have more
     context.

I've taken the second path in this patch set, but wanted to call it
out so folks could look closely. Thoughts would be greatly
appreciated here!

Cc: Daniel Vetter
Cc: Christian Koenig
Cc: Sumit Semwal
Cc: Liam Mark
Cc: Chris Goldsworthy
Cc: Laura Abbott
Cc: Brian Starkey
Cc: Hridya Valsaraju
Cc: Suren Baghdasaryan
Cc: Sandeep Patil
Cc: Daniel Mentz
Cc: Ørjan Eide
Cc: Robin Murphy
Cc: Ezequiel Garcia
Cc: Simon Ser
Cc: James Jones
Cc: linux-media@vger.kernel.org
Cc: dri-devel@lists.freedesktop.org
Signed-off-by: John Stultz
---
v7:
* Major refactoring to use drm_page_pools inside the ttm_pool_type
  structure. This allows us to use container_of to get the needed
  context to free a page. This also means less code is changed overall.
v8:
* Reworked to use the new cleanly rewritten drm_page_pool logic
v9:
* Renamed functions, and dropped duplicative order tracking, as
  suggested by ChristianK
* Use new *_(un)lock_shrinker() hooks to fix atomic calculations
  for debugfs
---
 drivers/gpu/drm/Kconfig        |   1 +
 drivers/gpu/drm/ttm/ttm_pool.c | 167 ++++++---------------------
 include/drm/ttm/ttm_pool.h     |  14 +--
 3 files changed, 33 insertions(+), 149 deletions(-)

--
2.25.1

diff --git a/drivers/gpu/drm/Kconfig b/drivers/gpu/drm/Kconfig
index 52d9ba92b35e..6be5344c009c 100644
--- a/drivers/gpu/drm/Kconfig
+++ b/drivers/gpu/drm/Kconfig
@@ -183,6 +183,7 @@ config DRM_PAGE_POOL
 config DRM_TTM
 	tristate
 	depends on DRM && MMU
+	select DRM_PAGE_POOL
 	help
 	  GPU memory management subsystem for devices with multiple GPU
 	  memory types. Will be enabled automatically if a device driver
diff --git a/drivers/gpu/drm/ttm/ttm_pool.c b/drivers/gpu/drm/ttm/ttm_pool.c
index cb38b1a17b09..7ae647bce551 100644
--- a/drivers/gpu/drm/ttm/ttm_pool.c
+++ b/drivers/gpu/drm/ttm/ttm_pool.c
@@ -40,6 +40,7 @@
 #include <asm/set_memory.h>
 #endif
 
+#include <drm/page_pool.h>
 #include <drm/ttm/ttm_pool.h>
 #include <drm/ttm/ttm_bo_driver.h>
 #include <drm/ttm/ttm_tt.h>
@@ -70,10 +71,6 @@ static struct ttm_pool_type global_uncached[MAX_ORDER];
 static struct ttm_pool_type global_dma32_write_combined[MAX_ORDER];
 static struct ttm_pool_type global_dma32_uncached[MAX_ORDER];
 
-static struct mutex shrinker_lock;
-static struct list_head shrinker_list;
-static struct shrinker mm_shrinker;
-
 /* Allocate pages of size 1 << order with the given gfp_flags */
 static struct page *ttm_pool_alloc_page(struct ttm_pool *pool, gfp_t gfp_flags,
 					unsigned int order)
@@ -158,6 +155,15 @@ static void ttm_pool_free_page(struct ttm_pool *pool, enum ttm_caching caching,
 	kfree(dma);
 }
 
+static void ttm_pool_free_callback(struct drm_page_pool *subpool,
+				   struct page *p)
+{
+	struct ttm_pool_type *pt;
+
+	pt = container_of(subpool, struct ttm_pool_type, subpool);
+	return ttm_pool_free_page(pt->pool, pt->caching, subpool->order, p);
+}
+
 /* Apply a new caching to an array of pages */
 static int ttm_pool_apply_caching(struct page **first, struct page **last,
 				  enum ttm_caching caching)
@@ -219,66 +225,20 @@ static void ttm_pool_unmap(struct ttm_pool *pool, dma_addr_t dma_addr,
 			       DMA_BIDIRECTIONAL);
 }
 
-/* Give pages into a specific pool_type */
-static void ttm_pool_type_give(struct ttm_pool_type *pt, struct page *p)
-{
-	unsigned int i, num_pages = 1 << pt->order;
-
-	for (i = 0; i < num_pages; ++i) {
-		if (PageHighMem(p))
-			clear_highpage(p + i);
-		else
-			clear_page(page_address(p + i));
-	}
-
-	spin_lock(&pt->lock);
-	list_add(&p->lru, &pt->pages);
-	spin_unlock(&pt->lock);
-	atomic_long_add(1 << pt->order, &allocated_pages);
-}
-
-/* Take pages from a specific pool_type, return NULL when nothing available */
-static struct page *ttm_pool_type_take(struct ttm_pool_type *pt)
-{
-	struct page *p;
-
-	spin_lock(&pt->lock);
-	p = list_first_entry_or_null(&pt->pages, typeof(*p), lru);
-	if (p) {
-		atomic_long_sub(1 << pt->order, &allocated_pages);
-		list_del(&p->lru);
-	}
-	spin_unlock(&pt->lock);
-
-	return p;
-}
-
 /* Initialize and add a pool type to the global shrinker list */
 static void ttm_pool_type_init(struct ttm_pool_type *pt, struct ttm_pool *pool,
 			       enum ttm_caching caching, unsigned int order)
 {
 	pt->pool = pool;
 	pt->caching = caching;
-	pt->order = order;
-	spin_lock_init(&pt->lock);
-	INIT_LIST_HEAD(&pt->pages);
 
-	mutex_lock(&shrinker_lock);
-	list_add_tail(&pt->shrinker_list, &shrinker_list);
-	mutex_unlock(&shrinker_lock);
+	drm_page_pool_init(&pt->subpool, order, ttm_pool_free_callback);
 }
 
 /* Remove a pool_type from the global shrinker list and free all pages */
 static void ttm_pool_type_fini(struct ttm_pool_type *pt)
 {
-	struct page *p;
-
-	mutex_lock(&shrinker_lock);
-	list_del(&pt->shrinker_list);
-	mutex_unlock(&shrinker_lock);
-
-	while ((p = ttm_pool_type_take(pt)))
-		ttm_pool_free_page(pt->pool, pt->caching, pt->order, p);
+	drm_page_pool_fini(&pt->subpool);
 }
 
 /* Return the pool_type to use for the given caching and order */
@@ -309,30 +269,6 @@ static struct ttm_pool_type *ttm_pool_select_type(struct ttm_pool *pool,
 	return NULL;
 }
 
-/* Free pages using the global shrinker list */
-static unsigned int ttm_pool_shrink(void)
-{
-	struct ttm_pool_type *pt;
-	unsigned int num_freed;
-	struct page *p;
-
-	mutex_lock(&shrinker_lock);
-	pt = list_first_entry(&shrinker_list, typeof(*pt), shrinker_list);
-
-	p = ttm_pool_type_take(pt);
-	if (p) {
-		ttm_pool_free_page(pt->pool, pt->caching, pt->order, p);
-		num_freed = 1 << pt->order;
-	} else {
-		num_freed = 0;
-	}
-
-	list_move_tail(&pt->shrinker_list, &shrinker_list);
-	mutex_unlock(&shrinker_lock);
-
-	return num_freed;
-}
-
 /* Return the allocation order based for a page */
 static unsigned int ttm_pool_page_order(struct ttm_pool *pool, struct page *p)
 {
@@ -389,7 +325,7 @@ int ttm_pool_alloc(struct ttm_pool *pool, struct ttm_tt *tt,
 		struct ttm_pool_type *pt;
 
 		pt = ttm_pool_select_type(pool, tt->caching, order);
-		p = pt ? ttm_pool_type_take(pt) : NULL;
+		p = pt ? drm_page_pool_remove(&pt->subpool) : NULL;
 		if (p) {
 			apply_caching = true;
 		} else {
@@ -471,16 +407,13 @@ void ttm_pool_free(struct ttm_pool *pool, struct ttm_tt *tt)
 
 		pt = ttm_pool_select_type(pool, tt->caching, order);
 		if (pt)
-			ttm_pool_type_give(pt, tt->pages[i]);
+			drm_page_pool_add(&pt->subpool, tt->pages[i]);
 		else
 			ttm_pool_free_page(pool, tt->caching, order,
 					   tt->pages[i]);
 
 		i += num_pages;
 	}
-
-	while (atomic_long_read(&allocated_pages) > page_pool_size)
-		ttm_pool_shrink();
 }
 EXPORT_SYMBOL(ttm_pool_free);
@@ -532,44 +465,7 @@ void ttm_pool_fini(struct ttm_pool *pool)
 	}
 }
 
-/* As long as pages are available make sure to release at least one */
-static unsigned long ttm_pool_shrinker_scan(struct shrinker *shrink,
-					    struct shrink_control *sc)
-{
-	unsigned long num_freed = 0;
-
-	do
-		num_freed += ttm_pool_shrink();
-	while (!num_freed && atomic_long_read(&allocated_pages));
-
-	return num_freed;
-}
-
-/* Return the number of pages available or SHRINK_EMPTY if we have none */
-static unsigned long ttm_pool_shrinker_count(struct shrinker *shrink,
-					     struct shrink_control *sc)
-{
-	unsigned long num_pages = atomic_long_read(&allocated_pages);
-
-	return num_pages ? num_pages : SHRINK_EMPTY;
-}
-
 #ifdef CONFIG_DEBUG_FS
-/* Count the number of pages available in a pool_type */
-static unsigned int ttm_pool_type_count(struct ttm_pool_type *pt)
-{
-	unsigned int count = 0;
-	struct page *p;
-
-	spin_lock(&pt->lock);
-	/* Only used for debugfs, the overhead doesn't matter */
-	list_for_each_entry(p, &pt->pages, lru)
-		++count;
-	spin_unlock(&pt->lock);
-
-	return count;
-}
-
 /* Print a nice header for the order */
 static void ttm_pool_debugfs_header(struct seq_file *m)
 {
@@ -588,7 +484,8 @@ static void ttm_pool_debugfs_orders(struct ttm_pool_type *pt,
 	unsigned int i;
 
 	for (i = 0; i < MAX_ORDER; ++i)
-		seq_printf(m, " %8u", ttm_pool_type_count(&pt[i]));
+		seq_printf(m, " %8lu",
+			   drm_page_pool_get_size(&pt[i].subpool));
 	seq_puts(m, "\n");
 }
@@ -596,7 +493,10 @@ static void ttm_pool_debugfs_orders(struct ttm_pool_type *pt,
 static void ttm_pool_debugfs_footer(struct seq_file *m)
 {
 	seq_printf(m, "\ntotal\t: %8lu of %8lu\n",
-		   atomic_long_read(&allocated_pages), page_pool_size);
+		   atomic_long_read(&allocated_pages),
+		   drm_page_pool_get_max());
+	seq_printf(m, "(%8lu in non-ttm pools)\n", drm_page_pool_get_total() -
+		   atomic_long_read(&allocated_pages));
 }
@@ -604,7 +504,7 @@ static int ttm_pool_debugfs_globals_show(struct seq_file *m, void *data)
 {
 	ttm_pool_debugfs_header(m);
 
-	mutex_lock(&shrinker_lock);
+	dma_page_pool_lock_shrinker();
 	seq_puts(m, "wc\t:");
 	ttm_pool_debugfs_orders(global_write_combined, m);
 	seq_puts(m, "uc\t:");
@@ -613,7 +513,7 @@ static int ttm_pool_debugfs_globals_show(struct seq_file *m, void *data)
 	ttm_pool_debugfs_orders(global_dma32_write_combined, m);
 	seq_puts(m, "uc 32\t:");
 	ttm_pool_debugfs_orders(global_dma32_uncached, m);
-	mutex_unlock(&shrinker_lock);
+	dma_page_pool_unlock_shrinker();
 
 	ttm_pool_debugfs_footer(m);
@@ -640,7 +540,7 @@ int ttm_pool_debugfs(struct ttm_pool *pool, struct seq_file *m)
 
 	ttm_pool_debugfs_header(m);
 
-	mutex_lock(&shrinker_lock);
+	dma_page_pool_lock_shrinker();
 	for (i = 0; i < TTM_NUM_CACHING_TYPES; ++i) {
 		seq_puts(m, "DMA ");
 		switch (i) {
@@ -656,7 +556,7 @@ int ttm_pool_debugfs(struct ttm_pool *pool, struct seq_file *m)
 		}
 		ttm_pool_debugfs_orders(pool->caching[i].orders, m);
 	}
-	mutex_unlock(&shrinker_lock);
+	dma_page_pool_unlock_shrinker();
 
 	ttm_pool_debugfs_footer(m);
 	return 0;
@@ -666,13 +566,10 @@ EXPORT_SYMBOL(ttm_pool_debugfs);
 /* Test the shrinker functions and dump the result */
 static int ttm_pool_debugfs_shrink_show(struct seq_file *m, void *data)
 {
-	struct shrink_control sc = { .gfp_mask = GFP_NOFS };
-
 	fs_reclaim_acquire(GFP_KERNEL);
-	seq_printf(m, "%lu/%lu\n", ttm_pool_shrinker_count(&mm_shrinker, &sc),
-		   ttm_pool_shrinker_scan(&mm_shrinker, &sc));
+	seq_printf(m, "%lu/%lu\n", drm_page_pool_get_total(),
+		   (unsigned long)drm_page_pool_shrink());
 	fs_reclaim_release(GFP_KERNEL);
-
 	return 0;
 }
 DEFINE_SHOW_ATTRIBUTE(ttm_pool_debugfs_shrink);
@@ -693,8 +590,7 @@ int ttm_pool_mgr_init(unsigned long num_pages)
 	if (!page_pool_size)
 		page_pool_size = num_pages;
 
-	mutex_init(&shrinker_lock);
-	INIT_LIST_HEAD(&shrinker_list);
+	drm_page_pool_set_max(page_pool_size);
 
 	for (i = 0; i < MAX_ORDER; ++i) {
 		ttm_pool_type_init(&global_write_combined[i], NULL,
@@ -713,11 +609,7 @@ int ttm_pool_mgr_init(unsigned long num_pages)
 	debugfs_create_file("page_pool_shrink", 0400, ttm_debugfs_root, NULL,
 			    &ttm_pool_debugfs_shrink_fops);
 #endif
-
-	mm_shrinker.count_objects = ttm_pool_shrinker_count;
-	mm_shrinker.scan_objects = ttm_pool_shrinker_scan;
-	mm_shrinker.seeks = 1;
-	return register_shrinker(&mm_shrinker);
+	return 0;
 }
@@ -736,7 +628,4 @@ void ttm_pool_mgr_fini(void)
 		ttm_pool_type_fini(&global_dma32_write_combined[i]);
 		ttm_pool_type_fini(&global_dma32_uncached[i]);
 	}
-
-	unregister_shrinker(&mm_shrinker);
-	WARN_ON(!list_empty(&shrinker_list));
 }
diff --git a/include/drm/ttm/ttm_pool.h b/include/drm/ttm/ttm_pool.h
index 4321728bdd11..c854a81491da 100644
--- a/include/drm/ttm/ttm_pool.h
+++ b/include/drm/ttm/ttm_pool.h
@@ -30,6 +30,7 @@
 #include <linux/mmzone.h>
 #include <linux/llist.h>
 #include <linux/spinlock.h>
+#include <drm/page_pool.h>
 
 struct device;
 struct ttm_tt;
@@ -39,22 +40,15 @@ struct ttm_operation_ctx;
 /**
  * ttm_pool_type - Pool for a certain memory type
  *
- * @pool: the pool we belong to, might be NULL for the global ones
- * @order: the allocation order our pages have
+ * @pool: the ttm pool we belong to, might be NULL for the global ones
  * @caching: the caching type our pages have
- * @shrinker_list: our place on the global shrinker list
- * @lock: protection of the page list
- * @pages: the list of pages in the pool
+ * @subpool: the drm_page_pool that we use to manage the pages
  */
 struct ttm_pool_type {
 	struct ttm_pool *pool;
-	unsigned int order;
 	enum ttm_caching caching;
 
-	struct list_head shrinker_list;
-
-	spinlock_t lock;
-	struct list_head pages;
+	struct drm_page_pool subpool;
 };
 
 /**

From patchwork Wed Jun 30 01:34:19 2021
X-Patchwork-Submitter: John Stultz
X-Patchwork-Id: 468675
From: John Stultz <john.stultz@linaro.org>
To: lkml
Cc: John Stultz, Daniel Vetter, Christian Koenig, Sumit Semwal, Liam Mark,
 Chris Goldsworthy, Laura Abbott, Brian Starkey, Hridya Valsaraju,
 Suren Baghdasaryan, Sandeep Patil, Daniel Mentz, Ørjan Eide, Robin Murphy,
 Ezequiel Garcia, Simon Ser, James Jones, linux-media@vger.kernel.org,
 dri-devel@lists.freedesktop.org
Subject: [PATCH v9 3/5] dma-buf: system_heap: Add drm pagepool support to system heap
Date: Wed, 30 Jun 2021 01:34:19 +0000
Message-Id: <20210630013421.735092-4-john.stultz@linaro.org>
In-Reply-To: <20210630013421.735092-1-john.stultz@linaro.org>
References: <20210630013421.735092-1-john.stultz@linaro.org>

Utilize the drm pagepool code to speed up allocation performance.
This is similar to the ION pagepool usage, but tries to utilize
generic code instead of a custom implementation.
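Condensed from the diff below, the allocation fast path this enables:
each supported order gets its own pool, and allocation first tries to
recycle a pooled page before falling back to the buddy allocator
(pooled_alloc() is an illustrative name; pools[] and orders[] are the
arrays the patch adds to the system heap):

static struct page *pooled_alloc(unsigned int i, gfp_t gfp)
{
	/* Try to recycle a page of order orders[i] from the pool first */
	struct page *page = drm_page_pool_remove(&pools[i]);

	if (!page)
		page = alloc_pages(gfp, orders[i]);
	return page;
}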
Cc: Daniel Vetter
Cc: Christian Koenig
Cc: Sumit Semwal
Cc: Liam Mark
Cc: Chris Goldsworthy
Cc: Laura Abbott
Cc: Brian Starkey
Cc: Hridya Valsaraju
Cc: Suren Baghdasaryan
Cc: Sandeep Patil
Cc: Daniel Mentz
Cc: Ørjan Eide
Cc: Robin Murphy
Cc: Ezequiel Garcia
Cc: Simon Ser
Cc: James Jones
Cc: linux-media@vger.kernel.org
Cc: dri-devel@lists.freedesktop.org
Signed-off-by: John Stultz
---
v2:
* Fix build issue caused by selecting PAGE_POOL w/o NET as
  Reported-by: kernel test robot
v3:
* Simplify the page zeroing logic a bit by using kmap_atomic
  instead of vmap, as suggested by Daniel Mentz
v5:
* Shift away from networking page pool completely to dmabuf
  page pool implementation
v6:
* Switch again to using the drm_page_pool code shared w/ ttm_pool
v7:
* Slight rework for drm_page_pool changes
v8:
* Rework to use the rewritten drm_page_pool logic
* Drop explicit buffer zeroing, as the drm_page_pool handles that
v9:
* Fix compiler warning Reported-by: kernel test robot
---
 drivers/dma-buf/heaps/Kconfig       |  1 +
 drivers/dma-buf/heaps/system_heap.c | 26 +++++++++++++++++++++++---
 2 files changed, 24 insertions(+), 3 deletions(-)

--
2.25.1

diff --git a/drivers/dma-buf/heaps/Kconfig b/drivers/dma-buf/heaps/Kconfig
index a5eef06c4226..f19bf1f82bc2 100644
--- a/drivers/dma-buf/heaps/Kconfig
+++ b/drivers/dma-buf/heaps/Kconfig
@@ -1,6 +1,7 @@
 config DMABUF_HEAPS_SYSTEM
 	bool "DMA-BUF System Heap"
 	depends on DMABUF_HEAPS
+	select DRM_PAGE_POOL
 	help
 	  Choose this option to enable the system dmabuf heap. The system heap
 	  is backed by pages from the buddy allocator. If in doubt, say Y.
diff --git a/drivers/dma-buf/heaps/system_heap.c b/drivers/dma-buf/heaps/system_heap.c
index f57a39ddd063..85ceca2ed61d 100644
--- a/drivers/dma-buf/heaps/system_heap.c
+++ b/drivers/dma-buf/heaps/system_heap.c
@@ -21,6 +21,8 @@
 #include <linux/slab.h>
 #include <linux/vmalloc.h>
 
+#include <drm/page_pool.h>
+
 static struct dma_heap *sys_heap;
 
 struct system_heap_buffer {
@@ -54,6 +56,7 @@ static gfp_t order_flags[] = {HIGH_ORDER_GFP, MID_ORDER_GFP, LOW_ORDER_GFP};
  */
 static const unsigned int orders[] = {8, 4, 0};
 #define NUM_ORDERS ARRAY_SIZE(orders)
+struct drm_page_pool pools[NUM_ORDERS];
 
 static struct sg_table *dup_sg_table(struct sg_table *table)
 {
@@ -282,18 +285,27 @@ static void system_heap_vunmap(struct dma_buf *dmabuf, struct dma_buf_map *map)
 	dma_buf_map_clear(map);
 }
 
+static void system_heap_free_pages(struct drm_page_pool *pool, struct page *p)
+{
+	__free_pages(p, pool->order);
+}
+
 static void system_heap_dma_buf_release(struct dma_buf *dmabuf)
 {
 	struct system_heap_buffer *buffer = dmabuf->priv;
 	struct sg_table *table;
 	struct scatterlist *sg;
-	int i;
+	int i, j;
 
 	table = &buffer->sg_table;
 	for_each_sg(table->sgl, sg, table->nents, i) {
 		struct page *page = sg_page(sg);
 
-		__free_pages(page, compound_order(page));
+		for (j = 0; j < NUM_ORDERS; j++) {
+			if (compound_order(page) == orders[j])
+				break;
+		}
+		drm_page_pool_add(&pools[j], page);
 	}
 	sg_free_table(table);
 	kfree(buffer);
@@ -324,7 +336,9 @@ static struct page *alloc_largest_available(unsigned long size,
 		if (max_order < orders[i])
 			continue;
 
-		page = alloc_pages(order_flags[i], orders[i]);
+		page = drm_page_pool_remove(&pools[i]);
+		if (!page)
+			page = alloc_pages(order_flags[i], orders[i]);
 		if (!page)
 			continue;
 		return page;
@@ -425,6 +439,12 @@ static const struct dma_heap_ops system_heap_ops = {
 static int system_heap_create(void)
 {
 	struct dma_heap_export_info exp_info;
+	int i;
+
+	for (i = 0; i < NUM_ORDERS; i++) {
+		drm_page_pool_init(&pools[i], orders[i],
+				   system_heap_free_pages);
+	}
 
 	exp_info.name = "system";
 	exp_info.ops = &system_heap_ops;

From patchwork Wed Jun 30 01:34:20 2021
X-Patchwork-Submitter: John Stultz
X-Patchwork-Id: 468676
From: John Stultz <john.stultz@linaro.org>
To: lkml
Cc: John Stultz, Daniel Vetter, Christian Koenig, Sumit Semwal, Liam Mark,
 Chris Goldsworthy, Laura Abbott, Brian Starkey, Hridya Valsaraju,
 Suren Baghdasaryan, Sandeep Patil, Daniel Mentz, Ørjan Eide, Robin Murphy,
 Ezequiel Garcia, Simon Ser, James Jones, linux-media@vger.kernel.org,
 dri-devel@lists.freedesktop.org
Subject: [PATCH v9 4/5] dma-buf: heaps: Add deferred-free-helper library code
Date: Wed, 30 Jun 2021 01:34:20 +0000
Message-Id: <20210630013421.735092-5-john.stultz@linaro.org>
In-Reply-To: <20210630013421.735092-1-john.stultz@linaro.org>
References: <20210630013421.735092-1-john.stultz@linaro.org>

This patch provides infrastructure for deferring buffer frees. This
is a feature ION provided which, when used with some form of a page
pool, provides a nice performance boost in an allocation
microbenchmark. The reason it helps is that it allows the
page-zeroing to be done out of the normal allocation/free path, and
pushed off to a kthread.

As not all heaps will find this useful, it's implemented as an
optional helper library that heaps can utilize.
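A minimal sketch of the intended usage (illustrative only; struct
my_buffer and the my_buffer_* functions are hypothetical names, not part
of the patch): embed a deferred_freelist_item in the buffer, queue it
with deferred_free(), and do the real work in the callback, which is
told whether the free is happening under memory pressure:

struct my_buffer {
	size_t len;
	struct deferred_freelist_item deferred_free;
	/* ... backing pages ... */
};

static void my_buffer_free(struct deferred_freelist_item *item,
			   enum df_reason reason)
{
	struct my_buffer *buffer =
		container_of(item, struct my_buffer, deferred_free);

	/*
	 * reason == DF_UNDER_PRESSURE means the shrinker is calling us;
	 * release pages straight to the system instead of caching them
	 * in a pool, and avoid allocating memory here.
	 */
	kfree(buffer);
}

static void my_buffer_release(struct my_buffer *buffer)
{
	/* Queues the free; the helper's kthread calls my_buffer_free() later */
	deferred_free(&buffer->deferred_free, my_buffer_free,
		      PAGE_ALIGN(buffer->len) / PAGE_SIZE);
}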
Cc: Daniel Vetter
Cc: Christian Koenig
Cc: Sumit Semwal
Cc: Liam Mark
Cc: Chris Goldsworthy
Cc: Laura Abbott
Cc: Brian Starkey
Cc: Hridya Valsaraju
Cc: Suren Baghdasaryan
Cc: Sandeep Patil
Cc: Daniel Mentz
Cc: Ørjan Eide
Cc: Robin Murphy
Cc: Ezequiel Garcia
Cc: Simon Ser
Cc: James Jones
Cc: linux-media@vger.kernel.org
Cc: dri-devel@lists.freedesktop.org
Signed-off-by: John Stultz
---
v2:
* Fix sleep in atomic issue from using a mutex, by switching
  to a spinlock as Reported-by: kernel test robot
* Cleanup API to use a reason enum for clarity and add some
  documentation comments as suggested by Suren Baghdasaryan.
v3:
* Minor tweaks so it can be built as a module
* A few small fixups suggested by Daniel Mentz
v4:
* Tweak from Daniel Mentz to make sure the shrinker
  count/freed values are tracked in pages not bytes
v5:
* Fix up page count tracking as suggested by Suren Baghdasaryan
v7:
* Rework accounting to use nr_pages rather than size, as suggested
  by Suren Baghdasaryan
---
 drivers/dma-buf/heaps/Kconfig                |   3 +
 drivers/dma-buf/heaps/Makefile               |   1 +
 drivers/dma-buf/heaps/deferred-free-helper.c | 138 +++++++++++++++++++
 drivers/dma-buf/heaps/deferred-free-helper.h |  55 ++++++++
 4 files changed, 197 insertions(+)
 create mode 100644 drivers/dma-buf/heaps/deferred-free-helper.c
 create mode 100644 drivers/dma-buf/heaps/deferred-free-helper.h

--
2.25.1

diff --git a/drivers/dma-buf/heaps/Kconfig b/drivers/dma-buf/heaps/Kconfig
index f19bf1f82bc2..7e28934e0def 100644
--- a/drivers/dma-buf/heaps/Kconfig
+++ b/drivers/dma-buf/heaps/Kconfig
@@ -1,3 +1,6 @@
+config DMABUF_HEAPS_DEFERRED_FREE
+	tristate
+
 config DMABUF_HEAPS_SYSTEM
 	bool "DMA-BUF System Heap"
 	depends on DMABUF_HEAPS
diff --git a/drivers/dma-buf/heaps/Makefile b/drivers/dma-buf/heaps/Makefile
index 974467791032..4e7839875615 100644
--- a/drivers/dma-buf/heaps/Makefile
+++ b/drivers/dma-buf/heaps/Makefile
@@ -1,3 +1,4 @@
 # SPDX-License-Identifier: GPL-2.0
+obj-$(CONFIG_DMABUF_HEAPS_DEFERRED_FREE) += deferred-free-helper.o
 obj-$(CONFIG_DMABUF_HEAPS_SYSTEM) += system_heap.o
 obj-$(CONFIG_DMABUF_HEAPS_CMA) += cma_heap.o
diff --git a/drivers/dma-buf/heaps/deferred-free-helper.c b/drivers/dma-buf/heaps/deferred-free-helper.c
new file mode 100644
index 000000000000..e19c8b68dfeb
--- /dev/null
+++ b/drivers/dma-buf/heaps/deferred-free-helper.c
@@ -0,0 +1,138 @@
+// SPDX-License-Identifier: GPL-2.0
+/*
+ * Deferred dmabuf freeing helper
+ *
+ * Copyright (C) 2020 Linaro, Ltd.
+ *
+ * Based on the ION page pool code
+ * Copyright (C) 2011 Google, Inc.
+ */
+
+#include <linux/freezer.h>
+#include <linux/kthread.h>
+#include <linux/list.h>
+#include <linux/sched.h>
+#include <linux/shrinker.h>
+
+#include "deferred-free-helper.h"
+
+static LIST_HEAD(free_list);
+static size_t list_nr_pages;
+wait_queue_head_t freelist_waitqueue;
+struct task_struct *freelist_task;
+static DEFINE_SPINLOCK(free_list_lock);
+
+void deferred_free(struct deferred_freelist_item *item,
+		   void (*free)(struct deferred_freelist_item*,
+				enum df_reason),
+		   size_t nr_pages)
+{
+	unsigned long flags;
+
+	INIT_LIST_HEAD(&item->list);
+	item->nr_pages = nr_pages;
+	item->free = free;
+
+	spin_lock_irqsave(&free_list_lock, flags);
+	list_add(&item->list, &free_list);
+	list_nr_pages += nr_pages;
+	spin_unlock_irqrestore(&free_list_lock, flags);
+	wake_up(&freelist_waitqueue);
+}
+EXPORT_SYMBOL_GPL(deferred_free);
+
+static size_t free_one_item(enum df_reason reason)
+{
+	unsigned long flags;
+	size_t nr_pages;
+	struct deferred_freelist_item *item;
+
+	spin_lock_irqsave(&free_list_lock, flags);
+	if (list_empty(&free_list)) {
+		spin_unlock_irqrestore(&free_list_lock, flags);
+		return 0;
+	}
+	item = list_first_entry(&free_list, struct deferred_freelist_item, list);
+	list_del(&item->list);
+	nr_pages = item->nr_pages;
+	list_nr_pages -= nr_pages;
+	spin_unlock_irqrestore(&free_list_lock, flags);
+
+	item->free(item, reason);
+	return nr_pages;
+}
+
+static unsigned long get_freelist_nr_pages(void)
+{
+	unsigned long nr_pages;
+	unsigned long flags;
+
+	spin_lock_irqsave(&free_list_lock, flags);
+	nr_pages = list_nr_pages;
+	spin_unlock_irqrestore(&free_list_lock, flags);
+	return nr_pages;
+}
+
+static unsigned long freelist_shrink_count(struct shrinker *shrinker,
+					   struct shrink_control *sc)
+{
+	return get_freelist_nr_pages();
+}
+
+static unsigned long freelist_shrink_scan(struct shrinker *shrinker,
+					  struct shrink_control *sc)
+{
+	unsigned long total_freed = 0;
+
+	if (sc->nr_to_scan == 0)
+		return 0;
+
+	while (total_freed < sc->nr_to_scan) {
+		size_t pages_freed = free_one_item(DF_UNDER_PRESSURE);
+
+		if (!pages_freed)
+			break;
+
+		total_freed += pages_freed;
+	}
+
+	return total_freed;
+}
+
+static struct shrinker freelist_shrinker = {
+	.count_objects = freelist_shrink_count,
+	.scan_objects = freelist_shrink_scan,
+	.seeks = DEFAULT_SEEKS,
+	.batch = 0,
+};
+
+static int deferred_free_thread(void *data)
+{
+	while (true) {
+		wait_event_freezable(freelist_waitqueue,
+				     get_freelist_nr_pages() > 0);
+
+		free_one_item(DF_NORMAL);
+	}
+
+	return 0;
+}
+
+static int deferred_freelist_init(void)
+{
+	list_nr_pages = 0;
+
+	init_waitqueue_head(&freelist_waitqueue);
+	freelist_task = kthread_run(deferred_free_thread, NULL,
+				    "%s", "dmabuf-deferred-free-worker");
+	if (IS_ERR(freelist_task)) {
+		pr_err("Creating thread for deferred free failed\n");
+		return -1;
+	}
+	sched_set_normal(freelist_task, 19);
+
+	return register_shrinker(&freelist_shrinker);
+}
+module_init(deferred_freelist_init);
+MODULE_LICENSE("GPL v2");
+
diff --git a/drivers/dma-buf/heaps/deferred-free-helper.h b/drivers/dma-buf/heaps/deferred-free-helper.h
new file mode 100644
index 000000000000..11940328ce3f
--- /dev/null
+++ b/drivers/dma-buf/heaps/deferred-free-helper.h
@@ -0,0 +1,55 @@
+/* SPDX-License-Identifier: GPL-2.0 */
+
+#ifndef DEFERRED_FREE_HELPER_H
+#define DEFERRED_FREE_HELPER_H
+
+/**
+ * df_reason - enum for reason why item was freed
+ *
+ * This provides a reason for why the free function was called
+ * on the item. This is useful when deferred_free is used in
+ * combination with a pagepool, so under pressure the page can
+ * be immediately freed.
+ *
+ * DF_NORMAL:         Normal deferred free
+ *
+ * DF_UNDER_PRESSURE: Free was called because the system
+ *                    is under memory pressure. Usually
+ *                    from a shrinker. Avoid allocating
+ *                    memory in the free call, as it may
+ *                    fail.
+ */
+enum df_reason {
+	DF_NORMAL,
+	DF_UNDER_PRESSURE,
+};
+
+/**
+ * deferred_freelist_item - item structure for deferred freelist
+ *
+ * This is to be added to the structure for whatever you want to
+ * defer freeing on.
+ *
+ * @nr_pages: number of pages used by item to be freed
+ * @free: function pointer to be called when freeing the item
+ * @list: list entry for the deferred list
+ */
+struct deferred_freelist_item {
+	size_t nr_pages;
+	void (*free)(struct deferred_freelist_item *i,
+		     enum df_reason reason);
+	struct list_head list;
+};
+
+/**
+ * deferred_free - call to add item to the deferred free list
+ *
+ * @item: Pointer to deferred_freelist_item field of a structure
+ * @free: Function pointer to the free call
+ * @nr_pages: number of pages to be freed
+ */
+void deferred_free(struct deferred_freelist_item *item,
+		   void (*free)(struct deferred_freelist_item *i,
+				enum df_reason reason),
+		   size_t nr_pages);
+#endif

From patchwork Wed Jun 30 01:34:21 2021
X-Patchwork-Submitter: John Stultz
X-Patchwork-Id: 468677
From patchwork Wed Jun 30 01:34:21 2021
X-Patchwork-Submitter: John Stultz
X-Patchwork-Id: 468677
From: John Stultz
To: lkml
Cc: John Stultz, Daniel Vetter, Christian Koenig, Sumit Semwal,
 Liam Mark, Chris Goldsworthy, Laura Abbott, Brian Starkey,
 Hridya Valsaraju, Suren Baghdasaryan, Sandeep Patil, Daniel Mentz,
 Ørjan Eide, Robin Murphy, Ezequiel Garcia, Simon Ser, James Jones,
 linux-media@vger.kernel.org, dri-devel@lists.freedesktop.org
Subject: [PATCH v9 5/5] dma-buf: system_heap: Add deferred freeing to the system heap
Date: Wed, 30 Jun 2021 01:34:21 +0000
Message-Id: <20210630013421.735092-6-john.stultz@linaro.org>
In-Reply-To: <20210630013421.735092-1-john.stultz@linaro.org>
References: <20210630013421.735092-1-john.stultz@linaro.org>

Utilize the deferred free helper library in the system heap.

This provides a nice performance bump and puts the system heap
performance on par with ION.
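As a rough way to observe that effect from userspace (an illustrative sketch, not part of this patch; the buffer size and iteration count are arbitrary), one can time a loop that allocates and immediately releases system-heap buffers through the dma-heap UAPI:

#include <fcntl.h>
#include <sys/ioctl.h>
#include <unistd.h>
#include <linux/dma-heap.h>

int main(void)
{
	int heap_fd = open("/dev/dma_heap/system", O_RDWR);
	if (heap_fd < 0)
		return 1;

	for (int i = 0; i < 1024; i++) {
		struct dma_heap_allocation_data data = {
			.len = 4 * 1024 * 1024,	/* 4MB, arbitrary */
			.fd_flags = O_RDWR | O_CLOEXEC,
		};

		if (ioctl(heap_fd, DMA_HEAP_IOCTL_ALLOC, &data) < 0)
			break;

		/* Dropping the last reference used to free every page
		 * synchronously; with this patch, close() returns quickly
		 * and the deferred-free worker reclaims the pages.
		 */
		close(data.fd);
	}

	close(heap_fd);
	return 0;
}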
Cc: Daniel Vetter
Cc: Christian Koenig
Cc: Sumit Semwal
Cc: Liam Mark
Cc: Chris Goldsworthy
Cc: Laura Abbott
Cc: Brian Starkey
Cc: Hridya Valsaraju
Cc: Suren Baghdasaryan
Cc: Sandeep Patil
Cc: Daniel Mentz
Cc: Ørjan Eide
Cc: Robin Murphy
Cc: Ezequiel Garcia
Cc: Simon Ser
Cc: James Jones
Cc: linux-media@vger.kernel.org
Cc: dri-devel@lists.freedesktop.org
Signed-off-by: John Stultz
---
v2:
* Rework deferred-free API to use a reason enum, as suggested by
  Suren Baghdasaryan
* Rework for deferred-free API change to use nr_pages rather than
  size, as suggested by Suren Baghdasaryan
v8:
* Reworked to drop buffer zeroing logic, as the drm_page_pool now
  handles that.
---
 drivers/dma-buf/heaps/Kconfig       |  1 +
 drivers/dma-buf/heaps/system_heap.c | 28 ++++++++++++++++++++++------
 2 files changed, 23 insertions(+), 6 deletions(-)

-- 
2.25.1

diff --git a/drivers/dma-buf/heaps/Kconfig b/drivers/dma-buf/heaps/Kconfig
index 7e28934e0def..10632ccfb4a5 100644
--- a/drivers/dma-buf/heaps/Kconfig
+++ b/drivers/dma-buf/heaps/Kconfig
@@ -5,6 +5,7 @@ config DMABUF_HEAPS_SYSTEM
 	bool "DMA-BUF System Heap"
 	depends on DMABUF_HEAPS
 	select DRM_PAGE_POOL
+	select DMABUF_HEAPS_DEFERRED_FREE
 	help
 	  Choose this option to enable the system dmabuf heap. The system heap
 	  is backed by pages from the buddy allocator. If in doubt, say Y.

diff --git a/drivers/dma-buf/heaps/system_heap.c b/drivers/dma-buf/heaps/system_heap.c
index 85ceca2ed61d..8a0170b0427e 100644
--- a/drivers/dma-buf/heaps/system_heap.c
+++ b/drivers/dma-buf/heaps/system_heap.c
@@ -22,6 +22,7 @@
 #include <linux/vmalloc.h>
 #include <drm/page_pool.h>
 
+#include "deferred-free-helper.h"
 
 static struct dma_heap *sys_heap;
 
@@ -33,6 +34,7 @@ struct system_heap_buffer {
 	struct sg_table sg_table;
 	int vmap_cnt;
 	void *vaddr;
+	struct deferred_freelist_item deferred_free;
 };
 
 struct dma_heap_attachment {
@@ -290,27 +292,41 @@ static void system_heap_free_pages(struct drm_page_pool *pool, struct page *p)
 	__free_pages(p, pool->order);
 }
 
-static void system_heap_dma_buf_release(struct dma_buf *dmabuf)
+static void system_heap_buf_free(struct deferred_freelist_item *item,
+				 enum df_reason reason)
 {
-	struct system_heap_buffer *buffer = dmabuf->priv;
+	struct system_heap_buffer *buffer;
 	struct sg_table *table;
 	struct scatterlist *sg;
 	int i, j;
 
+	buffer = container_of(item, struct system_heap_buffer, deferred_free);
 	table = &buffer->sg_table;
 	for_each_sg(table->sgl, sg, table->nents, i) {
 		struct page *page = sg_page(sg);
 
-		for (j = 0; j < NUM_ORDERS; j++) {
-			if (compound_order(page) == orders[j])
-				break;
+		if (reason == DF_UNDER_PRESSURE) {
+			__free_pages(page, compound_order(page));
+		} else {
+			for (j = 0; j < NUM_ORDERS; j++) {
+				if (compound_order(page) == orders[j])
+					break;
+			}
+			drm_page_pool_add(&pools[j], page);
 		}
-		drm_page_pool_add(&pools[j], page);
 	}
 	sg_free_table(table);
 	kfree(buffer);
 }
 
+static void system_heap_dma_buf_release(struct dma_buf *dmabuf)
+{
+	struct system_heap_buffer *buffer = dmabuf->priv;
+	int npages = PAGE_ALIGN(buffer->len) / PAGE_SIZE;
+
+	deferred_free(&buffer->deferred_free, system_heap_buf_free, npages);
+}
+
 static const struct dma_buf_ops system_heap_buf_ops = {
 	.attach = system_heap_attach,
 	.detach = system_heap_detach,