From patchwork Thu Mar  4 23:20:07 2021
X-Patchwork-Submitter: John Stultz
X-Patchwork-Id: 393457
From: John Stultz <john.stultz@linaro.org>
To: lkml
Subject: [PATCH v8 1/5] drm: Add a sharable drm page-pool implementation
Date: Thu, 4 Mar 2021 23:20:07 +0000
Message-Id: <20210304232011.1479036-2-john.stultz@linaro.org>
In-Reply-To: <20210304232011.1479036-1-john.stultz@linaro.org>
References: <20210304232011.1479036-1-john.stultz@linaro.org>

This adds a shrinker-controlled page pool, extracted out of the
ttm_pool logic, and abstracted out a bit so it can be used by
other non-ttm drivers.
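
To make the interface concrete, here is a minimal usage sketch of the
API added below (illustrative only, not part of the patch; the
my_driver_* names are hypothetical):

#include <linux/gfp.h>
#include <drm/page_pool.h>

static struct drm_page_pool my_pool;

/* Free callback the pool invokes when draining or shrinking;
 * must return the number of pages freed. */
static unsigned long my_driver_free_page(struct drm_page_pool *pool,
					 struct page *p)
{
	__free_pages(p, pool->order);
	return 1UL << pool->order;
}

static int my_driver_init(void)
{
	/* order-0 pool; this also registers it on the global shrinker list */
	drm_page_pool_init(&my_pool, 0, my_driver_free_page);
	return 0;
}

static struct page *my_driver_get_page(void)
{
	struct page *p = drm_page_pool_remove(&my_pool);

	/* Fall back to the buddy allocator when the pool is empty */
	return p ? p : alloc_page(GFP_KERNEL);
}

static void my_driver_put_page(struct page *p)
{
	/* Clears the page(s) and stashes them for reuse */
	drm_page_pool_add(&my_pool, p);
}

static void my_driver_exit(void)
{
	/* Unregisters from the shrinker list and drains via the callback */
	drm_page_pool_fini(&my_pool);
}

Under memory pressure, the global shrinker walks the registered pools
round-robin and releases pages through each pool's free callback.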
Cc: Daniel Vetter
Cc: Christian Koenig
Cc: Sumit Semwal
Cc: Liam Mark
Cc: Chris Goldsworthy
Cc: Laura Abbott
Cc: Brian Starkey
Cc: Hridya Valsaraju
Cc: Suren Baghdasaryan
Cc: Sandeep Patil
Cc: Daniel Mentz
Cc: Ørjan Eide
Cc: Robin Murphy
Cc: Ezequiel Garcia
Cc: Simon Ser
Cc: James Jones
Cc: linux-media@vger.kernel.org
Cc: dri-devel@lists.freedesktop.org
Signed-off-by: John Stultz <john.stultz@linaro.org>
---
v8:
* Completely rewritten from scratch, using only the ttm_pool logic
  so it can be dual licensed.
---
 drivers/gpu/drm/Kconfig     |   4 +
 drivers/gpu/drm/Makefile    |   2 +
 drivers/gpu/drm/page_pool.c | 214 ++++++++++++++++++++++++++++++++++++
 include/drm/page_pool.h     |  65 +++++++++++
 4 files changed, 285 insertions(+)
 create mode 100644 drivers/gpu/drm/page_pool.c
 create mode 100644 include/drm/page_pool.h

--
2.25.1

diff --git a/drivers/gpu/drm/Kconfig b/drivers/gpu/drm/Kconfig
index e392a90ca687..7cbcecb8f7df 100644
--- a/drivers/gpu/drm/Kconfig
+++ b/drivers/gpu/drm/Kconfig
@@ -177,6 +177,10 @@ config DRM_DP_CEC
 	  Note: not all adapters support this feature, and even for those
 	  that do support this they often do not hook up the CEC pin.
 
+config DRM_PAGE_POOL
+	bool
+	depends on DRM
+
 config DRM_TTM
 	tristate
 	depends on DRM && MMU
diff --git a/drivers/gpu/drm/Makefile b/drivers/gpu/drm/Makefile
index 926adef289db..2dc7b2fe3fe5 100644
--- a/drivers/gpu/drm/Makefile
+++ b/drivers/gpu/drm/Makefile
@@ -39,6 +39,8 @@ obj-$(CONFIG_DRM_VRAM_HELPER) += drm_vram_helper.o
 drm_ttm_helper-y := drm_gem_ttm_helper.o
 obj-$(CONFIG_DRM_TTM_HELPER) += drm_ttm_helper.o
 
+drm-$(CONFIG_DRM_PAGE_POOL) += page_pool.o
+
 drm_kms_helper-y := drm_bridge_connector.o drm_crtc_helper.o drm_dp_helper.o \
 		drm_dsc.o drm_probe_helper.o \
 		drm_plane_helper.o drm_dp_mst_topology.o drm_atomic_helper.o \
diff --git a/drivers/gpu/drm/page_pool.c b/drivers/gpu/drm/page_pool.c
new file mode 100644
index 000000000000..a60b954cfe0f
--- /dev/null
+++ b/drivers/gpu/drm/page_pool.c
@@ -0,0 +1,214 @@
+// SPDX-License-Identifier: GPL-2.0 OR MIT
+/*
+ * Sharable page pool implementation
+ *
+ * Extracted from drivers/gpu/drm/ttm/ttm_pool.c
+ * Copyright 2020 Advanced Micro Devices, Inc.
+ * Copyright 2021 Linaro Ltd.
+ *
+ * Permission is hereby granted, free of charge, to any person obtaining a
+ * copy of this software and associated documentation files (the "Software"),
+ * to deal in the Software without restriction, including without limitation
+ * the rights to use, copy, modify, merge, publish, distribute, sublicense,
+ * and/or sell copies of the Software, and to permit persons to whom the
+ * Software is furnished to do so, subject to the following conditions:
+ *
+ * The above copyright notice and this permission notice shall be included in
+ * all copies or substantial portions of the Software.
+ *
+ * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
+ * IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
+ * FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL
+ * THE COPYRIGHT HOLDER(S) OR AUTHOR(S) BE LIABLE FOR ANY CLAIM, DAMAGES OR
+ * OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE,
+ * ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR
+ * OTHER DEALINGS IN THE SOFTWARE.
+ *
+ * Authors: Christian König, John Stultz
+ */
+
+#include <linux/module.h>
+#include <linux/highmem.h>
+#include <linux/shrinker.h>
+#include <drm/page_pool.h>
+
+static unsigned long page_pool_size;
+
+MODULE_PARM_DESC(page_pool_size, "Number of pages in the WC/UC/DMA pool");
+module_param(page_pool_size, ulong, 0644);
+
+static atomic_long_t allocated_pages;
+
+static struct mutex shrinker_lock;
+static struct list_head shrinker_list;
+static struct shrinker mm_shrinker;
+
+void drm_page_pool_set_max(unsigned long max)
+{
+	if (!page_pool_size)
+		page_pool_size = max;
+}
+
+unsigned long drm_page_pool_get_max(void)
+{
+	return page_pool_size;
+}
+
+unsigned long drm_page_pool_get_total(void)
+{
+	return atomic_long_read(&allocated_pages);
+}
+
+unsigned long drm_page_pool_get_size(struct drm_page_pool *pool)
+{
+	unsigned long size;
+
+	spin_lock(&pool->lock);
+	size = pool->page_count;
+	spin_unlock(&pool->lock);
+	return size;
+}
+
+/* Give pages into a specific pool */
+void drm_page_pool_add(struct drm_page_pool *pool, struct page *p)
+{
+	unsigned int i, num_pages = 1 << pool->order;
+
+	for (i = 0; i < num_pages; ++i) {
+		if (PageHighMem(p))
+			clear_highpage(p + i);
+		else
+			clear_page(page_address(p + i));
+	}
+
+	spin_lock(&pool->lock);
+	list_add(&p->lru, &pool->pages);
+	pool->page_count += 1 << pool->order;
+	spin_unlock(&pool->lock);
+	atomic_long_add(1 << pool->order, &allocated_pages);
+}
+
+/* Take pages from a specific pool, return NULL when nothing available */
+struct page *drm_page_pool_remove(struct drm_page_pool *pool)
+{
+	struct page *p;
+
+	spin_lock(&pool->lock);
+	p = list_first_entry_or_null(&pool->pages, typeof(*p), lru);
+	if (p) {
+		atomic_long_sub(1 << pool->order, &allocated_pages);
+		pool->page_count -= 1 << pool->order;
+		list_del(&p->lru);
+	}
+	spin_unlock(&pool->lock);
+
+	return p;
+}
+
+/* Initialize and add a pool type to the global shrinker list */
+void drm_page_pool_init(struct drm_page_pool *pool, unsigned int order,
+			unsigned long (*free_page)(struct drm_page_pool *pool, struct page *p))
+{
+	pool->order = order;
+	spin_lock_init(&pool->lock);
+	INIT_LIST_HEAD(&pool->pages);
+	pool->free = free_page;
+	pool->page_count = 0;
+
+	mutex_lock(&shrinker_lock);
+	list_add_tail(&pool->shrinker_list, &shrinker_list);
+	mutex_unlock(&shrinker_lock);
+}
+
+/* Remove a pool_type from the global shrinker list and free all pages */
+void drm_page_pool_fini(struct drm_page_pool *pool)
+{
+	struct page *p, *tmp;
+
+	mutex_lock(&shrinker_lock);
+	list_del(&pool->shrinker_list);
+	mutex_unlock(&shrinker_lock);
+
+	list_for_each_entry_safe(p, tmp, &pool->pages, lru)
+		pool->free(pool, p);
+}
+
+/* Free pages using the global shrinker list */
+static unsigned int drm_page_pool_shrink(void)
+{
+	struct drm_page_pool *pool;
+	unsigned int num_freed;
+	struct page *p;
+
+	mutex_lock(&shrinker_lock);
+	pool = list_first_entry(&shrinker_list, typeof(*pool), shrinker_list);
+
+	p = drm_page_pool_remove(pool);
+
+	list_move_tail(&pool->shrinker_list, &shrinker_list);
+	mutex_unlock(&shrinker_lock);
+
+	if (p) {
+		pool->free(pool, p);
+		num_freed = 1 << pool->order;
+	} else {
+		num_freed = 0;
+	}
+
+	return num_freed;
+}
+
+/* As long as pages are available make sure to release at least one */
+static unsigned long drm_page_pool_shrinker_scan(struct shrinker *shrink,
+						 struct shrink_control *sc)
+{
+	unsigned long num_freed = 0;
+
+	do
+		num_freed += drm_page_pool_shrink();
+	while (!num_freed && atomic_long_read(&allocated_pages));
+
+	return num_freed;
+}
+
+/* Return the number of pages available or SHRINK_EMPTY if we have none */
+static unsigned long drm_page_pool_shrinker_count(struct shrinker *shrink,
+						  struct shrink_control *sc)
+{
+	unsigned long num_pages = atomic_long_read(&allocated_pages);
+
+	return num_pages ? num_pages : SHRINK_EMPTY;
+}
+
+/**
+ * drm_page_pool_shrinker_setup - Initialize globals
+ *
+ * Initialize the global locks and lists for the shrinker.
+ */
+int drm_page_pool_shrinker_setup(void)
+{
+	mutex_init(&shrinker_lock);
+	INIT_LIST_HEAD(&shrinker_list);
+
+	mm_shrinker.count_objects = drm_page_pool_shrinker_count;
+	mm_shrinker.scan_objects = drm_page_pool_shrinker_scan;
+	mm_shrinker.seeks = 1;
+	return register_shrinker(&mm_shrinker);
+}
+
+/**
+ * drm_page_pool_shrinker_teardown - Finalize globals
+ *
+ * Unregister the shrinker.
+ */
+void drm_page_pool_shrinker_teardown(void)
+{
+	unregister_shrinker(&mm_shrinker);
+	WARN_ON(!list_empty(&shrinker_list));
+}
+
+module_init(drm_page_pool_shrinker_setup);
+module_exit(drm_page_pool_shrinker_teardown);
+MODULE_LICENSE("Dual MIT/GPL");
diff --git a/include/drm/page_pool.h b/include/drm/page_pool.h
new file mode 100644
index 000000000000..d8b8a8415629
--- /dev/null
+++ b/include/drm/page_pool.h
@@ -0,0 +1,65 @@
+/* SPDX-License-Identifier: GPL-2.0 OR MIT */
+/*
+ * Extracted from include/drm/ttm/ttm_pool.h
+ * Copyright 2020 Advanced Micro Devices, Inc.
+ * Copyright 2021 Linaro Ltd
+ *
+ * Permission is hereby granted, free of charge, to any person obtaining a
+ * copy of this software and associated documentation files (the "Software"),
+ * to deal in the Software without restriction, including without limitation
+ * the rights to use, copy, modify, merge, publish, distribute, sublicense,
+ * and/or sell copies of the Software, and to permit persons to whom the
+ * Software is furnished to do so, subject to the following conditions:
+ *
+ * The above copyright notice and this permission notice shall be included in
+ * all copies or substantial portions of the Software.
+ *
+ * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
+ * IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
+ * FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL
+ * THE COPYRIGHT HOLDER(S) OR AUTHOR(S) BE LIABLE FOR ANY CLAIM, DAMAGES OR
+ * OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE,
+ * ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR
+ * OTHER DEALINGS IN THE SOFTWARE.
+ *
+ * Authors: Christian König, John Stultz
+ */
+
+#ifndef _DRM_PAGE_POOL_H_
+#define _DRM_PAGE_POOL_H_
+
+#include <linux/mmzone.h>
+#include <linux/llist.h>
+#include <linux/spinlock.h>
+
+/**
+ * drm_page_pool - Page Pool for a certain memory type
+ *
+ * @order: the allocation order our pages have
+ * @pages: the list of pages in the pool
+ * @shrinker_list: our place on the global shrinker list
+ * @lock: protection of the page list
+ * @page_count: number of pages currently in the pool
+ * @free: Function pointer to free the page
+ */
+struct drm_page_pool {
+	unsigned int order;
+	struct list_head pages;
+	struct list_head shrinker_list;
+	spinlock_t lock;
+
+	unsigned long page_count;
+	unsigned long (*free)(struct drm_page_pool *pool, struct page *p);
+};
+
+void drm_page_pool_set_max(unsigned long max);
+unsigned long drm_page_pool_get_max(void);
+unsigned long drm_page_pool_get_total(void);
+unsigned long drm_page_pool_get_size(struct drm_page_pool *pool);
+void drm_page_pool_add(struct drm_page_pool *pool, struct page *p);
+struct page *drm_page_pool_remove(struct drm_page_pool *pool);
+void drm_page_pool_init(struct drm_page_pool *pool, unsigned int order,
+			unsigned long (*free_page)(struct drm_page_pool *pool, struct page *p));
+void drm_page_pool_fini(struct drm_page_pool *pool);
+
+#endif

From patchwork Thu Mar  4 23:20:08 2021
X-Patchwork-Submitter: John Stultz
X-Patchwork-Id: 393458
From: John Stultz <john.stultz@linaro.org>
To: lkml
Subject: [PATCH v8 2/5] drm: ttm_pool: Rework ttm_pool to use drm_page_pool
Date: Thu, 4 Mar 2021 23:20:08 +0000
Message-Id: <20210304232011.1479036-3-john.stultz@linaro.org>
In-Reply-To: <20210304232011.1479036-1-john.stultz@linaro.org>
References: <20210304232011.1479036-1-john.stultz@linaro.org>

This patch reworks the ttm_pool logic to utilize the recently added
drm_page_pool code.

This adds drm_page_pool structures to the ttm_pool_type structures,
and then removes all the ttm_pool_type shrinker logic (as it's
handled in the drm_page_pool shrinker).

NOTE: There is one mismatch in the interfaces I'm not totally happy
with.
The ttm_pool tracks all of its pooled pages across a number of
different pools, and tries to keep this size under the specified
page_pool_size value. With the drm_page_pool, there may be other
users, however there is still one global shrinker list of pools.
So we can't easily reduce the ttm pools under the ttm-specified
size without potentially doing a lot of shrinking to other,
non-ttm pools.

So either we can:
1) Try to split it so each user of drm_page_pools manages its own
   pool shrinking.
2) Push the max value into the drm_page_pool, and have it manage
   shrinking to fit under that global max. Then share those size/max
   values out so the ttm_pool debug output can have more context.

I've taken the second path in this patch set, but wanted to call it
out so folks could look closely. Thoughts would be greatly
appreciated here!

Cc: Daniel Vetter
Cc: Christian Koenig
Cc: Sumit Semwal
Cc: Liam Mark
Cc: Chris Goldsworthy
Cc: Laura Abbott
Cc: Brian Starkey
Cc: Hridya Valsaraju
Cc: Suren Baghdasaryan
Cc: Sandeep Patil
Cc: Daniel Mentz
Cc: Ørjan Eide
Cc: Robin Murphy
Cc: Ezequiel Garcia
Cc: Simon Ser
Cc: James Jones
Cc: linux-media@vger.kernel.org
Cc: dri-devel@lists.freedesktop.org
Signed-off-by: John Stultz <john.stultz@linaro.org>
---
v7:
* Major refactoring to use drm_page_pools inside the ttm_pool_type
  structure. This allows us to use container_of to get the needed
  context to free a page. This also means less code is changed overall.
v8:
* Reworked to use the new cleanly rewritten drm_page_pool logic
---
 drivers/gpu/drm/Kconfig        |   1 +
 drivers/gpu/drm/ttm/ttm_pool.c | 156 ++++++---------------------------
 include/drm/ttm/ttm_pool.h     |   6 +-
 3 files changed, 31 insertions(+), 132 deletions(-)

--
2.25.1

diff --git a/drivers/gpu/drm/Kconfig b/drivers/gpu/drm/Kconfig
index 7cbcecb8f7df..a6cbdb63f6c7 100644
--- a/drivers/gpu/drm/Kconfig
+++ b/drivers/gpu/drm/Kconfig
@@ -184,6 +184,7 @@ config DRM_PAGE_POOL
 config DRM_TTM
 	tristate
 	depends on DRM && MMU
+	select DRM_PAGE_POOL
 	help
 	  GPU memory management subsystem for devices with multiple
 	  GPU memory types.
	  Will be enabled automatically if a device driver
diff --git a/drivers/gpu/drm/ttm/ttm_pool.c b/drivers/gpu/drm/ttm/ttm_pool.c
index 6e27cb1bf48b..f74ea801d7ab 100644
--- a/drivers/gpu/drm/ttm/ttm_pool.c
+++ b/drivers/gpu/drm/ttm/ttm_pool.c
@@ -39,6 +39,7 @@
 #include <asm/set_memory.h>
 #endif
 
+#include <drm/page_pool.h>
 #include <drm/ttm/ttm_pool.h>
 #include <drm/ttm/ttm_bo_driver.h>
 #include <drm/ttm/ttm_tt.h>
@@ -68,8 +69,6 @@ static struct ttm_pool_type global_dma32_write_combined[MAX_ORDER];
 static struct ttm_pool_type global_dma32_uncached[MAX_ORDER];
 
 static struct mutex shrinker_lock;
-static struct list_head shrinker_list;
-static struct shrinker mm_shrinker;
 
 /* Allocate pages of size 1 << order with the given gfp_flags */
 static struct page *ttm_pool_alloc_page(struct ttm_pool *pool, gfp_t gfp_flags,
@@ -125,8 +124,9 @@ static struct page *ttm_pool_alloc_page(struct ttm_pool *pool, gfp_t gfp_flags,
 }
 
 /* Reset the caching and pages of size 1 << order */
-static void ttm_pool_free_page(struct ttm_pool *pool, enum ttm_caching caching,
-			       unsigned int order, struct page *p)
+static unsigned long ttm_pool_free_page(struct ttm_pool *pool,
+					enum ttm_caching caching,
+					unsigned int order, struct page *p)
 {
 	unsigned long attr = DMA_ATTR_FORCE_CONTIGUOUS;
 	struct ttm_pool_dma *dma;
@@ -142,7 +142,7 @@ static void ttm_pool_free_page(struct ttm_pool *pool, enum ttm_caching caching,
 
 	if (!pool || !pool->use_dma_alloc) {
 		__free_pages(p, order);
-		return;
+		return 1UL << order;
 	}
 
 	if (order)
@@ -153,6 +153,16 @@ static void ttm_pool_free_page(struct ttm_pool *pool, enum ttm_caching caching,
 	dma_free_attrs(pool->dev, (1UL << order) * PAGE_SIZE, vaddr, dma->addr,
 		       attr);
 	kfree(dma);
+	return 1UL << order;
+}
+
+static unsigned long ttm_subpool_free_page(struct drm_page_pool *subpool,
+					   struct page *p)
+{
+	struct ttm_pool_type *pt;
+
+	pt = container_of(subpool, struct ttm_pool_type, subpool);
+	return ttm_pool_free_page(pt->pool, pt->caching, pt->order, p);
 }
 
 /* Apply a new caching to an array of pages */
@@ -216,40 +226,6 @@ static void ttm_pool_unmap(struct ttm_pool *pool, dma_addr_t dma_addr,
 			   DMA_BIDIRECTIONAL);
 }
 
-/* Give pages into a specific pool_type */
-static void ttm_pool_type_give(struct ttm_pool_type *pt, struct page *p)
-{
-	unsigned int i, num_pages = 1 << pt->order;
-
-	for (i = 0; i < num_pages; ++i) {
-		if (PageHighMem(p))
-			clear_highpage(p + i);
-		else
-			clear_page(page_address(p + i));
-	}
-
-	spin_lock(&pt->lock);
-	list_add(&p->lru, &pt->pages);
-	spin_unlock(&pt->lock);
-	atomic_long_add(1 << pt->order, &allocated_pages);
-}
-
-/* Take pages from a specific pool_type, return NULL when nothing available */
-static struct page *ttm_pool_type_take(struct ttm_pool_type *pt)
-{
-	struct page *p;
-
-	spin_lock(&pt->lock);
-	p = list_first_entry_or_null(&pt->pages, typeof(*p), lru);
-	if (p) {
-		atomic_long_sub(1 << pt->order, &allocated_pages);
-		list_del(&p->lru);
-	}
-	spin_unlock(&pt->lock);
-
-	return p;
-}
-
 /* Initialize and add a pool type to the global shrinker list */
 static void ttm_pool_type_init(struct ttm_pool_type *pt, struct ttm_pool *pool,
 			       enum ttm_caching caching, unsigned int order)
@@ -257,25 +233,14 @@ static void ttm_pool_type_init(struct ttm_pool_type *pt, struct ttm_pool *pool,
 	pt->pool = pool;
 	pt->caching = caching;
 	pt->order = order;
-	spin_lock_init(&pt->lock);
-	INIT_LIST_HEAD(&pt->pages);
 
-	mutex_lock(&shrinker_lock);
-	list_add_tail(&pt->shrinker_list, &shrinker_list);
-	mutex_unlock(&shrinker_lock);
+	drm_page_pool_init(&pt->subpool, order, ttm_subpool_free_page);
 }
 
 /* Remove a pool_type from the global shrinker list and free all pages */
 static void
ttm_pool_type_fini(struct ttm_pool_type *pt)
 {
-	struct page *p, *tmp;
-
-	mutex_lock(&shrinker_lock);
-	list_del(&pt->shrinker_list);
-	mutex_unlock(&shrinker_lock);
-
-	list_for_each_entry_safe(p, tmp, &pt->pages, lru)
-		ttm_pool_free_page(pt->pool, pt->caching, pt->order, p);
+	drm_page_pool_fini(&pt->subpool);
 }
 
 /* Return the pool_type to use for the given caching and order */
@@ -306,30 +271,6 @@ static struct ttm_pool_type *ttm_pool_select_type(struct ttm_pool *pool,
 	return NULL;
 }
 
-/* Free pages using the global shrinker list */
-static unsigned int ttm_pool_shrink(void)
-{
-	struct ttm_pool_type *pt;
-	unsigned int num_freed;
-	struct page *p;
-
-	mutex_lock(&shrinker_lock);
-	pt = list_first_entry(&shrinker_list, typeof(*pt), shrinker_list);
-
-	p = ttm_pool_type_take(pt);
-	if (p) {
-		ttm_pool_free_page(pt->pool, pt->caching, pt->order, p);
-		num_freed = 1 << pt->order;
-	} else {
-		num_freed = 0;
-	}
-
-	list_move_tail(&pt->shrinker_list, &shrinker_list);
-	mutex_unlock(&shrinker_lock);
-
-	return num_freed;
-}
-
 /* Return the allocation order based for a page */
 static unsigned int ttm_pool_page_order(struct ttm_pool *pool, struct page *p)
 {
@@ -386,7 +327,7 @@ int ttm_pool_alloc(struct ttm_pool *pool, struct ttm_tt *tt,
 		struct ttm_pool_type *pt;
 
 		pt = ttm_pool_select_type(pool, tt->caching, order);
-		p = pt ? ttm_pool_type_take(pt) : NULL;
+		p = pt ? drm_page_pool_remove(&pt->subpool) : NULL;
 		if (p) {
 			apply_caching = true;
 		} else {
@@ -479,16 +420,13 @@ void ttm_pool_free(struct ttm_pool *pool, struct ttm_tt *tt)
 
 		pt = ttm_pool_select_type(pool, tt->caching, order);
 		if (pt)
-			ttm_pool_type_give(pt, tt->pages[i]);
+			drm_page_pool_add(&pt->subpool, tt->pages[i]);
 		else
 			ttm_pool_free_page(pool, tt->caching, order,
 					   tt->pages[i]);
 
 		i += num_pages;
 	}
-
-	while (atomic_long_read(&allocated_pages) > page_pool_size)
-		ttm_pool_shrink();
 }
 EXPORT_SYMBOL(ttm_pool_free);
 
@@ -537,21 +475,6 @@ void ttm_pool_fini(struct ttm_pool *pool)
 }
 
 #ifdef CONFIG_DEBUG_FS
-/* Count the number of pages available in a pool_type */
-static unsigned int ttm_pool_type_count(struct ttm_pool_type *pt)
-{
-	unsigned int count = 0;
-	struct page *p;
-
-	spin_lock(&pt->lock);
-	/* Only used for debugfs, the overhead doesn't matter */
-	list_for_each_entry(p, &pt->pages, lru)
-		++count;
-	spin_unlock(&pt->lock);
-
-	return count;
-}
-
 /* Dump information about the different pool types */
 static void ttm_pool_debugfs_orders(struct ttm_pool_type *pt,
 				    struct seq_file *m)
@@ -559,7 +482,8 @@ static void ttm_pool_debugfs_orders(struct ttm_pool_type *pt,
 	unsigned int i;
 
 	for (i = 0; i < MAX_ORDER; ++i)
-		seq_printf(m, " %8u", ttm_pool_type_count(&pt[i]));
+		seq_printf(m, " %8lu",
+			   drm_page_pool_get_size(&pt[i].subpool));
 	seq_puts(m, "\n");
 }
 
@@ -609,7 +533,10 @@ int ttm_pool_debugfs(struct ttm_pool *pool, struct seq_file *m)
 	}
 
 	seq_printf(m, "\ntotal\t: %8lu of %8lu\n",
-		   atomic_long_read(&allocated_pages), page_pool_size);
+		   atomic_long_read(&allocated_pages),
+		   drm_page_pool_get_max());
+	seq_printf(m, "(%8lu in non-ttm pools)\n", drm_page_pool_get_total() -
+		   atomic_long_read(&allocated_pages));
 
 	mutex_unlock(&shrinker_lock);
 
@@ -619,28 +546,6 @@ EXPORT_SYMBOL(ttm_pool_debugfs);
 
 #endif
 
-/* As long as pages are available make sure to release at least one */
-static unsigned long ttm_pool_shrinker_scan(struct shrinker *shrink,
-					    struct shrink_control *sc)
-{
-	unsigned long num_freed = 0;
-
-	do
-		num_freed += ttm_pool_shrink();
-	while (!num_freed && atomic_long_read(&allocated_pages));
-
-	return num_freed;
-}
-
-/* Return the number of pages available or SHRINK_EMPTY if we have none */
-static unsigned long ttm_pool_shrinker_count(struct shrinker *shrink,
-					     struct shrink_control *sc)
-{
-	unsigned long num_pages = atomic_long_read(&allocated_pages);
-
-	return num_pages ? num_pages : SHRINK_EMPTY;
-}
-
 /**
  * ttm_pool_mgr_init - Initialize globals
  *
@@ -655,8 +560,9 @@ int ttm_pool_mgr_init(unsigned long num_pages)
 	if (!page_pool_size)
 		page_pool_size = num_pages;
 
+	drm_page_pool_set_max(page_pool_size);
+
 	mutex_init(&shrinker_lock);
-	INIT_LIST_HEAD(&shrinker_list);
 
 	for (i = 0; i < MAX_ORDER; ++i) {
 		ttm_pool_type_init(&global_write_combined[i], NULL,
@@ -669,10 +575,7 @@ int ttm_pool_mgr_init(unsigned long num_pages)
 				   ttm_uncached, i);
 	}
 
-	mm_shrinker.count_objects = ttm_pool_shrinker_count;
-	mm_shrinker.scan_objects = ttm_pool_shrinker_scan;
-	mm_shrinker.seeks = 1;
-	return register_shrinker(&mm_shrinker);
+	return 0;
 }
 
 /**
@@ -691,7 +594,4 @@ void ttm_pool_mgr_fini(void)
 		ttm_pool_type_fini(&global_dma32_write_combined[i]);
 		ttm_pool_type_fini(&global_dma32_uncached[i]);
 	}
-
-	unregister_shrinker(&mm_shrinker);
-	WARN_ON(!list_empty(&shrinker_list));
 }
diff --git a/include/drm/ttm/ttm_pool.h b/include/drm/ttm/ttm_pool.h
index 4321728bdd11..3d975888ce47 100644
--- a/include/drm/ttm/ttm_pool.h
+++ b/include/drm/ttm/ttm_pool.h
@@ -30,6 +30,7 @@
 #include <linux/mmzone.h>
 #include <linux/llist.h>
 #include <linux/spinlock.h>
+#include <drm/page_pool.h>
 
 struct device;
 struct ttm_tt;
@@ -51,10 +52,7 @@ struct ttm_pool_type {
 	unsigned int order;
 	enum ttm_caching caching;
 
-	struct list_head shrinker_list;
-
-	spinlock_t lock;
-	struct list_head pages;
+	struct drm_page_pool subpool;
 };
 
 /**

From patchwork Thu Mar  4 23:20:09 2021
X-Patchwork-Submitter: John Stultz
X-Patchwork-Id: 393459
From: John Stultz <john.stultz@linaro.org>
To: lkml
Subject: [PATCH v8 3/5] dma-buf: heaps: Add deferred-free-helper library code
Date: Thu, 4 Mar 2021 23:20:09 +0000
Message-Id: <20210304232011.1479036-4-john.stultz@linaro.org>
In-Reply-To: <20210304232011.1479036-1-john.stultz@linaro.org>
References: <20210304232011.1479036-1-john.stultz@linaro.org>

This patch provides infrastructure for deferring buffer frees.

This is a feature ION provided which, when used with some form of
a page pool, provides a nice performance boost in an allocation
microbenchmark. The reason it helps is that it allows the
page-zeroing to be done out of the normal allocation/free path,
and pushed off to a kthread.
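
To make the flow concrete, a rough sketch of how a user might consume
this API (the my_buffer_* names are hypothetical and a single order-0
page stands in for a real buffer; this is not code from the patch):

#include <linux/gfp.h>
#include <linux/kernel.h>
#include <linux/slab.h>

#include "deferred-free-helper.h"

struct my_buffer {
	struct page *page;
	struct deferred_freelist_item deferred_free;
};

/* Runs later from the helper's kthread (DF_NORMAL) or from its
 * shrinker under memory pressure (DF_UNDER_PRESSURE), not from
 * the release path itself. */
static void my_buffer_free(struct deferred_freelist_item *item,
			   enum df_reason reason)
{
	struct my_buffer *buf = container_of(item, struct my_buffer,
					     deferred_free);

	__free_page(buf->page);
	kfree(buf);
}

static void my_buffer_release(struct my_buffer *buf)
{
	/* Just queue the item; the actual free happens off the hot path */
	deferred_free(&buf->deferred_free, my_buffer_free, 1);
}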
As not all heaps will find this useful, it's implemented as an
optional helper library that heaps can utilize.

Cc: Daniel Vetter
Cc: Christian Koenig
Cc: Sumit Semwal
Cc: Liam Mark
Cc: Chris Goldsworthy
Cc: Laura Abbott
Cc: Brian Starkey
Cc: Hridya Valsaraju
Cc: Suren Baghdasaryan
Cc: Sandeep Patil
Cc: Daniel Mentz
Cc: Ørjan Eide
Cc: Robin Murphy
Cc: Ezequiel Garcia
Cc: Simon Ser
Cc: James Jones
Cc: linux-media@vger.kernel.org
Cc: dri-devel@lists.freedesktop.org
Signed-off-by: John Stultz <john.stultz@linaro.org>
---
v2:
* Fix sleep in atomic issue from using a mutex, by switching
  to a spinlock as Reported-by: kernel test robot
* Cleanup API to use a reason enum for clarity and add some
  documentation comments as suggested by Suren Baghdasaryan.
v3:
* Minor tweaks so it can be built as a module
* A few small fixups suggested by Daniel Mentz
v4:
* Tweak from Daniel Mentz to make sure the shrinker count/freed
  values are tracked in pages not bytes
v5:
* Fix up page count tracking as suggested by Suren Baghdasaryan
v7:
* Rework accounting to use nr_pages rather than size, as suggested
  by Suren Baghdasaryan
---
 drivers/dma-buf/heaps/Kconfig                |   3 +
 drivers/dma-buf/heaps/Makefile               |   1 +
 drivers/dma-buf/heaps/deferred-free-helper.c | 138 +++++++++++++++++++
 drivers/dma-buf/heaps/deferred-free-helper.h |  55 ++++++++
 4 files changed, 197 insertions(+)
 create mode 100644 drivers/dma-buf/heaps/deferred-free-helper.c
 create mode 100644 drivers/dma-buf/heaps/deferred-free-helper.h

--
2.25.1

diff --git a/drivers/dma-buf/heaps/Kconfig b/drivers/dma-buf/heaps/Kconfig
index a5eef06c4226..f7aef8bc7119 100644
--- a/drivers/dma-buf/heaps/Kconfig
+++ b/drivers/dma-buf/heaps/Kconfig
@@ -1,3 +1,6 @@
+config DMABUF_HEAPS_DEFERRED_FREE
+	tristate
+
 config DMABUF_HEAPS_SYSTEM
 	bool "DMA-BUF System Heap"
 	depends on DMABUF_HEAPS
diff --git a/drivers/dma-buf/heaps/Makefile b/drivers/dma-buf/heaps/Makefile
index 974467791032..4e7839875615 100644
--- a/drivers/dma-buf/heaps/Makefile
+++ b/drivers/dma-buf/heaps/Makefile
@@ -1,3 +1,4 @@
 # SPDX-License-Identifier: GPL-2.0
+obj-$(CONFIG_DMABUF_HEAPS_DEFERRED_FREE) += deferred-free-helper.o
 obj-$(CONFIG_DMABUF_HEAPS_SYSTEM)	+= system_heap.o
 obj-$(CONFIG_DMABUF_HEAPS_CMA)		+= cma_heap.o
diff --git a/drivers/dma-buf/heaps/deferred-free-helper.c b/drivers/dma-buf/heaps/deferred-free-helper.c
new file mode 100644
index 000000000000..e19c8b68dfeb
--- /dev/null
+++ b/drivers/dma-buf/heaps/deferred-free-helper.c
@@ -0,0 +1,138 @@
+// SPDX-License-Identifier: GPL-2.0
+/*
+ * Deferred dmabuf freeing helper
+ *
+ * Copyright (C) 2020 Linaro, Ltd.
+ *
+ * Based on the ION page pool code
+ * Copyright (C) 2011 Google, Inc.
+ */
+
+#include <linux/freezer.h>
+#include <linux/list.h>
+#include <linux/slab.h>
+#include <linux/swap.h>
+#include <linux/sched/signal.h>
+
+#include "deferred-free-helper.h"
+
+static LIST_HEAD(free_list);
+static size_t list_nr_pages;
+wait_queue_head_t freelist_waitqueue;
+struct task_struct *freelist_task;
+static DEFINE_SPINLOCK(free_list_lock);
+
+void deferred_free(struct deferred_freelist_item *item,
+		   void (*free)(struct deferred_freelist_item*,
+				enum df_reason),
+		   size_t nr_pages)
+{
+	unsigned long flags;
+
+	INIT_LIST_HEAD(&item->list);
+	item->nr_pages = nr_pages;
+	item->free = free;
+
+	spin_lock_irqsave(&free_list_lock, flags);
+	list_add(&item->list, &free_list);
+	list_nr_pages += nr_pages;
+	spin_unlock_irqrestore(&free_list_lock, flags);
+	wake_up(&freelist_waitqueue);
+}
+EXPORT_SYMBOL_GPL(deferred_free);
+
+static size_t free_one_item(enum df_reason reason)
+{
+	unsigned long flags;
+	size_t nr_pages;
+	struct deferred_freelist_item *item;
+
+	spin_lock_irqsave(&free_list_lock, flags);
+	if (list_empty(&free_list)) {
+		spin_unlock_irqrestore(&free_list_lock, flags);
+		return 0;
+	}
+	item = list_first_entry(&free_list, struct deferred_freelist_item, list);
+	list_del(&item->list);
+	nr_pages = item->nr_pages;
+	list_nr_pages -= nr_pages;
+	spin_unlock_irqrestore(&free_list_lock, flags);
+
+	item->free(item, reason);
+	return nr_pages;
+}
+
+static unsigned long get_freelist_nr_pages(void)
+{
+	unsigned long nr_pages;
+	unsigned long flags;
+
+	spin_lock_irqsave(&free_list_lock, flags);
+	nr_pages = list_nr_pages;
+	spin_unlock_irqrestore(&free_list_lock, flags);
+	return nr_pages;
+}
+
+static unsigned long freelist_shrink_count(struct shrinker *shrinker,
+					   struct shrink_control *sc)
+{
+	return get_freelist_nr_pages();
+}
+
+static unsigned long freelist_shrink_scan(struct shrinker *shrinker,
+					  struct shrink_control *sc)
+{
+	unsigned long total_freed = 0;
+
+	if (sc->nr_to_scan == 0)
+		return 0;
+
+	while (total_freed < sc->nr_to_scan) {
+		size_t pages_freed = free_one_item(DF_UNDER_PRESSURE);
+
+		if (!pages_freed)
+			break;
+
+		total_freed += pages_freed;
+	}
+
+	return total_freed;
+}
+
+static struct shrinker freelist_shrinker = {
+	.count_objects = freelist_shrink_count,
+	.scan_objects = freelist_shrink_scan,
+	.seeks = DEFAULT_SEEKS,
+	.batch = 0,
+};
+
+static int deferred_free_thread(void *data)
+{
+	while (true) {
+		wait_event_freezable(freelist_waitqueue,
+				     get_freelist_nr_pages() > 0);
+
+		free_one_item(DF_NORMAL);
+	}
+
+	return 0;
+}
+
+static int deferred_freelist_init(void)
+{
+	list_nr_pages = 0;
+
+	init_waitqueue_head(&freelist_waitqueue);
+	freelist_task = kthread_run(deferred_free_thread, NULL,
+				    "%s", "dmabuf-deferred-free-worker");
+	if (IS_ERR(freelist_task)) {
+		pr_err("Creating thread for deferred free failed\n");
+		return -1;
+	}
+	sched_set_normal(freelist_task, 19);
+
+	return register_shrinker(&freelist_shrinker);
+}
+module_init(deferred_freelist_init);
+MODULE_LICENSE("GPL v2");
+
diff --git a/drivers/dma-buf/heaps/deferred-free-helper.h b/drivers/dma-buf/heaps/deferred-free-helper.h
new file mode 100644
index 000000000000..11940328ce3f
--- /dev/null
+++ b/drivers/dma-buf/heaps/deferred-free-helper.h
@@ -0,0 +1,55 @@
+/* SPDX-License-Identifier: GPL-2.0 */
+
+#ifndef DEFERRED_FREE_HELPER_H
+#define DEFERRED_FREE_HELPER_H
+
+/**
+ * df_reason - enum for reason why item was freed
+ *
+ * This provides a reason for why the free function was called
+ * on the item. This is useful when deferred_free is used in
+ * combination with a pagepool, so under pressure the page can
+ * be immediately freed.
+ *
+ * DF_NORMAL:         Normal deferred free
+ *
+ * DF_UNDER_PRESSURE: Free was called because the system
+ *                    is under memory pressure. Usually
+ *                    from a shrinker. Avoid allocating
+ *                    memory in the free call, as it may
+ *                    fail.
+ */
+enum df_reason {
+	DF_NORMAL,
+	DF_UNDER_PRESSURE,
+};
+
+/**
+ * deferred_freelist_item - item structure for deferred freelist
+ *
+ * This is to be added to the structure for whatever you want to
+ * defer freeing on.
+ *
+ * @nr_pages: number of pages used by item to be freed
+ * @free: function pointer to be called when freeing the item
+ * @list: list entry for the deferred list
+ */
+struct deferred_freelist_item {
+	size_t nr_pages;
+	void (*free)(struct deferred_freelist_item *i,
+		     enum df_reason reason);
+	struct list_head list;
+};
+
+/**
+ * deferred_free - call to add item to the deferred free list
+ *
+ * @item: Pointer to deferred_freelist_item field of a structure
+ * @free: Function pointer to the free call
+ * @nr_pages: number of pages to be freed
+ */
+void deferred_free(struct deferred_freelist_item *item,
+		   void (*free)(struct deferred_freelist_item *i,
+				enum df_reason reason),
+		   size_t nr_pages);
+#endif

From patchwork Thu Mar  4 23:20:10 2021
X-Patchwork-Submitter: John Stultz
X-Patchwork-Id: 393460
From: John Stultz <john.stultz@linaro.org>
To: lkml
Subject: [PATCH v8 4/5] dma-buf: system_heap: Add drm pagepool support to system heap
Date: Thu, 4 Mar 2021 23:20:10 +0000
Message-Id: <20210304232011.1479036-5-john.stultz@linaro.org>
In-Reply-To: <20210304232011.1479036-1-john.stultz@linaro.org>
References: <20210304232011.1479036-1-john.stultz@linaro.org>

Utilize the drm pagepool code to speed up allocation
performance.

This is similar to the ION pagepool usage, but tries to
utilize generic code instead of a custom implementation.
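
The heart of the change is a pool-first allocation path; roughly, as a
standalone sketch (pool_first_alloc is a hypothetical name, and the
pools are assumed to have been initialized with drm_page_pool_init()
as in patch 1):

#include <linux/gfp.h>
#include <linux/kernel.h>
#include <drm/page_pool.h>

/* Mirrors the heap's existing 8/4/0 order split */
static const unsigned int orders[] = {8, 4, 0};
static struct drm_page_pool pools[ARRAY_SIZE(orders)];

static struct page *pool_first_alloc(unsigned int i, gfp_t gfp)
{
	/* Prefer a recycled (already-zeroed) page from the pool... */
	struct page *page = drm_page_pool_remove(&pools[i]);

	/* ...and only fall back to the buddy allocator on a pool miss */
	if (!page)
		page = alloc_pages(gfp, orders[i]);
	return page;
}

Freed buffers feed the pools instead of going straight back to the
system, which is what makes subsequent allocations cheap.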
Cc: Daniel Vetter
Cc: Christian Koenig
Cc: Sumit Semwal
Cc: Liam Mark
Cc: Chris Goldsworthy
Cc: Laura Abbott
Cc: Brian Starkey
Cc: Hridya Valsaraju
Cc: Suren Baghdasaryan
Cc: Sandeep Patil
Cc: Daniel Mentz
Cc: Ørjan Eide
Cc: Robin Murphy
Cc: Ezequiel Garcia
Cc: Simon Ser
Cc: James Jones
Cc: linux-media@vger.kernel.org
Cc: dri-devel@lists.freedesktop.org
Signed-off-by: John Stultz <john.stultz@linaro.org>
---
v2:
* Fix build issue caused by selecting PAGE_POOL w/o NET
  as Reported-by: kernel test robot
v3:
* Simplify the page zeroing logic a bit by using kmap_atomic
  instead of vmap as suggested by Daniel Mentz
v5:
* Shift away from networking page pool completely to
  dmabuf page pool implementation
v6:
* Switch again to using the drm_page_pool code shared w/ ttm_pool
v7:
* Slight rework for drm_page_pool changes
v8:
* Rework to use the rewritten drm_page_pool logic
* Drop explicit buffer zeroing, as the drm_page_pool handles that
---
 drivers/dma-buf/heaps/Kconfig       |  1 +
 drivers/dma-buf/heaps/system_heap.c | 27 ++++++++++++++++++++++++---
 2 files changed, 25 insertions(+), 3 deletions(-)

--
2.25.1

diff --git a/drivers/dma-buf/heaps/Kconfig b/drivers/dma-buf/heaps/Kconfig
index f7aef8bc7119..7e28934e0def 100644
--- a/drivers/dma-buf/heaps/Kconfig
+++ b/drivers/dma-buf/heaps/Kconfig
@@ -4,6 +4,7 @@ config DMABUF_HEAPS_DEFERRED_FREE
 config DMABUF_HEAPS_SYSTEM
 	bool "DMA-BUF System Heap"
 	depends on DMABUF_HEAPS
+	select DRM_PAGE_POOL
 	help
 	  Choose this option to enable the system dmabuf heap. The system heap
 	  is backed by pages from the buddy allocator. If in doubt, say Y.
diff --git a/drivers/dma-buf/heaps/system_heap.c b/drivers/dma-buf/heaps/system_heap.c
index 29e49ac17251..006271881d85 100644
--- a/drivers/dma-buf/heaps/system_heap.c
+++ b/drivers/dma-buf/heaps/system_heap.c
@@ -21,6 +21,8 @@
 #include <linux/slab.h>
 #include <linux/vmalloc.h>
 
+#include <drm/page_pool.h>
+
 static struct dma_heap *sys_heap;
 
 struct system_heap_buffer {
@@ -53,6 +55,7 @@ static gfp_t order_flags[] = {HIGH_ORDER_GFP, LOW_ORDER_GFP, LOW_ORDER_GFP};
  */
 static const unsigned int orders[] = {8, 4, 0};
 #define NUM_ORDERS ARRAY_SIZE(orders)
+struct drm_page_pool pools[NUM_ORDERS];
 
 static struct sg_table *dup_sg_table(struct sg_table *table)
 {
@@ -281,18 +284,28 @@ static void system_heap_vunmap(struct dma_buf *dmabuf, struct dma_buf_map *map)
 	dma_buf_map_clear(map);
 }
 
+static unsigned long system_heap_free_pages(struct drm_page_pool *pool, struct page *p)
+{
+	__free_pages(p, pool->order);
+	return 1 << pool->order;
+}
+
 static void system_heap_dma_buf_release(struct dma_buf *dmabuf)
 {
 	struct system_heap_buffer *buffer = dmabuf->priv;
 	struct sg_table *table;
 	struct scatterlist *sg;
-	int i;
+	int i, j;
 
 	table = &buffer->sg_table;
 	for_each_sg(table->sgl, sg, table->nents, i) {
 		struct page *page = sg_page(sg);
 
-		__free_pages(page, compound_order(page));
+		for (j = 0; j < NUM_ORDERS; j++) {
+			if (compound_order(page) == orders[j])
+				break;
+		}
+		drm_page_pool_add(&pools[j], page);
 	}
 	sg_free_table(table);
 	kfree(buffer);
@@ -323,7 +336,9 @@ static struct page *alloc_largest_available(unsigned long size,
 		if (max_order < orders[i])
 			continue;
 
-		page = alloc_pages(order_flags[i], orders[i]);
+		page = drm_page_pool_remove(&pools[i]);
+		if (!page)
+			page = alloc_pages(order_flags[i], orders[i]);
 		if (!page)
 			continue;
 		return page;
@@ -423,6 +438,12 @@ static const struct dma_heap_ops system_heap_ops = {
 static int system_heap_create(void)
 {
 	struct dma_heap_export_info exp_info;
+	int i;
+
+	for (i = 0; i < NUM_ORDERS; i++) {
+		drm_page_pool_init(&pools[i], orders[i],
+				   system_heap_free_pages);
+	}
 
 	exp_info.name = "system";
 	exp_info.ops = &system_heap_ops;

From patchwork Thu Mar  4 23:20:11 2021
X-Patchwork-Submitter: John Stultz
X-Patchwork-Id: 393461
From: John Stultz <john.stultz@linaro.org>
To: lkml
Subject: [PATCH v8 5/5] dma-buf: system_heap: Add deferred freeing to the system heap
Date: Thu, 4 Mar 2021 23:20:11 +0000
Message-Id: <20210304232011.1479036-6-john.stultz@linaro.org>
In-Reply-To: <20210304232011.1479036-1-john.stultz@linaro.org>
References: <20210304232011.1479036-1-john.stultz@linaro.org>

Utilize the deferred free helper library in the system heap.

This provides a nice performance bump and puts the system heap
performance on par with ION.
Cc: Daniel Vetter
Cc: Christian Koenig
Cc: Sumit Semwal
Cc: Liam Mark
Cc: Chris Goldsworthy
Cc: Laura Abbott
Cc: Brian Starkey
Cc: Hridya Valsaraju
Cc: Suren Baghdasaryan
Cc: Sandeep Patil
Cc: Daniel Mentz
Cc: Ørjan Eide
Cc: Robin Murphy
Cc: Ezequiel Garcia
Cc: Simon Ser
Cc: James Jones
Cc: linux-media@vger.kernel.org
Cc: dri-devel@lists.freedesktop.org
Signed-off-by: John Stultz <john.stultz@linaro.org>
---
v2:
* Rework deferred-free api to use reason enum as suggested by
  Suren Baghdasaryan
* Rework for deferred-free api change to use nr_pages rather
  than size as suggested by Suren Baghdasaryan
v8:
* Reworked to drop buffer zeroing logic, as the drm_page_pool
  now handles that.
---
 drivers/dma-buf/heaps/Kconfig       |  1 +
 drivers/dma-buf/heaps/system_heap.c | 28 ++++++++++++++++++++++------
 2 files changed, 23 insertions(+), 6 deletions(-)

--
2.25.1

diff --git a/drivers/dma-buf/heaps/Kconfig b/drivers/dma-buf/heaps/Kconfig
index 7e28934e0def..10632ccfb4a5 100644
--- a/drivers/dma-buf/heaps/Kconfig
+++ b/drivers/dma-buf/heaps/Kconfig
@@ -5,6 +5,7 @@ config DMABUF_HEAPS_SYSTEM
 	bool "DMA-BUF System Heap"
 	depends on DMABUF_HEAPS
 	select DRM_PAGE_POOL
+	select DMABUF_HEAPS_DEFERRED_FREE
 	help
 	  Choose this option to enable the system dmabuf heap. The system heap
 	  is backed by pages from the buddy allocator. If in doubt, say Y.
diff --git a/drivers/dma-buf/heaps/system_heap.c b/drivers/dma-buf/heaps/system_heap.c
index 006271881d85..c753c82fd9f1 100644
--- a/drivers/dma-buf/heaps/system_heap.c
+++ b/drivers/dma-buf/heaps/system_heap.c
@@ -22,6 +22,7 @@
 #include <linux/vmalloc.h>
 #include <drm/page_pool.h>
+#include "deferred-free-helper.h"
 
 static struct dma_heap *sys_heap;
 
@@ -33,6 +34,7 @@ struct system_heap_buffer {
 	struct sg_table sg_table;
 	int vmap_cnt;
 	void *vaddr;
+	struct deferred_freelist_item deferred_free;
 };
 
 struct dma_heap_attachment {
@@ -290,27 +292,41 @@ static unsigned long system_heap_free_pages(struct drm_page_pool *pool, struct p
 	return 1 << pool->order;
 }
 
-static void system_heap_dma_buf_release(struct dma_buf *dmabuf)
+static void system_heap_buf_free(struct deferred_freelist_item *item,
+				 enum df_reason reason)
 {
-	struct system_heap_buffer *buffer = dmabuf->priv;
+	struct system_heap_buffer *buffer;
 	struct sg_table *table;
 	struct scatterlist *sg;
 	int i, j;
 
+	buffer = container_of(item, struct system_heap_buffer, deferred_free);
 	table = &buffer->sg_table;
 	for_each_sg(table->sgl, sg, table->nents, i) {
 		struct page *page = sg_page(sg);
 
-		for (j = 0; j < NUM_ORDERS; j++) {
-			if (compound_order(page) == orders[j])
-				break;
+		if (reason == DF_UNDER_PRESSURE) {
+			__free_pages(page, compound_order(page));
+		} else {
+			for (j = 0; j < NUM_ORDERS; j++) {
+				if (compound_order(page) == orders[j])
+					break;
+			}
+			drm_page_pool_add(&pools[j], page);
 		}
-		drm_page_pool_add(&pools[j], page);
 	}
 	sg_free_table(table);
 	kfree(buffer);
 }
 
+static void system_heap_dma_buf_release(struct dma_buf *dmabuf)
+{
+	struct system_heap_buffer *buffer = dmabuf->priv;
+	int npages = PAGE_ALIGN(buffer->len) / PAGE_SIZE;
+
+	deferred_free(&buffer->deferred_free, system_heap_buf_free, npages);
+}
+
 static const struct dma_buf_ops system_heap_buf_ops = {
 	.attach = system_heap_attach,
 	.detach = system_heap_detach,