From patchwork Fri Feb 5 08:06:15 2021
X-Patchwork-Submitter: John Stultz <john.stultz@linaro.org>
X-Patchwork-Id: 376886
From: John Stultz <john.stultz@linaro.org>
To: lkml
Subject: [RFC][PATCH v6 1/7] drm: Add a sharable drm page-pool implementation
Date: Fri, 5 Feb 2021 08:06:15 +0000
Message-Id: <20210205080621.3102035-2-john.stultz@linaro.org>
In-Reply-To: <20210205080621.3102035-1-john.stultz@linaro.org>

This adds a shrinker-controlled page pool, closely following the
ttm_pool logic, abstracted out a bit so it can also be used by
other, non-TTM drivers.
Cc: Daniel Vetter
Cc: Christian Koenig
Cc: Sumit Semwal
Cc: Liam Mark
Cc: Chris Goldsworthy
Cc: Laura Abbott
Cc: Brian Starkey
Cc: Hridya Valsaraju
Cc: Suren Baghdasaryan
Cc: Sandeep Patil
Cc: Daniel Mentz
Cc: Ørjan Eide
Cc: Robin Murphy
Cc: Ezequiel Garcia
Cc: Simon Ser
Cc: James Jones
Cc: linux-media@vger.kernel.org
Cc: dri-devel@lists.freedesktop.org
Signed-off-by: John Stultz
---
 drivers/gpu/drm/Kconfig     |   4 +
 drivers/gpu/drm/Makefile    |   1 +
 drivers/gpu/drm/page_pool.c | 220 ++++++++++++++++++++++++++++++++++++
 include/drm/page_pool.h     |  54 +++++++++
 4 files changed, 279 insertions(+)
 create mode 100644 drivers/gpu/drm/page_pool.c
 create mode 100644 include/drm/page_pool.h

--
2.25.1

diff --git a/drivers/gpu/drm/Kconfig b/drivers/gpu/drm/Kconfig
index 0973f408d75f..d16bf340ed2e 100644
--- a/drivers/gpu/drm/Kconfig
+++ b/drivers/gpu/drm/Kconfig
@@ -174,6 +174,10 @@ config DRM_DP_CEC
	  Note: not all adapters support this feature, and even for those
	  that do support this they often do not hook up the CEC pin.
 
+config DRM_PAGE_POOL
+	bool
+	depends on DRM
+
 config DRM_TTM
	tristate
	depends on DRM && MMU
diff --git a/drivers/gpu/drm/Makefile b/drivers/gpu/drm/Makefile
index fefaff4c832d..877e0111ed34 100644
--- a/drivers/gpu/drm/Makefile
+++ b/drivers/gpu/drm/Makefile
@@ -32,6 +32,7 @@ drm-$(CONFIG_AGP) += drm_agpsupport.o
 drm-$(CONFIG_PCI) += drm_pci.o
 drm-$(CONFIG_DEBUG_FS) += drm_debugfs.o drm_debugfs_crc.o
 drm-$(CONFIG_DRM_LOAD_EDID_FIRMWARE) += drm_edid_load.o
+drm-$(CONFIG_DRM_PAGE_POOL) += page_pool.o
 
 drm_vram_helper-y := drm_gem_vram_helper.o
 obj-$(CONFIG_DRM_VRAM_HELPER) += drm_vram_helper.o
diff --git a/drivers/gpu/drm/page_pool.c b/drivers/gpu/drm/page_pool.c
new file mode 100644
index 000000000000..2139f86e6ca7
--- /dev/null
+++ b/drivers/gpu/drm/page_pool.c
@@ -0,0 +1,220 @@
+// SPDX-License-Identifier: GPL-2.0
+/*
+ * DRM page pool system
+ *
+ * Copyright (C) 2020 Linaro Ltd.
+ *
+ * Based on the ION page pool code
+ * Copyright (C) 2011 Google, Inc.
+ * As well as the ttm_pool code
+ * Copyright (C) 2020 Advanced Micro Devices, Inc.
+ */
+
+#include
+#include
+#include
+#include
+#include
+#include
+
+static LIST_HEAD(pool_list);
+static DEFINE_MUTEX(pool_list_lock);
+static atomic_long_t total_pages;
+static unsigned long page_pool_max;
+MODULE_PARM_DESC(page_pool_max, "Number of pages in the WC/UC/DMA pool");
+module_param(page_pool_max, ulong, 0644);
+
+void drm_page_pool_set_max(unsigned long max)
+{
+	/* only write once */
+	if (!page_pool_max)
+		page_pool_max = max;
+}
+
+unsigned long drm_page_pool_get_max(void)
+{
+	return page_pool_max;
+}
+
+unsigned long drm_page_pool_get_total(void)
+{
+	return atomic_long_read(&total_pages);
+}
+
+int drm_page_pool_get_size(struct drm_page_pool *pool)
+{
+	int ret;
+
+	spin_lock(&pool->lock);
+	ret = pool->count;
+	spin_unlock(&pool->lock);
+	return ret;
+}
+
+static inline unsigned int drm_page_pool_free_pages(struct drm_page_pool *pool,
+						    struct page *page)
+{
+	return pool->free(page, pool->order);
+}
+
+static int drm_page_pool_shrink_one(void);
+
+void drm_page_pool_add(struct drm_page_pool *pool, struct page *page)
+{
+	spin_lock(&pool->lock);
+	list_add_tail(&page->lru, &pool->items);
+	pool->count++;
+	atomic_long_add(1 << pool->order, &total_pages);
+	spin_unlock(&pool->lock);
+
+	mod_node_page_state(page_pgdat(page), NR_KERNEL_MISC_RECLAIMABLE,
+			    1 << pool->order);
+
+	/* make sure we don't grow too large */
+	while (page_pool_max && atomic_long_read(&total_pages) > page_pool_max)
+		drm_page_pool_shrink_one();
+}
+EXPORT_SYMBOL_GPL(drm_page_pool_add);
+
+static struct page *drm_page_pool_remove(struct drm_page_pool *pool)
+{
+	struct page *page;
+
+	if (!pool->count)
+		return NULL;
+
+	page = list_first_entry(&pool->items, struct page, lru);
+	pool->count--;
+	atomic_long_sub(1 << pool->order, &total_pages);
+
+	list_del(&page->lru);
+	mod_node_page_state(page_pgdat(page), NR_KERNEL_MISC_RECLAIMABLE,
+			    -(1 << pool->order));
+	return page;
+}
+
+struct page *drm_page_pool_fetch(struct drm_page_pool *pool)
+{
+	struct page *page = NULL;
+
+	if (!pool) {
+		WARN_ON(!pool);
+		return NULL;
+	}
+
+	spin_lock(&pool->lock);
+	page = drm_page_pool_remove(pool);
+	spin_unlock(&pool->lock);
+
+	return page;
+}
+EXPORT_SYMBOL_GPL(drm_page_pool_fetch);
+
+struct drm_page_pool *drm_page_pool_create(unsigned int order,
+					   int (*free_page)(struct page *p, unsigned int order))
+{
+	struct drm_page_pool *pool = kmalloc(sizeof(*pool), GFP_KERNEL);
+
+	if (!pool)
+		return NULL;
+
+	pool->count = 0;
+	INIT_LIST_HEAD(&pool->items);
+	pool->order = order;
+	pool->free = free_page;
+	spin_lock_init(&pool->lock);
+	INIT_LIST_HEAD(&pool->list);
+
+	mutex_lock(&pool_list_lock);
+	list_add(&pool->list, &pool_list);
+	mutex_unlock(&pool_list_lock);
+
+	return pool;
+}
+EXPORT_SYMBOL_GPL(drm_page_pool_create);
+
+void drm_page_pool_destroy(struct drm_page_pool *pool)
+{
+	struct page *page;
+
+	/* Remove us from the pool list */
+	mutex_lock(&pool_list_lock);
+	list_del(&pool->list);
+	mutex_unlock(&pool_list_lock);
+
+	/* Free any remaining pages in the pool */
+	spin_lock(&pool->lock);
+	while (pool->count) {
+		page = drm_page_pool_remove(pool);
+		spin_unlock(&pool->lock);
+		drm_page_pool_free_pages(pool, page);
+		spin_lock(&pool->lock);
+	}
+	spin_unlock(&pool->lock);
+
+	kfree(pool);
+}
+EXPORT_SYMBOL_GPL(drm_page_pool_destroy);
+
+static int drm_page_pool_shrink_one(void)
+{
+	struct drm_page_pool *pool;
+	struct page *page;
+	int nr_freed = 0;
+
+	mutex_lock(&pool_list_lock);
+	pool = list_first_entry(&pool_list, typeof(*pool), list);
+
+	spin_lock(&pool->lock);
+	page = drm_page_pool_remove(pool);
+	spin_unlock(&pool->lock);
+
+	if (page)
+		nr_freed = drm_page_pool_free_pages(pool, page);
+
+	list_move_tail(&pool->list, &pool_list);
+	mutex_unlock(&pool_list_lock);
+
+	return nr_freed;
+}
+
+static unsigned long drm_page_pool_shrink_count(struct shrinker *shrinker,
+						struct shrink_control *sc)
+{
+	unsigned long count = atomic_long_read(&total_pages);
+
+	return count ? count : SHRINK_EMPTY;
+}
+
+static unsigned long drm_page_pool_shrink_scan(struct shrinker *shrinker,
+					       struct shrink_control *sc)
+{
+	int to_scan = sc->nr_to_scan;
+	int nr_total = 0;
+
+	if (to_scan == 0)
+		return 0;
+
+	do {
+		int nr_freed = drm_page_pool_shrink_one();
+
+		to_scan -= nr_freed;
+		nr_total += nr_freed;
+	} while (to_scan >= 0 && atomic_long_read(&total_pages));
+
+	return nr_total;
+}
+
+static struct shrinker pool_shrinker = {
+	.count_objects = drm_page_pool_shrink_count,
+	.scan_objects = drm_page_pool_shrink_scan,
+	.seeks = 1,
+	.batch = 0,
+};
+
+int drm_page_pool_init_shrinker(void)
+{
+	return register_shrinker(&pool_shrinker);
+}
+module_init(drm_page_pool_init_shrinker);
+MODULE_LICENSE("GPL v2");
diff --git a/include/drm/page_pool.h b/include/drm/page_pool.h
new file mode 100644
index 000000000000..47e240b2bc69
--- /dev/null
+++ b/include/drm/page_pool.h
@@ -0,0 +1,54 @@
+/* SPDX-License-Identifier: GPL-2.0 OR MIT */
+/*
+ * Copyright 2020 Advanced Micro Devices, Inc.
+ *
+ * Permission is hereby granted, free of charge, to any person obtaining a
+ * copy of this software and associated documentation files (the "Software"),
+ * to deal in the Software without restriction, including without limitation
+ * the rights to use, copy, modify, merge, publish, distribute, sublicense,
+ * and/or sell copies of the Software, and to permit persons to whom the
+ * Software is furnished to do so, subject to the following conditions:
+ *
+ * The above copyright notice and this permission notice shall be included in
+ * all copies or substantial portions of the Software.
+ *
+ * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
+ * IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
+ * FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL
+ * THE COPYRIGHT HOLDER(S) OR AUTHOR(S) BE LIABLE FOR ANY CLAIM, DAMAGES OR
+ * OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE,
+ * ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR
+ * OTHER DEALINGS IN THE SOFTWARE.
+ *
+ * Authors: Christian König
+ */
+
+#ifndef _DRM_PAGE_POOL_H_
+#define _DRM_PAGE_POOL_H_
+
+#include
+#include
+#include
+
+struct drm_page_pool {
+	int count;
+	struct list_head items;
+
+	int order;
+	int (*free)(struct page *p, unsigned int order);
+
+	spinlock_t lock;
+	struct list_head list;
+};
+
+void drm_page_pool_set_max(unsigned long max);
+unsigned long drm_page_pool_get_max(void);
+unsigned long drm_page_pool_get_total(void);
+int drm_page_pool_get_size(struct drm_page_pool *pool);
+struct page *drm_page_pool_fetch(struct drm_page_pool *pool);
+void drm_page_pool_add(struct drm_page_pool *pool, struct page *page);
+struct drm_page_pool *drm_page_pool_create(unsigned int order,
+					   int (*free_page)(struct page *p, unsigned int order));
+void drm_page_pool_destroy(struct drm_page_pool *pool);
+
+#endif

From patchwork Fri Feb 5 08:06:16 2021
X-Patchwork-Submitter: John Stultz <john.stultz@linaro.org>
X-Patchwork-Id: 376887
From: John Stultz <john.stultz@linaro.org>
To: lkml
Subject: [RFC][PATCH v6 2/7] drm: ttm_pool: Rename the ttm_pool_dma structure to ttm_pool_page_dat
Date: Fri, 5 Feb 2021 08:06:16 +0000
Message-Id:
<20210205080621.3102035-3-john.stultz@linaro.org>
In-Reply-To: <20210205080621.3102035-1-john.stultz@linaro.org>

This patch simply renames the ttm_pool_dma structure to
ttm_pool_page_dat, as we will extend it to store more than just
DMA-related info in it.

Signed-off-by: John Stultz
---
 drivers/gpu/drm/ttm/ttm_pool.c | 37 +++++++++++++++++-----------------
 1 file changed, 18 insertions(+), 19 deletions(-)

--
2.25.1

diff --git a/drivers/gpu/drm/ttm/ttm_pool.c b/drivers/gpu/drm/ttm/ttm_pool.c
index 11e0313db0ea..c0274e256be3 100644
--- a/drivers/gpu/drm/ttm/ttm_pool.c
+++ b/drivers/gpu/drm/ttm/ttm_pool.c
@@ -37,18 +37,17 @@
 #ifdef CONFIG_X86
 #include
 #endif
-
 #include
 #include
 #include
 
 /**
- * struct ttm_pool_dma - Helper object for coherent DMA mappings
+ * struct ttm_pool_page_dat - Helper object for coherent DMA mappings
  *
  * @addr: original DMA address returned for the mapping
  * @vaddr: original vaddr return for the mapping and order in the lower bits
  */
-struct ttm_pool_dma {
+struct ttm_pool_page_dat {
	dma_addr_t addr;
	unsigned long vaddr;
 };
@@ -75,7 +74,7 @@ static struct page *ttm_pool_alloc_page(struct ttm_pool *pool, gfp_t gfp_flags,
					unsigned int order)
 {
	unsigned long attr = DMA_ATTR_FORCE_CONTIGUOUS;
-	struct ttm_pool_dma *dma;
+	struct ttm_pool_page_dat *dat;
	struct page *p;
	void *vaddr;
@@ -94,15 +93,15 @@ static struct page *ttm_pool_alloc_page(struct ttm_pool *pool, gfp_t gfp_flags,
		return p;
	}
 
-	dma = kmalloc(sizeof(*dma), GFP_KERNEL);
-	if (!dma)
+	dat = kmalloc(sizeof(*dat), GFP_KERNEL);
+	if (!dat)
		return NULL;
 
	if (order)
		attr |= DMA_ATTR_NO_WARN;
 
	vaddr = dma_alloc_attrs(pool->dev, (1ULL << order) * PAGE_SIZE,
-				&dma->addr, gfp_flags, attr);
+				&dat->addr, gfp_flags, attr);
	if (!vaddr)
		goto error_free;
@@ -114,12 +113,12 @@ static struct page *ttm_pool_alloc_page(struct ttm_pool *pool, gfp_t gfp_flags,
	else
		p = virt_to_page(vaddr);
 
-	dma->vaddr = (unsigned long)vaddr | order;
-	p->private = (unsigned long)dma;
+	dat->vaddr = (unsigned long)vaddr | order;
+	p->private = (unsigned long)dat;
	return p;
 
 error_free:
-	kfree(dma);
+	kfree(dat);
	return NULL;
 }
@@ -128,7 +127,7 @@ static void ttm_pool_free_page(struct ttm_pool *pool, enum ttm_caching caching,
			       unsigned int order, struct page *p)
 {
	unsigned long attr = DMA_ATTR_FORCE_CONTIGUOUS;
-	struct ttm_pool_dma *dma;
+	struct ttm_pool_page_dat *dat;
	void *vaddr;
 
 #ifdef CONFIG_X86
@@ -147,11 +146,11 @@ static void ttm_pool_free_page(struct ttm_pool *pool, enum ttm_caching caching,
	if (order)
		attr |= DMA_ATTR_NO_WARN;
 
-	dma = (void *)p->private;
-	vaddr = (void *)(dma->vaddr & PAGE_MASK);
-	dma_free_attrs(pool->dev, (1UL << order) * PAGE_SIZE, vaddr, dma->addr,
+	dat = (void *)p->private;
+	vaddr = (void *)(dat->vaddr & PAGE_MASK);
+	dma_free_attrs(pool->dev, (1UL << order) * PAGE_SIZE, vaddr, dat->addr,
		       attr);
-	kfree(dma);
+	kfree(dat);
 }
 
 /* Apply a new caching to an array of pages */
@@ -184,9 +183,9 @@ static int ttm_pool_map(struct ttm_pool *pool, unsigned int order,
	unsigned int i;
 
	if (pool->use_dma_alloc) {
-		struct ttm_pool_dma *dma = (void *)p->private;
+		struct ttm_pool_page_dat *dat = (void *)p->private;
 
-		addr = dma->addr;
+		addr = dat->addr;
	} else {
		size_t size = (1ULL << order) * PAGE_SIZE;
@@ -324,9 +323,9 @@ static unsigned int ttm_pool_shrink(void)
 static unsigned int ttm_pool_page_order(struct ttm_pool *pool, struct page *p)
 {
	if (pool->use_dma_alloc) {
-		struct ttm_pool_dma *dma = (void *)p->private;
+		struct ttm_pool_page_dat *dat = (void *)p->private;
 
-		return dma->vaddr & ~PAGE_MASK;
+		return dat->vaddr & ~PAGE_MASK;
	}
 
	return p->private;

From patchwork Fri Feb 5 08:06:17 2021
X-Patchwork-Submitter: John Stultz <john.stultz@linaro.org>
X-Patchwork-Id: 376888
From: John Stultz <john.stultz@linaro.org>
To: lkml
Subject: [RFC][PATCH v6 3/7] drm: ttm_pool: Rework ttm_pool_free_page to allow us to use it as a function pointer
Date: Fri, 5 Feb 2021 08:06:17 +0000
Message-Id: <20210205080621.3102035-4-john.stultz@linaro.org>
In-Reply-To: <20210205080621.3102035-1-john.stultz@linaro.org>

This refactors ttm_pool_free_page(): by adding extra entries to
ttm_pool_page_dat and using it for all allocations, we can simplify
the arguments that need to be passed to ttm_pool_free_page(). This
is critical for allowing the free function to be called by the
sharable drm_page_pool logic.
Cc: Daniel Vetter Cc: Christian Koenig Cc: Sumit Semwal Cc: Liam Mark Cc: Chris Goldsworthy Cc: Laura Abbott Cc: Brian Starkey Cc: Hridya Valsaraju Cc: Suren Baghdasaryan Cc: Sandeep Patil Cc: Daniel Mentz Cc: Ørjan Eide Cc: Robin Murphy Cc: Ezequiel Garcia Cc: Simon Ser Cc: James Jones Cc: linux-media@vger.kernel.org Cc: dri-devel@lists.freedesktop.org Signed-off-by: John Stultz --- drivers/gpu/drm/ttm/ttm_pool.c | 60 ++++++++++++++++++---------------- 1 file changed, 32 insertions(+), 28 deletions(-) -- 2.25.1 diff --git a/drivers/gpu/drm/ttm/ttm_pool.c b/drivers/gpu/drm/ttm/ttm_pool.c index c0274e256be3..eca36678f967 100644 --- a/drivers/gpu/drm/ttm/ttm_pool.c +++ b/drivers/gpu/drm/ttm/ttm_pool.c @@ -44,10 +44,14 @@ /** * struct ttm_pool_page_dat - Helper object for coherent DMA mappings * + * @pool: ttm_pool pointer the page was allocated by + * @caching: the caching value the allocated page was configured for * @addr: original DMA address returned for the mapping * @vaddr: original vaddr return for the mapping and order in the lower bits */ struct ttm_pool_page_dat { + struct ttm_pool *pool; + enum ttm_caching caching; dma_addr_t addr; unsigned long vaddr; }; @@ -71,13 +75,20 @@ static struct shrinker mm_shrinker; /* Allocate pages of size 1 << order with the given gfp_flags */ static struct page *ttm_pool_alloc_page(struct ttm_pool *pool, gfp_t gfp_flags, - unsigned int order) + unsigned int order, enum ttm_caching caching) { unsigned long attr = DMA_ATTR_FORCE_CONTIGUOUS; struct ttm_pool_page_dat *dat; struct page *p; void *vaddr; + dat = kmalloc(sizeof(*dat), GFP_KERNEL); + if (!dat) + return NULL; + + dat->pool = pool; + dat->caching = caching; + /* Don't set the __GFP_COMP flag for higher order allocations. * Mapping pages directly into an userspace process and calling * put_page() on a TTM allocated page is illegal. 
@@ -88,15 +99,13 @@ static struct page *ttm_pool_alloc_page(struct ttm_pool *pool, gfp_t gfp_flags, if (!pool->use_dma_alloc) { p = alloc_pages(gfp_flags, order); - if (p) - p->private = order; + if (!p) + goto error_free; + dat->vaddr = order; + p->private = (unsigned long)dat; return p; } - dat = kmalloc(sizeof(*dat), GFP_KERNEL); - if (!dat) - return NULL; - if (order) attr |= DMA_ATTR_NO_WARN; @@ -123,34 +132,34 @@ static struct page *ttm_pool_alloc_page(struct ttm_pool *pool, gfp_t gfp_flags, } /* Reset the caching and pages of size 1 << order */ -static void ttm_pool_free_page(struct ttm_pool *pool, enum ttm_caching caching, - unsigned int order, struct page *p) +static int ttm_pool_free_page(struct page *p, unsigned int order) { unsigned long attr = DMA_ATTR_FORCE_CONTIGUOUS; - struct ttm_pool_page_dat *dat; + struct ttm_pool_page_dat *dat = (void *)p->private; void *vaddr; #ifdef CONFIG_X86 /* We don't care that set_pages_wb is inefficient here. This is only * used when we have to shrink and CPU overhead is irrelevant then. 
*/ - if (caching != ttm_cached && !PageHighMem(p)) + if (dat->caching != ttm_cached && !PageHighMem(p)) set_pages_wb(p, 1 << order); #endif - if (!pool || !pool->use_dma_alloc) { + if (!dat->pool || !dat->pool->use_dma_alloc) { __free_pages(p, order); - return; + goto out; } if (order) attr |= DMA_ATTR_NO_WARN; - dat = (void *)p->private; vaddr = (void *)(dat->vaddr & PAGE_MASK); - dma_free_attrs(pool->dev, (1UL << order) * PAGE_SIZE, vaddr, dat->addr, + dma_free_attrs(dat->pool->dev, (1UL << order) * PAGE_SIZE, vaddr, dat->addr, attr); +out: kfree(dat); + return 1 << order; } /* Apply a new caching to an array of pages */ @@ -264,7 +273,7 @@ static void ttm_pool_type_fini(struct ttm_pool_type *pt) mutex_unlock(&shrinker_lock); list_for_each_entry_safe(p, tmp, &pt->pages, lru) - ttm_pool_free_page(pt->pool, pt->caching, pt->order, p); + ttm_pool_free_page(p, pt->order); } /* Return the pool_type to use for the given caching and order */ @@ -307,7 +316,7 @@ static unsigned int ttm_pool_shrink(void) p = ttm_pool_type_take(pt); if (p) { - ttm_pool_free_page(pt->pool, pt->caching, pt->order, p); + ttm_pool_free_page(p, pt->order); num_freed = 1 << pt->order; } else { num_freed = 0; @@ -322,13 +331,9 @@ static unsigned int ttm_pool_shrink(void) /* Return the allocation order based for a page */ static unsigned int ttm_pool_page_order(struct ttm_pool *pool, struct page *p) { - if (pool->use_dma_alloc) { - struct ttm_pool_page_dat *dat = (void *)p->private; - - return dat->vaddr & ~PAGE_MASK; - } + struct ttm_pool_page_dat *dat = (void *)p->private; - return p->private; + return dat->vaddr & ~PAGE_MASK; } /** @@ -379,7 +384,7 @@ int ttm_pool_alloc(struct ttm_pool *pool, struct ttm_tt *tt, if (p) { apply_caching = true; } else { - p = ttm_pool_alloc_page(pool, gfp_flags, order); + p = ttm_pool_alloc_page(pool, gfp_flags, order, tt->caching); if (p && PageHighMem(p)) apply_caching = true; } @@ -428,13 +433,13 @@ int ttm_pool_alloc(struct ttm_pool *pool, struct ttm_tt *tt, 
ttm_mem_global_free_page(&ttm_mem_glob, p, (1 << order) * PAGE_SIZE); error_free_page: - ttm_pool_free_page(pool, tt->caching, order, p); + ttm_pool_free_page(p, order); error_free_all: num_pages = tt->num_pages - num_pages; for (i = 0; i < num_pages; ) { order = ttm_pool_page_order(pool, tt->pages[i]); - ttm_pool_free_page(pool, tt->caching, order, tt->pages[i]); + ttm_pool_free_page(tt->pages[i], order); i += 1 << order; } @@ -470,8 +475,7 @@ void ttm_pool_free(struct ttm_pool *pool, struct ttm_tt *tt) if (pt) ttm_pool_type_give(pt, tt->pages[i]); else - ttm_pool_free_page(pool, tt->caching, order, - tt->pages[i]); + ttm_pool_free_page(tt->pages[i], order); i += num_pages; } From patchwork Fri Feb 5 08:06:18 2021 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 8bit X-Patchwork-Submitter: John Stultz X-Patchwork-Id: 376889 From: John Stultz To: lkml Cc: John Stultz , Daniel Vetter , Christian Koenig , Sumit Semwal , Liam Mark , Chris Goldsworthy , Laura Abbott , Brian Starkey , Hridya Valsaraju , Suren Baghdasaryan , Sandeep Patil , Daniel Mentz , =?utf-8?q?=C3=98rjan_Eide?= , Robin Murphy , Ezequiel Garcia , Simon Ser , James Jones , linux-media@vger.kernel.org, dri-devel@lists.freedesktop.org Subject: [RFC][PATCH v6 4/7] drm: ttm_pool: Rework ttm_pool to use drm_page_pool Date: Fri, 5 Feb 2021 08:06:18 +0000 Message-Id: <20210205080621.3102035-5-john.stultz@linaro.org> X-Mailer: git-send-email 2.25.1 In-Reply-To: <20210205080621.3102035-1-john.stultz@linaro.org> References: <20210205080621.3102035-1-john.stultz@linaro.org> MIME-Version: 1.0 This patch reworks the ttm_pool logic to utilize the recently added drm_page_pool code.
Basically all the ttm_pool_type structures are replaced with drm_page_pool pointers, and since the drm_page_pool logic has its own shrinker, we can remove all of the ttm_pool shrinker logic.

NOTE: There is one mismatch in the interfaces I'm not totally happy with. The ttm_pool tracks all of its pooled pages across a number of different pools, and tries to keep this total under the specified page_pool_size value. With the drm_page_pool, there may be other users, yet there is still only one global shrinker list of pools. So we can't easily shrink the ttm pools below the ttm-specified size without potentially doing a lot of shrinking to other, non-ttm pools. So either we can:

1) Try to split it so each user of drm_page_pools manages its own pool shrinking.

2) Push the max value into the drm_page_pool, and have it manage shrinking to fit under that global max. Then share those size/max values out so the ttm_pool debug output can have more context.

I've taken the second path in this patch set, but wanted to call it out so folks could look closely. Thoughts would be greatly appreciated here!
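The trade-off behind option 2 can be sketched in plain userspace C. All names below are invented for illustration — this is not the actual drm_page_pool interface — but it shows why a single shared max is awkward: adding pages to one user's pool can trigger eviction from a different user's pool.

```c
#include <assert.h>

/*
 * Hypothetical sketch of option 2: every pool shares one global page
 * count and one global max, so keeping the total under the max may
 * shrink a pool belonging to a different user.
 */

#define MAX_POOLS 4

struct fake_pool {
	unsigned long count;		/* pages cached in this pool */
};

static struct fake_pool pools[MAX_POOLS];
static unsigned long pool_total;	/* pages across all pools */
static unsigned long pool_max;		/* shared global limit */

static void pool_set_max(unsigned long max)
{
	pool_max = max;
}

static unsigned long pool_get_total(void)
{
	return pool_total;
}

/* Cache one page in @pool, then shrink globally until under the max. */
static void pool_add(struct fake_pool *pool)
{
	int i;

	pool->count++;
	pool_total++;

	while (pool_total > pool_max) {
		/* Evict from the first non-empty pool -- which may not
		 * be the pool the caller just added to. */
		for (i = 0; i < MAX_POOLS; i++) {
			if (pools[i].count) {
				pools[i].count--;
				pool_total--;
				break;
			}
		}
	}
}
```

In this model a caller such as ttm_pool can only report the shared size/max pair for debug output (as the debugfs change in this patch does); it cannot guarantee its own pages survive another user's allocations.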
Cc: Daniel Vetter Cc: Christian Koenig Cc: Sumit Semwal Cc: Liam Mark Cc: Chris Goldsworthy Cc: Laura Abbott Cc: Brian Starkey Cc: Hridya Valsaraju Cc: Suren Baghdasaryan Cc: Sandeep Patil Cc: Daniel Mentz Cc: Ørjan Eide Cc: Robin Murphy Cc: Ezequiel Garcia Cc: Simon Ser Cc: James Jones Cc: linux-media@vger.kernel.org Cc: dri-devel@lists.freedesktop.org Signed-off-by: John Stultz --- drivers/gpu/drm/Kconfig | 1 + drivers/gpu/drm/ttm/ttm_pool.c | 199 +++++++-------------------------- include/drm/ttm/ttm_pool.h | 23 +--- 3 files changed, 41 insertions(+), 182 deletions(-) -- 2.25.1 diff --git a/drivers/gpu/drm/Kconfig b/drivers/gpu/drm/Kconfig index d16bf340ed2e..d427abefabfb 100644 --- a/drivers/gpu/drm/Kconfig +++ b/drivers/gpu/drm/Kconfig @@ -181,6 +181,7 @@ config DRM_PAGE_POOL config DRM_TTM tristate depends on DRM && MMU + select DRM_PAGE_POOL help GPU memory management subsystem for devices with multiple GPU memory types. Will be enabled automatically if a device driver diff --git a/drivers/gpu/drm/ttm/ttm_pool.c b/drivers/gpu/drm/ttm/ttm_pool.c index eca36678f967..dbbaf55ca5df 100644 --- a/drivers/gpu/drm/ttm/ttm_pool.c +++ b/drivers/gpu/drm/ttm/ttm_pool.c @@ -37,6 +37,7 @@ #ifdef CONFIG_X86 #include #endif +#include #include #include #include @@ -63,15 +64,13 @@ module_param(page_pool_size, ulong, 0644); static atomic_long_t allocated_pages; -static struct ttm_pool_type global_write_combined[MAX_ORDER]; -static struct ttm_pool_type global_uncached[MAX_ORDER]; +static struct drm_page_pool *global_write_combined[MAX_ORDER]; +static struct drm_page_pool *global_uncached[MAX_ORDER]; -static struct ttm_pool_type global_dma32_write_combined[MAX_ORDER]; -static struct ttm_pool_type global_dma32_uncached[MAX_ORDER]; +static struct drm_page_pool *global_dma32_write_combined[MAX_ORDER]; +static struct drm_page_pool *global_dma32_uncached[MAX_ORDER]; static struct mutex shrinker_lock; -static struct list_head shrinker_list; -static struct shrinker mm_shrinker; /* 
Allocate pages of size 1 << order with the given gfp_flags */ static struct page *ttm_pool_alloc_page(struct ttm_pool *pool, gfp_t gfp_flags, @@ -223,79 +222,26 @@ static void ttm_pool_unmap(struct ttm_pool *pool, dma_addr_t dma_addr, DMA_BIDIRECTIONAL); } -/* Give pages into a specific pool_type */ -static void ttm_pool_type_give(struct ttm_pool_type *pt, struct page *p) -{ - spin_lock(&pt->lock); - list_add(&p->lru, &pt->pages); - spin_unlock(&pt->lock); - atomic_long_add(1 << pt->order, &allocated_pages); -} - -/* Take pages from a specific pool_type, return NULL when nothing available */ -static struct page *ttm_pool_type_take(struct ttm_pool_type *pt) -{ - struct page *p; - - spin_lock(&pt->lock); - p = list_first_entry_or_null(&pt->pages, typeof(*p), lru); - if (p) { - atomic_long_sub(1 << pt->order, &allocated_pages); - list_del(&p->lru); - } - spin_unlock(&pt->lock); - - return p; -} - -/* Initialize and add a pool type to the global shrinker list */ -static void ttm_pool_type_init(struct ttm_pool_type *pt, struct ttm_pool *pool, - enum ttm_caching caching, unsigned int order) -{ - pt->pool = pool; - pt->caching = caching; - pt->order = order; - spin_lock_init(&pt->lock); - INIT_LIST_HEAD(&pt->pages); - - mutex_lock(&shrinker_lock); - list_add_tail(&pt->shrinker_list, &shrinker_list); - mutex_unlock(&shrinker_lock); -} - -/* Remove a pool_type from the global shrinker list and free all pages */ -static void ttm_pool_type_fini(struct ttm_pool_type *pt) -{ - struct page *p, *tmp; - - mutex_lock(&shrinker_lock); - list_del(&pt->shrinker_list); - mutex_unlock(&shrinker_lock); - - list_for_each_entry_safe(p, tmp, &pt->pages, lru) - ttm_pool_free_page(p, pt->order); -} - /* Return the pool_type to use for the given caching and order */ -static struct ttm_pool_type *ttm_pool_select_type(struct ttm_pool *pool, +static struct drm_page_pool *ttm_pool_select_type(struct ttm_pool *pool, enum ttm_caching caching, unsigned int order) { if (pool->use_dma_alloc) - return 
&pool->caching[caching].orders[order]; + return pool->caching[caching].orders[order]; #ifdef CONFIG_X86 switch (caching) { case ttm_write_combined: if (pool->use_dma32) - return &global_dma32_write_combined[order]; + return global_dma32_write_combined[order]; - return &global_write_combined[order]; + return global_write_combined[order]; case ttm_uncached: if (pool->use_dma32) - return &global_dma32_uncached[order]; + return global_dma32_uncached[order]; - return &global_uncached[order]; + return global_uncached[order]; default: break; } @@ -304,30 +250,6 @@ static struct ttm_pool_type *ttm_pool_select_type(struct ttm_pool *pool, return NULL; } -/* Free pages using the global shrinker list */ -static unsigned int ttm_pool_shrink(void) -{ - struct ttm_pool_type *pt; - unsigned int num_freed; - struct page *p; - - mutex_lock(&shrinker_lock); - pt = list_first_entry(&shrinker_list, typeof(*pt), shrinker_list); - - p = ttm_pool_type_take(pt); - if (p) { - ttm_pool_free_page(p, pt->order); - num_freed = 1 << pt->order; - } else { - num_freed = 0; - } - - list_move_tail(&pt->shrinker_list, &shrinker_list); - mutex_unlock(&shrinker_lock); - - return num_freed; -} - /* Return the allocation order based for a page */ static unsigned int ttm_pool_page_order(struct ttm_pool *pool, struct page *p) { @@ -377,10 +299,10 @@ int ttm_pool_alloc(struct ttm_pool *pool, struct ttm_tt *tt, for (order = min(MAX_ORDER - 1UL, __fls(num_pages)); num_pages; order = min_t(unsigned int, order, __fls(num_pages))) { bool apply_caching = false; - struct ttm_pool_type *pt; + struct drm_page_pool *pt; pt = ttm_pool_select_type(pool, tt->caching, order); - p = pt ? ttm_pool_type_take(pt) : NULL; + p = pt ? 
drm_page_pool_fetch(pt) : NULL; if (p) { apply_caching = true; } else { @@ -462,7 +384,7 @@ void ttm_pool_free(struct ttm_pool *pool, struct ttm_tt *tt) for (i = 0; i < tt->num_pages; ) { struct page *p = tt->pages[i]; unsigned int order, num_pages; - struct ttm_pool_type *pt; + struct drm_page_pool *pt; order = ttm_pool_page_order(pool, p); num_pages = 1ULL << order; @@ -473,15 +395,12 @@ void ttm_pool_free(struct ttm_pool *pool, struct ttm_tt *tt) pt = ttm_pool_select_type(pool, tt->caching, order); if (pt) - ttm_pool_type_give(pt, tt->pages[i]); + drm_page_pool_add(pt, tt->pages[i]); else ttm_pool_free_page(tt->pages[i], order); i += num_pages; } - - while (atomic_long_read(&allocated_pages) > page_pool_size) - ttm_pool_shrink(); } EXPORT_SYMBOL(ttm_pool_free); @@ -508,8 +427,8 @@ void ttm_pool_init(struct ttm_pool *pool, struct device *dev, for (i = 0; i < TTM_NUM_CACHING_TYPES; ++i) for (j = 0; j < MAX_ORDER; ++j) - ttm_pool_type_init(&pool->caching[i].orders[j], - pool, i, j); + pool->caching[i].orders[j] = drm_page_pool_create(j, + ttm_pool_free_page); } /** @@ -526,33 +445,18 @@ void ttm_pool_fini(struct ttm_pool *pool) for (i = 0; i < TTM_NUM_CACHING_TYPES; ++i) for (j = 0; j < MAX_ORDER; ++j) - ttm_pool_type_fini(&pool->caching[i].orders[j]); + drm_page_pool_destroy(pool->caching[i].orders[j]); } #ifdef CONFIG_DEBUG_FS -/* Count the number of pages available in a pool_type */ -static unsigned int ttm_pool_type_count(struct ttm_pool_type *pt) -{ - unsigned int count = 0; - struct page *p; - - spin_lock(&pt->lock); - /* Only used for debugfs, the overhead doesn't matter */ - list_for_each_entry(p, &pt->pages, lru) - ++count; - spin_unlock(&pt->lock); - - return count; -} - /* Dump information about the different pool types */ -static void ttm_pool_debugfs_orders(struct ttm_pool_type *pt, +static void ttm_pool_debugfs_orders(struct drm_page_pool **pt, struct seq_file *m) { unsigned int i; for (i = 0; i < MAX_ORDER; ++i) - seq_printf(m, " %8u", 
ttm_pool_type_count(&pt[i])); + seq_printf(m, " %8u", drm_page_pool_get_size(pt[i])); seq_puts(m, "\n"); } @@ -602,7 +506,10 @@ int ttm_pool_debugfs(struct ttm_pool *pool, struct seq_file *m) } seq_printf(m, "\ntotal\t: %8lu of %8lu\n", - atomic_long_read(&allocated_pages), page_pool_size); + atomic_long_read(&allocated_pages), + drm_page_pool_get_max()); + seq_printf(m, "(%8lu in non-ttm pools)\n", drm_page_pool_get_total() - + atomic_long_read(&allocated_pages)); mutex_unlock(&shrinker_lock); @@ -612,28 +519,6 @@ EXPORT_SYMBOL(ttm_pool_debugfs); #endif -/* As long as pages are available make sure to release at least one */ -static unsigned long ttm_pool_shrinker_scan(struct shrinker *shrink, - struct shrink_control *sc) -{ - unsigned long num_freed = 0; - - do - num_freed += ttm_pool_shrink(); - while (!num_freed && atomic_long_read(&allocated_pages)); - - return num_freed; -} - -/* Return the number of pages available or SHRINK_EMPTY if we have none */ -static unsigned long ttm_pool_shrinker_count(struct shrinker *shrink, - struct shrink_control *sc) -{ - unsigned long num_pages = atomic_long_read(&allocated_pages); - - return num_pages ? 
num_pages : SHRINK_EMPTY; -} - /** * ttm_pool_mgr_init - Initialize globals * @@ -648,24 +533,21 @@ int ttm_pool_mgr_init(unsigned long num_pages) if (!page_pool_size) page_pool_size = num_pages; + drm_page_pool_set_max(page_pool_size); + mutex_init(&shrinker_lock); - INIT_LIST_HEAD(&shrinker_list); for (i = 0; i < MAX_ORDER; ++i) { - ttm_pool_type_init(&global_write_combined[i], NULL, - ttm_write_combined, i); - ttm_pool_type_init(&global_uncached[i], NULL, ttm_uncached, i); - - ttm_pool_type_init(&global_dma32_write_combined[i], NULL, - ttm_write_combined, i); - ttm_pool_type_init(&global_dma32_uncached[i], NULL, - ttm_uncached, i); + global_write_combined[i] = drm_page_pool_create(i, + ttm_pool_free_page); + global_uncached[i] = drm_page_pool_create(i, + ttm_pool_free_page); + global_dma32_write_combined[i] = drm_page_pool_create(i, + ttm_pool_free_page); + global_dma32_uncached[i] = drm_page_pool_create(i, + ttm_pool_free_page); } - - mm_shrinker.count_objects = ttm_pool_shrinker_count; - mm_shrinker.scan_objects = ttm_pool_shrinker_scan; - mm_shrinker.seeks = 1; - return register_shrinker(&mm_shrinker); + return 0; } /** @@ -678,13 +560,10 @@ void ttm_pool_mgr_fini(void) unsigned int i; for (i = 0; i < MAX_ORDER; ++i) { - ttm_pool_type_fini(&global_write_combined[i]); - ttm_pool_type_fini(&global_uncached[i]); + drm_page_pool_destroy(global_write_combined[i]); + drm_page_pool_destroy(global_uncached[i]); - ttm_pool_type_fini(&global_dma32_write_combined[i]); - ttm_pool_type_fini(&global_dma32_uncached[i]); + drm_page_pool_destroy(global_dma32_write_combined[i]); + drm_page_pool_destroy(global_dma32_uncached[i]); } - - unregister_shrinker(&mm_shrinker); - WARN_ON(!list_empty(&shrinker_list)); } diff --git a/include/drm/ttm/ttm_pool.h b/include/drm/ttm/ttm_pool.h index 4321728bdd11..460ab232fd22 100644 --- a/include/drm/ttm/ttm_pool.h +++ b/include/drm/ttm/ttm_pool.h @@ -36,27 +36,6 @@ struct ttm_tt; struct ttm_pool; struct ttm_operation_ctx; -/** - * 
ttm_pool_type - Pool for a certain memory type - * - * @pool: the pool we belong to, might be NULL for the global ones - * @order: the allocation order our pages have - * @caching: the caching type our pages have - * @shrinker_list: our place on the global shrinker list - * @lock: protection of the page list - * @pages: the list of pages in the pool - */ -struct ttm_pool_type { - struct ttm_pool *pool; - unsigned int order; - enum ttm_caching caching; - - struct list_head shrinker_list; - - spinlock_t lock; - struct list_head pages; -}; - /** * ttm_pool - Pool for all caching and orders * @@ -71,7 +50,7 @@ struct ttm_pool { bool use_dma32; struct { - struct ttm_pool_type orders[MAX_ORDER]; + struct drm_page_pool *orders[MAX_ORDER]; } caching[TTM_NUM_CACHING_TYPES]; }; From patchwork Fri Feb 5 08:06:19 2021 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 8bit X-Patchwork-Submitter: John Stultz X-Patchwork-Id: 376890 From: John Stultz To: lkml Cc: John Stultz , Daniel Vetter , Christian Koenig , Sumit Semwal , Liam Mark , Chris Goldsworthy , Laura Abbott , Brian Starkey , Hridya Valsaraju , Suren Baghdasaryan , Sandeep Patil , Daniel Mentz , =?utf-8?q?=C3=98rjan_Eide?= , Robin Murphy , Ezequiel Garcia , Simon Ser , James Jones , linux-media@vger.kernel.org, dri-devel@lists.freedesktop.org Subject: [RFC][PATCH v6 5/7] dma-buf: heaps: Add deferred-free-helper library code Date: Fri, 5 Feb 2021 08:06:19 +0000 Message-Id: <20210205080621.3102035-6-john.stultz@linaro.org> X-Mailer: git-send-email 2.25.1 In-Reply-To: <20210205080621.3102035-1-john.stultz@linaro.org> References: <20210205080621.3102035-1-john.stultz@linaro.org> MIME-Version: 1.0 This patch provides infrastructure for deferring buffer frees.
This is a feature ION provided which, when used with some form of page pool, yields a nice performance boost in an allocation microbenchmark. It helps because it allows the page zeroing to be done out of the normal allocation/free path and pushed off to a kthread. As not all heaps will find this useful, it is implemented as an optional helper library that heaps can utilize. Cc: Daniel Vetter Cc: Christian Koenig Cc: Sumit Semwal Cc: Liam Mark Cc: Chris Goldsworthy Cc: Laura Abbott Cc: Brian Starkey Cc: Hridya Valsaraju Cc: Suren Baghdasaryan Cc: Sandeep Patil Cc: Daniel Mentz Cc: Ørjan Eide Cc: Robin Murphy Cc: Ezequiel Garcia Cc: Simon Ser Cc: James Jones Cc: linux-media@vger.kernel.org Cc: dri-devel@lists.freedesktop.org Signed-off-by: John Stultz --- v2: * Fix sleep-in-atomic issue from using a mutex, by switching to a spinlock as Reported-by: kernel test robot * Clean up the API to use a reason enum for clarity and add some documentation comments as suggested by Suren Baghdasaryan.
v3: * Minor tweaks so it can be built as a module * A few small fixups suggested by Daniel Mentz v4: * Tweak from Daniel Mentz to make sure the shrinker count/freed values are tracked in pages not bytes v5: * Fix up page count tracking as suggested by Suren Baghdasaryan --- drivers/dma-buf/heaps/Kconfig | 3 + drivers/dma-buf/heaps/Makefile | 1 + drivers/dma-buf/heaps/deferred-free-helper.c | 145 +++++++++++++++++++ drivers/dma-buf/heaps/deferred-free-helper.h | 55 +++++++ 4 files changed, 204 insertions(+) create mode 100644 drivers/dma-buf/heaps/deferred-free-helper.c create mode 100644 drivers/dma-buf/heaps/deferred-free-helper.h -- 2.25.1 diff --git a/drivers/dma-buf/heaps/Kconfig b/drivers/dma-buf/heaps/Kconfig index a5eef06c4226..f7aef8bc7119 100644 --- a/drivers/dma-buf/heaps/Kconfig +++ b/drivers/dma-buf/heaps/Kconfig @@ -1,3 +1,6 @@ +config DMABUF_HEAPS_DEFERRED_FREE + tristate + config DMABUF_HEAPS_SYSTEM bool "DMA-BUF System Heap" depends on DMABUF_HEAPS diff --git a/drivers/dma-buf/heaps/Makefile b/drivers/dma-buf/heaps/Makefile index 974467791032..4e7839875615 100644 --- a/drivers/dma-buf/heaps/Makefile +++ b/drivers/dma-buf/heaps/Makefile @@ -1,3 +1,4 @@ # SPDX-License-Identifier: GPL-2.0 +obj-$(CONFIG_DMABUF_HEAPS_DEFERRED_FREE) += deferred-free-helper.o obj-$(CONFIG_DMABUF_HEAPS_SYSTEM) += system_heap.o obj-$(CONFIG_DMABUF_HEAPS_CMA) += cma_heap.o diff --git a/drivers/dma-buf/heaps/deferred-free-helper.c b/drivers/dma-buf/heaps/deferred-free-helper.c new file mode 100644 index 000000000000..672c3d5872e9 --- /dev/null +++ b/drivers/dma-buf/heaps/deferred-free-helper.c @@ -0,0 +1,145 @@ +// SPDX-License-Identifier: GPL-2.0 +/* + * Deferred dmabuf freeing helper + * + * Copyright (C) 2020 Linaro, Ltd. + * + * Based on the ION page pool code + * Copyright (C) 2011 Google, Inc. 
+ */ + +#include +#include +#include +#include +#include + +#include "deferred-free-helper.h" + +static LIST_HEAD(free_list); +static size_t list_size_pages; +wait_queue_head_t freelist_waitqueue; +struct task_struct *freelist_task; +static DEFINE_SPINLOCK(free_list_lock); + +static inline size_t size_to_pages(size_t size) +{ + if (!size) + return 0; + return ((size - 1) >> PAGE_SHIFT) + 1; +} + +void deferred_free(struct deferred_freelist_item *item, + void (*free)(struct deferred_freelist_item*, + enum df_reason), + size_t size) +{ + unsigned long flags; + + INIT_LIST_HEAD(&item->list); + item->size = size; + item->free = free; + + spin_lock_irqsave(&free_list_lock, flags); + list_add(&item->list, &free_list); + list_size_pages += size_to_pages(size); + spin_unlock_irqrestore(&free_list_lock, flags); + wake_up(&freelist_waitqueue); +} +EXPORT_SYMBOL_GPL(deferred_free); + +static size_t free_one_item(enum df_reason reason) +{ + unsigned long flags; + size_t nr_pages; + struct deferred_freelist_item *item; + + spin_lock_irqsave(&free_list_lock, flags); + if (list_empty(&free_list)) { + spin_unlock_irqrestore(&free_list_lock, flags); + return 0; + } + item = list_first_entry(&free_list, struct deferred_freelist_item, list); + list_del(&item->list); + nr_pages = size_to_pages(item->size); + list_size_pages -= nr_pages; + spin_unlock_irqrestore(&free_list_lock, flags); + + item->free(item, reason); + return nr_pages; +} + +static unsigned long get_freelist_size_pages(void) +{ + unsigned long size; + unsigned long flags; + + spin_lock_irqsave(&free_list_lock, flags); + size = list_size_pages; + spin_unlock_irqrestore(&free_list_lock, flags); + return size; +} + +static unsigned long freelist_shrink_count(struct shrinker *shrinker, + struct shrink_control *sc) +{ + return get_freelist_size_pages(); +} + +static unsigned long freelist_shrink_scan(struct shrinker *shrinker, + struct shrink_control *sc) +{ + unsigned long total_freed = 0; + + if (sc->nr_to_scan == 0) + 
return 0; + + while (total_freed < sc->nr_to_scan) { + size_t pages_freed = free_one_item(DF_UNDER_PRESSURE); + + if (!pages_freed) + break; + + total_freed += pages_freed; + } + + return total_freed; +} + +static struct shrinker freelist_shrinker = { + .count_objects = freelist_shrink_count, + .scan_objects = freelist_shrink_scan, + .seeks = DEFAULT_SEEKS, + .batch = 0, +}; + +static int deferred_free_thread(void *data) +{ + while (true) { + wait_event_freezable(freelist_waitqueue, + get_freelist_size_pages() > 0); + + free_one_item(DF_NORMAL); + } + + return 0; +} + +static int deferred_freelist_init(void) +{ + list_size_pages = 0; + + init_waitqueue_head(&freelist_waitqueue); + freelist_task = kthread_run(deferred_free_thread, NULL, + "%s", "dmabuf-deferred-free-worker"); + if (IS_ERR(freelist_task)) { + pr_err("Creating thread for deferred free failed\n"); + return -1; + } + sched_set_normal(freelist_task, 19); + + return register_shrinker(&freelist_shrinker); +} +module_init(deferred_freelist_init); +MODULE_LICENSE("GPL v2"); + diff --git a/drivers/dma-buf/heaps/deferred-free-helper.h b/drivers/dma-buf/heaps/deferred-free-helper.h new file mode 100644 index 000000000000..18b44ac86ef6 --- /dev/null +++ b/drivers/dma-buf/heaps/deferred-free-helper.h @@ -0,0 +1,55 @@ +/* SPDX-License-Identifier: GPL-2.0 */ + +#ifndef DEFERRED_FREE_HELPER_H +#define DEFERRED_FREE_HELPER_H + +/** + * df_reason - enum for reason why item was freed + * + * This provides a reason for why the free function was called + * on the item. This is useful when deferred_free is used in + * combination with a pagepool, so under pressure the page can + * be immediately freed. + * + * DF_NORMAL: Normal deferred free + * + * DF_UNDER_PRESSURE: Free was called because the system + * is under memory pressure. Usually + * from a shrinker. Avoid allocating + * memory in the free call, as it may + * fail. 
+ */ +enum df_reason { + DF_NORMAL, + DF_UNDER_PRESSURE, +}; + +/** + * deferred_freelist_item - item structure for deferred freelist + * + * This is to be added to the structure for whatever you want to + * defer freeing on. + * + * @size: size of the item to be freed + * @free: function pointer to be called when freeing the item + * @list: list entry for the deferred list + */ +struct deferred_freelist_item { + size_t size; + void (*free)(struct deferred_freelist_item *i, + enum df_reason reason); + struct list_head list; +}; + +/** + * deferred_free - call to add item to the deferred free list + * + * @item: Pointer to deferred_freelist_item field of a structure + * @free: Function pointer to the free call + * @size: Size of the item to be freed + */ +void deferred_free(struct deferred_freelist_item *item, + void (*free)(struct deferred_freelist_item *i, + enum df_reason reason), + size_t size); +#endif From patchwork Fri Feb 5 08:06:20 2021 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 8bit X-Patchwork-Submitter: John Stultz X-Patchwork-Id: 376891 Delivered-To: patches@linaro.org Received: by 2002:a02:b18a:0:0:0:0:0 with SMTP id t10csp2008423jah; Fri, 5 Feb 2021 00:06:38 -0800 (PST) X-Received: by 2002:a65:6886:: with SMTP id e6mr3154648pgt.73.1612512397971; Fri, 05 Feb 2021 00:06:37 -0800 (PST) ARC-Seal: i=1; a=rsa-sha256; t=1612512397; cv=none; d=google.com; s=arc-20160816; b=w7pb55SY2TajtN9vwqa0U2V2a9bgTCaFvIQa3+0HxyGXx9bX+MMDZIfU79p4cny/ou FyVwjNPbyy55hj5+gpLdvBIpKom2R6wsZqg+7W1cEcRmxQFJ/7B/OOdo4NVj1xIbRVp5 K3SncTuQztQdolAxtLbTThnrN+dKE4SYHtCiogqY4KlKcf5tVsPsStRuiz2NF5JY8ZBJ QSLETpRdbf31wyK9+kOaaau9o9Ct6KJzdOPnDEQdYlZtIUAr8WljVlFbTp1etwV07yB6 7L3FonerIzsw1nHSxzdCKfb2+pUHa/FlDdRIkJjZi/fG5UgKHNzGSJn9IAdDnGlxs1vf E6Gg== ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=google.com; s=arc-20160816; h=content-transfer-encoding:mime-version:references:in-reply-to 
From: John Stultz
To: lkml
Cc: John Stultz, Daniel Vetter, Christian Koenig, Sumit Semwal,
 Liam Mark, Chris Goldsworthy, Laura Abbott, Brian Starkey,
 Hridya Valsaraju, Suren Baghdasaryan, Sandeep Patil, Daniel Mentz,
 Ørjan Eide, Robin Murphy, Ezequiel Garcia, Simon Ser, James Jones,
 linux-media@vger.kernel.org, dri-devel@lists.freedesktop.org
Subject: [RFC][PATCH v6 6/7] dma-buf: system_heap: Add drm pagepool support to system heap
Date: Fri, 5 Feb 2021 08:06:20 +0000
Message-Id: <20210205080621.3102035-7-john.stultz@linaro.org>
X-Mailer: git-send-email 2.25.1
In-Reply-To: <20210205080621.3102035-1-john.stultz@linaro.org>
References:
 <20210205080621.3102035-1-john.stultz@linaro.org>

Utilize the drm pagepool code to speed up allocation performance.

This is similar to the ION pagepool usage, but tries to utilize
generic code instead of a custom implementation.

Cc: Daniel Vetter
Cc: Christian Koenig
Cc: Sumit Semwal
Cc: Liam Mark
Cc: Chris Goldsworthy
Cc: Laura Abbott
Cc: Brian Starkey
Cc: Hridya Valsaraju
Cc: Suren Baghdasaryan
Cc: Sandeep Patil
Cc: Daniel Mentz
Cc: Ørjan Eide
Cc: Robin Murphy
Cc: Ezequiel Garcia
Cc: Simon Ser
Cc: James Jones
Cc: linux-media@vger.kernel.org
Cc: dri-devel@lists.freedesktop.org
Signed-off-by: John Stultz
---
v2:
* Fix build issue caused by selecting PAGE_POOL w/o NET
  as Reported-by: kernel test robot
v3:
* Simplify the page zeroing logic a bit by using kmap_atomic
  instead of vmap as suggested by Daniel Mentz
v5:
* Shift away from networking page pool completely to
  dmabuf page pool implementation
v6:
* Switch again to using the drm_page_pool code shared w/
  ttm_pool
---
 drivers/dma-buf/heaps/Kconfig       |  1 +
 drivers/dma-buf/heaps/system_heap.c | 56 +++++++++++++++++++++++++++--
 2 files changed, 54 insertions(+), 3 deletions(-)

-- 
2.25.1

diff --git a/drivers/dma-buf/heaps/Kconfig b/drivers/dma-buf/heaps/Kconfig
index f7aef8bc7119..7e28934e0def 100644
--- a/drivers/dma-buf/heaps/Kconfig
+++ b/drivers/dma-buf/heaps/Kconfig
@@ -4,6 +4,7 @@ config DMABUF_HEAPS_DEFERRED_FREE
 config DMABUF_HEAPS_SYSTEM
 	bool "DMA-BUF System Heap"
 	depends on DMABUF_HEAPS
+	select DRM_PAGE_POOL
 	help
 	  Choose this option to enable the system dmabuf heap. The system heap
 	  is backed by pages from the buddy allocator. If in doubt, say Y.
diff --git a/drivers/dma-buf/heaps/system_heap.c b/drivers/dma-buf/heaps/system_heap.c
index 17e0e9a68baf..6d39e9f32e36 100644
--- a/drivers/dma-buf/heaps/system_heap.c
+++ b/drivers/dma-buf/heaps/system_heap.c
@@ -21,6 +21,8 @@
 #include
 #include
 
+#include
+
 static struct dma_heap *sys_heap;
 
 struct system_heap_buffer {
@@ -53,6 +55,7 @@ static gfp_t order_flags[] = {HIGH_ORDER_GFP, LOW_ORDER_GFP, LOW_ORDER_GFP};
  */
 static const unsigned int orders[] = {8, 4, 0};
 #define NUM_ORDERS ARRAY_SIZE(orders)
+struct drm_page_pool *pools[NUM_ORDERS];
 
 static struct sg_table *dup_sg_table(struct sg_table *table)
 {
@@ -281,18 +284,49 @@ static void system_heap_vunmap(struct dma_buf *dmabuf, struct dma_buf_map *map)
 	dma_buf_map_clear(map);
 }
 
+static int system_heap_free_pages(struct page *p, unsigned int order)
+{
+	__free_pages(p, order);
+	return 1 << order;
+}
+
+static int system_heap_zero_buffer(struct system_heap_buffer *buffer)
+{
+	struct sg_table *sgt = &buffer->sg_table;
+	struct sg_page_iter piter;
+	struct page *p;
+	void *vaddr;
+	int ret = 0;
+
+	for_each_sgtable_page(sgt, &piter, 0) {
+		p = sg_page_iter_page(&piter);
+		vaddr = kmap_atomic(p);
+		memset(vaddr, 0, PAGE_SIZE);
+		kunmap_atomic(vaddr);
+	}
+
+	return ret;
+}
+
 static void system_heap_dma_buf_release(struct dma_buf *dmabuf)
 {
 	struct system_heap_buffer *buffer = dmabuf->priv;
 	struct sg_table *table;
 	struct scatterlist *sg;
-	int i;
+	int i, j;
+
+	/* Zero the buffer pages before adding back to the pool */
+	system_heap_zero_buffer(buffer);
 
 	table = &buffer->sg_table;
 	for_each_sg(table->sgl, sg, table->nents, i) {
 		struct page *page = sg_page(sg);
 
-		__free_pages(page, compound_order(page));
+		for (j = 0; j < NUM_ORDERS; j++) {
+			if (compound_order(page) == orders[j])
+				break;
+		}
+		drm_page_pool_add(pools[j], page);
 	}
 	sg_free_table(table);
 	kfree(buffer);
@@ -323,7 +357,9 @@ static struct page *alloc_largest_available(unsigned long size,
 		if (max_order < orders[i])
 			continue;
 
-		page = alloc_pages(order_flags[i], orders[i]);
+		page = drm_page_pool_fetch(pools[i]);
+		if (!page)
+			page = alloc_pages(order_flags[i], orders[i]);
 		if (!page)
 			continue;
 		return page;
@@ -428,6 +464,20 @@ static const struct dma_heap_ops system_heap_ops = {
 static int system_heap_create(void)
 {
 	struct dma_heap_export_info exp_info;
+	int i;
+
+	for (i = 0; i < NUM_ORDERS; i++) {
+		pools[i] = drm_page_pool_create(orders[i],
+						system_heap_free_pages);
+		if (IS_ERR(pools[i])) {
+			int j;
+
+			pr_err("%s: page pool creation failed!\n", __func__);
+			for (j = 0; j < i; j++)
+				drm_page_pool_destroy(pools[j]);
+			return PTR_ERR(pools[i]);
+		}
+	}
 
 	exp_info.name = "system";
 	exp_info.ops = &system_heap_ops;

From patchwork Fri Feb 5 08:06:21 2021
X-Patchwork-Submitter: John Stultz
X-Patchwork-Id: 376892
From: John Stultz
To: lkml
Cc: John Stultz, Daniel Vetter, Christian Koenig, Sumit Semwal,
 Liam Mark, Chris Goldsworthy, Laura Abbott, Brian Starkey,
 Hridya Valsaraju, Suren Baghdasaryan, Sandeep Patil, Daniel Mentz,
 Ørjan Eide, Robin Murphy, Ezequiel Garcia, Simon Ser, James Jones,
 linux-media@vger.kernel.org, dri-devel@lists.freedesktop.org
Subject: [RFC][PATCH v6 7/7] dma-buf: system_heap: Add deferred freeing to the system heap
Date: Fri, 5 Feb 2021 08:06:21 +0000
Message-Id: <20210205080621.3102035-8-john.stultz@linaro.org>
X-Mailer: git-send-email 2.25.1
In-Reply-To: <20210205080621.3102035-1-john.stultz@linaro.org>
References: <20210205080621.3102035-1-john.stultz@linaro.org>

Utilize the deferred free helper library in the system heap.
This provides a nice performance bump and puts the system heap
performance on par with ION.

Cc: Daniel Vetter
Cc: Christian Koenig
Cc: Sumit Semwal
Cc: Liam Mark
Cc: Chris Goldsworthy
Cc: Laura Abbott
Cc: Brian Starkey
Cc: Hridya Valsaraju
Cc: Suren Baghdasaryan
Cc: Sandeep Patil
Cc: Daniel Mentz
Cc: Ørjan Eide
Cc: Robin Murphy
Cc: Ezequiel Garcia
Cc: Simon Ser
Cc: James Jones
Cc: linux-media@vger.kernel.org
Cc: dri-devel@lists.freedesktop.org
Signed-off-by: John Stultz
---
v2:
* Rework deferred-free api to use reason enum as suggested by
  Suren Baghdasaryan
---
 drivers/dma-buf/heaps/Kconfig       |  1 +
 drivers/dma-buf/heaps/system_heap.c | 31 ++++++++++++++++++++++-------
 2 files changed, 25 insertions(+), 7 deletions(-)

-- 
2.25.1

diff --git a/drivers/dma-buf/heaps/Kconfig b/drivers/dma-buf/heaps/Kconfig
index 7e28934e0def..10632ccfb4a5 100644
--- a/drivers/dma-buf/heaps/Kconfig
+++ b/drivers/dma-buf/heaps/Kconfig
@@ -5,6 +5,7 @@ config DMABUF_HEAPS_SYSTEM
 	bool "DMA-BUF System Heap"
 	depends on DMABUF_HEAPS
 	select DRM_PAGE_POOL
+	select DMABUF_HEAPS_DEFERRED_FREE
 	help
 	  Choose this option to enable the system dmabuf heap. The system heap
 	  is backed by pages from the buddy allocator. If in doubt, say Y.
diff --git a/drivers/dma-buf/heaps/system_heap.c b/drivers/dma-buf/heaps/system_heap.c
index 6d39e9f32e36..042244407db5 100644
--- a/drivers/dma-buf/heaps/system_heap.c
+++ b/drivers/dma-buf/heaps/system_heap.c
@@ -22,6 +22,7 @@
 #include
 #include
 
+#include "deferred-free-helper.h"
 
 static struct dma_heap *sys_heap;
 
@@ -33,6 +34,7 @@ struct system_heap_buffer {
 	struct sg_table sg_table;
 	int vmap_cnt;
 	void *vaddr;
+	struct deferred_freelist_item deferred_free;
 };
 
 struct dma_heap_attachment {
@@ -308,30 +310,45 @@ static int system_heap_zero_buffer(struct system_heap_buffer *buffer)
 	return ret;
 }
 
-static void system_heap_dma_buf_release(struct dma_buf *dmabuf)
+static void system_heap_buf_free(struct deferred_freelist_item *item,
+				 enum df_reason reason)
 {
-	struct system_heap_buffer *buffer = dmabuf->priv;
+	struct system_heap_buffer *buffer;
 	struct sg_table *table;
 	struct scatterlist *sg;
 	int i, j;
 
+	buffer = container_of(item, struct system_heap_buffer, deferred_free);
 	/* Zero the buffer pages before adding back to the pool */
-	system_heap_zero_buffer(buffer);
+	if (reason == DF_NORMAL)
+		if (system_heap_zero_buffer(buffer))
+			reason = DF_UNDER_PRESSURE; // On failure, just free
 
 	table = &buffer->sg_table;
 	for_each_sg(table->sgl, sg, table->nents, i) {
 		struct page *page = sg_page(sg);
 
-		for (j = 0; j < NUM_ORDERS; j++) {
-			if (compound_order(page) == orders[j])
-				break;
+		if (reason == DF_UNDER_PRESSURE) {
+			__free_pages(page, compound_order(page));
+		} else {
+			for (j = 0; j < NUM_ORDERS; j++) {
+				if (compound_order(page) == orders[j])
+					break;
+			}
+			drm_page_pool_add(pools[j], page);
 		}
-		drm_page_pool_add(pools[j], page);
 	}
 	sg_free_table(table);
 	kfree(buffer);
 }
 
+static void system_heap_dma_buf_release(struct dma_buf *dmabuf)
+{
+	struct system_heap_buffer *buffer = dmabuf->priv;
+
+	deferred_free(&buffer->deferred_free, system_heap_buf_free, buffer->len);
+}
+
 static const struct dma_buf_ops system_heap_buf_ops = {
 	.attach = system_heap_attach,
 	.detach = system_heap_detach,