From patchwork Wed Jun 30 01:34:18 2021
Content-Type: text/plain; charset="utf-8"
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit
X-Patchwork-Submitter: John Stultz <john.stultz@linaro.org>
X-Patchwork-Id: 468674
From: John Stultz <john.stultz@linaro.org>
To: lkml
Cc: John Stultz, Daniel Vetter, Christian Koenig, Sumit Semwal,
 Liam Mark, Chris Goldsworthy, Laura Abbott, Brian Starkey,
 Hridya Valsaraju, Suren Baghdasaryan, Sandeep Patil, Daniel Mentz,
 Ørjan Eide, Robin Murphy, Ezequiel Garcia, Simon Ser, James Jones,
 linux-media@vger.kernel.org, dri-devel@lists.freedesktop.org
Subject: [PATCH v9 2/5] drm: ttm_pool: Rework ttm_pool to use drm_page_pool
Date: Wed, 30 Jun 2021 01:34:18 +0000
Message-Id: <20210630013421.735092-3-john.stultz@linaro.org>
X-Mailer: git-send-email 2.25.1
In-Reply-To: <20210630013421.735092-1-john.stultz@linaro.org>
References: <20210630013421.735092-1-john.stultz@linaro.org>
MIME-Version: 1.0

This patch reworks the ttm_pool logic to utilize the recently added
drm_page_pool code. It adds drm_page_pool structures to the
ttm_pool_type structures, and then removes all of the ttm_pool_type
shrinker logic (as it's now handled by the drm_page_pool shrinker).

NOTE: There is one mismatch in the interfaces that I'm not totally
happy with. The ttm_pool tracks all of its pooled pages across a
number of different pools, and tries to keep this total under the
specified page_pool_size value. With the drm_page_pool, there may be
other users, but there is still only one global shrinker list of
pools. So we can't easily shrink the ttm pools below the ttm-specified
size without potentially doing a lot of shrinking of other non-ttm
pools as well. So either we can:

  1) Try to split it so each user of drm_page_pools manages its own
     pool shrinking.
  2) Push the max value into the drm_page_pool, and have it manage
     shrinking to fit under that global max. Then share those size/max
     values out so the ttm_pool debug output can have more context.

I've taken the second path in this patch set, but wanted to call it
out so folks could look closely.

Thoughts would be greatly appreciated here!
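To make that second path concrete, the flow on the ttm side ends up
roughly as in the sketch below. This is only an illustration; the
drm_page_pool_* calls and their signatures are taken from how they are
used in the diff that follows:

	/*
	 * ttm_pool_mgr_init() hands ttm's page_pool_size budget to the
	 * shared pool code, which now owns the one global limit:
	 */
	drm_page_pool_set_max(page_pool_size);

	/*
	 * Pages freed back to a ttm_pool_type land in its embedded
	 * subpool. Once the global total exceeds that max, the shared
	 * shrinker may evict pages from *any* registered pool, whether
	 * it belongs to ttm or not:
	 */
	drm_page_pool_add(&pt->subpool, page);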
Cc: Daniel Vetter
Cc: Christian Koenig
Cc: Sumit Semwal
Cc: Liam Mark
Cc: Chris Goldsworthy
Cc: Laura Abbott
Cc: Brian Starkey
Cc: Hridya Valsaraju
Cc: Suren Baghdasaryan
Cc: Sandeep Patil
Cc: Daniel Mentz
Cc: Ørjan Eide
Cc: Robin Murphy
Cc: Ezequiel Garcia
Cc: Simon Ser
Cc: James Jones
Cc: linux-media@vger.kernel.org
Cc: dri-devel@lists.freedesktop.org
Signed-off-by: John Stultz <john.stultz@linaro.org>
---
v7:
* Major refactoring to use drm_page_pools inside the ttm_pool_type
  structure. This allows us to use container_of to get the needed
  context to free a page (see the sketch after the Kconfig hunk
  below). This also means less code is changed overall.
v8:
* Reworked to use the new cleanly rewritten drm_page_pool logic
v9:
* Renamed functions, and dropped duplicative order tracking, as
  suggested by ChristianK
* Use new *_(un)lock_shrinker() hooks to fix atomic calculations
  for debugfs
---
 drivers/gpu/drm/Kconfig        |   1 +
 drivers/gpu/drm/ttm/ttm_pool.c | 167 ++++++--------------------------
 include/drm/ttm/ttm_pool.h     |  14 +--
 3 files changed, 33 insertions(+), 149 deletions(-)

--
2.25.1

diff --git a/drivers/gpu/drm/Kconfig b/drivers/gpu/drm/Kconfig
index 52d9ba92b35e..6be5344c009c 100644
--- a/drivers/gpu/drm/Kconfig
+++ b/drivers/gpu/drm/Kconfig
@@ -183,6 +183,7 @@ config DRM_PAGE_POOL
 config DRM_TTM
 	tristate
 	depends on DRM && MMU
+	select DRM_PAGE_POOL
 	help
 	  GPU memory management subsystem for devices with multiple
 	  GPU memory types. Will be enabled automatically if a device driver
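The heart of the rework, the container_of() point from the v7 notes
above, is that each ttm_pool_type now embeds a drm_page_pool and the
free callback recovers its ttm context from that embedded member.
Distilled from the ttm_pool.c and ttm_pool.h hunks that follow:

	struct ttm_pool_type {
		struct ttm_pool *pool;
		enum ttm_caching caching;
		struct drm_page_pool subpool;	/* replaces order/lock/pages */
	};

	static void ttm_pool_free_callback(struct drm_page_pool *subpool,
					   struct page *p)
	{
		struct ttm_pool_type *pt;

		pt = container_of(subpool, struct ttm_pool_type, subpool);
		ttm_pool_free_page(pt->pool, pt->caching, subpool->order, p);
	}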
diff --git a/drivers/gpu/drm/ttm/ttm_pool.c b/drivers/gpu/drm/ttm/ttm_pool.c
index cb38b1a17b09..7ae647bce551 100644
--- a/drivers/gpu/drm/ttm/ttm_pool.c
+++ b/drivers/gpu/drm/ttm/ttm_pool.c
@@ -40,6 +40,7 @@
 #include <asm/set_memory.h>
 #endif
 
+#include <drm/page_pool.h>
 #include <drm/ttm/ttm_pool.h>
 #include <drm/ttm/ttm_bo_driver.h>
 #include <drm/ttm/ttm_tt.h>
@@ -70,10 +71,6 @@ static struct ttm_pool_type global_uncached[MAX_ORDER];
 static struct ttm_pool_type global_dma32_write_combined[MAX_ORDER];
 static struct ttm_pool_type global_dma32_uncached[MAX_ORDER];
 
-static struct mutex shrinker_lock;
-static struct list_head shrinker_list;
-static struct shrinker mm_shrinker;
-
 /* Allocate pages of size 1 << order with the given gfp_flags */
 static struct page *ttm_pool_alloc_page(struct ttm_pool *pool, gfp_t gfp_flags,
 					unsigned int order)
@@ -158,6 +155,15 @@ static void ttm_pool_free_page(struct ttm_pool *pool, enum ttm_caching caching,
 	kfree(dma);
 }
 
+static void ttm_pool_free_callback(struct drm_page_pool *subpool,
+				   struct page *p)
+{
+	struct ttm_pool_type *pt;
+
+	pt = container_of(subpool, struct ttm_pool_type, subpool);
+	return ttm_pool_free_page(pt->pool, pt->caching, subpool->order, p);
+}
+
 /* Apply a new caching to an array of pages */
 static int ttm_pool_apply_caching(struct page **first, struct page **last,
 				  enum ttm_caching caching)
@@ -219,66 +225,20 @@ static void ttm_pool_unmap(struct ttm_pool *pool, dma_addr_t dma_addr,
 			       DMA_BIDIRECTIONAL);
 }
 
-/* Give pages into a specific pool_type */
-static void ttm_pool_type_give(struct ttm_pool_type *pt, struct page *p)
-{
-	unsigned int i, num_pages = 1 << pt->order;
-
-	for (i = 0; i < num_pages; ++i) {
-		if (PageHighMem(p))
-			clear_highpage(p + i);
-		else
-			clear_page(page_address(p + i));
-	}
-
-	spin_lock(&pt->lock);
-	list_add(&p->lru, &pt->pages);
-	spin_unlock(&pt->lock);
-	atomic_long_add(1 << pt->order, &allocated_pages);
-}
-
-/* Take pages from a specific pool_type, return NULL when nothing available */
-static struct page *ttm_pool_type_take(struct ttm_pool_type *pt)
-{
-	struct page *p;
-
-	spin_lock(&pt->lock);
-	p = list_first_entry_or_null(&pt->pages, typeof(*p), lru);
-	if (p) {
-		atomic_long_sub(1 << pt->order, &allocated_pages);
-		list_del(&p->lru);
-	}
-	spin_unlock(&pt->lock);
-
-	return p;
-}
-
 /* Initialize and add a pool type to the global shrinker list */
 static void ttm_pool_type_init(struct ttm_pool_type *pt, struct ttm_pool *pool,
 			       enum ttm_caching caching, unsigned int order)
 {
 	pt->pool = pool;
 	pt->caching = caching;
-	pt->order = order;
-	spin_lock_init(&pt->lock);
-	INIT_LIST_HEAD(&pt->pages);
 
-	mutex_lock(&shrinker_lock);
-	list_add_tail(&pt->shrinker_list, &shrinker_list);
-	mutex_unlock(&shrinker_lock);
+	drm_page_pool_init(&pt->subpool, order, ttm_pool_free_callback);
 }
 
 /* Remove a pool_type from the global shrinker list and free all pages */
 static void ttm_pool_type_fini(struct ttm_pool_type *pt)
 {
-	struct page *p;
-
-	mutex_lock(&shrinker_lock);
-	list_del(&pt->shrinker_list);
-	mutex_unlock(&shrinker_lock);
-
-	while ((p = ttm_pool_type_take(pt)))
-		ttm_pool_free_page(pt->pool, pt->caching, pt->order, p);
+	drm_page_pool_fini(&pt->subpool);
 }
 
 /* Return the pool_type to use for the given caching and order */
@@ -309,30 +269,6 @@ static struct ttm_pool_type *ttm_pool_select_type(struct ttm_pool *pool,
 	return NULL;
 }
 
-/* Free pages using the global shrinker list */
-static unsigned int ttm_pool_shrink(void)
-{
-	struct ttm_pool_type *pt;
-	unsigned int num_freed;
-	struct page *p;
-
-	mutex_lock(&shrinker_lock);
-	pt = list_first_entry(&shrinker_list, typeof(*pt), shrinker_list);
-
-	p = ttm_pool_type_take(pt);
-	if (p) {
-		ttm_pool_free_page(pt->pool, pt->caching, pt->order, p);
-		num_freed = 1 << pt->order;
-	} else {
-		num_freed = 0;
-	}
-
-	list_move_tail(&pt->shrinker_list, &shrinker_list);
-	mutex_unlock(&shrinker_lock);
-
-	return num_freed;
-}
-
 /* Return the allocation order based for a page */
 static unsigned int ttm_pool_page_order(struct ttm_pool *pool, struct page *p)
 {
@@ -389,7 +325,7 @@ int ttm_pool_alloc(struct ttm_pool *pool, struct ttm_tt *tt,
 		struct ttm_pool_type *pt;
 
 		pt = ttm_pool_select_type(pool, tt->caching, order);
-		p = pt ? ttm_pool_type_take(pt) : NULL;
+		p = pt ? drm_page_pool_remove(&pt->subpool) : NULL;
 		if (p) {
 			apply_caching = true;
 		} else {
@@ -471,16 +407,13 @@ void ttm_pool_free(struct ttm_pool *pool, struct ttm_tt *tt)
 
 		pt = ttm_pool_select_type(pool, tt->caching, order);
 		if (pt)
-			ttm_pool_type_give(pt, tt->pages[i]);
+			drm_page_pool_add(&pt->subpool, tt->pages[i]);
 		else
 			ttm_pool_free_page(pool, tt->caching, order,
 					   tt->pages[i]);
 
 		i += num_pages;
 	}
-
-	while (atomic_long_read(&allocated_pages) > page_pool_size)
-		ttm_pool_shrink();
 }
 EXPORT_SYMBOL(ttm_pool_free);
 
@@ -532,44 +465,7 @@ void ttm_pool_fini(struct ttm_pool *pool)
 	}
 }
 
-/* As long as pages are available make sure to release at least one */
-static unsigned long ttm_pool_shrinker_scan(struct shrinker *shrink,
-					    struct shrink_control *sc)
-{
-	unsigned long num_freed = 0;
-
-	do
-		num_freed += ttm_pool_shrink();
-	while (!num_freed && atomic_long_read(&allocated_pages));
-
-	return num_freed;
-}
-
-/* Return the number of pages available or SHRINK_EMPTY if we have none */
-static unsigned long ttm_pool_shrinker_count(struct shrinker *shrink,
-					     struct shrink_control *sc)
-{
-	unsigned long num_pages = atomic_long_read(&allocated_pages);
-
-	return num_pages ? num_pages : SHRINK_EMPTY;
-}
-
 #ifdef CONFIG_DEBUG_FS
-/* Count the number of pages available in a pool_type */
-static unsigned int ttm_pool_type_count(struct ttm_pool_type *pt)
-{
-	unsigned int count = 0;
-	struct page *p;
-
-	spin_lock(&pt->lock);
-	/* Only used for debugfs, the overhead doesn't matter */
-	list_for_each_entry(p, &pt->pages, lru)
-		++count;
-	spin_unlock(&pt->lock);
-
-	return count;
-}
-
 /* Print a nice header for the order */
 static void ttm_pool_debugfs_header(struct seq_file *m)
 {
@@ -588,7 +484,8 @@ static void ttm_pool_debugfs_orders(struct ttm_pool_type *pt,
 	unsigned int i;
 
 	for (i = 0; i < MAX_ORDER; ++i)
-		seq_printf(m, " %8u", ttm_pool_type_count(&pt[i]));
+		seq_printf(m, " %8lu",
+			   drm_page_pool_get_size(&pt[i].subpool));
 	seq_puts(m, "\n");
 }
 
@@ -596,7 +493,10 @@ static void ttm_pool_debugfs_orders(struct ttm_pool_type *pt,
 static void ttm_pool_debugfs_footer(struct seq_file *m)
 {
 	seq_printf(m, "\ntotal\t: %8lu of %8lu\n",
-		   atomic_long_read(&allocated_pages), page_pool_size);
+		   atomic_long_read(&allocated_pages),
+		   drm_page_pool_get_max());
+	seq_printf(m, "(%8lu in non-ttm pools)\n", drm_page_pool_get_total() -
+		   atomic_long_read(&allocated_pages));
 }
 
 /* Dump the information for the global pools */
@@ -604,7 +504,7 @@ static int ttm_pool_debugfs_globals_show(struct seq_file *m, void *data)
 {
 	ttm_pool_debugfs_header(m);
 
-	mutex_lock(&shrinker_lock);
+	dma_page_pool_lock_shrinker();
 	seq_puts(m, "wc\t:");
 	ttm_pool_debugfs_orders(global_write_combined, m);
 	seq_puts(m, "uc\t:");
@@ -613,7 +513,7 @@ static int ttm_pool_debugfs_globals_show(struct seq_file *m, void *data)
 	ttm_pool_debugfs_orders(global_dma32_write_combined, m);
 	seq_puts(m, "uc 32\t:");
 	ttm_pool_debugfs_orders(global_dma32_uncached, m);
-	mutex_unlock(&shrinker_lock);
+	dma_page_pool_unlock_shrinker();
 
 	ttm_pool_debugfs_footer(m);
 
@@ -640,7 +540,7 @@ int ttm_pool_debugfs(struct ttm_pool *pool, struct seq_file *m)
 
 	ttm_pool_debugfs_header(m);
 
-	mutex_lock(&shrinker_lock);
+	dma_page_pool_lock_shrinker();
 	for (i = 0; i < TTM_NUM_CACHING_TYPES; ++i) {
 		seq_puts(m, "DMA ");
 		switch (i) {
@@ -656,7 +556,7 @@ int ttm_pool_debugfs(struct ttm_pool *pool, struct seq_file *m)
 		}
 		ttm_pool_debugfs_orders(pool->caching[i].orders, m);
 	}
-	mutex_unlock(&shrinker_lock);
+	dma_page_pool_unlock_shrinker();
 
 	ttm_pool_debugfs_footer(m);
 
 	return 0;
@@ -666,13 +566,10 @@ EXPORT_SYMBOL(ttm_pool_debugfs);
 /* Test the shrinker functions and dump the result */
 static int ttm_pool_debugfs_shrink_show(struct seq_file *m, void *data)
 {
-	struct shrink_control sc = { .gfp_mask = GFP_NOFS };
-
 	fs_reclaim_acquire(GFP_KERNEL);
-	seq_printf(m, "%lu/%lu\n", ttm_pool_shrinker_count(&mm_shrinker, &sc),
-		   ttm_pool_shrinker_scan(&mm_shrinker, &sc));
+	seq_printf(m, "%lu/%lu\n", drm_page_pool_get_total(),
+		   (unsigned long)drm_page_pool_shrink());
 	fs_reclaim_release(GFP_KERNEL);
-
 	return 0;
 }
 
 DEFINE_SHOW_ATTRIBUTE(ttm_pool_debugfs_shrink);
@@ -693,8 +590,7 @@ int ttm_pool_mgr_init(unsigned long num_pages)
 	if (!page_pool_size)
 		page_pool_size = num_pages;
 
-	mutex_init(&shrinker_lock);
-	INIT_LIST_HEAD(&shrinker_list);
+	drm_page_pool_set_max(page_pool_size);
 
 	for (i = 0; i < MAX_ORDER; ++i) {
 		ttm_pool_type_init(&global_write_combined[i], NULL,
@@ -713,11 +609,7 @@ int ttm_pool_mgr_init(unsigned long num_pages)
 	debugfs_create_file("page_pool_shrink", 0400, ttm_debugfs_root, NULL,
 			    &ttm_pool_debugfs_shrink_fops);
 #endif
-
-	mm_shrinker.count_objects = ttm_pool_shrinker_count;
-	mm_shrinker.scan_objects = ttm_pool_shrinker_scan;
-	mm_shrinker.seeks = 1;
-	return register_shrinker(&mm_shrinker);
+	return 0;
 }
 
 /**
@@ -736,7 +628,4 @@ void ttm_pool_mgr_fini(void)
 		ttm_pool_type_fini(&global_dma32_write_combined[i]);
 		ttm_pool_type_fini(&global_dma32_uncached[i]);
 	}
-
-	unregister_shrinker(&mm_shrinker);
-	WARN_ON(!list_empty(&shrinker_list));
 }
diff --git a/include/drm/ttm/ttm_pool.h b/include/drm/ttm/ttm_pool.h
index 4321728bdd11..c854a81491da 100644
--- a/include/drm/ttm/ttm_pool.h
+++ b/include/drm/ttm/ttm_pool.h
@@ -30,6 +30,7 @@
 #include <linux/mmzone.h>
 #include <linux/spinlock.h>
 #include <drm/ttm/ttm_caching.h>
+#include <drm/page_pool.h>
 
 struct device;
 struct ttm_tt;
@@ -39,22 +40,15 @@ struct ttm_operation_ctx;
 
 /**
  * ttm_pool_type - Pool for a certain memory type
  *
- * @pool: the pool we belong to, might be NULL for the global ones
- * @order: the allocation order our pages have
+ * @pool: the ttm pool we belong to, might be NULL for the global ones
  * @caching: the caching type our pages have
- * @shrinker_list: our place on the global shrinker list
- * @lock: protection of the page list
- * @pages: the list of pages in the pool
+ * @subpool: the dma_page_pool that we use to manage the pages
  */
 struct ttm_pool_type {
 	struct ttm_pool *pool;
-	unsigned int order;
 	enum ttm_caching caching;
-	struct list_head shrinker_list;
-
-	spinlock_t lock;
-	struct list_head pages;
+	struct drm_page_pool subpool;
 };
 
 /**
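For reviewers weighing the NOTE above: the reason ttm can no longer
shrink strictly its own pages is that any driver may register pools
with the same shared shrinker. A hypothetical second user would look
roughly like this (everything except the drm_page_pool_* calls is made
up for illustration, and the callback signature simply mirrors
ttm_pool_free_callback() in the patch):

	/* A hypothetical non-ttm consumer of the shared page pool. */
	static struct drm_page_pool other_pool;

	static void other_pool_free(struct drm_page_pool *pool, struct page *p)
	{
		/* pool->order mirrors how subpool->order is used above */
		__free_pages(p, pool->order);
	}

	static int other_pool_setup(void)
	{
		/* order-0 pool; its pages count against the one global max */
		drm_page_pool_init(&other_pool, 0, other_pool_free);
		return 0;
	}

Once such a pool is registered, drm_page_pool_shrink() can pull pages
from it or from the ttm subpools interchangeably, which is exactly the
size/max mismatch described in the NOTE.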