
[RT,1/2] mm: slub: Don't resize the location tracking cache on PREEMPT_RT

Message ID 53a3ad9181bcdb62d4be6d521d6aeb490eb77e7f.1617821301.git.zanussi@kernel.org
State: New

Commit Message

Tom Zanussi April 7, 2021, 6:48 p.m. UTC
From: Sebastian Andrzej Siewior <bigeasy@linutronix.de>

v5.4.109-rt56-rc1 stable review patch.
If anyone has any objections, please let me know.

-----------


[ Upstream commit 87bd0bf324f4c5468ea3d1de0482589f491f3145 ]

The location tracking cache has a size of one page and is resized if
it turns out to be too small. This resize allocation happens with
interrupts disabled and therefore can't happen on PREEMPT_RT.
Should one page be too small, then we have to allocate more at the
beginning. The only downside is that fewer callers will be visible.

Signed-off-by: Sebastian Andrzej Siewior <bigeasy@linutronix.de>
Signed-off-by: Tom Zanussi <zanussi@kernel.org>
---
 mm/slub.c | 3 +++
 1 file changed, 3 insertions(+)
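
For context: the only caller that passes GFP_ATOMIC is the resize path
in add_location(), which runs with interrupts disabled while the slab
lists are walked. A sketch of that call site, abridged from the v5.4
mm/slub.c (the elided parts are marked, details may differ slightly):

/* mm/slub.c (v5.4, abridged): the atomic resize path the patch targets */
static int add_location(struct loc_track *t, struct kmem_cache *s,
			const struct track *track)
{
	/* ... binary search for an existing entry elided ... */

	/*
	 * Not found. Insert a new tracking element; if the array is
	 * full, try to double it. This runs with interrupts disabled,
	 * so the allocation must be GFP_ATOMIC, exactly the case the
	 * patch rejects on PREEMPT_RT.
	 */
	if (t->count >= t->max && !alloc_loc_track(t, 2 * t->max, GFP_ATOMIC))
		return 0;

	/* ... insert and record the new location elided ... */
	return 1;
}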
Patch

diff --git a/mm/slub.c b/mm/slub.c
index 1815e28852fe..0d78368d149a 100644
--- a/mm/slub.c
+++ b/mm/slub.c
@@ -4647,6 +4647,9 @@ static int alloc_loc_track(struct loc_track *t, unsigned long max, gfp_t flags)
 	struct location *l;
 	int order;
 
+	if (IS_ENABLED(CONFIG_PREEMPT_RT) && flags == GFP_ATOMIC)
+		return 0;
+
 	order = get_order(sizeof(struct location) * max);
 
 	l = (void *)__get_free_pages(flags, order);
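
The initial allocation, by contrast, is made from sleepable context
with GFP_KERNEL and sized to one page, which is what the commit
message means by allocating more at the beginning if a page is too
small. Abridged from the same v5.4 file:

/* mm/slub.c (v5.4, abridged): the initial, sleepable allocation */
static int list_locations(struct kmem_cache *s, char *buf,
			  enum track_item alloc)
{
	struct loc_track t = { 0, 0, NULL };

	if (!alloc_loc_track(&t, PAGE_SIZE / sizeof(struct location),
			     GFP_KERNEL))
		return sprintf(buf, "Out of memory\n");

	/* ... flush cpu slabs, walk each node and call add_location() ... */
}

With the patch applied, a GFP_ATOMIC resize on PREEMPT_RT reports
failure immediately, so add_location() stops recording once the
initial page fills up: the "fewer callers will be visible" downside
noted in the commit message.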