
mm/slub: Fix kmem_cache_alloc_bulk() error path

Message ID 966f27f7999acb1db8d60e241a73dfde3344345c.camel@gmx.de
State New
Series mm/slub: Fix kmem_cache_alloc_bulk() error path

Commit Message

Mike Galbraith July 9, 2021, 5:20 a.m. UTC
The kmem_cache_alloc_bulk() error exit path double unlocks cpu_slab->lock
instead of making the required slub_put_cpu_ptr() call.  Fix that.

Boring details:
1. 12c69bab1ece ("mm, slub: move disabling/enabling irqs to ___slab_alloc()")
adds local_irq_enable() above the goto error, leaving the one at error: intact.
2. 2180da7ea70a0 ("mm, slub: use migrate_disable() on PREEMPT_RT") adds
slub_get/put_cpu_ptr() calls, but misses the already broken error path,
leaving slub_get_cpu_ptr() there without a matching slub_put_cpu_ptr().
3. 340e7c4136c3 ("mm, slub: convert kmem_cpu_slab protection to local_lock")
converts local_irq_enable() to local_unlock_irq(), culminating in a double
unlock plus an unpaired slub_get_cpu_ptr().

Signed-off-by: Mike Galbraith <efault@gmx.de>
---
 mm/slub.c |    2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

Patch

--- a/mm/slub.c
+++ b/mm/slub.c
@@ -3605,7 +3605,7 @@  int kmem_cache_alloc_bulk(struct kmem_ca
 				slab_want_init_on_alloc(flags, s));
 	return i;
 error:
-	local_unlock_irq(&s->cpu_slab->lock);
+	slub_put_cpu_ptr(s->cpu_slab);
 	slab_post_alloc_hook(s, objcg, flags, i, p, false);
 	__kmem_cache_free_bulk(s, i, p);
 	return 0;
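
For readers outside mm/slub.c, here is a minimal standalone sketch of the
pairing the one-liner above restores.  It is not kernel code: cpu_slab_lock,
get_cpu_ref()/put_cpu_ref() and alloc_one() are hypothetical stand-ins for
the local_lock on s->cpu_slab->lock, slub_get_cpu_ptr()/slub_put_cpu_ptr()
and the per-object allocation attempt.  The point it illustrates is that the
bulk path reaches error: with the lock already dropped but the per-CPU
reference still held, so the exit must put the reference rather than unlock
a second time.

#include <stdbool.h>
#include <stddef.h>
#include <stdio.h>
#include <pthread.h>

/* Stand-in for the local_lock protecting the per-CPU slab state. */
static pthread_mutex_t cpu_slab_lock = PTHREAD_MUTEX_INITIALIZER;

/* Stand-ins for slub_get_cpu_ptr()/slub_put_cpu_ptr(). */
static void get_cpu_ref(void) { }
static void put_cpu_ref(void) { }

/* Allocation attempt; always fails here so the error path is exercised. */
static bool alloc_one(void **obj)
{
	*obj = NULL;
	return false;
}

static int alloc_bulk(size_t nr, void **p)
{
	size_t i;

	get_cpu_ref();		/* must be paired with put_cpu_ref() on every exit */

	for (i = 0; i < nr; i++) {
		pthread_mutex_lock(&cpu_slab_lock);
		if (!alloc_one(&p[i])) {
			/* Drop the lock before bailing out, as the kernel
			 * does before retrying the slow path, so error:
			 * is reached with the lock no longer held. */
			pthread_mutex_unlock(&cpu_slab_lock);
			goto error;
		}
		pthread_mutex_unlock(&cpu_slab_lock);
	}

	put_cpu_ref();
	return (int)i;

error:
	/* Broken shape: unlocking cpu_slab_lock again here (the double
	 * unlock).  Fixed shape: only drop the reference taken above. */
	put_cpu_ref();
	return 0;
}

int main(void)
{
	void *p[4];

	printf("allocated %d objects\n", alloc_bulk(4, p));
	return 0;
}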